Smashing Magazine


4 Lessons Web App Designers Can Learn From Google

Wed, 08/12/2020 - 03:30
Suzanne Scacca, 2020-08-12T10:30:00+00:00

Whenever I’m curious about what more we could be doing to improve our users’ experiences, the first place I look to is Google. More specifically, I go to the Google Developers site or Think with Google to pull the latest consumer data.

But I was thinking today, “Why don’t we just copy what Google does?”

After all, Google has to walk the walk. If not, how would it ever convince anyone to adhere to its SEO and UX recommendations and guidelines?

The only thing is, Google’s sites and apps aren’t very attractive. They’re practical and intuitive, that’s for sure. But designs worth emulating? Eh.

That doesn’t really matter though. The basic principles for building a good web app exist across each of its platforms. So, if we’re looking for a definitive answer on what will provide SaaS users with the best experience, I think we need to start by dissecting Google’s platforms.

What Google Teaches Us About Good Web App Design

What we want to focus on are the components that make Google’s products so easy to use time and time again. By replicating these features within your own app, you’ll effectively reduce (if not altogether remove) the friction your users would otherwise encounter.

1. Make the First Thing They See Their Top Priority

When users enter your dashboard, the last thing you want is for them to be overwhelmed. Their immediate impression whenever they enter your app or return to the dashboard should be:

“I’m exactly where I need to be.”

Not:

“What the heck is going on here? Where do I find X?”

Now, depending on the purpose of your app, there are usually one or two things your users are going to be most concerned with.

Let’s say you have an app like Google Translate that has a clear utilitarian purpose. There’s absolutely no excuse for cluttering the main page. They’ve come here to do one thing:

Google Translate users don’t have to hunt around for the translator tool. (Source: Google Translate) (Large preview)

So, don’t waste their time. Place the tool front and center and let all other pages, settings or notices appear as secondary features of the app.

Something else this example teaches us is how you should configure your tool for users. Google could easily just leave this open-ended, but it defaults to:

Default Language → English

Google’s data likely shows that this is the most popular way users use this app.

Although you can’t see it in the desktop app, you can see it on mobile. The formula goes like this:

Default Language → Recent Language

I suspect that, for first-time users, Google will set the translation to the user’s native language (as indicated in their Google user settings).

If you have the data available, use it to configure defaults that reduce the number of steps your users have to take, too.

Not every web app provides users with a hands-on tool for solving a problem. In some cases, apps enable users to streamline and automate complex processes, which means their primary concern is going to be how well those processes are performing.

For that, we can look at a product like Google Search Console, which connects users to data on how their sites perform in Google search as well as insights into problems that might be holding them back.

It’s no surprise then that the first thing they see upon entering it is this:

The Google Search Console overview page shows users stats on Performance and Coverage. (Source: Google Search Console) (Large preview)

Performance (the number of clicks in Google search) and Coverage (number of pages indexed without error) are above the fold. Below it is another chart that displays recommended enhancements to improve core web vitals, mobile usability and sitelinks searchbox visibility.

Bottom line: The Overview page isn’t littered with charts depicting every data point collected by Google Search Console. Instead, it displays only the top priorities so users can get a bird’s-eye view of what’s going on and not get lost in data they don’t need at that time.

2. Create a Useful and Simple Navigation Wherever Relevant

This one seems like a no-brainer, but I’ll show you why I bring it up.

Zoom is a great video conferencing app. There’s no arguing that. However, when users want to schedule a meeting from their browser, this is what they see:

The Zoom web app complicates things with multiple menus. (Source: Zoom) (Large preview)

The “Join Meeting” and “Host Meeting” options are fine as they both eventually push the user into the desktop app. However, the “Schedule Meeting” in-browser experience isn’t great because it leaves the website navigation bars in place, which only serves as a distraction from the app’s sidebar on the left.

Once your users have created a login and have access to your app, they don’t need to see your site anymore. Ditch the website navigation and let them be immersed in the app.

Or do as Google Hangouts does. Lay your app out the way users expect an app to be laid out:

  • Primary navigation along the left side,
  • Hamburger menu button and/or More (…) button contain the secondary navigation,
  • Wide open space for users to play in the app.
A look inside Google Hangouts and its distraction-free interface and navigation. (Source: Google Hangouts) (Large preview)

But Google Hangouts doesn’t do away with the website completely. For users that want to quickly navigate to one of Google’s other products, they can use the grid-shaped icon in the top-right corner. So, if you feel it’s necessary for your users to be able to visit your website once again, you can build it into the app that way.

This example also demonstrates how important it is to keep your navigation as simple as possible.

Google Hangouts’ primary navigation uses symbols to represent each of the app’s tabs/options:

Google Hangouts uses icons to represent the tabs of its primary navigation. (Source: Google Hangouts) (Large preview)

While I think it’s okay for Google Hangouts to get away with this icon-only menu design, be careful with this approach. Unless the icons are universally understood (like the hamburger menu, search magnifying glass, or the plus sign), you can’t risk introducing icons that create more confusion.

As NNG points out, there’s a difference between an icon being recognizable and its meaning being indisputable.

So, one way you can get around this is to make the outward appearance of the menu icon-only. But upon hover, the labels appear so that users have additional context for what each means.
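
That hover-reveal pattern can be sketched in CSS like this (the markup, class names, and timing are assumptions, not Google Hangouts’ actual code):

```css
/* Icon-only nav item whose text label fades in on hover or keyboard
   focus, giving users extra context without widening the menu. */
.nav-item {
  position: relative;
}

.nav-item .label {
  position: absolute;
  left: 100%;
  top: 50%;
  transform: translateY(-50%);
  white-space: nowrap;
  opacity: 0;
  pointer-events: none; /* the hidden label shouldn't capture clicks */
  transition: opacity 0.15s ease-in;
}

.nav-item:hover .label,
.nav-item:focus-within .label {
  opacity: 1;
}
```

Including `:focus-within` keeps the labels reachable for keyboard users, not just mouse users.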

As for any secondary navigation you might need — including a Settings navigation — you can write out the labels since it will only appear upon user activation.

The Google Hangouts secondary navigation uses an icon and label for each tab. (Source: Google Hangouts) (Large preview)

Although some of the icons would be easy enough to identify, not all of them would instantly be recognizable (like “Invites” and “Hangouts Dialer”). If even one tab in your secondary navigation is rarely seen across other apps, spell them all out.

One last thing: The divider lines in this menu are a great choice. Rather than jam 10 tabs/options into this navigation bar together, they’re logically grouped, making it easier for users to find what they’re looking for.

3. Provide Users with Predictive Search Functionality

Every app should have a search bar. It might be there to help users sift through content, to find the contact they’re looking for from a long list, or to ask a question about something in the app.

The more complex your app is, the more critical a role internal search is going to play. But if you want to improve your users’ search experience even more, you’ll want to power yours with predictive search functionality.

Even though I’m sure you have a Support line, maybe a chatbot and perhaps an FAQs or Knowledgebase to help users find what they need, a smart search bar can connect them to what they’re really looking for (even if they don’t know how to articulate it).
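
As a sketch of the idea (the item shape and the scoring rule here are assumptions, not how Google ranks results), a predictive search box boils down to ranking partial matches as the user types:

```javascript
// Rank items whose title contains the query, preferring matches at the
// start of the title, then at the start of a word, then anywhere.
function predict(items, query, limit = 5) {
  const q = query.trim().toLowerCase();
  if (!q) return [];
  return items
    .map(item => {
      const title = item.title.toLowerCase();
      const index = title.indexOf(q);
      if (index === -1) return null;
      const score = index === 0 ? 2 : title[index - 1] === ' ' ? 1 : 0;
      return { item, score };
    })
    .filter(Boolean)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map(entry => entry.item);
}
```

In a real app you would debounce the input and feed `predict` with whatever content index (documents, contacts, help articles) your app already has.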

Google has this search functionality baked into most of its products.

You’re familiar with autocomplete within the Google search engine itself. But here are some other use cases for smart search capabilities.

Google Drive connects users to documents (of all types — Docs, Sheets, Slides and more) as well as collaborators that match the search query.

An example search for 'speed' within Google Drive. (Source: Google Drive) (Large preview)

Users can, of course, be taken to a full search results page. However, the search bar itself predicts which content is the most relevant for the query. In this case, these are the most recent pieces of content I’ve written that include the term “speed” in the title.

Google Maps is a neat use case as it pulls data from a variety of connected (Google) sources to try and predict what its users are looking for.

Google Maps pulls from a variety of sources to predict where users want to travel to. (Source: Google Maps) (Large preview)

In this example, I typed in “Alicia”. Now, Google Maps knows me pretty well, so the first result is actually the address of one of my contacts. The remaining results are for addresses or businesses within a 45-mile radius containing the word “Alicia”.

It doesn’t just pull from there though. This is one of those cases where the more enjoyable you make the in-app experience, the more your users will engage with it — which means more data.

For example, this is what I see when I search for “Three”:

Google Maps will provide 'Favorite' locations in search results when relevant. (Source: Google Maps) (Large preview)

The very first thing it pulls up is a restaurant called Three Sisters (which is a fantastic restaurant in the city of Providence, by the way). If you look just above the center of the map where the red heart is, that’s the restaurant. This means that I’ve added it to my Favorite places and Google Maps actually calls it out as such in my search results.

Imagine how much more your users would love your app if it wasn’t always a struggle to get to the content, data or page they were looking for. Or to perform a desired action. When you give your users the ability to personalize their experience like this, use the information they’ve given you to improve their search experience, too.

4. Enable Users to Change the Design and Layout of the App

As a designer, you can do your best to design a great experience for your users. But let’s face it:

You’re never going to please everyone.

Unlike a website, though, which is pretty much what-you-see-is-what-you-get, SaaS users have the ability to change the design and layout of what they’re interacting with — if you let them. And you should.

There are many different ways this might apply to the app you’ve built.

Google Calendar, for example, has a ton of customization options available.

Google Calendar allows users to customize the look and view of their calendars. (Source: Google Calendar) (Large preview)

On the far left is a list of “My calendars”. Users can click which calendars and associated events they want to see within the app.

In the bottom-right corner is an arrowhead. This enables users to hide the Google apps side panel and give them more room to focus on upcoming events and appointments.

In the top-right, users have two places where they can customize their calendar:

  • The Settings bar allows them to adjust the color and density of the calendar.
  • The “Month” dropdown allows them to adjust how much of the calendar is seen at once.

These customizations would all be useful for any sort of project management, planning or appointment scheduling app.

For other apps, I’d recommend looking at Gmail. It’s chock full of customizations that you could adapt for your app.

Previously, if users clicked the Settings widget, it would move them out of the app and into the dedicated settings panel. To be honest, it was annoying, especially if you just wanted to make a small tweak.

Gmail’s Settings reveals a list of design and layout customization options. (Source: Gmail) (Large preview)

Now, the Settings button opens this panel within Gmail. It enables users to adjust things like:

  • Line spacing,
  • Background theme,
  • Inbox sorting priorities,
  • Reading pane layout,
  • Conversation view on/off.

This is a recent update to Gmail’s settings, which probably means these are the design customizations its users make most often.

For any customizations users want to make that they can’t find in this new panel, they can click “See all settings” and customize the in-app design and layout (among other things) even further.

Other customizations you might find value in enabling in your app are:

  • Keyboard control,
  • Dark mode,
  • Color-blind mode,
  • Text resizing,
  • List/grid view toggling,
  • Widget and banner hiding,
  • Columns displayed.

Not only do these design and layout controls enable users to create an interface they enjoy looking at and that works better for their purposes, they can also help with accessibility.

Wrapping Up

There’s a reason why Google dominates market share with many of its products. It gets the user experience. Of course, this is due largely to the fact that it has access to more user data than most companies.

And while we should be designing solutions for our specific audiences, there’s no denying that Google’s products can help us set a really strong base for any audience — if we just pay attention to the trends across its platforms.


How To Configure Application Color Schemes With CSS Custom Properties

Tue, 08/11/2020 - 03:00
Artur Basak, 2020-08-11T10:00:00+00:00

Variables are a basic tool for organizing colors on a project. For a long time, front-end engineers used preprocessor variables to configure colors. But now many developers prefer the modern native mechanism for organizing color variables: CSS Custom Properties. Their most important advantage over preprocessor variables is that they work at runtime, not at the compilation stage of the project, and they support the cascade model, which allows you to use inheritance and redefine values on the fly.
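
As a minimal sketch of that advantage (the selectors and values are hypothetical), redefining a custom property in a narrower scope re-colors everything inside it at runtime, with no recompilation:

```css
/* A custom property resolved at runtime. */
:root {
  --text-color: #222;
}

/* Redefining it in a narrower scope cascades to the whole subtree. */
.dark-section {
  --text-color: #eee;
}

p {
  color: var(--text-color);
}
```

A preprocessor variable would have been baked into a literal at compile time; here the same `p` rule produces different colors depending on where the paragraph sits in the tree.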

When you’re trying to organize an application color scheme, you can always place all the custom properties that relate to color in the root section, name them, and use them in all the needed places.

See the Pen Custom Properties for Colors by Artur Basak.

That’s an option, but does it help you to resolve issues of application theming, white labeling, a brand refresh, or organizing a light or dark mode? What if you need to adjust the color scheme to increase contrast? With the current approach, you will have to update each value in your variables.

In this article, I want to suggest a more flexible and resilient approach to splitting color variables using custom properties, which can solve many of these issues.

Setup Color Palette

The coloring of any website begins with the setup of a color scheme. Such a scheme is based on the color wheel. Usually, only a few primary colors form the basis of a palette, the rest are derived colors — tones and mid-tones. Most often, the palette is static and does not change while the web application is running.

According to color theory, there are only a few options for color schemes:

  • Monochromatic scheme (one primary color)
  • Complementary scheme (two primary colors)
  • Triad scheme (three primary colors)
  • Tetradic scheme (four primary colors)
  • Analogous (adjacent) scheme (two or three primary colors)

For my example, I will generate a triad color scheme using the Paletton service:

Paletton Service: Triadic Color Scheme. (Large preview)

I now have three main colors. On the basis of these, I will calculate the tones and mid-tones (the HSL format in combination with the calc function is a very useful tool for this). By changing the lightness value, I can generate several additional colors for the palette.
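
A sketch of how this can look in CSS (the channel values are hypothetical, not the exact Paletton output):

```css
/* One primary color split into HSL channels so that tones and
   mid-tones can be derived with calc(). */
:root {
  --primary-h: 160;
  --primary-s: 100%;
  --primary-l: 45%;

  --color-primary: hsl(var(--primary-h), var(--primary-s), var(--primary-l));

  /* Lighter and darker tones derived from the same hue and saturation. */
  --color-primary-light: hsl(var(--primary-h), var(--primary-s), calc(var(--primary-l) + 20%));
  --color-primary-dark:  hsl(var(--primary-h), var(--primary-s), calc(var(--primary-l) - 15%));
}
```

Changing `--primary-h` alone is enough to shift the whole family of tones to a new hue.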

See the Pen HSL Palette by Artur Basak.

Now, if the palette is modified, only the values of the primary colors need to change; the rest will be recalculated automatically.

If you prefer HEX or RGB formats, that’s fine too; the palette can be formed at the project’s compilation stage with the corresponding preprocessor functions (e.g. with SCSS and the adjust-color function). As I’ve mentioned before, this layer is mostly static; it’s extremely rare that the palette changes in a running application. That’s why we can calculate it with preprocessors.

Note: I recommend also generating both HEX literal and RGB for each color. This will allow playing with the alpha channel in the future.
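
For example (the color values are hypothetical):

```css
/* Keeping an RGB triplet next to the HEX literal lets you derive
   translucent variants of the same palette color later. */
:root {
  --green: #04c35c;
  --green-rgb: 4, 195, 92;
}

.badge {
  /* 50%-opaque version of the same palette color. */
  background: rgba(var(--green-rgb), 0.5);
  border: 1px solid var(--green);
}
```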

See the Pen SCSS Palette by Artur Basak.

The palette level is the only level where the color is encoded directly in the variable names, i.e. we can uniquely identify the color by reading the name.

Define Theme Or Functional Colors

Once the palette is done, the next step is the level of functional colors. At this level, the value of the color is not so important as its purpose, the function it performs, and what it exactly colorizes. For example, the primary or app brand color, border color, color of the text on a dark background, the color of the text on a light background, button background color, link color, hover link color, hint text color, and so on.

These are extremely common things for almost any website or application. We can say that such colors are responsible for a certain color theme of the application. Also, the values of such variables are taken strictly from the palette. Thus, we can easily change application themes by simply operating with different color palettes.

Below, I have created three typical UI controls: a button, a link, and an input field. They are colored using functional variables that contain values from the palette that I previously generated above. The main functional variable that is responsible for the application theme (conditional brand) is the primary color variable.

Using the three buttons at the top, you can switch themes (change the brand color for controls). The change occurs by using the appropriate CSSOM API (setProperty).
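
The switch itself can be a few lines of JavaScript. The theme names and HSL values below are placeholders, but the `setProperty` call is the same CSSOM mechanism the demo uses:

```javascript
// Hypothetical theme map: each theme only redefines functional variables.
const themes = {
  violet: { '--color-primary': 'hsl(260, 65%, 55%)' },
  green:  { '--color-primary': 'hsl(150, 65%, 40%)' },
  red:    { '--color-primary': 'hsl(355, 75%, 50%)' },
};

// Redefine the functional variables on :root (or any passed-in element);
// every component reading var(--color-primary) updates immediately.
function applyTheme(name, root = document.documentElement) {
  const theme = themes[name];
  if (!theme) throw new Error(`Unknown theme: ${name}`);
  for (const [prop, value] of Object.entries(theme)) {
    root.style.setProperty(prop, value);
  }
}
```

Because only the functional variable changes, no component stylesheet needs to know which palette is active.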

See the Pen Functional Colors by Artur Basak.

This approach is convenient not only for theming but also for configuring individual web pages. For example, on the ZUBRY.BY website, I used a common stylesheet and a functional variable --page-color to colorize the logo, headings, controls, and text selection on all pages. Then, in each page’s own styles, I just redefined this variable to give the page its individual primary color.
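
The idea can be sketched like this (the selectors and values are assumptions, not the actual site styles):

```css
/* Shared stylesheet: components read one functional variable. */
:root {
  --page-color: hsl(345, 70%, 45%); /* default, hypothetical value */
}

h1,
.logo {
  color: var(--page-color);
}

::selection {
  background: var(--page-color);
}

/* Page-specific stylesheet: redefine the variable, nothing else. */
.page--contacts {
  --page-color: hsl(200, 70%, 40%);
}
```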

ZUBRY.BY website where each page has individual primary color. (Large preview)

Use Component Colors

Large web projects always involve decomposition; we split everything into small components and reuse them in many places. Each component usually has its own styles, and it doesn’t matter whether we used BEM, CSS Modules, or another approach to decompose; what’s important is that each such piece of code has a local scope and can be reused.

In general, I see the point in using color variables at the component level in two cases.

The first is when a component is repeated, according to the application style guide, with different settings, e.g. buttons for different needs: a primary (brand) button, a secondary button, a tertiary one, and so on.

Tispr application styleguide. Buttons. (Large preview)

The second is when a component has several states with different colors, e.g. a button’s hover, active, and focus states; the normal and invalid states of an input or select field; and so on.
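
Both cases can be sketched with component-level variables that fall back to functional-level ones (the names here are assumptions):

```css
/* Component scope: the button exposes its own color variables,
   initialized from the functional level. */
.button {
  --button-bg: var(--color-primary);
  --button-color: #fff;
  background: var(--button-bg);
  color: var(--button-color);
}

/* States: only the component variable changes. */
.button:hover,
.button:focus {
  --button-bg: var(--color-primary-dark);
}

/* Variants: same rule body, different variable values. */
.button--secondary {
  --button-bg: transparent;
  --button-color: var(--color-primary);
}
```

The base `.button` rule never needs to be touched; variants and states only redefine the two component variables.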

A rarer case where component variables may come in handy is “white label” functionality. A “white label” is a service feature that allows the user to customize or brand some part of the user interface to improve the experience of interacting with their clients, for example, electronic documents that a user shares with their customers through the service, or email templates. In this case, variables at the component level help configure certain components separately from the rest of the application’s color theme.

In the example below, I’ve now added controls for customizing colors of the primary (brand) button. Using color variables of the component level we can configure UI controls separately from each other.

See the Pen Component Colors by Artur Basak.

How To Determine What Level A Variable Has?

I came across the question of how to understand what can be put in the root (theme or functional level), and what to leave at the level of a component. This is an excellent question that is difficult to answer without seeing the situation you are working with.

Unfortunately, the usual refactoring heuristic from programming (if we see three identical pieces of code, we refactor them) does not work with colors and styles.

A color can be repeated from component to component, but that does not make it a rule. There may be no relation between such components, for example, the border of an input field and the background of the primary button. Yes, in my example above that’s the case, but let’s check the following example:

See the Pen Color Split: Only Palette by Artur Basak.

The dark gray color is repeated: it’s the border of the input field, the fill color of the close icon, and the background of the secondary button. But these components are in no way connected with each other. If the border color of the input field changes, we will not change the background of the secondary button. In such a case, we should keep just the variable from the palette.

Application style guide example. (Large preview)

What about green? We can clearly define it as the primary or brand color: most likely, if the color of the main button changes, then the color of links and first-level headings will also change.

What about red? The invalid state of input fields, error messages, and destructive buttons will share the same color across the whole application. This is a pattern. Now I can define several common functional variables in the root section:

See the Pen Color Split: Functional Level by Artur Basak.

Regarding the level of component colors, we can easily identify components that can be customized using custom properties.

The button is repeated with different settings: its background and text colors change for different use cases (primary, secondary, tertiary, destructive or negative).

The input field has two states, normal and invalid, in which the background and border colors differ. So let’s put these settings into color variables at the level of the corresponding components.

For the rest of the components, it is not necessary to define local color variables; that would be redundant.

See the Pen Color Split: Component Level by Artur Basak.

You need to dive into the pattern language of your project, which is probably being developed by the design and UX teams. Engineers must fully understand the whole concept of the visual language; only then can we determine what is common and should live at the functional level, and what should remain in a local scope.

But it is not all that complicated; some things are obvious. The general background of the page and the background and color of the main text are, in most cases, what set the theme of your application. It is extremely convenient to gather such things, which are responsible for configuring a particular mode (like dark or light mode), in one place.

Why Not Put Everything In The Root Section?

I had such an experience. On the Lition project, the team and I had to support IE11 for the web application, but not for the website and landing pages. A common UI kit was shared between the projects, and we decided to put all the variables in the root, which would allow us to redefine them at any level.

With this approach, for the web application’s IE11 build, we simply passed the code through a post-processor plugin that transformed these variables into literals for all UI components in the project. This trick is possible only if all variables are defined in the root section, because the post-processor cannot understand the specifics of the cascade model.

Lition SSR web-site. All variables in the root section. (Large preview)

Now I understand that this was not the right way. Firstly, if you put component colors into the root section, then you break the separation of concerns principle. As a result, you can end up with redundant CSS in the stylesheet. For example, you have the folder of components where each component has its own styles. You also have a common stylesheet where you describe color variables in the root section. You decide to remove the button component; in this case, you must remember to also remove the variables associated with the button from the common styles file.

Secondly, this is not the best solution in terms of performance. Yes, a color change causes only a repaint, not reflow/layout, which in itself is not too costly; but when you make changes at the highest level, the browser uses more resources checking the entire tree than when the changes are confined to a small local area. I recommend reading Lisi Linhart’s performance benchmark of CSS variables for more details.

On my current project, Tispr, the team and I use this split and do not dump everything into the root; at the top level there are only the palette and the functional colors. We’re also not afraid of IE11, because that problem is solved by the corresponding polyfill. Just install the npm module ie11-custom-properties and import the library into your application’s JS bundle:

// Use ES6 syntax
import "ie11-custom-properties";

// or CommonJS
require('ie11-custom-properties');

Or add module by script tag:

<script async src="./node_modules/ie11-custom-properties/ie11CustomProperties.js"></script>

Alternatively, you can add the library without npm via a CDN. This polyfill relies on the fact that IE11 has minimal support for custom properties: properties can be defined and read based on the cascade. That is not possible for properties starting with a double dash, but it is for those with a single dash (a mechanism similar to vendor prefixes). You can read more about this in the repository documentation, as well as get acquainted with some limitations. Other browsers will ignore this polyfill.

Below is a palette of the Tispr web application as well as the controls of the “white label” functionality for the e-documents (such as user contracts, invoices, or proposals).

Tispr Styleguide: Color Palette. (Large preview) Tispr Styleguide: Brand Picker for White Label functionality. (Large preview)

Why Not Store Color Variables On The JavaScript Side?

Another reasonable question: why not store the palette and functional variables in JavaScript code? These could also be changed dynamically, with the colors redefined later through inline styles. That could be an option, but it would most likely be less optimal, since you would need access to particular elements to change their color properties. With CSS variables, you change only a single property: the variable’s value.

JavaScript has no native functions or API for working with colors. CSS Color Module Level 5 will bring many opportunities to create derived colors or calculate them in some way. Looking to the future, CSS Custom Properties are richer and more flexible than JS variables. Also, with JS variables there is no way to use inheritance through the cascade, and that is the main disadvantage.

Conclusion
Splitting colors into three levels (palette, functional, and component) can help you be more adaptive to changes and new requirements while working on a project. I believe that CSS Custom Properties are the right tool for organizing this color split; it does not matter whether you use pure CSS, preprocessors, or a CSS-in-JS approach for styling.

I came to this approach through my own experience, but I’m not alone. Sara Soueidan described in her article a similar approach in which she split variables into global and component levels.

I would also like to suggest reading Lea Verou’s article, where she describes possible use cases for CSS variables (not only in terms of color).


Smashing Podcast Episode 22 With Chris Coyier: What Is Serverless?

Mon, 08/10/2020 - 22:00
Drew McLellan, 2020-08-11T05:00:00+00:00

Today, we’re talking about Serverless architectures. What does that mean, and how does it differ from how we might build sites currently? I spoke to Chris Coyier to find out.

Show Notes Weekly Update Transcript

Drew McLellan: He’s a web designer and developer who you may know from CSS-Tricks, a website he started more than 10 years ago and that remains a fantastic learning resource for those building websites. He’s the co-founder of CodePen, the browser-based coding playground and community used by front-enders all around the world to share what they make and find inspiration from those they follow. Alongside Dave Rupert, he’s the co-host of ShopTalk Show, a podcast all about making websites. So we know he knows a lot about web development, but did you know he once won a hot dog eating competition using only his charm? My smashing friends, please welcome Chris Coyier. Hello Chris, how are you?

Chris Coyier: Hey, I’m smashing.

Drew: I wanted to talk to you today not about CodePen, and I don’t necessarily want to talk to you about CSS-Tricks, which is one of those amazing resources that I’m sure everyone knows appears right at the top of Google Search results when looking for answers about any web dev question. Up pops your face and there’s a useful blog post written by you or one of your guest contributors.

Chris: Oh, I used to actually do that. There was a… I don’t know, it probably was during the time of when Google had that weird social network. What was that? Google Plus?

Drew: Oh, Plus, yeah.

Chris: Yeah, where they would associate a website with a Plus account, and so my Plus account had an avatar, and the avatar was me, so it would show up in search results. I think those days are gone. I think if you…

Drew: I think so, yeah-

Chris: Yeah.

Drew: But I kind of wanted to talk to you about something that has been a little bit more of a sort of side interest of yours, and that’s this concept of serverless architectures.

Chris: Mm (affirmative).

Drew: This is something you’ve been learning sort of more about for a little while. Is that right?

Chris: Yeah, yeah. I’m just a fan. It seems like a natural fit to the evolution of front-end development, which is where I feel like I have, at least, some expertise. I consider myself much more of a… much more useful on the front-end than the back-end, not that I… I do it all these days. I’ve been around long enough that I’m not afraid of looking at a little Ruby code, that’s for sure. But I prefer the front-end. I’ve studied it more. I’ve participated in projects more at that level, and then along comes this little kind of a new paradigm that says, “You can use your JavaScript skills on the server,” and it’s interesting. You know? That’s how I think of it. There’s a lot more to it than that, but that’s why I care, is because I feel it’s like front-end developers have dug so deep into JavaScript. And now we can use that same skill set elsewhere. Mm, pretty cool.

Drew: Seems like a whole new world has opened up, whereas if you were just a front-end coder… I say, just a front-end coder, I shouldn’t. If you’re a front-end coder, and you’re used to working with a colleague or a friend to help you with the back-end implementation, suddenly that’s opened up. And it’s something that you can manage more of the whole stack yourself.

Chris: Yeah, yeah. That’s it.

Drew: Addressing the elephant in the room, right at the top. We’re talking about serverless, and obviously, naming things is hard. We all know that. Serverless architecture doesn’t mean there are no servers, does it?

Chris: I think it’s mandatory, like if this is the first podcast you’re hearing of it, or in the first… you’re only hearing the word “serverless” in the first dozen times you ever heard it, it’s mandatory that you have a visceral reaction and have this kind of, “Oh, but there are still servers.” That’s okay. If that’s happening to you right now, just know that, that’s a required step in this. It’s just like anything else in life. There’s stages to understanding. The first time you hear something, you’re required to kind of reject it a little bit, and then only after a dozen times or so, or after it’s proven its worth a little bit to you, do you get to enter the further stages of understanding here. But the word has won, so if you’re still fighting against the word “serverless”, I hate to tell you, that the train has left the station there. The word is already successful. You’re not going to win this one. So, sorry.

Chris: But I do think it’s interesting that… it’s starting to be like, maybe there actually aren’t servers involved sometimes. I would think one of the things that locked serverless in as a concept was AWS Lambda. They were kind of the first on the scene. A lambda is like a function that you give to AWS and it puts it in the magical sky and then… it has a URL, and you can hit it and it will run that function and return something if you want it to. You know? That’s just HTTP or whatever. That’s how it works, which… the first time you hear that, you’re like, “Why? I don’t care.” But then, there’s some obvious things to it. It could know my API keys that nobody else has access to. That’s why you run back-end to begin with, is that it knows secret stuff that doesn’t have to be in the JavaScript on the client side. So if it needs to talk to a database, it can do that. It can do that securely without having to expose API keys elsewhere. Or even where that data is or how it gets it, it’s…

Chris: So that’s pretty cool. I can write a function that talks to a database, get some data, returns that. Cool. So, Lambda is that, but AWS being AWS, you have to pick a region. You’re like, “I don’t know. Where should it be, Virginia? Oregon? Should I pick the Australia one? I don’t know.” They have 20, 30. I don’t even know how many they have these days, but even lambdas had regions. They, I think, these days have Lambda@Edge, which means it’s all of the regions, which is kind of cool. But they were first, and now everybody’s got something like Lambda. All the cloud services. They want some kind of service in this world. One of them is CloudFlare. CloudFlare has Workers. They have way more locations than AWS has, but they executed it kind of at a different time too… the way a CloudFlare Worker… it’s similar to a lambda in that you can run Node. You can run JavaScript. You can run a number of other languages too, but… I think of this stuff largely, the most interesting language is JavaScript, just because of the prevalence of it.

Chris: It happens just at the CDN level, which I guess is a server, but I tend to not think of CDNs as a server. Not as obviously as something else. It’s starting to feel even more serverless-y lately. Is a CDN a server? I mean, I guess it’s a computer somewhere, but it feels like even less server-y.

Drew: It feels like, yes, a CDN may be a server, but it’s the most sort of minimal version of a server. It’s like a thin server, if you like.

Chris: Yeah. Sure.

Drew: All right. I’ve heard it said… I can’t remember the source to credit, unfortunately, but I’ve heard serverless described as being “like using a ride-sharing service like Uber or Lyft” or whatever. You can be carless and not own a car, but that doesn’t mean you never use a car.

Chris: Yeah, it doesn’t mean cars don’t exist. Mm, that’s nice.

Drew: You just summon one when you need it, but at the same time, you’re not paying the upfront purchase cost of a car. You’re not paying maintenance or fuel or-

Chris: Right, and the pricing makes sense, too, right? That’s nice. That’s a nice analogy, I think. And then, because it’s at the CDN level too, it just intercepts HTTP requests that are already happening, which means you don’t ask it… you don’t send a request to it and it sends a request back. It’s just happening during the request naturally, which also makes it feel less server-y. I don’t know, it’s interesting. It’s interesting for sure. So that’s a big deal, though, that you brought up the pricing thing. That you only pay for what you use. That’s significant too, because… let’s say, you’re a back-end dev, who’s used to spinning up servers their whole life. And they run the costs, “I need this kind of server with this kind of memory and this kind of CPU and these kind of specs. And this is how much it’s going to cost.” Serverless comes along and chops the head off of that pricing.

Chris: So, even if you’re a back-end dev who just doesn’t like this that much, that they’re just not into it, like your skill set is just what it is over the years, you compare the price and you’re like, “What? I could be paying 1% of what I was paying before?” You are not allowed to not care about that, right? If you’re this back-end dev that’s paying a hundred times more for their service than they need to be paying, you’re just kind of bad at your job then. Sorry to say. This has come along and this has shattered pricing in a lot of ways. You have to care about that. And it’s kind of cool that somebody else is… It’s not like you don’t have to worry about security at all, but it’s not your server. You don’t have… your lambda or cloud function, or your worker, or whatever, isn’t sitting on a server that’s right next to some really sensitive data on your own network. It’s not right next to your database.

Chris: If somebody writes code that somehow tries to eject itself from the worker or the lambda, or whatever, and tries to get access to other things in their way, there’s nothing there to get. So the security’s a big deal too, so again, if that’s your job as the server admin, is to deal with the security of this thing. Running it, running certain things in Lambda, you just get some natural security from it, which is great. So, it’s way cheaper. It’s way more secure. It encourages this small, modular architecture, which can be a good idea. It seems to be domino after domino of good ideas here. That’s why it’s notable. You know?

Drew: Yeah, I mean, traditionally with a server based architecture that we’ve been running for decades on the web, you have a web server that you run yourself. It holds your front-end code, your back-end code, your database and everything. Then you need to maintain that and keep it running and pay the bills, and even if it’s not being used, it’s there clocking up bills. The user would make a request and it would build all that HTML query stuff from the database, send it all down the line to the browser. That process works. It’s how loads of things are built. It’s probably the majority of how the web is built. It’s how things like WordPress work. Is this really a problem that we need to solve? I mean, we’ve talked about costs a little bit. What are the other sort of problems with that, that we’re… that we need to address, and that serverless might help us with?

Chris: Yeah, the problems with the old school approach. Yeah, I don’t know, maybe there isn’t any. I mean, I’m not saying the whole web needs to change their whole… the whole thing overnight. I don’t know. Maybe it doesn’t really, but I think it opens up doors. It just seems like, when good ideas arrive like this, they just slowly change how the web operates at all. So, if there’s some CMS that is built in some way that expects a database to be there, it means that maybe the hosts of the future will start leveraging this in interesting ways. Maybe it feels to you like it’s still just a traditional server, but the hosts themselves have farmed it out, how they operate, to serverless architectures. So you don’t even really know that that’s happening, but they’ve found a way to slash their costs by hosting the stuff that you need in serverless ways. Maybe you don’t even need to care as a developer, but at a meta level, that’s what’s happening. Maybe. I don’t know.

Chris: It also doesn’t mean that… Databases are still there. If it turns out that architecturally having a relational database is the correct way to store that data, great. I mention that because this world of Serverless is kind of growing up at the same time that JAMstack is. And JAMstack is this architecture that’s, “You should be serving your website off of static hosts, that run nothing at all except for…” They’re like little CDNs. They’re like, “I can do nothing. I don’t run PHP. I don’t run Ruby. I run nothing. I run on a tiny little web server that’s just designed to serve static files only.”

Chris: “And then, if you need to do more than that, if you need to pull data from a relational database, then please do it at some other time, not at the server time. You can either do it in a build process ahead of time, and pull that stuff out of the database, pre-build static files and I’ll serve those, or do it at runtime.” Meaning you get this shell of a document, and then it makes a JavaScript request to get some data and prefills it then. So you do it ahead of time or after time, but it doesn’t mean, “Don’t use a relational database.” It just means, “Don’t have the server generate it at the time of the request of the document,” which is a… I don’t know, it’s a little bit of a paradigm shift.
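The “shell of a document” pattern Chris mentions can be sketched in a few lines of browser JavaScript. The `/api/products` endpoint and the `#products` element are hypothetical names, not anything from the episode; the rendering is split into a pure function so the idea is easy to see:

```javascript
// Build markup from data (pure function, so it's trivial to test).
function renderProducts(products) {
  return products.map((p) => `<li>${p.name}: $${p.price}</li>`).join("");
}

// At runtime, the statically served "shell" page asks a serverless
// function (or any JSON API) for data, then fills itself in.
// "/api/products" and "#products" are illustrative names only.
async function hydrateProducts() {
  const res = await fetch("/api/products");
  const products = await res.json();
  document.querySelector("#products").innerHTML = renderProducts(products);
}
```

The static host serves only the shell; the data arrives after the fact, “after time” in Chris’s phrasing.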

Chris: It’s not just JAMstack either. We’re also living in the time of JavaScript frameworks. We’re living in a time where it’s starting to be a little more expected that the way that a JavaScript application boots up, is that it mounts some components, and as those components mount, it asks for the data that it needs. And so, it can be kind of a natural fit for something like a React website to be like, “Well, I’ll just hit a serverless function to cough up the data that it needs. It hits some JSON API essentially. I get the JSON data that I need and I construct myself out of that data, and then I render onto the page.” Now, whether that’s good or bad for the web, it’s like, “I don’t know. Too bad. Ship has sailed. That’s how a lot of people are building sites.” It’s just client rendered things. So, serverless and modern JavaScript kind of go hand in hand.

Drew: I suppose you don’t have to wholesale… be looking at one architecture or another. There’s an area in the middle where parts of an infrastructure might be more traditional and parts could be serverless, I’m guessing?

Chris: Yeah. Well, they’re trying to tell you that anyway. Anybody that wants to sell you any part of their architecture is like, “You don’t have to buy in all right now. Just do it a little bit.” Because of course, they want you to dip your toe into whatever they’re selling, because once you dip the toe, the chances that you splash yourself into the pool is a lot higher. So, I think that… it’s not a lie, though, necessarily, although I find a little less luck in… I don’t want my stack to be a little bit of everything. I think there’s some technical debt there that I don’t always want to swallow.

Drew: Mm (affirmative).

Chris: But it’s possible to do. I think the most quoted one is… let’s say I have a site that has an eCommerce element to it, which means… and let’s say large scale eCommerce, so 10,000 products or something, that this JAMstack architecture hasn’t gotten to the point where that’s always particularly efficient to rebuild that statically. So, the thinking goes, “Then don’t.” Let that part kind of hydrate naturally with… hit serverless functions and get the data that it needs, and do all that. But the rest of the site, which isn’t… there’s not as many pages, there’s not as much data, you could kind of pre-render or whatever. So a little bit of both.

Drew: Of course, plenty of people are dealing with legacy systems that… some old database thing that was built in the 2000s that they may be able to stick a sort of JSON API layer on top of…

Chris: Yeah.

Drew: … and build something more modern, and perhaps serverless, and then still interact with those legacy systems by sort of gluing it altogether in a weird way.

Chris: Yeah. I like that though, isn’t it? Aren’t… most websites already exist. How many of us are totally green-fielding websites? Most of us work on some crap that already exists that needs to be dragged into the future for some reason, because I don’t know, developers want to work faster, or you can’t hire anybody in COBOL anymore, or whatever the story is. You know?

Drew: So terminology wise, we’re talking about JAMstack which is this methodology of running a code pretty much in the browser, serving it from a CDN. So, not having anything dynamic on the server. And then when we talk about serverless, we’re talking about those small bits of functionality that run on their server somewhere else. Is that right? That we were talking about these cloud function kind of-

Chris: Yeah, I mean, they just happen to be both kind of hot ideas right now. So it’s kind of easy to talk about one and talk about the other. But they don’t necessarily need to be together. You could run a JAMstack site that has nothing to do with serverless anything. You’re just doing it, you just pre-build the site and run it, and you can use serverless without having to care about JAMstack. In fact, CodePen does nothing JAMstack at all. Not that we want to talk about CodePen necessarily, but it’s a Ruby on Rails app. It runs on a whole bunch of AWS EC2 instances and a variety of other architecture to make it happen. But we use serverless stuff whenever we can for whatever we can, because it’s cheap and secure, and just a nice way to work. So, no JAMstack in use at all but serverless all over the place.

Drew: That’s quite interesting. What sort of tasks are you putting serverless to on CodePen?

Chris: Well, there’s a whole bunch of things. One of them is, I think, hopefully fairly obvious is, I need… the point of CodePen is that you write HTML, CSS and JavaScript in the browser and it renders it in front of you, right? But you can pick pre-processor languages as well. Let’s say you like Sass. You turn Sass on in the CSS, and you write Sass. Well, something has to process the Sass. These days, Sass is written in Dart or something.

Chris: Theoretically, you could do that in the client. But these libraries that do pre-processing are pretty big. I don’t think I want to ship the entire Sass library to you, just to run that thing. I don’t want to… it’s just not, that’s not the right architecture for this necessarily. Maybe it is down the road, I mean, we could talk about offline crap, yada, yada, Web Workers. There’s a million architectural things we could do. But here’s how it does work now, is there’s a lambda. It processes Sass. It has one tiny, tiny, tiny, little job.

Chris: You send it this blob of Sass and it sends you stuff back, which is the processed CSS, maybe a source map, whatever. It has one tiny little job and we probably pay for that lambda, like four cents or something. Because lambdas are just incredibly cheap and you can hammer it too. You don’t have to worry about scale. You just hit that thing as much as you want and your bill will be astonishingly cheap. There are moments where serverless starts to cross that line of being too expensive. I don’t know what that is, I’m not the master of stuff like that. But generally, any serverless stuff we do, we basically… all nearly count as free, because it’s that cheap. But there’s one for Sass. There’s one for Less. There’s one for Babel. There’s one for TypeScript. There’s one for… All those are individual lambdas that we run. Here’s some code, give it to the lambda, it comes back, and we do whatever we’re going to do with it. But we use it for a lot more than that, even recently.
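A “one tiny job” function like the Sass one can be sketched like this. `compileSass` here is a stand-in (the real CodePen lambda would invoke the actual Sass compiler at that point), and the event/response shape follows the common Lambda HTTP convention; the details are illustrative, not CodePen’s actual code:

```javascript
// Stand-in for a real Sass compiler call; a real function would invoke
// the actual compiler (e.g. the npm "sass" package) here instead.
function compileSass(source) {
  if (source.includes("!error")) throw new Error("Sass syntax error");
  return `/* compiled */\n${source}`; // pretend output
}

// The whole lambda: blob of Sass in, processed CSS (or an error) out.
async function sassHandler(event) {
  const { source } = JSON.parse(event.body || "{}");
  try {
    return { statusCode: 200, body: JSON.stringify({ css: compileSass(source) }) };
  } catch (err) {
    // Broken input becomes a 400 response, not a crashed function.
    return { statusCode: 400, body: JSON.stringify({ error: err.message }) };
  }
}
```

One tiny job, stateless, so you can hammer it at any scale without caring which instance answers.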

Chris: Here’s an example. Every single Pen on CodePen has a screenshot. That’s kind of cool, right? So, the people make a thing and then we need a PNG or a JPEG, or something of it, so that we can… that way when you tweet it, you get the little preview of it. If you share it in Slack, you get the little preview of it. We use it on the website itself to render… instead of an iframe, if we could detect that the Pen isn’t animated, because an image is much lighter than an iframe, so why not use the image? It’s not animated anyway. Just performance gains like that. So each of those screenshots has a URL to it, obviously. And we’ve architected it so that that URL is actually a serverless function. It’s a worker. And so, if that URL gets hit, we can really quickly check if we’ve already taken that screenshot or not.

Chris: That’s actually enabled by CloudFlare Workers, because CloudFlare Workers are not just a serverless function, but they have a data store too. They have this thing called key-value store, so the ID of that, we can just check really quick and it’ll be, “True or false, do you have it or not?” If it’s got it, it serves it. And it serves it over CloudFlare, which is super fast to begin with. And then gives you all this ability too. Because it’s an image CDN, you can say, “Well, serve it in the optimal format. Serve it as these dimensions.” I don’t have to make the image in those dimensions. You just put the dimensions in the URL and it comes back as that size, magically. So that’s really nice. If it doesn’t have it, it asks another serverless function to make it really quick. So it’ll make it and then it’ll put it in a bucket somewhere… because you have to have an origin for the image, right? You have to actually host it somewhere usually. So we put it in an S3 bucket real quick and then serve it.

Chris: So there’s no queuing server, there’s no nothing. It’s like serverless functions manage the creation, storage and serving of these images. And there’s like 50 million or 80 million of them or something. It’s a lot, so it handles that as scale pretty nicely. We just don’t even touch it. It just happens. It all happens super fast. Super nice.
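The check-cache-or-generate flow Chris walks through can be sketched as one small function. The `kv` and `generate` arguments are injected here so the control flow is visible without real Cloudflare Worker KV bindings; in an actual Worker, `kv` would be a bound KV namespace and `generate` another serverless function:

```javascript
// Sketch of the screenshot flow: check a key-value store, serve the
// cached image if present; otherwise generate it, store it, serve it.
// kv and generate are injected stand-ins for Worker KV and a second
// serverless function.
async function screenshotFor(penId, kv, generate) {
  const cached = await kv.get(penId);
  if (cached) return { body: cached, fromCache: true };

  const image = await generate(penId); // e.g. another serverless function
  await kv.put(penId, image); // the real flow also uploads to an S3-style bucket
  return { body: image, fromCache: false };
}
```

No queue, no server to babysit: the first request pays the generation cost, every later one is a fast KV hit.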

Drew: I guess it… well, a serverless function is ideally going to suit a task that needs very little knowledge of state of things. I mean, you mentioned CloudFlare’s ability to store key-value pairs to see if you’ve got something cached already or not.

Chris: Yeah. That’s what they’re trying to solve, though, with those. Those key-value pairs, is that… I think that traditionally was true. They’re like, “Avoid state in the thing,” because you just can’t count on it. And CloudFlare Workers are being like, “Yeah, actually, you can deal with state, to some degree.” It’s not as fancy as a… I don’t know, it’s key values, so it’s a key in a value. It’s not like a nested, relational fancy thing. So there’s probably some limits to that. But this is baby days for this. I think that stuff’s going to evolve to be more powerful, so you do have some ability to do some state-like stuff.

Drew: And sometimes the limitation, that sort of limited ability to maintain state, or the fact that you have no… you want to maintain no state at all, kind of pushes you into an architecture that gives you this sort of… Well, when we talk about the software philosophy of “Small Pieces Loosely Joined”, don’t we?

Chris: Mm (affirmative).

Drew: Where each little component does one thing and does it well. And doesn’t really know about the rest of the ecosystem around it. And it seems that really applies to this concept of serverless functions. Do you agree?

Chris: Yeah. I think you could have a philosophical debate whether that’s a good idea or not. You know? I think some people like the monolith, as it were. I think there’s possible… there’s ways to overdo this and to make too many small parts that are too hard to test altogether. It’s nice to have a test that’s like, “Oh, I wonder if my Sass function is working. Well, let’s just write a little test for it and make sure that it is.” But let’s say, what matters to the user is some string of seven of those. How do you test all seven of them together? I think that story gets a little more complicated. I don’t know how to speak super intelligently to all that stuff, but I know that it’s not necessarily that, if you roll with all serverless functions that’s automatically a better architecture than any other architecture. I like it. It reasons out to me nicely, but I don’t know that it’s the end-all be-all of all architectures. You know?

Drew: To me, it feels extremely web-like, in that… this is exactly how HTML works, isn’t it? You deliver some HTML and the browser will then go and fetch your images and fetch your JavaScript and fetch your CSS. It seems like it’s an expansion of that -

Chris: It’s nice.

Drew: … sort of idea. But, one thing we know about the web, is it’s designed to be resilient because network’s fragile.

Chris: Mm (affirmative).

Drew: How robust is the sort of serverless approach? What happens if something… if one of those small pieces goes away?

Chris: That would be very bad. You know? It would be a disaster. Your site would go down just like any other server, if it happens to go down, I guess.

Drew: Are there ways to mitigate that, that are particularly -

Chris: I don’t know.

Drew: … suited to this sort of approach, that you’ve come across?

Chris: Maybe. I mean, like I said, a really super fancy robust thing might be like… let’s say you visit CodePen and let’s say that there’s a JavaScript implementation of Sass and we noticed that you’re on a fairly fast network and that you’re idle right now. Maybe we’ll go grab that JavaScript and we’ll throw it in a service worker. Then, if we detect that the lambda fails, or something, or that you have this thing installed already, then we’ll hit the service worker instead of the lambda, and service workers are able to work offline. So, that’s kind of nice too. That’s interesting. I mean, they are the same language-ish. Service workers are JavaScript and a lot of Cloud functions are JavaScript, so there’s some… I think that’s a possibility, although that… it’s just, that’s some serious technical that… It just scares me to have this chunk of JavaScript that you’ve delivered to however many thousands of users, that you don’t necessarily know what they have, and what version of it they have. Eww, but that’s just my own scarediness. I’m sure some people have done a good job with that type of thing.
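The fallback idea Chris sketches, try the lambda first and fall back to a locally cached implementation if the network call fails, reduces to a small try/catch. Both compilers are injected and hypothetical; in practice `remoteCompile` would hit the serverless function and `localCompile` would be the JavaScript bundle a service worker pre-fetched:

```javascript
// Try the serverless function first; if it's unreachable, fall back to
// a client-side implementation (e.g. one a service worker cached).
// Both dependencies are injected stand-ins for illustration.
async function compileWithFallback(source, remoteCompile, localCompile) {
  try {
    return await remoteCompile(source); // the lambda / cloud function
  } catch (_err) {
    // Network or lambda failure: use the local implementation instead.
    return localCompile(source);
  }
}
```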

Chris: I actually don’t know. Maybe you know some strategies that I don’t, on resiliency of serverless.

Drew: I guess there’s a failure mode, a style of failure, that could happen with serverless functions, where you run a function once and it fails, and you can run it a second time immediately afterwards and it would succeed, because it might hit a completely different server. Or whatever the problem was with that run may not exist on a second request. The issues of an entire host being down is one thing, but maybe there are… you have individual problems with the machine. You have a particular server where its memory has gone bad, and it’s throwing a load of errors, and the first time you hit it, it’s going to fail. Second time, that problem might have been routed around.

Chris: Companies that tend to offer this technology, you have to trust them, but they also happen to be the type of companies that… this is their pride. This is the reason why people use them is because they’re reliable. I’m sure people could point to some AWS outages of the past, but they tend to be a little rare, and not super common. If you were hosting your own crap, I bet they got you beat from an SLA percentage kind of level. You know? So it’s not like, “Don’t build in a resilient way,” but generally the type of companies that offer these things are pretty damn reliable. The chances of you going down because you screwed up that function are a lot higher than because their architecture is failing.

Drew: I suppose, I mean, just like anything where you’re using an API or something that can fail, is just making sure you structure your code to cope with that failure mode, and to know what happens next, rather than just throwing up an error to the user, or just dying, or what have you. It’s being aware of that and asking the user to try again. Or trying again yourself, or something.

Chris: Yeah, I like that idea of trying more than once, rather than just being, “Oh no. Fail. Abort.” “I don’t know, why don’t you try again there, buddy?”
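The “try again there, buddy” idea is a tiny retry wrapper. This is a generic sketch, not anything from CodePen; the attempt count and delay are arbitrary illustrative defaults:

```javascript
// Retry a flaky async call a few times before giving up. A transient
// failure (bad instance, network blip) often succeeds on the next
// invocation, since it may land on a different machine entirely.
async function withRetry(fn, attempts = 3, delayMs = 200) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Back off a little more on each failed attempt.
      if (i < attempts - 1) await new Promise((r) => setTimeout(r, delayMs * (i + 1)));
    }
  }
  throw lastErr; // every attempt failed: surface the error to the caller
}
```

Wrapping a serverless call in something like this turns “fail, abort” into “fail, shrug, try once more”.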

Drew: So I mean, when it comes to testing and development of serverless functions, sort of cloud functions, is that something that can be done locally? Does it have to be done in the cloud? Are there ways to manage that?

Chris: I think there are some ways. I don’t know if the story is as awesome. It’s still a relatively new concept, so I think that that gets better and better. But from what I know, for one thing, you’re writing a fairly normal Node function. Assuming you’re using JavaScript to do this, and I know that on Lambda specifically, they support all kinds of stuff. You can write a fricking PHP Cloud Function. You can write a Ruby Cloud Function. So, I know I’m specifically talking about JavaScript, because I have a feeling that most of these things are JavaScript. Even no matter what language it is, I mean, you can go to your command line locally and execute the thing. Some of that testing is… you just test it like you would any other code. You just call the function locally and see if it works.

Chris: It’s a little different story when you’re talking about an HTTP request to it, that’s the thing that you’re trying to test. Does it respond to the request properly? And does it return the stuff properly? I don’t know. The network might get involved there. So you might want to write tests at that level. That’s fine. I don’t know. What is the normal story there? You spin up some kind of local server or something that serves it. Use Postman, I don’t know. But there’s… Frameworks try to help too. I know that the serverless “.com”, which is just terribly confusing, but there’s literally a company called Serverless and they make a framework for writing the serverless functions that helps you deploy them.

Chris: So if you npm install serverless, you get their framework. And it’s widely regarded as very good, because it’s just very helpful, but they don’t have their own cloud or whatever. You write these and then it helps you get them to a real lambda. Or it might work with multiple cloud providers. I don’t even know these days, but their purpose of existing is to make the deployment story easier. I don’t know what… AWS is not renowned for their simplicity. You know? There’s all this world of tooling to help you use AWS and they’re one of them.

Chris: They have some kind of paid product. I don’t even know what it is exactly. I think one of the things they do is… the purpose of using them is for testing, is to have a dev environment that’s for testing your serverless function.

Drew: Yeah, because I guess, that is quite a big part of the workflow, isn’t it? If you’ve written your JavaScript function, you’ve tested it locally, you know it’s going to do the job. How do you actually pick which provider it’s going to go into and how do you get it onto that service? Now, I mean, that’s a minefield, isn’t it?

Chris: Yeah. I mean, if you want to use no tooling at all, I think they have a really… like AWS, specifically, has a really rudimentary GUI for the thing. You can paste the code in there and hit save and be like, “Okay, I guess it’s live now.” That’s not the best dev story, but I think you could do it that way. I know CloudFlare workers have this thing called Wrangler that you install locally. You spin it up and it spins up a fake browser on the top and then dev tools below. Then you can visit the URL and it somehow intercepts that and runs your local cloud function against it. Because one of the interesting things about workers is… you know how I described how it… you don’t hit a URL and then it returns stuff. It just automatically runs when you… when it intercepts the URL, like CDN style.

Chris: So, one of the things it can do is manipulate the HTML on the way through. The worker, it has access to the complete HTML document. They have a jQuery-esque thing that’s like, “Look for this selector. Get the content from it. Replace it with this content. And then continue the request.” So you can mess with code on the way through it. To test that locally, you’re using their little Wrangler tool thing to do that. Also, I think the way we did it was… it’s also a little dangerous. The second you put it live, it’s affecting all your web traffic. It’s kind of a big deal. You don’t want to screw up a worker. You know? You can spin up a dev worker that’s at a fake subdomain, and because it’s CloudFlare, you can… CloudFlare can just make a subdomain anyway. I don’t know. It’s just kind of a nice way to do a… as you’re only affecting sub-domain traffic, not your main traffic yet. But the subdomain’s just a mirror of a production anyway, so that’s kind of a… that’s a testing story there.

Chris: It brings up an interesting thing, though, to me. It’s like… imagine you have two websites. One of them is… for us it’s like a Ruby on Rails app. Whatever. It’s a thing. But we don’t have a CMS for that. That’s just like… it’s not a CMS, really. I think there’s probably Ruby CMSs, but there’s not any renowned ones. You know? It seems like all the good CMSs are PHP, for some reason. So, you want a quality CMS. Drew, you’ve lived in the CMS market for a long time -

Drew: Absolutely.

Chris: … so you know how this goes. Let’s say you want to manage your sites in Perch or whatever, because it’s a good CMS and that’s the proper thing to use to build the kind of pages you want to build. But you don’t want to run them on the same server. Unless you want to manage the pages on one site, but show them on another site. Well, I don’t know, there’s any number of ways to do that. But one JavaScript way could be, “Okay, load the page. There’s an empty div there. Run some JavaScript. Ask the other site for the content of that page and then plunk it out on the new page.” That’s fine, I guess, but now you’re in a client side rendered page. It’s going to be slow. It’s going to have bad SEO, because… Google will see it eventually, but it takes 10 days or something. It’s just a bad story for SEO. It’s not very resilient, because who knows what’s going to happen in the network. It’s not the greatest way to do this kind of “content elsewhere, content on site B, show page of site A”, situation.

Chris: You could also do it on the server side, though. Let’s say you had… Ruby is capable of making a network request too, but that’s even scarier because then if something fails on the network, the whole page could die or something. It’s like a nervous thing. I don’t love doing that either. But we did this just recently with a worker, in that we… because the worker’s JavaScript, it can make a fetch request. So, it fetches site A, it finds this div on the page, and then it goes and asks site B for the content. Gets the content. Plugs it into that div, and serves the page before it gets anything. So it looks like a server rendered page, but it wasn’t. It all happened at the… on the edge, at the worker level, at the serverless level.

Chris: So it’s kind of cool. I think you can imagine a fetch request on the browser probably takes, I don’t know, a second and a half or something. It probably takes a minute to do it. But because these are… site B is hosted on some nice hosting and Cloudflare has some… who knows what kind of super computers they use to do it. They do. Those are just two servers talking to each other, and that fetch request happens just so super duper, duper fast. It’s not limited to the internet connection speed of the user, so that little request takes like two milliseconds to get that data. So it’s kind of this cool way to stitch together a site from multiple sources and have it feel like, and behave like, a server rendered page. I think there’s a cool future to that.
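The worker-level stitching Chris describes might look roughly like this. The URLs and the placeholder div id are hypothetical, and the plain string replacement is a stand-in for whatever HTML rewriting a real worker would do:

```javascript
// Sketch: fetch the shell page (site A) and a content fragment (site B),
// then fill a placeholder div before the response ever reaches the browser.
// The placeholder id and both URLs are hypothetical.
const PLACEHOLDER = '<div id="content-from-site-b"></div>';

function stitch(shellHtml, fragmentHtml) {
  // Plain string replacement keeps the sketch dependency-free.
  return shellHtml.replace(PLACEHOLDER, fragmentHtml);
}

async function handleRequest(fetchImpl) {
  // Both requests run server-to-server at the edge, so they are not
  // limited by the visitor's connection speed.
  const [shell, fragment] = await Promise.all([
    fetchImpl('https://site-a.example/page').then((r) => r.text()),
    fetchImpl('https://site-b.example/fragment').then((r) => r.text()),
  ]);
  return stitch(shell, fragment);
}
```

To the browser, the result is indistinguishable from a server-rendered page: it receives one complete HTML document.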

Drew: Are there any sort of conventions that are sort of springing up around serverless stuff. I’m sort of thinking about how to architect things. Say I’ve got something where I want to do two sort of requests to different APIs. I want to take in a postal address and geocode it against one, and then take those coordinates and send that to a florist who’s going to flower bomb my front yard or something. How would you build that? Would you do two separate things? Or would you turn that into one function and just make the request once from the browser?

Chris: Mm (affirmative). That’s a fascinating question. I’d probably have an architect function or something. One function would be the one that’s in charge of orchestrating the rest of them. It doesn’t have to be, your website is the hub and it only communicates to this array of single sources. Serverless functions can talk to other serverless functions. So I think that’s somewhat common to have kind of an orchestrator function that makes the different calls and stitches them together, and returns them as one. I think that is probably smart and faster, because you want servers talking to servers, not the client talking to a whole bunch of servers. If it can make one request and get everything that it needs, I think that’s probably generally a good idea-

Drew: Yeah, that sounds smart. Yep.

Chris: But I think that’s the ultimate thing. You get a bunch of server nerds talking, they’ll talk about the different approaches to that exact idea in 10 different ways.
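A minimal sketch of such an orchestrator function. The geocode and orderFlowers calls are hypothetical stand-ins, injected as parameters so the sketch stays self-contained:

```javascript
// Sketch of an "orchestrator" serverless function: one endpoint the browser
// calls, which fans out to the other APIs server-side and returns one
// combined response. geocode/orderFlowers are hypothetical stand-ins.
async function orchestrate(address, { geocode, orderFlowers }) {
  const coords = await geocode(address);    // API call #1: address -> coordinates
  const order = await orderFlowers(coords); // API call #2, fed by the result of #1
  return { coords, order };                 // one payload back to the client
}
```

The client makes a single request and the server-to-server hops stay fast, exactly the trade-off discussed above.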

Drew: Yeah. No, that sounds pretty smart. I mean, you mentioned as well that this approach is ideal if you’re using APIs where you’ve got secret information. You’ve got API keys or something that you don’t want to live in the client. Because I don’t know, maybe this florist API charges you $100 dollars every time flower bomb someone.

Chris: Easily.

Drew: You can basically use those functions to almost proxy the request and add in the secret information as it goes, and keep it secret. That’s a viable way to work?

Chris: Yeah, yeah. I think so. I mean, secrets are, I don’t know, they’re interesting. They’re a form of buy in I think to whatever provider you go with, because… I think largely because of source control. It’s kind of like, you could just put your API key right in the serverless function, because it’s just going to a server, right? You don’t even have to abstract it, really. The client will never see that code that executes, but in order for it to get there, there’s probably a source control along the way. It’s probably like you commit to master, and then master… then some kind of deployment happens that makes that thing go to the serverless function. Then you can’t put your API key in there, because then it’s in the repo, and you don’t put your API keys in repos. That’s good advice. Now there’s stuff. We’ve just done… at CodePen recently, we started using this git-crypt thing, which is an interesting way to put keys safely into your repos, because it’s encrypted by the time anybody’s looking at that file.

Chris: But only locally they’re decrypted, so they’re useful. So it’s just kind of an interesting idea. I don’t know if that helps in this case, but usually, cloud providers of these things have a web interface that’s, “Put your API keys here, and we’ll make them available at runtime of that function.” Then it kind of locks… it doesn’t lock you in forever but it kind of is… it’s not as easy to move, because all your keys are… you put in some input field and some admin interface somewhere.

Drew: Yeah, I think that’s the way that Netlify manage it.

Chris: They all do, you know?

Drew: Yeah. You have the secret environment variables that you can set from the web interface. That seems to work quite nicely.

Chris: Yeah, right. But then you got to leave… I don’t know, it’s not that big of a deal. I’m not saying they’re doing anything nefarious or anything. How do you deal with those secrets? Well, it’s a hard problem. So they kind of booted it to, I don’t know, “Just put them in this input field and we’ll take care of it for you, don’t worry about it.”

Drew: Is there anything that you’ve seen that stands out as an obvious case for things that you can do with serverless, that you just couldn’t do with a traditional kind of serverfull approach? Or is it just taking that code and sort of almost deploying it in a different way?

Chris: It’s probably mostly that. I don’t know that it unlocks any possibility that you just absolutely couldn’t run it any other way. Yeah, I think that’s a fair answer, but it does kind of commoditize it in an interesting way. Like, if somebody writes a really nice serverless function… I don’t know that this exists quite yet, but there could be kind of a marketplace, almost, for these functions. Like, I want a really good serverless function that can take a screenshot. That could be an open source project that lots of eyeballs around, that does a tremendously good job of doing it and solves all these weird edge cases. That’s the one I want to use. I think that’s kind of cool. You know? That you can kind of benefit from other people’s experience in that way. I think that will happen more and more.

Drew: I guess it’s the benefit that we talked about, right at the top, of enabling people who write JavaScript and may have written JavaScript only for the front-end, to expand and use those skills on the back-end as well.

Chris: Yeah, yeah. I think so, I think that’s… because there’s moments like… you don’t have to be tremendously skilled to know what’s appropriate and what’s not for a website. Like, I did a little tutorial the other week, where there was this… Glitch uses these… when you save a Glitch, they give you a slug for your thing that you built, that’s, “Whiskey, tango, foxtrot. 1,000.” It’s like a clever little thing. The chances of it being unique are super high, because I think they even append a number to it or something too. But they end up being these fun little things. They open source their library that has all those words in it, but it’s like hundreds of thousands of words. The file is huge. You know? It’s megabytes large of just a dictionary of words. You probably learn in your first year of development, “Don’t ship a JavaScript file that’s megabytes of a dictionary.” That’s not a good thing to ship. You know? But Node doesn’t care. You can ship hundreds of them. It’s irrelevant to the speed on a server.

Drew: Yeah.

Chris: It doesn’t matter on a server. So, I could be like, “Hmm, well, I’ll just do it in Node then.” I’ll have a statement that says, “Words equal require words,” or whatever, and a note at the top, “Have it randomize a number. Pull it out of the array and return it.” So that serverless function is eight lines of code with a package.json that pulls in this open source library. And then my front-end code, there’s a URL to the serverless function. It hits that URL. The URL returns one word or a group of words or whatever. You build your own little API for it. And now, I have a really kind of nice, efficient thing. What was nice about that is, it’s so simple. I’m not worried about the security of it. I don’t… you know?

Chris: It’s just… a very average or beginner JavaScript developer, I think, can pull that off, which is cool. That’s an enabling thing that they didn’t have before. Before, they were like, “Well, here’s a 2MB array of words.” “Oh, I can’t ship that to the client.” “Oh, you’ll just shut down then.” You might hit this wall that’s like, “I just can’t do that part then. I need to ask somebody else to help me with that or just not do it or pick more boring slugs or some…” It’s just, you have to go some other way that is a wall to you, because you couldn’t do it. And now, you’re, “Oh, well, I’ll just…” Instead of having that in my script slash, or in my source slash scripts folder, I’ll put it in my functions folder instead.

Chris: You kind of like moved the script from one folder to the other. And that one happens to get deployed as a serverless function instead. How cool is that? You know? You’re using the same exact skill set, almost. There’s still some rough edges to it, but it’s pretty close.

Drew: It’s super cool. You’ve put together a sort of little micro site all about these ideas, haven’t you?

Chris: Yeah. I was a little early to the game. I was just working on it today, though, because… it gets pull requests. The idea… well, it’s at and… there’s a dash in CSS-Tricks, by the way. So it’s a subdomain of CSS-Tricks, and I built it serverlessly too, so this is… CSS-Tricks is like a WordPress site, but this is a static site generator site. All the content of it is in the GitHub repo, which is open-source. So if you want to change the content of the site, you can just submit a pull request, which is nice because there’s been a hundred or so of those over time. But I built all the original content.

Drew: It’s a super useful place, because it lists… If you’re thinking, “Right, I want to get started with serverless functions,” it lists all the providers who you could try it and…

Chris: That’s all it is, pretty much, is lists of technology. Yeah.

Drew: Which is great, because otherwise, you’re just Googling for whatever and you don’t know what you’re finding. Yeah, it’s lists of API providers that help you do these sorts of things.

Chris: Forms is one example of that, because… so the minute that you choose to… let’s say, you’re going to go JAMstack, which I know that’s not necessarily the point of this, but you see how hand in hand they are. All of a sudden, you don’t have a PHP file or whatever to process that form with. How do you do forms on a JAMstack site? Well, there’s any number of ways to do it. Everybody and their sister wants to help you solve that problem, apparently. I think Netlify coined the word JAMstack, so they try to help you naturally, but you don’t have to use them.

Chris: In fact, I was so surprised putting this site together. Let’s see. There’s six, nine, twelve, fifteen, eighteen, twenty one, twenty two services out there, that want to help you serverlessly process your forms on this site right now. If you want to be the 23rd, you’re welcome to it, but you have some competition out there. So the idea behind this is that you write a form in HTML, like literally a form element. And then the action attribute of the form, it can’t point anywhere internally, because there’s nothing to point to. You can’t process, so it points externally. It points to whatever they want you to point it to. They’ll process the form and then they tend to do things that you’d expect them to, like send an email notification. Or send a Slack thing. Or then send it to Zapier and Zapier will send it somewhere else. They all have slightly different feature sets and pricing and things, but they’re all trying to solve that problem for you, like, “You don’t want to process your own forms? No problem. We’ll process it for you.”

Drew: Yeah, it’s a super useful resource. I’d really recommend everyone check it out. It’s So, I’ve been learning all about serverless. What have you been learning about lately, Chris?

Chris: Well, I’m still very much in this world too and learning about serverless stuff. I had an idea to… I used to play this online role playing game ages ago. I just recently discovered that it’s still alive. It’s a text based medieval fantasy kind of game. I played it when AOL was a thing, because AOL wanted to have these games that you had to be logged on to play it, because they wanted you to spend hours and hours on AOL, so they could send you these huge bills, which was, I’m sure, why they did so well at some point.

Drew: So billing by the second. Yeah.

Chris: Yeah. So games was big for them. If they could get you playing games with other people on there. So this game kind of… it didn’t debut there, but it moved to AOL, because I’m sure they got a juicy deal for it, but it was so… I mean, it’s just, couldn’t possibly be nerdier. You’re a dwarven mage and you get rune staff from your leather sheath. And you type commands into it like a terminal. Then the game responds to you. I played that game for a very long time. I was very into it. I got into the community of it and the spirit of it. It was kind of a… it was like I was just alone by myself at my computer, but yet I look back on that time in my life, and be like, “That was a wonderful time in my life.” I was really… I just liked the people and the game and all that. But then I grew up and stopped playing it, because life happens to you.

Chris: I only found out recently, because somebody started doing a podcast about it again… I don’t know how I came across it, but I just did. I was like, “This game is alive and well in today’s world, are you kidding me? This text based thing.” And I was more than happy to reactivate and get my old characters back and play it. But only to find out that the clients that they have you download for this game, haven’t evolved at all. They are awful. They almost assume that you’re using Windows. There’s just these terribly cheesy poorly rendering… and it’s text based, you think it’d at least have nice typography. No. So I’m like, “I could be involved. I could write a client for this game. Put beautiful typography in it.” Just modernize the thing, and I think the players of the game would appreciate it, but it felt overwhelming to me. “How can I do it?” But I find some open source projects. One of them is like… you can play the game through an actual terminal window, and it uses some open source libs to kind of make a GUI out of a terminal window.

Drew: Really?

Chris: I don’t know. So that was kind of cool. I was like, “If they wrote that, there must be code in there to how to connect to the game and get it all going and stuff. So at least I have some starter code.” I was trying to go along the app, “Maybe I’ll do it in Flutter or something,” so the final product app would work on mobile phones and, “I could really modernize this thing.” But then I got overwhelmed. I was like, “Ah, this is too big a… I can’t. I’m busy.” But I found another person who had the same idea and they were way further along with it, so I could just contribute on a design level. And it’s been really fun to work on, but I’ve been learning a lot too, because it’s rare for me to jump into a project that’s somebody else’s baby, and I’m just contributing to a little bit, and that has totally different technology choices than I would have ever picked.

Chris: It’s an Electron app. They picked that, which is also kind of a cool way to go too, because it’s my web skills… so I’m not learning anything too weird, and it’s cross-platform, which is great. So, I’ve been learning a lot about Electron. I think it’s fun.

Drew: That’s fascinating. It’s always amazing how little side projects and things that we do for fun, end up being the place where we sometimes learn the most. And learn skills that can then feed back into our sort of daily work.

Chris: That’s the only way I learn things. I’m dragged into something that… I was like, “They’re not…” It’s rendered with a JavaScript library called Mithril, which is… I don’t know if you’ve ever heard of it, but it’s weird. It’s not… it’s almost like writing React without JSX. You have to “create element” and do all these… but it’s supposed to benchmark way better than it… And it actually kind of matters because in this text based game, the text is just flying. There’s a lot of data manipulation, which is like… you’d think this text based game would be so easy for a browser window to run, but it’s actually kind of not. There’s so much data manipulation happening, that you really have to be really… we have to be conscientious about the speed of the rendering. You know?

Drew: That’s fascinating-

Chris: Pretty cool.

Drew: Yeah. If you, dear listener, would like to hear more from Chris, you can find him on Twitter, where he’s @chriscoyier. Of course, CSS-Tricks can be found at and CodePen at But most of all, I recommend that you subscribe to the ShopTalk Show podcast if you haven’t already done so, at Thanks for joining us today, Chris. Do you have any parting words?

Chris: I hope that’s the real URL.

Categories: Design

Better Error Handling In NodeJS With Error Classes

Mon, 08/10/2020 - 03:00
Kelvin Omereshone 2020-08-10T10:00:00+00:00 2020-08-10T12:07:39+00:00

Error handling is one of those parts of software development that doesn’t quite get the attention it deserves. However, building robust applications requires dealing with errors properly.

You can get by in NodeJS without properly handling errors, but due to the asynchronous nature of NodeJS, improper handling of errors will cause you pain soon enough, especially when debugging applications.

Before we proceed, I would like to point out the types of errors we’ll be discussing, and how error classes help in handling them.

Operational Errors

These are errors discovered at run time. Operational errors are not bugs; they occur from time to time because of one or more external factors, like a database server timing out or a user attempting SQL injection by entering SQL queries into an input field.

Below are more examples of operational errors:

  • Failed to connect to a database server;
  • Invalid inputs by the user (server responds with a 400 response code);
  • Request timeout;
  • Resource not found (server responds with a 404 response code);
  • Server returns with a 500 response.

It’s also worthy of note to briefly discuss the counterpart of Operational Errors.

Programmer Errors

These are bugs in the program which can be resolved by changing the code. These types of errors cannot be handled gracefully, because they occur as a result of the code being broken. Examples of these errors are:

  • Trying to read a property on an object that is not defined.
const user = {
  firstName: 'Kelvin',
  lastName: 'Omereshone',
}

console.log(user.fullName) // logs 'undefined' because the property fullName is not defined
  • Invoking or calling an asynchronous function without a callback.
  • Passing a string where a number was expected.

This article is about operational error handling in NodeJS. Error handling in NodeJS is significantly different from error handling in other languages, due to the asynchronous nature of JavaScript and JavaScript’s openness about what can be thrown. Let me explain:

In JavaScript, instances of the Error class are not the only things you can throw. You can literally throw any data type; most other languages don’t allow this openness.

For example, a JavaScript developer may decide to throw a string instead of an Error instance, like so:

// bad
throw 'Whoops :)';

// good
throw new Error('Whoops :)')

You might not see the problem in throwing other data types, but doing so results in a harder time debugging, because you won’t get a stack trace and the other properties the Error object exposes, which are needed for debugging.
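A quick demonstration of the difference:

```javascript
// Throwing a string gives the catch block no stack trace to work with;
// throwing an Error instance does.
function failWithString() { throw 'Whoops :)'; }
function failWithError() { throw new Error('Whoops :)'); }

try { failWithString(); } catch (e) {
  console.log(typeof e, e.stack); // string undefined — nothing to debug with
}

try { failWithError(); } catch (e) {
  console.log(typeof e.stack); // string — a full stack trace, including failWithError
}
```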

Let’s look at some incorrect patterns in error handling, before taking a look at the Error class pattern and why it is a much better way to handle errors in NodeJS.

Bad Error Handling Pattern #1: Wrong Use Of Callbacks

Real-world scenario: Your code depends on an external API requiring a callback to get the result you expect it to return.

Let’s take the below code snippet:

'use strict';

const fs = require('fs');

const write = function () {
  fs.mkdir('./writeFolder');
  fs.writeFile('./writeFolder/foobar.txt', 'Hello World');
}

write();

Before NodeJS 8, the above code was legitimate, and developers would simply fire and forget commands. This means developers weren’t required to provide a callback to such function calls, and therefore could leave out error handling. What happens when writeFolder hasn’t been created? The call to writeFile won’t succeed, and we wouldn’t know anything about it. This might also result in a race condition, because the first command might not have finished when the second command starts; we wouldn’t know that either.

Let’s start solving this problem by solving the race condition. We would do so by giving a callback to the first command mkdir to ensure the directory indeed exists before writing to it with the second command. So our code would look like the one below:

'use strict';

const fs = require('fs');

const write = function () {
  fs.mkdir('./writeFolder', () => {
    fs.writeFile('./writeFolder/foobar.txt', 'Hello World!');
  });
}

write();

Though we solved the race condition, we are not quite done yet. Our code is still problematic: even though we used a callback for the first command, we have no way of knowing whether the folder writeFolder was created or not. If the folder wasn’t created, the second call will fail, but once again we ignored the error. We solve this by…

Error Handling With Callbacks

In order to handle errors properly with callbacks, you must make sure you always use the error-first approach. This means you should first check whether an error was returned from the function before going ahead to use whatever data (if any) was returned. Let’s see the wrong way of doing this:

'use strict';

// Wrong
const fs = require('fs');

const write = function (callback) {
  fs.mkdir('./writeFolder', (err, data) => {
    if (data) fs.writeFile('./writeFolder/foobar.txt', 'Hello World!');
    else callback(err)
  });
}

write(console.log);

The above pattern is wrong because sometimes the API you are calling might not return any value, or might return a falsy value as a valid return value. In that case, you would end up in the error branch even though the call may have succeeded.

The above pattern is also bad because it swallows your error (your callback won’t receive the error even though one occurred). You would have no idea what is happening in your code as a result of this kind of error handling. The right way to write the above code would be:

'use strict';

// Right
const fs = require('fs');

const write = function (callback) {
  fs.mkdir('./writeFolder', (err, data) => {
    if (err) return callback(err)
    fs.writeFile('./writeFolder/foobar.txt', 'Hello World!');
  });
}

write(console.log);

Bad Error Handling Pattern #2: Wrong Use Of Promises

Real-world scenario: You discovered Promises, you think they are way better than callbacks because of callback hell, and you decided to promisify some external API your code base depends upon. Or you are consuming a promise from an external API or a browser API like the fetch() function.

These days we don’t really use callbacks in our NodeJS codebases; we use promises. So let’s reimplement our example code with a promise:

'use strict';

const fs = require('fs').promises;

const write = function () {
  return fs.mkdir('./writeFolder').then(() => {
    fs.writeFile('./writeFolder/foobar.txt', 'Hello world!')
  }).catch((err) => {
    // catch all potential errors
    console.error(err)
  })
}

Let’s put the above code under a microscope: we can see that we are branching off the fs.mkdir promise into another promise chain (the call to fs.writeFile) without even handling that promise call. You might think a better way to do it would be:

'use strict';

const fs = require('fs').promises;

const write = function () {
  return fs.mkdir('./writeFolder').then(() => {
    fs.writeFile('./writeFolder/foobar.txt', 'Hello world!').then(() => {
      // do something
    }).catch((err) => {
      console.error(err);
    })
  }).catch((err) => {
    // catch all potential errors
    console.error(err)
  })
}

But the above would not scale. If we had more promise chains to call, we would end up with something similar to the callback hell that promises were made to solve: our code would keep indenting to the right. We would have promise hell on our hands.

Promisifying A Callback-Based API

Most times, you would want to promisify a callback-based API on your own in order to better handle errors from that API. However, this is not really easy to do. Let’s take the example below to explain why.

function doesWillNotAlwaysSettle(arg) {
  return new Promise((resolve, reject) => {
    doATask(arg, (err) => {
      if (err) {
        return reject(err);
      }
      if (arg === true) {
        resolve('I am Done')
      }
    });
  });
}

From the above, if arg is not true and the call to doATask doesn’t produce an error, then this promise will never settle; it just hangs, which is a memory leak in your application.

Swallowed Sync Errors In Promises

Using the Promise constructor has several difficulties. One of them is that as soon as the promise is resolved or rejected, it cannot transition to another state: a promise is either pending, or settled as resolved/rejected. This means we can have dead zones in our promises. Let’s see this in code:

function deadZonePromise(arg) {
  return new Promise((resolve, reject) => {
    doATask(arg, (err) => {
      resolve('I’m all Done');
      throw new Error('I am never reached') // Dead Zone
    });
  });
}

From the above, we see that as soon as the promise is resolved, everything after it is a dead zone: any synchronous error thrown there is simply swallowed and will never surface.

Real-World Examples

The examples above help explain poor error handling patterns; let’s now take a look at the sort of problems you might see in real life.

Real-World Example #1: Transforming The Error To A String

Scenario: You decided the error returned from an API is not really good enough for you so you decided to add your own message to it.

'use strict';

function readTemplate() {
  return new Promise((resolve, reject) => {
    databaseGet('query', function (err, data) {
      if (err) {
        reject('Template not found. Error: ' + err);
      } else {
        resolve(data);
      }
    });
  });
}

readTemplate();

Let’s look at what is wrong with the above code. The developer is trying to improve the error thrown by the databaseGet API by concatenating the returned error with the string “Template not found”. This approach has a lot of downsides, because the concatenation implicitly calls toString on the error object. That loses any extra information carried by the error (say goodbye to the stack trace). What the developer has now is just a string, which is not useful when debugging.

A better way is to keep the error as it is, or wrap it in another error you’ve created, attaching the thrown error from the databaseGet call to it as a property.
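A sketch of that wrapping approach, using a hypothetical TemplateError class:

```javascript
// Sketch: wrap the low-level error instead of flattening it to a string.
// The original Error (and its stack trace) stays attached for debugging.
class TemplateError extends Error {
  constructor(message, original) {
    super(message);
    this.name = 'TemplateError';
    this.original = original;
  }
}

function wrapExample() {
  try {
    throw new Error('connection refused'); // stand-in for the databaseGet error
  } catch (err) {
    throw new TemplateError('Template not found', err);
  }
}
```

Newer Node versions (16.9+) can express the same idea natively with new Error(message, { cause }).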

Real-World Example #2: Completely Ignoring The Error

Scenario: When a user is signing up in your application and an error occurs, you want to just catch the error and show a custom message, but you completely ignore the error that was caught, without even logging it for debugging purposes.

router.get('/:id', function (req, res, next) {
  database.getData(req.params.userId)
    .then(function (data) {
      if (data.length) {
        res.status(200).json(data);
      } else {
        res.status(404).end();
      }
    })
    .catch(() => {
      log.error('could not get data: ', req.params.userId);
      res.status(500).json({ error: 'Internal server error' });
    })
});

From the above, we can see that the error is completely ignored, and the code sends a 500 to the user whenever the database call fails. But in reality, the cause of the database failure might be malformed data sent by the user, which is an error that deserves a 400 status code.

In the above case, we end up in a debugging horror, because you as the developer won’t know what went wrong. The user won’t be able to file a decent report either, because a 500 Internal Server Error is always thrown. You would end up wasting hours finding the problem, which amounts to a waste of your employer’s time and money.

Real-World Example #3: Not Accepting The Error Thrown From An API

Scenario: An error was thrown from an API you were using, but you don’t accept that error as it is; instead, you marshal and transform it in ways that make it useless for debugging purposes.

Take the following code example below:

async function doThings(input) {
  try {
    validate(input);
    try {
      await db.create(input);
    } catch (error) {
      error.message = `Inner error: ${error.message}`
      if (error instanceof Klass) {
        error.isKlass = true;
      }
      throw error
    }
  } catch (error) {
    error.message = `Could not do things: ${error.message}`;
    await rollback(input);
    throw error;
  }
}

A lot is going on in the above code that would lead to debugging horror. Let’s take a look:

  • Nested try/catch blocks: You can see from the above that we are nesting try/catch blocks, which is a very bad idea. We normally try to reduce the use of try/catch blocks to minimize the surface where we have to handle our errors (think of it as DRY error handling);
  • We are also manipulating the error message in an attempt to improve it, which is also not a good idea;
  • We are checking whether the error is an instance of type Klass, and in that case setting a boolean property isKlass on the error to true (but if that check passes, then the error is already of the type Klass);
  • We are also rolling back the database too early because, from the code structure, there is a high chance that we might not have even hit the database when the error was thrown.

Below is a better way to write the above code:

async function doThings(input) {
  validate(input);
  try {
    await db.create(input);
  } catch (error) {
    try {
      await rollback();
    } catch (error) {
      logger.log('Rollback failed', error, 'input:', input);
    }
    throw error;
  }
}

Let’s analyze what we are doing right in the above snippet:

  • We are using one try/catch block, and only inside its catch block do we use another try/catch block, which serves as a guard in case something goes wrong with the rollback function, and we log that;
  • Finally, we rethrow the original error, meaning we don’t lose the message included in it.

We mostly want to test our code (either manually or automatically). But most times, we test only for the positive paths. For a robust test, you must also test for errors and edge cases. This negligence is responsible for bugs finding their way into production, which costs extra debugging time.

Tip: Always make sure to test not only the positive cases (getting a 200 status code from an endpoint), but also all the error cases and edge cases as well.

Real-World Example #4: Unhandled Rejections

If you’ve used promises before, you have probably run into unhandled rejections.

Here is a quick primer on unhandled rejections. Unhandled rejections are promise rejections that weren’t handled. This means that the promise was rejected but your code will continue running.

Let’s look at a common real-world example that leads to unhandled rejections.

'use strict';

async function foobar() {
  throw new Error('foobar');
}

async function baz() {
  throw new Error('baz')
}

(async function doThings() {
  const a = foobar();
  const b = baz();
  try {
    await a;
    await b;
  } catch (error) {
    // ignore all errors!
  }
})();

At first glance, the above code might not seem error-prone, but on closer inspection there is a defect. What happens when a rejects? The catch block runs, await b is never reached, and b becomes an unhandled rejection. A possible solution is to use Promise.all on both promises, so the code would read like so:

'use strict';

async function foobar() {
  throw new Error('foobar');
}

async function baz() {
  throw new Error('baz');
}

(async function doThings() {
  const a = foobar();
  const b = baz();
  try {
    await Promise.all([a, b]);
  } catch (error) {
    // ignore all errors!
  }
})();

Here is another real-world scenario that would lead to an unhandled promise rejection error:

'use strict';

async function foobar() {
  throw new Error('foobar');
}

async function doThings() {
  try {
    return foobar();
  } catch {
    // ignoring errors again!
  }
}

doThings();

If you run the above code snippet, you will get an unhandled promise rejection, and here is why: although it’s not obvious, we are returning the promise from foobar() before the try/catch can handle its rejection — by the time the promise rejects, we have already left the function. What we should do is await the promise inside the try/catch, so the code would read:

'use strict';

async function foobar() {
  throw new Error('foobar');
}

async function doThings() {
  try {
    return await foobar();
  } catch {
    // ignoring errors again!
  }
}

doThings();

Wrapping Up On The Negative Things

Now that you have seen wrong error handling patterns and their possible fixes, let’s dive into the error class pattern and how it solves the problem of incorrect error handling in Node.js.

Error Classes

In this pattern, we start our application with an ApplicationError class; this way, we know that all errors our application explicitly throws will inherit from it. We would start off with the following error classes:

  • ApplicationError
    This is the ancestor of all other error classes, i.e. all the other error classes inherit from it.
  • DatabaseError
    Any error relating to database operations will inherit from this class.
  • UserFacingError
    Any error produced as a result of a user interacting with the application will inherit from this class.

Here is what our error class file would look like:

'use strict';

// Here are the base error classes to extend from
class ApplicationError extends Error {
  get name() {
    // Report the concrete subclass name (e.g. 'DatabaseError')
    return this.constructor.name;
  }
}

class DatabaseError extends ApplicationError { }

class UserFacingError extends ApplicationError { }

module.exports = {
  ApplicationError,
  DatabaseError,
  UserFacingError
}

This approach enables us to distinguish the errors thrown by our application. So now, if we want to handle a bad request error (invalid user input) or a not found error (resource not found), we can inherit from the UserFacingError base class (as in the code below).

const { UserFacingError } = require('./baseErrors');

class BadRequestError extends UserFacingError {
  constructor(message, options = {}) {
    super(message);

    // You can attach relevant information to the error instance
    // (e.g. the username)
    for (const [key, value] of Object.entries(options)) {
      this[key] = value;
    }
  }

  get statusCode() {
    return 400;
  }
}

class NotFoundError extends UserFacingError {
  constructor(message, options = {}) {
    super(message);

    // You can attach relevant information to the error instance
    // (e.g. the username)
    for (const [key, value] of Object.entries(options)) {
      this[key] = value;
    }
  }

  get statusCode() {
    return 404;
  }
}

module.exports = {
  BadRequestError,
  NotFoundError
}

One of the benefits of the error class approach is that if we throw one of these errors, for example a NotFoundError, every developer reading the codebase will immediately understand what is going on at that point.

You can also pass in multiple properties specific to each error class during instantiation.

Another key benefit is that error classes can have properties that are always present: for example, if you receive a UserFacingError, you know that a statusCode is always part of it, so you can use it directly later in the code.
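To make this concrete, here is a small self-contained sketch (the classes from the article are redefined inline so the snippet runs on its own; the userId property is an illustrative example of attached debugging data):

```javascript
// Minimal versions of the article's error class hierarchy.
class ApplicationError extends Error {}
class UserFacingError extends ApplicationError {}

class NotFoundError extends UserFacingError {
  constructor(message, options = {}) {
    super(message);
    // Attach extra debugging information to the error instance
    for (const [key, value] of Object.entries(options)) {
      this[key] = value;
    }
  }
  get statusCode() { return 404; }
}

const err = new NotFoundError('Dataset not found', { userId: 42 });
console.log(err instanceof UserFacingError); // true
console.log(err.statusCode);                 // 404
console.log(err.userId);                     // 42
```

A single instanceof check tells a handler everything it needs: the error is user-facing, it carries a statusCode, and any extra context travels with it.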

Tips On Utilizing Error Classes
  • Make your own module (possibly a private one) for each error class; that way, you can simply import it into your application and use it everywhere.
  • Throw only errors that you care about (errors that are instances of your error classes). This way, you know your error classes are your only source of truth and contain all the information necessary to debug your application.
  • Having an abstract error module is quite useful, because now we know that all the necessary information about the errors our application can throw is in one place.
  • Handle errors in layers (the database layer, the Express/Fastify/HTTP layer, and so on). If you handle errors everywhere, you end up with an inconsistent approach that is hard to keep track of.

Let’s see how error classes look in code. Here is an example in Express:

const { DatabaseError } = require('./error');
const { NotFoundError } = require('./userFacingErrors');
const { UserFacingError } = require('./error');

// Express
app.get('/:id', async function (req, res, next) {
  let data;
  try {
    data = await database.getData(req.params.userId);
  } catch (err) {
    return next(err);
  }

  if (!data.length) {
    return next(new NotFoundError('Dataset not found'));
  }

  res.status(200).json(data);
});

app.use(function (err, req, res, next) {
  if (err instanceof UserFacingError) {
    res.sendStatus(err.statusCode);
    // or res.status(err.statusCode).send(err.errorCode)
  } else {
    res.sendStatus(500);
  }

  // do your logic
  logger.error(err, 'Parameters: ', req.params, 'User data: ', req.user);
});

In the above, we are leveraging the fact that Express exposes a global error handler, which lets you handle all your errors in one place. You can see the calls to next() in the places where we handle errors; these calls pass the errors on to the handler defined in the app.use section. Because Express does not catch errors thrown in async handlers, we use try/catch blocks and forward the errors ourselves.
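One common workaround for that limitation (a pattern of its own, not from this article) is a tiny wrapper that forwards rejections from async route handlers to next(), so not every route needs its own try/catch:

```javascript
// Hypothetical helper: wraps an async Express handler so that any
// rejection is passed to next(), reaching the global error handler.
const asyncHandler = (fn) => (req, res, next) => {
  Promise.resolve(fn(req, res, next)).catch(next);
};

// Illustrative usage (route and database call are assumptions):
// app.get('/:id', asyncHandler(async (req, res) => {
//   const data = await database.getData(req.params.userId);
//   res.status(200).json(data);
// }));
```

With this helper, a throw inside the handler behaves exactly like calling next(err), so all errors still funnel into the single app.use error middleware.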

So, to handle our errors, we just need to check whether the thrown error is a UserFacingError instance; if it is, we know the error object carries a statusCode, and we send that to the user (you might also want a specific error code that you pass to the client) — and that is pretty much it.

You will also notice that in this pattern, every error you did not explicitly throw becomes a 500 error, because it is something unexpected — you did not explicitly throw it in your application. This way, we can distinguish the types of errors occurring in our applications.


Proper error handling in your application can let you sleep better at night and save debugging time. Here are the key takeaways from this article:

  • Use error classes specifically set up for your application;
  • Implement abstract error handlers;
  • Always use async/await;
  • Make errors expressive;
  • Use promisify if necessary;
  • Return proper error statuses and codes;
  • Make use of promise hooks.

How To Create A Porsche 911 With Sketch (Part 3)

Fri, 08/07/2020 - 03:00
Nikola Lazarević 2020-08-07T10:00:00+00:00

We continue our tutorial with the wheels of our Porsche 911 car, but before we proceed with the next steps, I’d like to shine the spotlight on the famous Fuchs wheels that were designed in the shape of a cloverleaf (or a wing). First, a bit of history:

“The Fuchs wheel is a specialty wheel made for the first Porsche 911/911S model in the early 1960's. Designed in conjunction with Otto Fuchs KG, Porsche modeler Heinrich Klie, and Ferdinand Porsche Jr., the Fuchs wheel was the first lightweight forged wheel to be fitted to a production automotive vehicle. They provided the rear-engined Porsche 911 sports car with a reduction in unsprung mass, through a strong and lightweight alloy wheel.”

— Source: Wikipedia

We’ll start with the design of the tires first.


Un-hide the wheel base in the Layers panel. Turn off Borders and set Fills to #2A2A2A. Then, duplicate this shape, change Fills to #000000, move it behind the base wheel (right-click on it and choose Move Backward) and push it 20px to the right.

Tip: Holding Shift + → will move the selection in 10-pixel increments.

Let’s start working on the tire design. (Large preview)

Select the base wheel and add some guidelines to make alignment of all elements easier. To do this, show the Sketch rulers (press Ctrl + R). Then, add a vertical guideline at the center of the base wheel with a click on the upper ruler, and do the same for the horizontal guide on the left ruler.

Add a vertical and a horizontal guideline at the center of the ‘base wheel’. (Large preview)

Temporarily turn off the guidelines by pressing Ctrl + R on the keyboard. Create a tiny rectangle with a width of 2px and a height of 8px, with the Fills set to #000000 and the Borders turned off. This rectangle will serve as the base unit for creating the treads (a.k.a. the tread pattern). Center the rectangle to the base wheel horizontally.

Create the base unit for the treads. (Large preview)

Zoom in close enough (here, I zoomed in to 3200%), choose Transform from the top toolbar, select the top middle point and push it 2px to the right, then select the middle bottom point and push it 2px to the left to make it look slanted.

Note: If you don’t see the Transform tool in the top toolbar, you can add it there via View → Customize Toolbar…, or you can use the keyboard shortcut Cmd + Shift + T.

Transform the tread base unit and make it look slanted. (Large preview)

Turn back on the guidelines (Ctrl + R) and make sure this rectangle is selected. Put the rectangle into a group by pressing Cmd + G on the keyboard. Give this group the name treads.

We will use the Rotate Copies tool to create the treads around the wheel base. Like Create Symbol, Rotate Copies can be one of those features that will save you a lot of time and effort!

Note: If you are using Sketch version 67.0 or 67.1, you may experience a bug with Rotate Copies operation. If this happens, you will need to create the treads around the wheel base manually; or (better), you should update to v. 67.2 (or later) where this issue has been resolved.

Make sure the rectangle inside the group treads is selected, then go to Layer → Path and select Rotate Copies. A dialog box will open that lets you define how many additional copies of the selected element to make. Enter 71, so that in total we will have 72 rectangles around the wheel base — these will be the treads. Press Rotate in the dialog box. After you confirm this value, you will be presented with all of the rectangles and a circular indicator in the middle.

Tip: Performing this step in Sketch is very CPU and memory intensive! If you are working on a modern machine, probably you will not experience any issues; but if your Mac is a bit older, then your mileage may vary. In general, when working with a large number of copies, try to first turn off Borders to avoid getting stuck and to achieve the result of the operation faster.

Use the Rotate Copies feature to create the treads. (Large preview)

Now, move this circular indicator down until it is located precisely at the intersection of the guides — and voilà! We have 72 rectangles evenly placed around the wheel base. When you’re done, press Esc or Enter. Note that if you don’t place the circular indicator (the center of rotation) exactly at the intersection of the guides, the rectangles won’t be distributed perfectly around the wheel base, so be careful.

Note: The Rotate Copies tool doesn’t create a compound shape in the newer versions of Sketch (version 52 or later); instead, it creates (and rotates) separate copies of the shape. By putting the first shape into a group, we’ve ensured that all created and rotated shapes are inside this group named treads.

The ‘treads’ group created. (Large preview)

Select the base wheel again, duplicate, position it above treads in the Layers panel list, and scale it down by 14px. Change Color to #3F3F3F and turn on Borders — set Color to #000000, Position to Inside and Width to 1px.

Continue working on the tire details. (Large preview)

Duplicate this circle, turn off Fills and set the Border Width to 20px. We only want to show 2⁄4 of the Borders — 1⁄4 on the top left side and 1⁄4 on the bottom right side. To do that, type d*π*0.25 in the Dash field, where d is the diameter of the circle (254px in my case), 0.25 is 25% (or 1⁄4) of the circumference, and π is 3.14.

So in this case enter the following formula in the Dash field: 254*3.14*0.25, and press Enter (or Tab) on the keyboard.

Note: If you enter a number in the Dash field and press Tab on the keyboard, Sketch will automatically fill the Gap field with the same number. Same thing will happen if you press Enter.

Let’s show only 2/4 of the borders. (Large preview)
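As a quick sanity check of the arithmetic (using the values from this tutorial), the dash length is simply a quarter of the circle’s circumference:

```javascript
// Dash length = a quarter of the circumference: π × diameter × 0.25.
// The tutorial's circle is 254px in diameter and uses π ≈ 3.14.
const diameter = 254;
const dash = 3.14 * diameter * 0.25;
console.log(dash.toFixed(2)); // 199.39
```

Since Sketch auto-fills the Gap field with the same value, one quarter of the border is visible, then one quarter hidden, and so on — giving the two opposite visible arcs.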

Duplicate the circle, scale it down a bit, set the Borders Width to 12px and apply an Angular Gradient with the following properties:

  1. #9D9D9D
  2. #000000
  3. #000000
  4. #595959
  5. #000000
  6. #000000
Set an Angular Gradient on the circle shape. (Large preview)

Then, apply a Gaussian Blur effect with an Amount of 4.

Apply a Gaussian Blur. (Large preview)

Once again, duplicate the circle, turn off Gaussian Blur and scale it down. Turn on Fills, make sure it is still #3F3F3F, set the Borders to Outside position and Width to 1px. Change Color to Linear Gradient and use #000000 for the first color stop and #444444 for the last color stop.

Add Inner Shadows — for the Color use #FFFFFF at 20% Alpha and set Blur to 2; then apply Shadows — for the Color use #000000 at 90% Alpha and set Blur to 2.

The Inner Shadows effect added. (Large preview)

Now it’s the perfect time to add a bit of a texture! Select and copy the wheel base shape, paste it on top, then Move Backward once so it sits just beneath the circle we’ve just created. Set Fills to Pattern Fill, Type to Fill Image and choose the bottom right pattern. Set Opacity for this shape to 10%.

Now add a bit of texture. (Large preview)

Select the circle on top, duplicate, turn off Borders, Inner Shadows and Shadows. Set Fills to #000000 and Opacity to 100% and scale down this circle by 32px. Apply a Gaussian Blur with the Amount of 4.

(Large preview)

Push it down 3px, then duplicate and move the duplicate 6px up.

Duplicate then move the duplicate up. (Large preview)

Duplicate the last circle, turn off the Gaussian Blur, push it down by 3px and scale it down by 4px. Add a Shadows effect with the Color set to #FFFFFF at 90% Alpha and Blur set to 2.

Duplicate the circle again, push and scale it down a bit. Almost there! (Large preview)

Now, duplicate this circle, turn off Shadows and scale it down a bit (by 2px). Turn on Borders, set position to Inside, Width to 1px and apply a Linear Gradient:

  1. #CCCCCC
  2. #A6A6A6
  3. #A4A4A4
  4. #CFCFCF
Apply a Linear Gradient. (Large preview)

Change Fills to Angular Gradient with the following properties (attention! it’s a long list of color stops):

  1. #D3D3D3
  2. #ACACAC
  3. #D8D8D8
  4. #B4B4B4
  5. #8F8F8F
  6. #B2B2B2
  7. #C4C4C4
  8. #A4A4A4
  9. #C3C3C3
  10. #ADADAD
  11. #ADADAD
  12. #949494
  13. #BBBBBB
  14. #929292
  15. #C2C2C2
  16. #B4B4B4
  17. #8F8F8F
  18. #B4B4B4
  19. #D8D8D8
  20. #A9A9A9
Apply an Angular Gradient. (Large preview)

Then, add an Inner Shadows effect — set Color to #000000 at 50% Alpha and set Blur and Spread to 2.

Duplicate, scale it down by 14px, change Fills to #434343 Solid Color, Borders position to Outside, and Inner Shadows properties to: Color #000000 at 90% Alpha, Blur and Spread set to 24.

Then add two Shadows effects:

  • first — Color: #000000 at 50% Alpha; Y: 2; Blur: 5
  • second — Color: #000000 at 50% Alpha; Blur: 2
Add two Shadows effects. (Large preview)

Again, duplicate the shape, scale it down by 8px, turn off Fills, Shadows and Inner Shadow, and set Borders Color to #414141.

Duplicate and scale down the circle. (Large preview)

Switch to the Oval tool (O), and draw a circle from the intersection of the guides. Turn off Fills, set Borders Color to #575757, position to Inside and Width to 1px.

Duplicate, scale it down a bit and make sure the border Width is 1px. Repeat this seven more times, so at the end you have nine concentric circles. Make sure that all Borders Width are 1px. Use the image below as reference.

The nine concentric circles. (Large preview)

Select all the concentric circles and put them into a group.


We will start working on the rim design next.

Draw a circle from the intersection of the guides, then draw a rectangle on top and center it horizontally to the circle.

Start working on the rim design. (Large preview)

Select this rectangle, double-click on it to switch to vector editing mode and move the points until you have something like in the image below. Select the top two points and set the Radius to 20.

Set the radius of the top two points. (Large preview)

We will use Rotate Copies again to distribute this shape around the circle. Select both — the circle and the modified rectangle — turn off Borders and place them into a group. Now select the modified rectangle, go to Layer → Path, select Rotate Copies, enter 4 in the dialog box (so we’ll have a total of five shapes), click Rotate, and align the circular indicator to the intersection of the guides. When done, press Esc or Enter.

Use Rotate Copies to distribute this shape around the circle. We’re getting closer to the cloverleaf design! (Large preview)

Select all shapes inside the group and apply a Subtract operation from the top toolbar. Add an Inner Shadows effect — for the Color use #FFFFFF at 50% Alpha and set Blur to 2. Then apply Shadows with Color set to #000000 at 70% Alpha and both Blur and Spread set to 2. Finally, change Fills to #000000.

Subtract, add Inner Shadows and Shadows, change Fills to black. (Large preview)

Draw a circle from the intersection of the guides but make it a bit bigger than the shape below, then draw a shape and center it horizontally to the circle. Select both, turn off Borders and put them into a group. Select the shape and perform a Rotate Copies operation. Enter 4 in the dialog box (so again, we’ll have a total of five shapes), click Rotate, and align the circular indicator to the intersection of the guides. When ready, press Esc or Enter.

The Rotate Copies feature is useful again. (Large preview)

Select all shapes inside the group and apply a Subtract operation from the top toolbar. Add an Inner Shadows effect — for the Color use #FFFFFF at 50% Alpha and set Blur to 2. Change Fills to #131313.

Subtract, then add Inner Shadows. (Large preview)

Now, we will create one rim bolt head.

Zoom in close enough (I zoomed in to 400%) and draw a circle. Set Fills to #4F4F4F, change Borders position to Outside, Width to 1px and use #8F8F8F for the Color. Add one more border but this time use #000000 for the Color, set position to Center and make sure the Width is 1px.

Create a bolt head — first steps. (Large preview)

Draw a rectangle in the middle of the circle, turn off Borders, enter vector editing mode, hold Shift and click on the right segment to add a point in the middle, then do the same for the left segment. Push those points 2px to the left and to the right to create a hexagonal shape. Apply a Linear Gradient for the Fills — use #AEAEAE for the top and #727272 for the bottom color stop. Add Inner Shadows using #000000 at 50% Alpha for the Color and set Blur to 2, and apply Shadows using #000000 at 90% Alpha for the Color and set Blur to 2.

Continue working on the bolt head. (Large preview)

Duplicate the hexagonal shape, enter vector editing mode, select all the points on the left side and push them 1px to the right, then select all the top points and push them 1px down, push the bottom points 1px up, and push the right points 1px to the left. Clear the Shadows and modify the Linear Gradient:

  1. #8F8F8F
  2. #979797
  3. #A4A4A4
  4. #636363
  5. #4A4A4A

Now apply an Inner Shadows effect. For the Color use #000000 with 50% Alpha and set Blur to 2.

The bolt head details, now with the gradient applied. (Large preview)

Select all the shapes that we used to create the bolt head and group them into a bolt head group. We can use Create Symbol on the bolt head group and then reuse it as many times as we need.

To create the new Symbol, select the bolt head group, right-click on it, and choose Create Symbol from the menu. The dialog box Create New Symbol will appear, give a name to the symbol (bolt head) and click OK.

Now we need to distribute the bolt head symbols around the circle. Duplicate the symbol, choose Rotate from the top toolbar, drag the crosshair marker to the intersection of the guides, and rotate it 72 degrees. Continue duplicating and rotating the symbol in 72-degree increments, without letting the selection go.

Distribute the ‘bolt head’ symbols around the circle. (Large preview)

Now select each symbol instance and adjust the angle of rotation to 0 degrees.

Tip: I suggest initially adjusting the angle to 0 degrees so that you can better see the process and how the bolts will look when placed on the rim. Once the rim bolts are in place, though, my recommendation is to experiment some more and try setting a different angle of rotation for each bolt symbol. This will make the wheels look more realistic — after all, in real life it’s much more likely to see rim bolts at random angles than aligned perfectly to 0 degrees!

Finally, select all the instances of the bolt head symbol, place them into a group bolts and perform a Move Backward once.

The group ‘bolts’ is now finished. (Large preview)

Draw a shape, set Border Color to #CFCFCF, set Width to 1px and position to Inside, and use a Linear Gradient for the Fills:

  1. #5F5F5F
  2. #B5B5B5
  3. #CBCBCB

Then add Inner Shadows effect using #000000 at 30% Alpha, and Blur set to 2.

Continue working on the rim details. (Large preview)

Grab the Vector tool (V) and draw two shapes that we will use for the highlights. Use a Linear Gradient for the Fills — #F3F3F3 at 100% Alpha for the top color stop, and the same color at 0% Alpha for the bottom one. Use the same gradient settings for both shapes, and also apply a Gaussian Blur with an Amount of 1 to both.

Create the highlights. (Large preview)

Select all shapes that we’ve just created, group them and distribute them evenly around the rim. Use the same method that we used for the bolt heads.

Distribute the shapes around the rim. (Large preview)

Select the Oval tool (O) and draw a circle from the intersection of the guides. Turn off Borders and use Linear Gradient with colors set to #D8D8D8 for the top stop and #848484 for the bottom stop. Use Inner Shadows and Shadows to make it look slightly raised.

Let’s add a light Inner Shadows effect with the following properties:

  • Color: #FFFFFF at 80% Alpha
  • Blur: 2

Then, add a dark Inner Shadows effect:

  • Color: #000000 at 50% Alpha
  • Blur: 2

Finally, apply a Shadows effect:

  • Color: #000000 at 50% Alpha
  • Blur: 2
  • Spread: 1
Create the circle in the middle and apply all the styles. (Large preview)

Duplicate this circle, scale it down a bit, turn off Inner Shadows and Shadows, turn on Borders and add the first border:

  • Color: #B5B5B5;
  • Position: Outside
  • Width: 1px

Then add a second one on the top:

  • Color: #656565
  • Position: Center
  • Width: 1px
Work on the details in the center of the rim. (Large preview)

Let’s finish the wheel design by adding the Porsche emblem to the rim.

Note: Recreating the original Porsche logo for the rims, all in vectors, is outside the scope of this tutorial. There are a few options — you can create it yourself by following the same basic principles outlined on these pages; you can download the logo from Wikipedia in SVG format and then try to modify it; or you can download a copy of the logo in vector lines from my website (porsche-line-logo-f.svg). This copy of the Porsche logo was created by me from scratch, all in vectors, and this is the variant that I recommend you use.

After downloading the logo file (porsche-line-logo-f.svg) bring it into our design.

Switch to the Scale tool in the top toolbar, and in the dialog box enter 20px in the height field, to adjust the size of the logo. Align the logo horizontally with the circle below.

Add the Porsche logo to the center of the rim. (Large preview) The Porsche emblem in the center of the rim (detail close-up). (Large preview) Completing the wheels — two possible workflows

Since a copy of the front wheel (once it’s complete) will be used more than once in our illustration, we have two options now:

  • A. We can complete the front wheel design, duplicate the wheel, make a couple of tweaks, and use the duplicate as the rear wheel. This is the easiest variant.
  • B. Or, for learning purposes, we can use a workflow involving the use of nested symbols. This is the more interesting option which I’ll explore in more detail in a bit. Buckle up!
A. Workflow #1: duplicate the wheel and adjust the copy

Pick up the Vector tool (V) and draw a shape on top of the wheel. Turn off Borders and Fill the shape with black #000000 color. Apply Gaussian Blur with an Amount of 10. This way we will recreate the shadow from the car body over the wheel — just an extra bit of realism added.

Add the shadow from the car body over the wheel. (Large preview)

Select the wheel group, wheel base copy layer and the shadow shape layer and group these into a front wheel group.

Create the ‘front wheel’ group. (Large preview)

Now that the wheel is ready, duplicate the front wheel group, rename the group in the Layers panel list to rear wheel and drag it to the right to its place.

Move the ‘rear wheel’ group to its place. (Large preview)

Select the wheel group inside and push it 20px to the right, then select the wheel base copy layer and push it 20px to the left. The rear wheel is ready.

Move the ‘wheel’ group to the right, and the ‘wheel base copy’ layer to the left. The ‘rear wheel’ group is ready. (Large preview) B. Workflow #2: use nested symbols

Pick up the Vector tool (V) and draw a shape on top of the wheel. Turn off Borders and Fill the shape with black #000000 color. Apply Gaussian Blur with an Amount of 10. This way we will recreate the shadow from the car body over the wheel — just an extra bit of realism added.

Add the shadow from the car body over the wheel. (Large preview)

The wheel is finished. Now we’ll use a symbol and a nested symbol to create the front and rear wheels.

Select the wheel group, wheel base copy layer and the shadow shape layer and group these into a front wheel group.

Create the ‘front wheel’ group. (Large preview)

Here we’re coming to the more interesting bits! Select the wheel group and create a wheel symbol, then select the front wheel and create a front wheel symbol. The front wheel symbol is now a nested symbol!

Tip: You can learn more about nested symbols in the Sketch help pages dedicated to this topic, and in the following article written by Noam Zomerfeld.

Nested symbols are regular symbols that are made from other symbols that already exist in your Sketch file. In this case, the front wheel symbol is made from the wheel symbol, so the wheel symbol is nested inside the front wheel symbol.

What could be better than one symbol? Perhaps a symbol with another one inside it — enter Nested Symbols! This feature gives you a lot of possibilities when combining symbols together. Nesting symbols can be especially useful when you need to create variations of one symbol.
— Javier-Simon Cuello, “Unleashing The Full Potential Of Symbols In Sketch”

Now, go to the Symbols page in Sketch, duplicate the front wheel symbol, select the wheel group and push it 20px to the right, then select the wheel base copy and push it 20px to the left. At the end, rename this symbol to rear wheel.

Front and rear wheel symbols. (Large preview)

Go back to our design, select and duplicate the front wheel symbol, then using the Inspector panel change the symbol to rear wheel, rename the symbol in the Layers panel list to rear wheel and drag it to the right. Done!

So far it may seem that we’ve spent more time playing with nested symbols compared to the other workflow. That’s true, but we have also learned how to use this feature — and now, if you would like to change the design of the wheels, instead of doing so in two separate groups, you’ll only need to do it once inside the wheel symbol, and the changes will be automatically applied to both wheels of the car. This is why we used a nested symbol to create the front and rear wheels. (Also, imagine you’re working on the design of a vehicle that has many more wheels visible from the side, not only two! The time saved will multiply.)

Back to the bigger picture — with the wheels complete, we are very close to the final design. Let’s take a look.

The Porsche 911 should look similar to this now. (Large preview) The Shadow Under the Wheels and the Car Body

Pick the Oval tool and draw an ellipse under the wheels. Set Fills to #000000 with 80% Opacity, turn off Borders and apply a Gaussian Blur with an Amount of 5.

Start making the shadow below the car. (Large preview)

Duplicate the oval shape, adjust the width using Resize handles (make it smaller), and set Fills Opacity to 50%.

Add one more oval shape. (Large preview)

Duplicate this shape once again, adjust the width, and set Fills Opacity for this layer to 80%.

And one more. (Large preview)

Select the shadow ellipses and group them all into a shadows group. Move this group to the very bottom in the Layers panel list.

17. Final Touches — The Racing Decals

We are almost there! It’s time to add some racing decals to the car body and to the windshields.

Try to find some inspiration for the racing decals and stickers. (Large preview) The Porsche sticker

Jump over to the Wikimedia Commons website and download the Porsche Wortmarke in SVG format. Bring it into our design, scale it up and position it as in the image below.

The ‘Porsche Wortmarke’ added to the door. (Large preview)

Create some rectangles using the Rectangle tool (R), set Fills to #0F0F13 and turn off Borders. Select all elements and group them into a porsche sticker group, then drag this group inside bodywork just below the door layer.

Add some decoration around the ‘Porsche’ sticker letters. (Large preview) Shell sticker

Next, download the vintage Shell logo in SVG format and open it in Sketch. Delete the white rectangle at the bottom inside the logo group, then copy and paste it into our design. Place it just above the porsche sticker in the Layers panel list and position it like on the image below.

Add the vintage Shell logo sticker. (Large preview) Dunlop sticker

Download the Dunlop logo in SVG format, open it in Sketch and delete the yellow rectangle. Bring it to our design, scale it down a bit and place in close to the tail light. Make sure that the logo is inside the bodywork group, right above the Shell logo in the list of layers.

Add the Dunlop logo sticker. (Large preview) Marlboro sticker

Get the SVG version of the Marlboro logo from Wikimedia Commons, paste into our design and scale it down. Use the resize handles to squeeze the red shape, then move the letters up, close to the red shape, and finally change Fills for the red shape to Linear Gradient with the following parameters:

  1. #E60202
  2. #BB0101
  3. #860000
Add and modify the Marlboro logo sticker. (Large preview)

Please make sure that this logo is inside the bodywork group and above the “Dunlop” logo.

Heuer Chronograph sticker

Download the Tag Heuer SVG logo and open it in Sketch. Delete everything except: the rectangle with the black border, the red rectangle, and the word “Heuer”.

Select the rectangle with the black border, turn off Borders and change Fills to #CC2132. Next, select the inner red rectangle, turn on Borders, set Color to #FFFFFF, position to Outside and Width to 12px. Then use the Type tool (T) and type the word Chronograph — for the font use Helvetica Bold, with the size set to 72px.

Note: If you don’t have Helvetica Bold installed, use a font similar in appearance (for example, Arial Bold), as at this scale it would be difficult to spot the differences.

Convert the text block into vector shapes, by right-clicking on it and selecting Convert to Outlines. Finally, select the bigger red rectangle, enter vector editing mode, select the top two points and push them down a bit. Select everything and place all the elements into a heuer chronograph logo group.

Create the ‘heuer chronograph logo’ group. (Large preview)

Bring this modified logo into our design, scale it down and place it onto the car body. Like before, make sure it’s inside bodywork, and above the Marlboro logo.

Put the Heuer Chronograph sticker on the car, to the left of the driver’s door. (Large preview)

Porsche Crest Badge

Jump over to Wikimedia and download the Porsche logo in SVG format. We will need to modify and simplify it a bit because it’s too complex and we don’t need all of these details for the scale at which we’ll be using it in our illustration.

Open the SVG logo file in Sketch, and first delete all the groups (amw-link and d-link) inside it. Then, select the shape on top, press Enter to switch to vector editing mode, select the word “Porsche” and the registered trademark symbol and delete them as well.

Start modifying the Porsche logo. (Large preview)

Next, click on the arrow in front of the second crest compound shape to reveal its components, select the four paths and drag them outside the compound path, then change their color to #B12B28. Reveal the contents of the first crest compound shape, select all the paths that form the word “Porsche” and delete them.

The Porsche crest logo is now complete. (Large preview)

Bring the modified Porsche crest logo to our design, scale it down, select the path that is the last one inside the Porsche logo group and add a Shadows effect — for the Color use #000000 at 50% Alpha and set Blur to 2.

Put the Porsche crest logo in place on the car body. (Large preview)

The Porsche crest badge should be placed inside the bodywork group just like the previous stickers that we added, above the heuer chronograph logo group.

Rallye Monte-Carlo sticker

Draw a rounded rectangle using the Rounded Rectangle tool (U), enter vector editing mode, and add and move the vector points to make the shape match the image below.

Set Color to #9C010E and turn off Borders. Duplicate this shape, change its Color (for example, to #000000, so you can see better what you are doing), enter vector editing mode, select the top points and push them down a bit. Push the right points to the left and the left points to the right by the same distance. Then push the bottom points up a bit more.

Turn off Fills, turn on Borders with the position set to Inside, Width set to 6px, and Color to #D7CB82. Convert Borders into a shape by going to Layer → Convert to Outlines.

Start working on the Rallye Monte-Carlo sticker. (Large preview)

Draw a rectangle without Borders, set Color to #D7CB82, enter vector editing mode, add points in the middle of the top and bottom segment, and push them up and down a bit. Type the words: “SIEGER, WINNER, VAINQUEUR, 1968”. For the font use Helvetica Bold (or alternatively Arial Bold) with the #9C010E Color. Add the Porsche Wortmarke (we’ve used it earlier, remember?) to the bottom, and set Color to #D7CB82.

Add the shape, text, and the ‘Porsche Wortmarke’. (Large preview)

Convert text to outlines, select the “1968” shape on the left side of the rectangle, zoom in and use Transform from the top toolbar to modify the shape:

  1. select the middle point on the right side and push it up a bit;
  2. select the bottom point on the right side and push it down the same amount of pixels.

Perform a similar action for the “1968” on the right side of the rectangle, but this time use the middle and bottom points on the left side.

Continue adding the details to the Rallye Monte-Carlo sticker. (Large preview)

Type “RALLYE”, “MONTE” and “-CARLO” as three separate words, use the same font, and change the Color to #D7CB82.

Again, do a Convert to Outlines action and use Transform from the top toolbar to modify the shapes. I won’t go into much detail here, but first modify the words “RALLYE” and “-CARLO” using the method outlined above. Then, select all three shapes (the words), invoke the Transform tool, select the middle top point and push it up a bit to make the shapes elongated, and finally scale them up a bit by holding Alt + Shift on the keyboard while dragging the top-right Resize handle. Use the image below as a reference.

The Rallye Monte-Carlo sticker finished. (Large preview)

Select and group all the elements we used to create this sticker into a rallye monte-carlo group, bring it into our design, and put it on the side windshield. In the Layers panel list this sticker should be inside the windshields group on top.

Put the Monte-Carlo sticker on the side windshield. (Large preview)

Smashing Magazine Sticker

This is the last sticker we are going to put on the car. Download the Smashing Magazine logo in SVG format, open it in Sketch and draw a red (#D33A2C) rectangle below the logo. Select both, group them into a Smashing Magazine sticker group, then copy and paste it into our design. Place it next to the Rallye Monte-Carlo sticker and scale it if needed.

In the Layers panel list this should be inside the windshields group on top.

The Smashing Magazine sticker added. (Large preview)

I encourage you to add even more decals to the car body and the side windshield. Use the image below as a source for your inspiration.

Note: These are just examples and recreating all the decals in vectors is outside of the scope of this tutorial. You can apply the principles learned from this tutorial and tweak the decals in vector format in a similar way.

Some side windshield decals examples. (Large preview) The Porsche 911 car body decals examples. (Large preview)

Racing Number and Drivers’ Names

One more important detail — since this car is a racing car we need to add a racing number to it.

Download the Montserrat font family (if you don’t have it already), install only the “Montserrat Bold” font variant, and type the racing number. Set the Size to 180px and the Color to #000000. Then, Convert to Outlines to be able to apply a gradient to the racing number, and change Fills to a Linear Gradient:

  1. #22222B
  2. #3E3E42
  3. #656566
  4. #1B1B1E
  5. #0F0F13
Add the racing number. (Large preview)

Now add the drivers’ last names. I will shamelessly add my last name and the last name of one of my best friends, Ivan Minic. Use the Text tool to add the names; for the font, again use “Montserrat Bold”, set Size and Line to 20px and Color to #2F2F2F.

Add the drivers’ last names. (Large preview)

Select the names and the racing number, and move them inside the bodywork group, just above the door layer.

Select and put all elements created so far into one group — Porsche 911. Our Porsche 911 is now officially finished!

The Porsche 911 in all its glory! Great job! (Large preview)

Finally, let’s add a background. Create a rectangle of the same size as the artboard, set the Fills to #F4F3F2, and push it below the Porsche 911 group.

Final image 3/3: Add the background and complete the Porsche 911 tutorial illustration! (Large preview)

Conclusion

We’ve put a lot of time and effort into reaching the final destination, and now you too know how to create one of my favorite cars, the original Porsche 911 from 1968, entirely in vectors in the Sketch app. :)

The tutorial probably wasn’t too easy, but the end results were well worth it, in my opinion.

The next step, of course, is to design your own favorite car. Pick a car (or another object you like), and be sure to find as many photos of it from different angles as you can, so that you can carefully replicate all of the important details.

More car illustrations for your inspiration: these are some of the other racing cars that I’ve created in Sketch recently. (Large preview)

As you can see, there are certain tools and features in Sketch that you can master to create similar objects — use them to speed up and simplify the whole process.

I hope you will also remember how important the proper naming of layers/shapes (and groups) is, and stacking them in the right order, so that even the most complex of illustrations are easy to organize and to work with.

Finally, if you have any questions, please leave a comment below or ping me on Twitter (@colaja) and I will gladly help you.

Further Reading
  1. “Mastering the Bézier Curve in Sketch” (a tutorial by Peter Nowell)
  2. “Designing A Realistic Chronograph Watch In Sketch” (a tutorial by Nikola Lazarević)
  3. “Styling — Fills” (Sketch help page)
  4. “Harnessing Vector Awesomeness in Sketch” (a tutorial by Peter Nowell)
  5. “Vector Editing (and Vector Editing Mode)” (Sketch help page)
  6. “Shapes” (Sketch help page)
  7. “Copy styles in Sketch” (a tutorial by Drahomír Posteby-Mach)
  8. “Getting the pixels right in Sketch” (a tutorial by Nav Pawera)
  9. “Sketch Symbols, Everything you need to know, and more!” (a tutorial by Brian Laiche)
  10. “Unleashing The Full Potential Of Symbols In Sketch” (an article by Javier Simon Cuello)
  11. “How to Edit Shapes with Rotate Copies tool” (Sketch help page)
  12. “Creating Nested Symbols” (Sketch help page)
  13. Nested Symbols in Sketch — I

A Practical Guide To Product Tours In React Apps

Thu, 08/06/2020 - 03:00
Blessing Krofegha 2020-08-06T10:00:00+00:00

As stated on Appcues:

“Product tours — sometimes called product walkthroughs — introduce users to a new product and help them find their bearings.”

Usually, when a company needs to showcase a new feature or complex UI functionality in its web app, the customer-success team will send a campaign email to all of its users. While this is a great way to create such awareness, some users might not have the opportunity to see the added feature; hence, the purpose of the email would be defeated.

A better way to increase user awareness of a particular feature in a web app is by integrating concise, self-explanatory UI tips, called product tours.

Product tours guide users to “a-ha” moments, or showcase high-value features that are being underused. Product tours can be powerful tools to introduce users to a new product and to help them find their bearings. They can draw attention to product launches, promo offers, and product sales.

But when done wrong, product tours can end up feeling like a backseat driver. And no one likes a backseat driver, do they?

In this tutorial, you’ll learn about what a product tour is and the types of product-tour packages in the React ecosystem, along with their pros and cons.

If you are building customer-facing products using React, then you might be keen to implement this in your React application. By the end, we’ll have built a product tour for a simple shopping-cart UI using React Joyride.

We won’t go through React and JavaScript’s syntax basics, but you don’t have to be an expert in either of these languages to follow along.

A basic product tour. (Large preview)

Product Tour Guidelines

Product tours are a tricky aspect of web apps, requiring some user-experience expertise to drive results. I’d recommend going through Appcues’ tips for product tours. The following are a few guidelines to consider.

Never Lecture

Putting a lot of tours on a web page is tempting. But users are usually not keen on long introductory tutorials. They become anxious when they have to ingest a lot of information before being able to use a feature in the app.

Break It Down

Don’t teach everything. Focus on a single feature, and create a tour of two to three steps to showcase that feature. Show many small tours, rather than a single long tour. Prioritize their sequence.

Add Value

Do you enjoy taking your own tour? How about your teammates? Present the tour in such a way that users will understand. Showcase value, rather than stories.

Now that we know the value of product tours and have seen some guidelines for building them, let’s cover some React libraries for product tours and learn how to use them.

There are only a few React-based libraries for implementing tours. Two of the most popular are React Tour and React Joyride.

React Tour

React Tour has around 1,600 stars on GitHub and is being actively developed. The best use case for React Tour is a simple product tour in which little customization is required. A demo is available.

How It Works

With React Tour, you pass the className selector and content for each step to the component. The library will render the tour’s user interface based on a button click, or after you’ve mounted the component. It’s simple for static pages and UIs:

const steps = [
  {
    selector: '.first-tour',
    content: 'This is the content for the first tour.',
  },
  {
    selector: '.second-tour',
    content: 'Here is the content for the second tour.',
  },
  // ...
];

Pros

  • React Tour is best for tours that need little customization.
  • It works well for static content and for dynamic content whose selector labels always exist in the UI.
  • Fans of styled-components might find it interesting because it has a hard dependency on styled-components.

Cons

  • If your project has no dependency on styled-components, then you might not find it easy to implement.
  • Your creativity will be limited because it doesn’t support customization.
React Joyride

The other main product-tour library is React Joyride, which has about 3,100 stars on GitHub and is also actively maintained.

How It Works

We pass the className as the target, along with the content. The tour’s steps are stored in component state, and the Joyride component receives them via the steps prop.

state = {
  steps: [
    {
      target: '.my-first-step',
      content: 'This is my awesome feature!',
    },
    {
      target: '.my-other-step',
      content: 'This is another awesome feature!',
    },
    // ...
  ],
};

render() {
  const { steps } = this.state;
  return (
    // ...
  );
}

Pros

  • Integrating React Joyride in a web app is less rigid than with React Tour, and it has no hard dependency on other libraries.
  • Events and actions are made available, which fosters customization.
  • It’s frequently improved.

Cons

  • The UI isn’t as elegant as React Tour’s.
Why React Joyride?

Product tours, especially for really big web apps, require customization, and that sets React Joyride apart from React Tour. The example project we’ll make demands some creativity and customization — hence, we’ll go with React Joyride.

Building A Simple Product Tour

First, we’ll build a simple React tour using the props available to us in React Joyride. Next, we’ll use the useReducer hook to automate the tour’s processes.

Clone the “standard-tour” branch in the GitHub repository, or use the web page of your choice, as long as you’re able to follow along.

Install the packages by running npm install.

To start the app, run npm run start.

We’ll be covering the following steps:

  • define the tour’s steps;
  • enable a skip option in each step;
  • change text labels on buttons and links;
  • customize styles like button colors and text alignment.

Then, we’ll add some custom features:

  • autostart the tour;
  • start the tour manually (i.e. with a link or button click);
  • hide the blinking beacon.

The props in React Joyride enable us to perform some basic functionality.

For this tutorial, we’ll build a product tour of the UI shown below:

The web UI. (Large preview)

Define The Tour’s Steps

To begin with, ensure that you’re targeting the particular classNames that will hold the tour’s content on the page (that is, if you’re using your own UI instead of the shopping-cart UI, adjust the classNames accordingly).

In the component folder, create a Tour.js file, and paste the following code into it. Also, ensure that the target classNames exist in your style sheet. Throughout this article, we’ll tweak the Tour.js component to suit the task at hand.

import React from "react";
import JoyRide from "react-joyride";

const TOUR_STEPS = [
  {
    target: ".tour-logo",
    content: "This is our tour’s logo",
  },
  {
    target: ".tour-cart",
    content: "View the cart you’ve added here",
  },
  {
    target: ".tour-contact",
    content: "Contact the developer",
  },
  {
    target: ".tour-policy",
    content: "We accept returns after 14 days max",
  },
];

What we’ve done is simply define our tour’s steps by targeting the classNames that will form the bedrock of our content (the text). The content property is where we define the text that we want to see when the tour starts.

Enable Skip Option in Each Step

A skip option is important in cases where a user isn’t interested in a particular tour. We can add this feature by setting the showSkipButton prop to true, which will skip the remaining steps. Also, the continuous prop comes in handy when we need to show the Next button in each step.

const Tour = () => {
  return (
    <>
      <JoyRide steps={TOUR_STEPS} continuous={true} showSkipButton={true} />
    </>
  );
};

Change Text Labels On Buttons And Links

To change the text labels on buttons or links, we’ll use the locale prop. The locale object here has two keys, last and skip. We set the label of the last step’s button to “End tour” and the skip link to “Close tour”.

const Tour = () => {
  return (
    <>
      <JoyRide
        steps={TOUR_STEPS}
        continuous={true}
        showSkipButton={true}
        locale={{ last: "End tour", skip: "Close tour" }}
      />
    </>
  );
};

Customize Styles, Like Button Colors And Text Alignment

The default color of the buttons is red, and the text alignment is always set to the right. Let’s apply some custom styles to change the button colors and align the text properly.

In the code below, the styles prop is an object containing other objects with unique values:

  • tooltipContainer
    Its key is textAlign, and its value is left.
  • buttonNext
    Its key is backgroundColor, and its value is green.
  • buttonBack
    Its key is marginRight, and its value is 10px.

The locale prop, passed alongside styles, has the keys last and skip, with the values “End tour” and “Close tour”, respectively.
const Tour = () => {
  return (
    <>
      <JoyRide
        steps={TOUR_STEPS}
        continuous={true}
        showSkipButton={true}
        styles={{
          tooltipContainer: { textAlign: "left" },
          buttonNext: { backgroundColor: "green" },
          buttonBack: { marginRight: 10 },
        }}
        locale={{ last: "End tour", skip: "Close tour" }}
      />
    </>
  );
};

The library also exposes several props that can be used to replace the default elements with your own.

The product tour. (Large preview)

useReducer

We’ve seen how to create a product tour and how to customize it using the various props of Joyride.

The problem with props, however, is that, as your web app scales and you need more tours, you don’t just want to add steps and pass props to them. You want to be able to automate the process by ensuring that the process of managing tours is controlled by functions, and not merely props. Therefore, we’ll use useReducer to revamp the process of building tours.

In this segment, we are going to take control of the tour by using actions and events, made available by the library through a callback function.

To make this process feel less daunting, we’ll break this down into steps, enabling us to build the tour in chunks.

The complete source code is available, but I’d advise you to follow this guide, to understand how it works. All of our steps will be done in the Tour.js file in the components folder.

Define the Steps

import React from "react";
import JoyRide from "react-joyride";

const TOUR_STEPS = [
  {
    target: ".tour-logo",
    content: "This is our tour’s logo.",
  },
  {
    target: ".tour-cart",
    content: "View the cart you’ve added here",
  },
  {
    target: ".tour-contact",
    content: "Contact the developer",
  },
  {
    target: ".tour-policy",
    content: "We accept returns after 14 days max",
  },
];

In this first step, we define our steps by targeting the appropriate classNames and setting our content (text).

Define the Initial State

const INITIAL_STATE = {
  run: false,
  continuous: true,
  loading: false,
  stepIndex: 0, // Make the component controlled
  steps: TOUR_STEPS,
  key: new Date(), // This field makes the tour re-render when it is restarted
};

In this step, we define some important initial state fields:

  • The run field is set to false, to ensure that the tour doesn’t start automatically.
  • The continuous prop is set to true, because we want to show the Next button in each step.
  • stepIndex is the index of the current step, which starts at 0.
  • The steps field is set to the TOUR_STEPS array that we declared in step 1.
  • The key field makes the tour re-render whenever it is restarted.
Manage The State With Reducer

const reducer = (state = INITIAL_STATE, action) => {
  switch (action.type) {
    // Start the tour
    case "START":
      return { ...state, run: true };
    // Reset to the 0th step
    case "RESET":
      return { ...state, stepIndex: 0 };
    // Stop the tour
    case "STOP":
      return { ...state, run: false };
    // Update the step index on next/back button clicks
    case "NEXT_OR_PREV":
      return { ...state, ...action.payload };
    // Restart the tour: go back to the first step and create a new tour
    case "RESTART":
      return {
        ...state,
        stepIndex: 0,
        run: true,
        loading: false,
        key: new Date(),
      };
    default:
      return state;
  }
};

In this step, we use a switch statement: when the case is START, we return the state with the run field set to true. When the case is RESET, we return the state with stepIndex set to 0. Next, when the case is STOP, we set the run field to false, which will stop the tour. Lastly, when the case is RESTART, we restart the tour by resetting stepIndex to 0 and creating a new key, which creates a new tour.

According to the events (start, stop, and reset), we’ve dispatched the proper state to manage the tour.
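Because the reducer is a pure function, its transitions can be exercised in isolation with plain JavaScript, no React required. A quick sketch (TOUR_STEPS is stubbed with placeholder targets here; the reducer itself is the one from this tutorial):

```javascript
// A stand-alone check of the tour reducer's state transitions.
// TOUR_STEPS is a stub; in the app it holds the real tour steps.
const TOUR_STEPS = [{ target: ".a" }, { target: ".b" }];

const INITIAL_STATE = {
  run: false,
  continuous: true,
  loading: false,
  stepIndex: 0,
  steps: TOUR_STEPS,
  key: new Date(),
};

const reducer = (state = INITIAL_STATE, action) => {
  switch (action.type) {
    case "START":
      return { ...state, run: true };
    case "RESET":
      return { ...state, stepIndex: 0 };
    case "STOP":
      return { ...state, run: false };
    case "NEXT_OR_PREV":
      return { ...state, ...action.payload };
    case "RESTART":
      return { ...state, stepIndex: 0, run: true, loading: false, key: new Date() };
    default:
      return state;
  }
};

// START switches the tour on, NEXT_OR_PREV moves the step index.
let state = reducer(INITIAL_STATE, { type: "START" });
state = reducer(state, { type: "NEXT_OR_PREV", payload: { stepIndex: 1 } });
console.log(state.run, state.stepIndex); // true 1

// STOP switches it off without losing the current step.
state = reducer(state, { type: "STOP" });
console.log(state.run, state.stepIndex); // false 1
```

Note how RESTART also generates a fresh key, which is what forces Joyride to mount a brand-new tour.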

Listen to the Callback Changes and Dispatch State Changes

import JoyRide, { ACTIONS, EVENTS, STATUS } from "react-joyride";

const callback = (data) => {
  const { action, index, type, status } = data;
  if (
    action === ACTIONS.CLOSE ||
    (status === STATUS.SKIPPED && tourState.run) ||
    status === STATUS.FINISHED
  ) {
    dispatch({ type: "STOP" });
  } else if (type === EVENTS.STEP_AFTER || type === EVENTS.TARGET_NOT_FOUND) {
    dispatch({
      type: "NEXT_OR_PREV",
      payload: { stepIndex: index + (action === ACTIONS.PREV ? -1 : 1) },
    });
  }
};

Using the exposed EVENTS, ACTIONS, and STATUS labels offered by React Joyride, we listen to the click events and then perform some conditional operations.

In this step, when the close or skip button is clicked, we close the tour. Otherwise, if the next or back button is clicked, we check whether the target element is active on the page. If the target element is active, then we go to that step. Otherwise, we find the next-step target and iterate.
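The step-index arithmetic inside the payload can be sketched on its own. In this illustration ACTIONS is stubbed with plain strings; in the app the constants come from react-joyride:

```javascript
// Stub of react-joyride's ACTIONS constants, for illustration only.
const ACTIONS = { PREV: "prev", NEXT: "next" };

// Going back subtracts one from the current index; anything else advances.
const nextStepIndex = (index, action) =>
  index + (action === ACTIONS.PREV ? -1 : 1);

console.log(nextStepIndex(2, ACTIONS.NEXT)); // 3
console.log(nextStepIndex(2, ACTIONS.PREV)); // 1
```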

Autostart the Tour With useEffect

useEffect(() => {
  if (!localStorage.getItem("tour")) {
    dispatch({ type: "START" });
  }
}, []);

In this step, the tour is auto-started when the page loads or when the component is mounted, using the useEffect hook.

Trigger The Start Button

const startTour = () => {
  dispatch({ type: "RESTART" });
};

The function in this last step starts the tour when the start button is clicked, just in case the user wishes to view the tour again. Right now, our app is set up so that the tour will be shown every time the user refreshes the page.
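If you would rather show the tour only on a user’s first visit, one option (not part of the tutorial code) is to write a flag under the same "tour" key that the useEffect check reads. A sketch of that logic, with an in-memory stand-in for localStorage so it runs outside the browser:

```javascript
// shouldStartTour mirrors the useEffect check: start only if no flag is stored.
const shouldStartTour = (storage) => !storage.getItem("tour");

// markTourSeen would be called once the tour finishes (e.g. in the callback).
const markTourSeen = (storage) => storage.setItem("tour", "done");

// In-memory stand-in for window.localStorage (same getItem/setItem shape).
const fakeStorage = {
  data: {},
  getItem(key) {
    return key in this.data ? this.data[key] : null;
  },
  setItem(key, value) {
    this.data[key] = String(value);
  },
};

console.log(shouldStartTour(fakeStorage)); // true (first visit)
markTourSeen(fakeStorage);
console.log(shouldStartTour(fakeStorage)); // false (flag is now set)
```

In the real app you would pass window.localStorage instead of the stand-in, and call markTourSeen when the callback dispatches STOP.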

Here’s the final code for the tour functionality in Tour.js:

import React, { useReducer, useEffect } from "react";
import JoyRide, { ACTIONS, EVENTS, STATUS } from "react-joyride";

// Define the steps
const TOUR_STEPS = [
  {
    target: ".tour-logo",
    content: "This is our tour’s logo.",
    disableBeacon: true,
  },
  {
    target: ".tour-cart",
    content: "View the cart you’ve added here",
  },
  {
    target: ".tour-contact",
    content: "Contact the developer",
  },
  {
    target: ".tour-policy",
    content: "We accept returns after 14 days max",
  },
];

// Define our state
const INITIAL_STATE = {
  key: new Date(),
  run: false,
  continuous: true,
  loading: false,
  stepIndex: 0,
  steps: TOUR_STEPS,
};

// Set up the reducer function
const reducer = (state = INITIAL_STATE, action) => {
  switch (action.type) {
    case "START":
      return { ...state, run: true };
    case "RESET":
      return { ...state, stepIndex: 0 };
    case "STOP":
      return { ...state, run: false };
    case "NEXT_OR_PREV":
      return { ...state, ...action.payload };
    case "RESTART":
      return {
        ...state,
        stepIndex: 0,
        run: true,
        loading: false,
        key: new Date(),
      };
    default:
      return state;
  }
};

// Define the Tour component
const Tour = () => {
  const [tourState, dispatch] = useReducer(reducer, INITIAL_STATE);

  useEffect(() => {
    if (!localStorage.getItem("tour")) {
      dispatch({ type: "START" });
    }
  }, []);

  const callback = (data) => {
    const { action, index, type, status } = data;
    if (
      action === ACTIONS.CLOSE ||
      (status === STATUS.SKIPPED && tourState.run) ||
      status === STATUS.FINISHED
    ) {
      dispatch({ type: "STOP" });
    } else if (type === EVENTS.STEP_AFTER || type === EVENTS.TARGET_NOT_FOUND) {
      dispatch({
        type: "NEXT_OR_PREV",
        payload: { stepIndex: index + (action === ACTIONS.PREV ? -1 : 1) },
      });
    }
  };

  const startTour = () => {
    dispatch({ type: "RESTART" });
  };

  return (
    <>
      <button className="btn btn-primary" onClick={startTour}>
        Start Tour
      </button>
      <JoyRide
        {...tourState}
        callback={callback}
        showSkipButton={true}
        styles={{
          tooltipContainer: {
            textAlign: "left",
          },
          buttonBack: {
            marginRight: 10,
          },
        }}
        locale={{
          last: "End tour",
        }}
      />
    </>
  );
};

export default Tour;

Conclusion

We’ve seen how to build a product tour in a web UI with React. We’ve also covered some guidelines for making product tours effective.

Now, you can experiment with the React Joyride library and come up with something awesome in your next web app. I would love to hear your views in the comments section below.


Creating A Static Blog With Sapper And Strapi

Wed, 08/05/2020 - 02:00
Daniel Madalitso Phiri 2020-08-05T09:00:00+00:00

In this tutorial, we will build a statically generated minimal blog with Sapper, a Svelte-based progressive JavaScript framework, for our front end, and then use Strapi, an open-source headless content management system (CMS), for the back end of our application. This tutorial is aimed at intermediate front-end developers, specifically those who want the versatility of a headless CMS, like Strapi, as well as the minimal structure of a JavaScript framework, like Sapper. Feel free to try out the demo or check out the source code on GitHub.

To go through the article smoothly, you will need the LTS version of Node.js and either Yarn or npm installed on your device beforehand. It’s also worth mentioning that you will need to have a basic understanding of JavaScript and GraphQL queries.

Before getting started, let’s get some definitions out of the way. A static-site generator is a tool that generates static websites, and a static website can be defined as a website that is sourced from purely static HTML files. For an overview of your options for static-site generators today, check out “Top 10 Static Site Generators in 2020”.

A headless CMS, on the other hand, is a CMS accessible via an API. Unlike traditional CMSes of the past, a headless CMS is front-end agnostic and doesn’t tie you to a single programming language or platform. Strapi’s article “Why Frontend Developers Should Use a Headless CMS” is a good resource for understanding the usefulness of a headless CMS.

Static-site generators, like headless CMSes, are quickly gaining mainstream appeal in the front-end web development community. Both pieces of technology bring with them a much lower barrier to entry, flexibility, and a generally better developer experience. We’ll see all this and more as we build our blog.

You might be wondering, “Why should I use this instead of the alternatives?” Sapper is based on Svelte, which is known for its speed and relatively small bundle size. In a world where performance plays a huge role in determining an effective user experience, we want to optimize for that. Developers today are spoiled for choice when it comes to front-end frameworks — if we want to optimize for speed, performance, and developer experience (like I do in this project), then Sapper is a solid choice!

So, let’s get started building our minimal blog, starting with our Sapper front end.

Sapper Front End

Our front end is built with Sapper, a framework for building extremely high-performance web apps using Svelte. Sapper, which is short for “Svelte app maker”, enables developers to export pages as a static website, which we will be doing today. Svelte has a very opinionated way of scaffolding projects, using Degit.

“Degit makes copies of Git repositories and fetches the latest commit in the repository. This is a more efficient approach than using git clone, because we’re not downloading the entire Git history.”

First, install Degit by running npm install -g degit in your command-line interface (CLI).

Next up, run the following commands in the CLI to set up our project.

npx degit "sveltejs/sapper-template#rollup" frontend
# or: npx degit "sveltejs/sapper-template#webpack" frontend
cd frontend
npm install
npm run dev

Note: We have the option of using either Rollup or Webpack to bundle our project. For this tutorial, we will be using Rollup.

These commands scaffold a new project in the frontend directory, install its dependencies, and start a server on localhost.

If you’re new to Sapper, the directory structure will need some explaining.

Sapper’s App Structure

If you look in the project directory, you’ll see this:

├ package.json
├ src
│ ├ routes
│ │ ├ # your routes here
│ │ ├ _error.svelte
│ │ └ index.svelte
│ ├ client.js
│ ├ server.js
│ ├ service-worker.js
│ └ template.html
├ static
│ ├ # your files here
└ rollup.config.js / webpack.config.js

Note: When you first run Sapper, it will create an additional __sapper__ directory containing generated files. You’ll also notice a few extra files and a cypress directory — we don’t need to worry about those for this article.

You will see a few files and folders. Besides those already mentioned above, these are some you can expect:

  • package.json
    This file contains your app’s dependencies and defines a number of scripts.
  • src
    This contains the three entry points for your app: src/client.js, src/server.js, and (optionally) src/service-worker.js, along with a src/template.html file.
  • src/routes
    This is the meat of the app (that is, the pages and server routes).
  • static
    This is a place to put any files that your app uses: fonts, images, and so on. For example, static/favicon.png will be served as /favicon.png.
  • rollup.config.js
    We’re using Rollup to bundle our app. You probably won’t need to change its configuration, but if you want to, this is where you would do it.

The directory structure is pretty minimal for the functionality that the project provides. Now that we have an idea of what our project directory looks like and what each file and folder does, we can run our application with npm run dev.

You should see the Svelte-esque starter home page of our blog.

Your Sapper home page. (Large preview)

This looks really good! Now that our front end is set up and working, we can move on to the back end of the application, where we will set up Strapi.

Strapi Back End

Strapi is both headless and self-hosted, which means we have control over our content and where it’s hosted — no server, language, or vendor lock-in to worry about, and we can keep our content private. Strapi is built with JavaScript and has a content editor built with React. We’ll use this content editor to create some content models and store actual content that we can query later on. But before we can do all of this, we have to set it up by following the instructions below.

1. Install Strapi and Create New Project
  • Open your CLI.
  • Run yarn create strapi-app backend --quickstart. This will create a new folder named backend and build the React admin UI.
2. Create Administrator

Create an admin account. (Large preview)

3. Create Blog Collection Type
  • Navigate to “Content-Types Builder”, under “Plugins” in the left-hand menu.
  • Click the “+ Create new collection type” link.
  • Name it “blog”.
  • Click “Continue”.
Create a new collection type. (Large preview)
  • Add a “Text field” (short text), and name it “Title”.
  • Click the “+ Add another field” button.
Create a new Text field. (Large preview)
  • Add a “Text field” (long text), and name it “Description”.
  • Click the “+ Add another field” button.
Create a new Text field. (Large preview)
  • Add a “Date field” of the type “date”, and name it “Published”.
  • Click the “+ Add another field” button.
Create a new Date field. (Large preview)
  • Add a “Rich Text field”, and name it “Body”.
  • Click the “+ Add another field” button.
Create a new Rich Text field. (Large preview)
  • Add another “Text field” (short text), and name it “Slug”.
  • Click the “+ Add another field” button.
Create a new Text field. (Large preview)
  • Add a “Relation field”.
  • On the right side of the relation, click on the arrow and select “User”.
  • On the left side of the relation, change the field name to “author”.
Create a new Relation field. (Large preview)
  • Click the “Finish” button.
  • Click the “Save” button, and wait for Strapi to restart.

When it’s finished, your collection type should look like this:

Overview of your Blog collection type. (Large preview)

4. Add a New User to “Users” Collection Type
  • Navigate to “Users” under “Collection Types” in the left-hand menu.
  • Click “Add new user”.
  • Enter your desired “Email”, “Username”, and “Password”, and toggle the “Confirmed” button.
  • Click “Save”.
Add some user content. (Large preview)

Now we have a new user to whom we can attribute articles when adding them to our “Blog” collection type.

5. Add Content to “Blogs” Collection Type
  • Navigate to “Blogs” under “Collection Types” in the left-hand menu.
  • Click “Add new blog”.
  • Fill in the information in the fields specified (you have the option to select the user whom you just created as an author).
  • Click “Save”.
Add some blog content. (Large preview)

6. Set Roles and Permissions
  • Navigate to “Roles and Permissions” under “Plugins” in the left-hand menu.
  • Click the “Public” role.
  • Scroll down under “Permissions”, and find “Blogs”.
  • Tick the boxes next to “find” and “findone”.
  • Click “Save”.
Set permissions for your Public role. (Large preview)

7. Send Requests to the Collection Types API

Navigate to http://localhost:1337/blogs to query your data.

You should get back some JSON data containing the content that we just added. For this tutorial, however, we will be using Strapi’s GraphQL API.
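The exact payload depends on the content you entered, but a response from the blogs endpoint has roughly this shape. The field names come from the collection type we defined above; the values here are hypothetical:

```javascript
// Hypothetical response from the blogs endpoint (shape only; your IDs,
// dates, and text will differ).
const response = [
  {
    id: 1,
    Title: 'my first post',
    Description: 'a short teaser for the post',
    Published: '2020-08-01',
    Body: 'the rich-text body of the post',
    Slug: 'my-first-post',
    author: { username: 'suzanne' }
  }
];

// Each entry carries everything the front end needs to build a route:
console.log(`/articles/${response[0].Slug}`); // → /articles/my-first-post
```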

To enable it:

  • Open your CLI.
  • Run cd backend to navigate to ./backend.
  • Run yarn strapi install graphql to install the GraphQL plugin.

Alternatively, you can do this:

  • In the admin UI, navigate to “Marketplace” under “General” in the left-hand menu.
  • Click “Download” on the GraphQL card.
  • Wait for Strapi to restart.
Download the GraphQL plugin. (Large preview)

When the GraphQL plugin is installed and Strapi is back up and running, we can test queries in the GraphQL playground.
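For example, a first query to try in the playground (at /graphql), using the field names from our collection type, might look like this:

```graphql
query Blogs {
  blogs {
    id
    Title
    Slug
    Published
    author {
      username
    }
  }
}
```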

That is all for our back-end setup. All that’s left for us to do is consume the GraphQL API and render all of this beautiful content.

Piecing Together Both Ends

We’ve just queried our Strapi back end and gotten back some data. All we have to do now is set up our front end to render the content that we get from Strapi via the GraphQL API. Because we are using Strapi’s GraphQL API, we will have to install the Svelte Apollo client and a few other packages to make sure everything works properly.

Installing Packages
  • Open the CLI, and navigate to ./frontend.
  • Run npm i --save apollo-boost graphql svelte-apollo moment.
Moment.js helps us to parse, validate, manipulate, and display dates and times in JavaScript.
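To illustrate the kind of work we lean on Moment.js for, here is a dependency-free sketch of relative-date display. The helper below is hypothetical and is not Moment’s API; it just shows the idea of turning a stored publication date into a human-friendly "days ago" figure:

```javascript
// Hypothetical helper (not the Moment.js API): number of whole days
// between a stored publication date and "now".
function daysSince(published, now = new Date()) {
  const msPerDay = 1000 * 60 * 60 * 24;
  return Math.floor((now - new Date(published)) / msPerDay);
}

console.log(`${daysSince('2020-08-01', new Date('2020-08-12'))} days ago`); // → 11 days ago
```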

The packages are now installed, which means we are able to make GraphQL queries in our Svelte app. The blog we’re building will have three pages: “home”, “about” and “articles”. All of our blog posts from Strapi will be displayed on the “articles” page, giving users access to each article. If we think about how that would look, our “articles” page’s route will be /articles, and then each article’s route will be /articles/:slug, where slug is what we enter in the “Slug” field when adding the content in the admin UI.

This is important to understand because we will tailor our Svelte app to work in the same way.
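As a quick sketch of that convention (with made-up slugs), each post’s slug maps straight onto its URL:

```javascript
// Hypothetical posts illustrating the slug-to-route convention described above.
const posts = [{ Slug: 'hello-sapper' }, { Slug: 'strapi-basics' }];

const routes = posts.map((post) => `/articles/${post.Slug}`);
console.log(routes.join(' ')); // → /articles/hello-sapper /articles/strapi-basics
```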

In ./frontend/src/routes, you will notice a folder named “blog”. We don’t need this folder in this tutorial, so you can delete it. Doing so will break the app, but don’t worry: It’ll be back up and running once we make our “articles” page, which we’ll do now.

  • Navigate to ./frontend/src/routes.
  • Create a folder named “articles”.
  • In ./frontend/src/routes/articles, create a file named index.svelte, and paste the following code in it.
  • When pasting, be sure to replace <Your Strapi GraphQL Endpoint> with your actual Strapi GraphQL endpoint. For your local version, this will usually be http://localhost:1337/graphql.
<script context="module">
  import ApolloClient, { gql } from 'apollo-boost';
  import moment from 'moment';

  const blogQuery = gql`
    query Blogs {
      blogs {
        id
        Title
        Description
        Published
        Body
        author {
          username
        }
        Slug
      }
    }
  `;

  export async function preload({ params, query }) {
    const client = new ApolloClient({
      uri: '<Your Strapi GraphQL Endpoint>',
      fetch: this.fetch
    });
    const results = await client.query({ query: blogQuery });
    return { posts: results.data.blogs };
  }
</script>

<script>
  export let posts;
</script>

<style>
  ul,
  p {
    margin: 0 0 1em 0;
    line-height: 1.5;
  }
  .main-title {
    font-size: 25px;
  }
</style>

<svelte:head>
  <title>articles</title>
</svelte:head>

<h1>recent posts</h1>

<ul>
  {#each posts as post}
    <li>
      <a class="main-title" rel="prefetch" href="articles/{post.Slug}">
        {post.Title}
      </a>
    </li>
    <p>
      {moment().to(post.Published, "DD-MM-YYYY")} ago by {post.author.username}
    </p>
  {/each}
</ul>

This file represents our /articles route. In the code above, we’ve imported a few packages and then used Apollo Client to make a query: blogQuery. We then stored our query response in a variable, results, and used the preload() function to process the data needed on our page. The function then returns posts, a variable with the parsed query result.

We’ve used Svelte’s #each block to loop through the data from Strapi, displaying the title, date of publication, and author. Our <a> tag, when clicked, goes to a page defined by the slug that we entered for our post in Strapi’s admin UI. This means that when the link is clicked, we open up a page for a particular article, and the slug is used to identify that article.

For our /articles/:slug route, create a file named [slug].svelte, in ./src/routes/articles, and paste the following code:

<script context="module">
  import ApolloClient, { gql } from 'apollo-boost';
  import moment from 'moment';

  const blogQuery = gql`
    query Blogs($Slug: String!) {
      blogs: blogs(where: { Slug: $Slug }) {
        id
        Title
        Description
        Published
        Body
        author {
          username
        }
        Slug
      }
    }
  `;

  export async function preload({ params, query }) {
    const client = new ApolloClient({
      uri: '<Your Strapi GraphQL Endpoint>',
      fetch: this.fetch
    });
    const results = await client.query({
      query: blogQuery,
      variables: { "Slug": params.slug }
    });
    return { post: results.data.blogs };
  }
</script>

<script>
  export let post;
</script>

<style>
  .content :global(h2) {
    font-size: 1.4em;
    font-weight: 500;
  }
  .content :global(pre) {
    background-color: #f9f9f9;
    box-shadow: inset 1px 1px 5px rgba(0,0,0,0.05);
    padding: 0.5em;
    border-radius: 2px;
    overflow-x: auto;
  }
  .content :global(pre) :global(code) {
    background-color: transparent;
    padding: 0;
  }
  .content :global(ul) {
    line-height: 1.5;
  }
  .content :global(li) {
    margin: 0 0 0.5em 0;
  }
</style>

<svelte:head>
  <title>an amazing article</title>
</svelte:head>

{#each post as post}
  <h2>{post.Title}</h2>
  <h3>{moment().to(post.Published)} by {post.author.username}</h3>
  <div class="content">
    {@html post.Body}
  </div>
{/each}

<p>⇺<a href="articles"> back to articles</a></p>

Note: In Svelte, dynamic parameters are encoded using [brackets]. Our [slug].svelte file lets us add routes for different posts dynamically.

Just like in routes/articles/index.svelte, here we’ve imported a few packages, and then used Apollo Client to make a query: blogQuery. This query is different because we’re filtering our data to make sure it returns a specific blog post. The params argument in our preload() function lets us access params.slug, which is the slug of the current page (that is, the slug of this particular blog post). We used params.slug as a variable in our GraphQL query so that only the data with a slug matching the slug of our web page is returned. We then stored our query response in a variable (results), and our preload() function returns post, a variable with the parsed query result.

Finally, we displayed our post’s title, publication date, and body (wrapped in Svelte’s {@html} tag).

That’s it. We can now dynamically display pages for any posts added to Strapi’s back end.

We can now work on the “about” and “home” pages. In ./frontend/src/routes, paste this code in the about.svelte file:

<svelte:head>
  <title>about</title>
</svelte:head>

<h1>about this site</h1>

<p>
  minimalist web design really lets the content stand out and shine. this is
  why a simple website design is the first choice of so many artists,
  photographers, and even some writers. they want their creative content to be
  the center of attention, rather than design elements created by someone else.
</p>

<p>
  this minimal blog is built with <a href="">svelte</a> and
  <a href="">strapi</a>. images by <a href="">glen carrie</a> from unsplash
</p>

For our home page, let’s go to ./frontend/src/routes and paste the following code in index.svelte:

<style>
  h1, figure, p {
    text-align: center;
    margin: 0 auto;
  }
  h1 {
    font-size: 2.8em;
    font-weight: 400;
    margin: 0 0 0.5em 0;
  }
  figure {
    margin: 0 0 1em 0;
  }
  img {
    width: 100%;
    max-width: 400px;
    margin: 0 0 1em 0;
  }
  p {
    margin: 1em auto;
    padding-bottom: 1em;
  }
  @media (min-width: 480px) {
    h1 {
      font-size: 4em;
    }
  }
</style>

<svelte:head>
  <title>a minimal sapper blog</title>
</svelte:head>

<p>welcome to</p>
<h1>the<b>blog.</b></h1>

<figure>
  <img alt="the birds on a line" src="bird-bg.png">
  <figcaption>where less is more</figcaption>
</figure>

<p>
  <strong>
    we're minimal and that might seem boring, except you're actually paying
    attention.
  </strong>
</p>

<p class="link"><a href="about">find out why</a>...</p>

We’ve created all the pages needed in order for our app to run as expected. If you run the app now, you should see something like this:

Your finished minimal blog home page. (Large preview)

Pretty sweet, yeah?

Locally, everything works great, but we want to deploy our static blog to the web and share our beautiful creation. Let’s do that.

Deploy To Netlify

We’re going to deploy our application to Netlify. Before we do, log in to your Netlify account (or create an account if you don’t already have one). Sapper gives us the option to deploy a static version of our website, and we’ll do just that.

  • Navigate to ./frontend.
  • Run npm run export to export a static version of the application.

Your application will be exported to ./frontend/__sapper__/export.

Drag your exported folder into Netlify, and your website will be live in an instant.

Drag your export folder to the Netlify Dashboard. (Large preview)

Optionally, we can deploy our website from Git by following Netlify’s documentation. Be sure to add npm run export as the build command and __sapper__/export as the base directory.
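If you prefer to keep those settings in the repository rather than in Netlify’s UI, the same configuration can live in a netlify.toml file. This is a sketch that assumes the front end is the repository root; adjust the paths if yours differs:

```toml
# Hypothetical netlify.toml mirroring the settings above
[build]
  command = "npm run export"
  publish = "__sapper__/export"
```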

We also have the option to deploy with Vercel (formerly ZEIT, as mentioned in Sapper’s documentation).


That was fun, right? We just built a static blog with Sapper and Strapi and deployed it to Netlify in less than 15 minutes. Beyond the stellar developer experience, Strapi and Sapper are a delight to work with. They bring a fresh perspective to building for the web, and this tutorial is a testament to that. We definitely aren’t limited to static websites, and I can’t wait to see what you all build after this. Share your projects with me on Twitter; I can’t wait to see them. Take care, till next time!

Resources
Categories: Design

Smart Interface Design Patterns In Your Pocket: Checklist Cards PDF

Tue, 08/04/2020 - 07:00
Vitaly Friedman 2020-08-04T14:00:00+00:00

Every UI component, no matter if it’s an accordion, a hamburger navigation, a data table, or a carousel, brings along its unique challenges. Coming up with a new solution for every problem takes time, and often it’s really not necessary. We can rely on smart design patterns and usability tests, and ask the right questions ahead of time to avoid issues down the line.

Meet "Smart Interface Design Checklists", with questions to ask when designing and building any interface component.

Meet Interface Design Patterns Checklists, a deck of 100 cards with common questions to ask while dealing with any interface challenge — from intricate data tables and web forms to troublesome hamburgers and carousels. Plus, many other components (full list ↓), explored in full detail.

Each checklist has been curated and refined for years by yours truly — all based upon usability sessions, design iterations and A/B tests. Useful for designers & front-end developers to discuss everything a component requires before starting designing or coding.

And if you’d like to dive into design patterns live, attend our upcoming online workshops on Smart Interface Design Patterns, 2020 Edition, where we’ll explore 100s of practical examples over 5×2.5h live sessions.

Workshop + Checklists: $375.00 (regular price $450.00). Attend Online Workshop

Vitaly’s 5×2.5h online workshop, with the checklists PDF, live sessions and examples.

Checklists PDF Deck: $10.00. Get Checklists PDF

DRM-free, of course. PDF.
Included with Smashing Membership.

Get the eBook

Download PDF.
Thanks for being smashing! ❤️

About The Checklists

Meet 100 checklist cards with everything you need to tackle any UI challenge — from intricate tables to troublesome carousels. Created to help us all keep track of all the fine little details to design and build better interfaces, faster. Plus, it's useful to not forget anything critical and avoid costly mistakes down the line. Check the preview.

When working on pretty much any interface problem, we sit down with designers and developers and talk about its design, markup and behavior, using checklists. The deck creates a much-needed sense of alignment, so everyone is on the same page before jumping into design or coding tools.

The deck includes checklists on:

  • designing for touch (free preview),
  • hamburger menu and accordions,
  • carousels and navigation,
  • filtering, sorting, search,
  • data tables and feature comparison,
  • pricing plans and product page,
  • sliders and video players,
  • configurators and wizards,
  • date pickers and calendars,
  • timelines, maps, seating plans,
  • privacy and authentication,
  • onboarding and offboarding,
  • reviews and testimonials,
  • video and audio players,
  • web forms and donation forms.
  • Plus, 400 practical interface examples (free preview).

Beautifully designed by our dear illustrator Ricardo Gimenes, this deck is always by your side — on your desk or on your phone when you’re on the go.

Additionally, you get practical examples, action points and the checklists in a wide resolution (16×9) for reference and presentations.

A little bonus: 400 practical examples, action points and the checklist in 16×9.

You’ll get:

  • 100 checklist cards on everything from carousels to web forms, carefully curated and designed,
  • Practical examples and action points for your reference in 16×9,
  • An editable text file to adjust for your needs,
  • Lifetime access to the deck, updated regularly.
  • Attend the online workshop or get the checklists PDF.

About the Author

Vitaly Friedman loves beautiful content and doesn’t like to give in easily. When he is not writing or speaking at a conference, he’s most probably running front-end/UX workshops and webinars. He loves solving complex UX, front-end and performance problems. Get in touch.

“Smart Interface Design Patterns, 2020 Edition”, Online Workshop with Vitaly Friedman (Sep 22 – Oct 6)

Do you want to dive deeper into the bits and pieces of smart interface design patterns? We’ll be hosting a series of online workshops, in which we’ll take a microscopic examination of common interface components and reliable solutions to get them right — both on desktop and on mobile.

We’ll study 100s of hand-picked examples and we’ll design interfaces live, from mega-dropdowns and car configurators — all the way to timelines and onboarding. And: we’ll be reviewing and providing feedback to each other’s work. Check all topics and schedule.

Vitaly's Smart Interface Design Patterns Workshop, broken down into 5×2.5h sessions, with 100s of practical examples. The workshop includes:

  • 1500+ workshop slides with practical examples and action points
  • 100 checklist cards on everything from carousels to web forms
  • Editable text file to adjust for your needs
  • Lifetime access to the deck, updated regularly
  • Live, interactive workshop sessions
  • Hands-on exercises and reviews
  • All workshop recordings
  • Dedicated Q&A time for all your questions
  • A Smashing Certificate

The workshop is delivered in five 2.5h long sessions with lots of time for you to ask all your questions. It's for interface designers, front-end designers and developers who’d love to be prepared for any challenge coming their way.

You’ll walk away with a toolbox of practical techniques for your product, website, desktop app or a mobile app.


Thank You For Your Support!

We sincerely hope that the insights you’ll gain from our little goodies will help you boost your skills while also building wonderful, new friends. A sincere thank you for your kind, ongoing support, patience and generosity — for being smashing, now and ever. ❤️

More Smashing Stuff

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Paul and Alla are some of these people. Have you checked out their books already?


Design Systems

Categories: Design

Can You Design A Website For The Five Senses?

Tue, 08/04/2020 - 01:00
Suzanne Scacca 2020-08-04T08:00:00+00:00

Maybe it’s the whiff of someone’s perfume. Or a bite of pizza from a new restaurant. Or a song playing over the loudspeakers at the store. But the second it hits, you’re immediately transported to another place, time, or mood.

Imagine if your website could evoke this kind of response. Visitors who respond to the sensory stimulation would instantly be in a more positive headspace, which they’d then associate with the site and your brand.

Just be careful if you’re going to attempt this. Not every sense-triggered memory or emotion is going to be a positive one, so you want to focus on more generalized and shared experiences that come with little risk of backfiring.

Here are some ideas to help you do this:

Designing For The Sense Of Sight

A website is a medium to be seen, so you’d assume that the sense of sight is the most powerful one to play with. But there’s a difference between a visitor taking in the photos and words on a web page and feeling something because of what they’ve seen.

The truth is, sight is the most pragmatic of the senses. Typically, what you see is what you get.

Nevertheless, it is possible to design a website so that it alters the mood of anyone who visits it.

Color theory is one tool you can use to inspire visitors to feel a certain way based on what they see. However, that can be problematic as colors often have multiple meanings not just across cultures but within them as well. So, while you might think you’re making visitors feel happier with bright yellow hues, it could instead be making them feel overwhelmed and anxious.

What I’d suggest you focus on is how to use visuals to create an immersive experience that transports your visitors to another place or time. They shouldn’t need to look past the homepage for it either (though it’s a good idea if you can make it extend across the site).

Travel and hospitality sites have a tendency to do this well. Let’s look at an example.

Visit Philly is a tourism site I like to use to find things to do around the city. And that’s because this is how most of the pages on the site are designed:

The homepage of the Visit Philly website welcomes people to the City of Brotherly and Sisterly Love. (Source: Visit Philly) (Large preview)

Each page feels like a physically immersive experience without forcing visitors to watch a background video or scroll through a carousel of photos. Instead, each full-sized image perfectly encapsulates the setting that awaits each visitor.

Unlike staged, overly manipulated, or stock photos that portray an unrealistic reality, this kind of content is hard for visitors to ignore. Because it’s real, it’s also easy for them to put themselves in the shoes of the people they’re looking at: the man and his dog, the family going for a walk, or the people enjoying the spectacle that is Spruce Street Harbor Park:

A page dedicated to Spruce Street Harbor Park in Philadelphia. (Source: Visit Philly) (Large preview)

For people who’ve been to Philadelphia, the visuals on this site are likely to lure them back to the good times they had. And for people who are new to the city, the oversized visuals that show off the city’s hotspots enable them to picture what it’s like to go, which is a very effective way to sell someone on an experience.

Designing For The Sense Of Smell

Back in elementary school, our teachers would reward us with good scores on homework and tests with a handwritten note like “Great work!” and a scratch-n-sniff sticker. Like these ones available on Etsy:

A pack of 1970s Scratch ‘N’ Sniff stickers are available through Etsy. (Source: Etsy) (Large preview)

If you don’t know what these are, the name says it all. You scratch your nail against the sticker and it smells just like the picture on it.

Looking at this photo, I can still smell the “Berry Good” strawberry. This is going to sound crazy, but it reminds me of success. I don’t know if my teachers had an entire pack of the strawberry stickers, but it’s the one I got most frequently. And so I guess that’s why I associate it with good grades just by looking at it.

This is what you want to aim for with your website. You want to depict some recognizable scent in a way that the majority (if not all) of your visitors instantly feel good.

For example:

A used bookstore or library website with imagery that depicts rows upon rows of old books like the Providence Athenaeum:

The Providence Athenaeum homepage 'smells' like old books. (Source: Providence Athenaeum) (Large preview)

Voracious readers will definitely be able to smell the athenaeum and its old collection of books through this photo.

Or how about a company that’s known for making cleaning products like Tide?

The Tide homepage includes a couple images of freshly cleaned and folded laundry. (Source: Tide) (Large preview)

Even if you don’t use Tide to do your laundry, you know exactly what the first image in this carousel is going to smell like.

Fresh laundry is the scent of cleanliness and comfort. I’d also argue that it’s the scent of satisfaction because nothing feels better than getting laundry done and over with.

I also really like what Coffee Culture Cafe & Eatery has done with its homepage video:

Coffee Culture Cafe puts its coffee beans on full display. (Source: Coffee Culture Cafe & Eatery) (Large preview)

It’s not abnormal for a restaurant or cafe to show photos of its food or drinks. However, this is the raw product: the coffee beans. And as any coffee drinker can attest, this photo smells delicious and is sure to awaken their anticipation of ordering a first cup.

Designing For The Sense Of Sound

I was driving to the beach over the weekend when a song came on that made me smile. It was “What I Got” by Sublime, a song I listened to many times when making the long drive to the beach with friends in college.

Although I was alone on this particular trip and headed to a different beach, there was something about that song that instantly transformed my mood. The stress I was feeling about work melted away and all I could focus on was how good it was going to feel to spend the day in the sunshine by the water.

That’s something that the right sound can do. It can pull us out of the present and take us back to a memory of the past. Or it can overwhelm us with emotion that has no real grounding in the moment and, yet, there it is.

It doesn’t have to be music. And, honestly, on a website, it shouldn’t be. But that doesn’t mean you can’t still appeal to visitors’ ears through design.

There are two ways to approach this.

The first is to include imagery that depicts a predictable sound and one that brings joy (or whatever positive emotion you’re shooting for). Take the website for Kindermusik:

Kindermusik website shows how kids learn through music. (Source: Kindermusik) (Large preview)

For many of us, the xylophone was one of the first instruments we were introduced to as kids. So, it’s hard to imagine anyone seeing the top photo and not immediately hear the sounds of a kid banging away on the bars.

And while this school is all about providing kids with music-based education, not every image sounds like music. For instance, the Benefits page has this photo at the top:

A baby laughs at the top of the Kindermusik Benefits page. (Source: Kindermusik) (Large preview)

There might not be any sound coming through this site, but we all know the distinctive sound of a baby’s laugh. For parents trying to decide where to have their kids educated, they’ll be pleased to hear the sound of a child laughing as they visit this web page.

The other way to approach sound in design is to remove it entirely from the experience.

This works well for places like Scandinave Spa where customers come to enjoy the solitude and silence as they retreat from pressures of society, work, life and so on:

The Scandinave Spa uses images that sound quiet. (Source: Scandinave Spa) (Large preview)

If a lack of noise is what makes the real-life experience so valuable, then choosing images that represent this is really important. So, obviously, sites like these won’t have images of people standing around talking, nor will they incorporate bright or flashy lights.

A calm experience is best depicted by an absence of noise, movement and distraction.

Designing For The Sense Of Taste

Dr. Bence Nanay wrote about the cooking show paradox on Psychology Today, debating why it is that so many people enjoy watching someone else cook. It can’t possibly be to learn how to cook as recipes are everywhere online. And while they might enjoy the competitive aspect of some of these shows, he suggests the main reason is this:

Watching cooking shows is eating vicariously in the most literal sense possible — we get mental imagery of tasting and smelling the food without actually tasting or smelling it.

I think we can expand a bit further on Nanay’s argument.

I also believe that people become more invested in an outcome when they get to see the process of it being made. This isn’t something we’re usually allowed to see as consumers. We go out to eat and the food is sitting there on a plate for us. So, there’s something about the build-up of watching food or drinks being made that adds something extra to the experience.

That’s why I don’t think it’s enough to just use static photos of a restaurant’s dishes or food company’s products on a website. Not if you want to deeply connect to the visitors’ sense of taste, that is.

For instance, this is the video that’s embedded into the top of Sweet Charlie’s website:

A 30-second video that depicts the process of creating rolled ice cream before putting it into the hands of a very satisfied customer.

If you’ve ever been to a shop that makes rolled ice cream, you know just how enjoyable it is to watch this process in person. So, for a website to recreate that process — especially for first-time customers wondering what the heck rolled ice cream is — it’s a brilliant move.

Designing For The Sense Of Touch

The sense of touch is a relatively easy one to depict on the web. We see it all the time on ecommerce sites that allow shoppers to zoom in on fabrics and get a sense for what they feel like to touch or wear.

Like the zoom-in capabilities Anthropologie gives its shoppers:

Anthropologie allows shoppers to zoom in on each item in its online store. (Source: Anthropologie) (Large preview)

But this is nothing more than window-shopping done virtually. Anyone could walk through a store and brush their hands through racks of clothes. Don’t get me wrong, it’s a necessary functionality. However, this isn’t how we use the sense of touch to create a deeper connection with visitors.

We need to play around with more drastic tactile sensations.

One element you might want to fixate on is temperature. If there’s a heating or cooling element, work with that.

Another element you can play with is the feeling or pressure of human touch. There are many applications for this, though it’s particularly useful for websites that advertise therapeutic services.

For instance, this is how Massage Envy invites people to its spa services:

The Massage Envy website uses images of people getting massages instead of just empty massage rooms and tables. (Source: Massage Envy) (Large preview)

This is a good start. Some spa and massage websites just show images of empty massage rooms and tables. At least here prospective customers can kind of see the massage process.

I don’t think there’s much to feel here though as the positioning of the masseuse and pressure being applied seem unrealistic. I’m guessing it’s a stock photo chosen for its symmetry, color and attractiveness.

But there are ways to capture the tangible experience while still making it look good for a website. For instance, the Bodhi Spa uses a video to take visitors through various services they can experience — alone or with others — at the space.

At one point, they’re shown someone getting a massage:

The video on the home page of the Bodhi Spa website shows a woman receiving a hands-on massage. (Source: Bodhi Spa) (Large preview)

Notice the symmetry of this screengrab and how it doesn’t have to come at the expense of authenticity. Plus, the video shows customers how the massage feels as the masseuse applies pressure and moves her hands around the woman’s neck and head.

Visitors then get to see a couple enjoying the benefits of hydrotherapy, with the woman picking up a handful of salt and placing it in the pool they’re in:

The Bodhi Spa intro video shows a couple enjoying the benefits of salt therapy. (Source: Bodhi Spa) (Large preview)

There’s a lot to touch in this video. The fine salt crystals. The warm water in the hydrotherapy tub. And, shortly after this screengrab, the couple holds hands as they enter the cool-down room.

A copywriter can certainly convey a lot of the feel-good benefits of something like this, but it’s also effective to let visitors see it with their own two eyes and experience it vicariously through others.

Wrapping Up

While I think that trying to design for all five senses at once would lead to sensory overload, choosing one particularly potent sense to design for is a great idea.

If you can find a way to recreate that sense through your site, your visitors may experience:

  • Heightened awareness,
  • Positive and happier thoughts than before they entered the site,
  • A greater connection to the bigger picture and a willingness to take next steps on the site.

We often focus on how to connect to our audience through their pain, but why don’t we focus on connecting through happiness for a change? I think we could all use a little more of that these days.

(ra, yk, il)
Categories: Design

Setting Up Redux For Use In A Real-World Application

Mon, 08/03/2020 - 04:00
Jerry Navi 2020-08-03T11:00:00+00:00

Redux is an important library in the React ecosystem, and almost the default choice for React applications that involve state management. As such, the importance of knowing how it works cannot be overstated.

This guide will walk the reader through setting up Redux in a fairly complex React application and introduce the reader to “best practices” configuration along the way. It will be beneficial to beginners especially, and anyone who wants to fill in the gaps in their knowledge of Redux.

Introducing Redux

Redux is a library that aims to solve the problem of state management in JavaScript apps by imposing restrictions on how and when state updates can happen. These restrictions are formed from Redux’s “three principles” which are:

  • Single source of truth
    All of your application’s state is held in a Redux store. This state can be represented visually as a tree with a single ancestor, and the store provides methods for reading the current state and subscribing to changes from anywhere within your app.

  • State is read-only
    The only way to change the state is to send the data as a plain object, called an action. You can think about actions as a way of saying to the state, “I have some data I would like to insert/update/delete”.

  • Changes are made with pure functions
    To change your app’s state, you write a function that takes the previous state and an action and returns a new state object as the next state. This function is called a reducer, and it is a pure function because it returns the same output for a given set of inputs.

The last principle is the most important in Redux, and this is where the magic of Redux happens. Reducer functions must not contain unpredictable code, or perform side-effects such as network requests, and should not directly mutate the state object.
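To make the third principle concrete, here is a minimal, self-contained sketch of a pure reducer (a hypothetical counter, not part of the diaries app we are about to build): given the same state and action it always returns the same result, and it never mutates its input.

```typescript
// A hypothetical counter reducer, illustrating the "changes are made
// with pure functions" principle. It returns a *new* object rather
// than mutating the previous state.
interface CounterState {
  count: number;
}

interface Action {
  type: string;
  payload?: number;
}

const counterReducer = (
  state: CounterState = { count: 0 },
  action: Action
): CounterState => {
  switch (action.type) {
    case 'increment':
      return { ...state, count: state.count + (action.payload ?? 1) };
    case 'decrement':
      return { ...state, count: state.count - (action.payload ?? 1) };
    default:
      // Unknown actions must return the state unchanged.
      return state;
  }
};

const prev: CounterState = { count: 1 };
const next = counterReducer(prev, { type: 'increment', payload: 2 });
```

Because the reducer never mutates `prev`, Redux (and its dev tools) can compare old and new state by reference and replay actions deterministically.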

Redux is a great tool, as we’ll learn later in this guide, but it doesn’t come without its challenges or tradeoffs. To help make the process of writing Redux efficient and more enjoyable, the Redux team offers a toolkit that abstracts over the process of setting up a Redux store and provides helpful Redux add-ons and utilities that help to simplify application code. For example, the library uses Immer.js, a library that makes it possible for you to write “mutative” immutable update logic, under the hood.

Recommended reading: Better Reducers With Immer

In this guide, we will explore Redux by building an application that lets authenticated users create and manage digital diaries.


As stated in the previous section, we will be taking a closer look at Redux by building an app that lets users create and manage diaries. We will be building our application using React, and we’ll set up Mirage as our API mocking server since we won’t have access to a real server in this guide.

Starting a Project and Installing Dependencies

Let’s get started on our project. First, bootstrap a new React application using create-react-app:

Using npx:

npx create-react-app diaries-app --template typescript

We are starting with the TypeScript template, as we can improve our development experience by writing type-safe code.

Now, let’s install the dependencies we’ll be needing. Navigate into your newly created project directory

cd diaries-app

And run the following commands:

npm install --save redux react-redux @reduxjs/toolkit
npm install --save axios react-router-dom react-hook-form yup dayjs markdown-to-jsx sweetalert2
npm install --save-dev miragejs @types/react-redux @types/react-router-dom @types/yup @types/markdown-to-jsx

The first command will install Redux, React-Redux (official React bindings for Redux), and the Redux toolkit.

The second command installs some extra packages which will be useful for the app we’ll be building but are not required to work with Redux.

The last command installs Mirage and type declarations for the packages we installed as devDependencies.

Describing the Application’s Initial State

Let’s go over our application’s requirements in detail. The application will allow authenticated users to create or modify existing diaries. Diaries are private by default, but they can be made public. Finally, diary entries will be sorted by their last modified date.

This relationship should look something like this:

An Overview of the Application’s Data Model. (Large preview)

Armed with this information, we can now model our application’s state. First, we will create an interface for each of the following resources: User, Diary and DiaryEntry. Interfaces in TypeScript describe the shape of an object.

Go ahead and create a new directory named interfaces in your app’s src sub-directory:

cd src && mkdir interfaces

Next, run the following commands in the directory you just created:

touch entry.interface.ts
touch diary.interface.ts
touch user.interface.ts

This will create three files named entry.interface.ts, diary.interface.ts and user.interface.ts respectively. I prefer to keep interfaces that would be used in multiple places across my app in a single location.

Open entry.interface.ts and add the following code to set up the Entry interface:

export interface Entry {
  id?: string;
  title: string;
  content: string;
  createdAt?: string;
  updatedAt?: string;
  diaryId?: string;
}

A typical diary entry will have a title and some content, as well as information about when it was created or last updated. We’ll get back to the diaryId property later.

Next, add the following to diary.interface.ts:

export interface Diary {
  id?: string;
  title: string;
  type: 'private' | 'public';
  createdAt?: string;
  updatedAt?: string;
  userId?: string;
  entryIds: string[] | null;
}

Here, we have a type property which expects an exact value of either ‘private’ or ‘public’, as diaries must be either private or public. Any other value will throw an error in the TypeScript compiler.
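As a quick illustration of how such a literal union behaves (hypothetical names, not app code), the compiler accepts only the two listed strings, and we can still branch on the value at runtime:

```typescript
// The literal union restricts the value to exactly these two strings.
type DiaryType = 'private' | 'public';

const visible: DiaryType = 'public';    // OK
// const broken: DiaryType = 'secret';  // Compile error: not assignable to DiaryType

// At runtime we can branch on the value safely:
const isShared = (t: DiaryType): boolean => t === 'public';
```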

We can now describe our User object in the user.interface.ts file as follows:

export interface User {
  id?: string;
  username: string;
  email: string;
  password?: string;
  diaryIds: string[] | null;
}

With our type definitions finished and ready to be used across our app, let’s set up our mock API server using Mirage.

Setting up API Mocking with MirageJS

Since this tutorial is focused on Redux, we will not go into the details of setting up and using Mirage in this section. Please check out this excellent series if you would like to learn more about Mirage.

To get started, navigate to your src directory and create a file named server.ts by running the following commands:

mkdir -p services/mirage
cd services/mirage

# ~/diaries-app/src/services/mirage
touch server.ts

Next, open the server.ts file and add the following code:

import { Server, Model, Factory, belongsTo, hasMany, Response } from 'miragejs';

export const handleErrors = (error: any, message = 'An error occurred') => {
  return new Response(400, undefined, {
    data: {
      message,
      isError: true,
    },
  });
};

export const setupServer = (env?: string): Server => {
  return new Server({
    environment: env ?? 'development',

    models: {
      entry: Model.extend({
        diary: belongsTo(),
      }),
      diary: Model.extend({
        entry: hasMany(),
        user: belongsTo(),
      }),
      user: Model.extend({
        diary: hasMany(),
      }),
    },

    factories: {
      user: Factory.extend({
        username: 'test',
        password: 'password',
        email: '[email protected]',
      }),
    },

    seeds: (server): any => {
      server.create('user');
    },

    routes(): void {
      this.urlPrefix = '';
    },
  });
};

In this file, we are exporting two functions: a utility function for handling errors, and setupServer(), which returns a new server instance. The setupServer() function takes an optional argument which can be used to change the server’s environment. You can use this to set up Mirage for testing later.

We have also defined three models in the server’s models property: User, Diary and Entry. Remember that earlier we set up the Entry interface with a property named diaryId. This value will be automatically set to the id of the diary the entry is being saved to. Mirage uses this property to establish a relationship between an Entry and a Diary. The same thing also happens when a user creates a new diary: userId is automatically set to that user’s id.

We seeded the database with a default user and configured Mirage to intercept all requests from our app that match the URL prefix set in routes(). Notice that we haven’t configured any route handlers yet. Let’s go ahead and create a few.

Ensure that you are in the src/services/mirage directory, then create a new directory named routes using the following command:

# ~/diaries-app/src/services/mirage
mkdir routes

cd to the newly created directory and create a file named user.ts:

cd routes
touch user.ts

Next, paste the following code in the user.ts file:

import { Response, Request } from 'miragejs';
import { handleErrors } from '../server';
import { User } from '../../../interfaces/user.interface';
import { randomBytes } from 'crypto';

const generateToken = () => randomBytes(8).toString('hex');

export interface AuthResponse {
  token: string;
  user: User;
}

const login = (schema: any, req: Request): AuthResponse | Response => {
  const { username, password } = JSON.parse(req.requestBody);
  const user = schema.users.findBy({ username });
  if (!user) {
    return handleErrors(null, 'No user with that username exists');
  }
  if (password !== user.password) {
    return handleErrors(null, 'Password is incorrect');
  }
  const token = generateToken();
  return {
    user: user.attrs as User,
    token,
  };
};

const signup = (schema: any, req: Request): AuthResponse | Response => {
  const data = JSON.parse(req.requestBody);
  const exUser = schema.users.findBy({ username: data.username });
  if (exUser) {
    return handleErrors(null, 'A user with that username already exists.');
  }
  const user = schema.users.create(data);
  const token = generateToken();
  return {
    user: user.attrs as User,
    token,
  };
};

export default {
  login,
  signup,
};

The login and signup methods here receive a Schema class and a fake Request object. After validating the password, or checking that the username is not already taken, they return the existing user or a new user respectively. We use the Schema object to interact with Mirage’s ORM, while the Request object contains information about the intercepted request, including the request body and headers.
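The mock token comes from Node’s built-in crypto module: randomBytes(8) returns 8 random bytes, so the resulting hex string is always 16 characters long. A standalone sketch of that helper (this is only a fake token for Mirage; a real API would issue a signed token such as a JWT instead):

```typescript
import { randomBytes } from 'crypto';

// Same shape as the helper in user.ts: 8 random bytes
// serialized as hex, i.e. a 16-character token string.
const generateToken = (): string => randomBytes(8).toString('hex');

const token = generateToken();
```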

Next, let’s add methods for working with diaries and diary entries. Create a file named diary.ts in your routes directory:

touch diary.ts

Update the file with the following methods for working with Diary resources:

export const create = (
  schema: any,
  req: Request
): { user: User; diary: Diary } | Response => {
  try {
    const { title, type, userId } = JSON.parse(req.requestBody) as Partial<Diary>;
    const exUser = schema.users.findBy({ id: userId });
    if (!exUser) {
      return handleErrors(null, 'No such user exists.');
    }
    const now = dayjs().format();
    const diary = exUser.createDiary({
      title,
      type,
      createdAt: now,
      updatedAt: now,
    });
    return {
      user: {
        ...exUser.attrs,
      },
      diary: diary.attrs,
    };
  } catch (error) {
    return handleErrors(error, 'Failed to create Diary.');
  }
};

export const updateDiary = (schema: any, req: Request): Diary | Response => {
  try {
    const diary = schema.diaries.find(;
    const data = JSON.parse(req.requestBody) as Partial<Diary>;
    const now = dayjs().format();
    diary.update({
      ...data,
      updatedAt: now,
    });
    return diary.attrs as Diary;
  } catch (error) {
    return handleErrors(error, 'Failed to update Diary.');
  }
};

export const getDiaries = (schema: any, req: Request): Diary[] | Response => {
  try {
    const user = schema.users.find(;
    return user.diary as Diary[];
  } catch (error) {
    return handleErrors(error, 'Could not get user diaries.');
  }
};

Next, let’s add some methods for working with diary entries:

export const addEntry = (
  schema: any,
  req: Request
): { diary: Diary; entry: Entry } | Response => {
  try {
    const diary = schema.diaries.find(;
    const { title, content } = JSON.parse(req.requestBody) as Partial<Entry>;
    const now = dayjs().format();
    const entry = diary.createEntry({
      title,
      content,
      createdAt: now,
      updatedAt: now,
    });
    diary.update({
      ...diary.attrs,
      updatedAt: now,
    });
    return {
      diary: diary.attrs,
      entry: entry.attrs,
    };
  } catch (error) {
    return handleErrors(error, 'Failed to save entry.');
  }
};

export const getEntries = (
  schema: any,
  req: Request
): { entries: Entry[] } | Response => {
  try {
    const diary = schema.diaries.find(;
    return diary.entry;
  } catch (error) {
    return handleErrors(error, 'Failed to get Diary entries.');
  }
};

export const updateEntry = (schema: any, req: Request): Entry | Response => {
  try {
    const entry = schema.entries.find(;
    const data = JSON.parse(req.requestBody) as Partial<Entry>;
    const now = dayjs().format();
    entry.update({
      ...data,
      updatedAt: now,
    });
    return entry.attrs as Entry;
  } catch (error) {
    return handleErrors(error, 'Failed to update entry.');
  }
};

Finally, let’s add the necessary imports at the top of the file:

import { Response, Request } from 'miragejs';
import { handleErrors } from '../server';
import { Diary } from '../../../interfaces/diary.interface';
import { Entry } from '../../../interfaces/entry.interface';
import dayjs from 'dayjs';
import { User } from '../../../interfaces/user.interface';

In this file, we have exported methods for working with the Diary and Entry models. In the create method, we call a method named user.createDiary() to save a new diary and associate it to a user account.

The addEntry and updateEntry methods create and correctly associate a new entry to a diary or update an existing entry’s data respectively. The latter also updates the entry’s updatedAt property with the current timestamp. The updateDiary method also updates a diary with the timestamp the change was made. Later, we’ll be sorting the records we receive from our network request with this property.

We also have a getDiaries method which retrieves a user’s diaries and a getEntries method which retrieves a selected diary’s entries.

We can now update our server to use the methods we just created. Open server.ts to include the files:

import { Server, Model, Factory, belongsTo, hasMany, Response } from 'miragejs';
import user from './routes/user';
import * as diary from './routes/diary';

Then, update the server’s routes() method with the routes we want to handle:

export const setupServer = (env?: string): Server => {
  return new Server({
    // ...
    routes(): void {
      this.urlPrefix = '';
      this.get('/diaries/entries/:id', diary.getEntries);
      this.get('/diaries/:id', diary.getDiaries);'/auth/login', user.login);'/auth/signup', user.signup);'/diaries/', diary.create);'/diaries/entry/:id', diary.addEntry);
      this.put('/diaries/entry/:id', diary.updateEntry);
      this.put('/diaries/:id', diary.updateDiary);
    },
  });
};

With this change, when a network request from our app matches one of the route handlers, Mirage intercepts the request and invokes the respective route handler functions.

Next, we’ll proceed to make our application aware of the server. Open src/index.tsx and import the setupServer() method:

import { setupServer } from './services/mirage/server';

And add the following code before ReactDOM.render():

if (process.env.NODE_ENV === 'development') {
  setupServer();
}

The check in the code block above ensures that our Mirage server will run only while we are in development mode.

One last thing we need to do before moving on to the Redux bits is configure a custom Axios instance for use in our app. This will help to reduce the amount of code we’ll have to write later on.

Create a file named api.ts under src/services and add the following code to it:

import axios, { AxiosInstance, AxiosResponse, AxiosError } from 'axios';
import { showAlert } from '../util';

const http: AxiosInstance = axios.create({
  baseURL: '',
});

['Content-Type'] = 'application/json';

http.interceptors.response.use(
  async (response: AxiosResponse): Promise<any> => {
    if (response.status >= 200 && response.status < 300) {
      return;
    }
  },
  (error: AxiosError) => {
    const {
      response,
      request,
    }: {
      response?: AxiosResponse;
      request?: XMLHttpRequest;
    } = error;
    if (response) {
      if (response.status >= 400 && response.status < 500) {
        showAlert(, 'error');
        return null;
      }
    } else if (request) {
      showAlert('Request failed. Please try again.', 'error');
      return null;
    }
    return Promise.reject(error);
  }
);

export default http;

In this file, we are exporting an Axios instance configured with our app’s API URL as its baseURL. We have set up an interceptor to handle success and error responses, and we display error messages using a sweetalert2 toast, which we will configure in the next step.
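The branching in the interceptor boils down to simple status-code ranges. As a hypothetical standalone helper (classifyStatus is not part of the app’s code), the same decision logic looks like this:

```typescript
// Mirrors the interceptor's ranges: 2xx responses resolve with their
// data, 4xx responses trigger an error toast, and anything else falls
// through to Promise.reject().
type Outcome = 'success' | 'client-error' | 'reject';

const classifyStatus = (status: number): Outcome => {
  if (status >= 200 && status < 300) return 'success';
  if (status >= 400 && status < 500) return 'client-error';
  return 'reject';
};
```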

Create a file named util.ts in your src directory and paste the following code in it:

import Swal, { SweetAlertIcon } from 'sweetalert2';

export const showAlert = (
  titleText = 'Something happened.',
  alertType?: SweetAlertIcon
): void => {{
    titleText,
    position: 'top-end',
    timer: 3000,
    timerProgressBar: true,
    toast: true,
    showConfirmButton: false,
    showCancelButton: true,
    cancelButtonText: 'Dismiss',
    icon: alertType,
    showClass: {
      popup: 'swal2-noanimation',
      backdrop: 'swal2-noanimation',
    },
    hideClass: {
      popup: '',
      backdrop: '',
    },
  });
};

This file exports a function that displays a toast whenever it is invoked. The function accepts parameters that let you set the toast message and type. For example, we show an error toast in the Axios response error interceptor like this:

showAlert(, 'error');

Now when we make requests from our app while in development mode, they will be intercepted and handled by Mirage instead. In the next section, we will set up our Redux store using Redux toolkit.

Setting up a Redux Store

In this section, we are going to set up our store using the following exports from Redux toolkit: configureStore(), getDefaultMiddleware() and createSlice(). Before we start, we should take a detailed look at what these exports do.

configureStore() is an abstraction over the Redux createStore() function that helps simplify your code. It uses createStore() internally to set up your store with some useful development tools:

export const store = configureStore({
  reducer: rootReducer, // a single reducer function or an object of slice reducers
});

The createSlice() function helps simplify the process of creating action creators and slice reducers. It accepts an initial state, an object full of reducer functions, and a “slice name”, and automatically generates action creators and action types corresponding to the reducers and your state. It also returns a single reducer function, which can be passed to Redux’s combineReducers() function as a “slice reducer”.

Remember that the state is a single tree, and a single root reducer manages changes to that tree. For maintainability, it is recommended to split your root reducer into “slices,” and have a “slice reducer” provide an initial value and calculate the updates to a corresponding slice of the state. These slices can be joined into a single reducer function by using combineReducers().

There are additional options for configuring the store. For example, you can pass an array of your own middleware to configureStore() or start up your app from a saved state using the preloadedState option. When you supply the middleware option, you have to define all the middleware you want added to the store. If you would like to retain the defaults when setting up your store, you can use getDefaultMiddleware() to get the default list of middleware:

export const store = configureStore({
  // ...
  middleware: [...getDefaultMiddleware(), customMiddleware],
});

Let’s now proceed to set up our store. We will adopt a “ducks-style” approach to structuring our files, specifically following the guidelines in practice from the GitHub Issues sample app. We will organize our code such that related components, as well as actions and reducers, live in the same directory. The final state object will look like this:

type RootState = {
  auth: {
    token: string | null;
    isAuthenticated: boolean;
  };
  diaries: Diary[];
  entries: Entry[];
  user: User | null;
  editor: {
    canEdit: boolean;
    currentlyEditing: Entry | null;
    activeDiaryId: string | null;
  };
};

To get started, create a new directory named features under your src directory:

# ~/diaries-app/src
mkdir features

Then, cd into features and create directories named auth, diary and entry:

cd features
mkdir auth diary entry

cd into the auth directory and create a file named authSlice.ts:

cd auth

# ~/diaries-app/src/features/auth
touch authSlice.ts

Open the file and paste the following in it:

import { createSlice, PayloadAction } from '@reduxjs/toolkit';

interface AuthState {
  token: string | null;
  isAuthenticated: boolean;
}

const initialState: AuthState = {
  token: null,
  isAuthenticated: false,
};

const auth = createSlice({
  name: 'auth',
  initialState,
  reducers: {
    saveToken(state, { payload }: PayloadAction<string>) {
      if (payload) {
        state.token = payload;
      }
    },
    clearToken(state) {
      state.token = null;
    },
    setAuthState(state, { payload }: PayloadAction<boolean>) {
      state.isAuthenticated = payload;
    },
  },
});

export const { saveToken, clearToken, setAuthState } = auth.actions;
export default auth.reducer;

In this file, we’re creating a slice for the auth property of our app’s state using the createSlice() function introduced earlier. The reducers property holds a map of reducer functions for updating values in the auth slice. The returned object contains automatically generated action creators and a single slice reducer. We would need to use these in other files so, following the “ducks pattern”, we do named exports of the action creators, and a default export of the reducer function.

Let’s set up the remaining reducer slices according to the app state we saw earlier. First, create a file named userSlice.ts in the auth directory and add the following code to it:

import { createSlice, PayloadAction } from '@reduxjs/toolkit';
import { User } from '../../interfaces/user.interface';

const user = createSlice({
  name: 'user',
  initialState: null as User | null,
  reducers: {
    setUser(state, { payload }: PayloadAction<User | null>) {
      return state = (payload != null) ? payload : null;
    },
  },
});

export const { setUser } = user.actions;
export default user.reducer;

This creates a slice reducer for the user property in our application’s store. The setUser reducer function accepts a payload containing user data and updates the state with it. When no data is passed, we set the state’s user property to null.

Next, create a file named diariesSlice.ts under src/features/diary:

# ~/diaries-app/src/features
cd diary
touch diariesSlice.ts

Add the following code to the file:

import { createSlice, PayloadAction } from '@reduxjs/toolkit';
import { Diary } from '../../interfaces/diary.interface';

const diaries = createSlice({
  name: 'diaries',
  initialState: [] as Diary[],
  reducers: {
    addDiary(state, { payload }: PayloadAction<Diary[]>) {
      const diariesToSave = payload.filter((diary) => {
        return state.findIndex((item) => === === -1;
      });
      state.push(...diariesToSave);
    },
    updateDiary(state, { payload }: PayloadAction<Diary>) {
      const { id } = payload;
      const diaryIndex = state.findIndex((diary) => === id);
      if (diaryIndex !== -1) {
        state.splice(diaryIndex, 1, payload);
      }
    },
  },
});

export const { addDiary, updateDiary } = diaries.actions;
export default diaries.reducer;

The “diaries” property of our state is an array containing the user’s diaries, so our reducer functions here all work on the state object they receive using array methods. Notice that we are writing normal “mutative” code when working on the state. This is possible because the reducer functions we create with the createSlice() method are wrapped with Immer’s produce() method. As a result, Immer returns a correctly and immutably updated state even though we write mutative code.

Next, create a file named entriesSlice.ts under src/features/entry:

# ~/diaries-app/src/features
mkdir entry
cd entry
touch entriesSlice.ts

Open the file and add the following code:

import { createSlice, PayloadAction } from '@reduxjs/toolkit';
import { Entry } from '../../interfaces/entry.interface';

const entries = createSlice({
  name: 'entries',
  initialState: [] as Entry[],
  reducers: {
    setEntries(state, { payload }: PayloadAction<Entry[] | null>) {
      return (state = payload != null ? payload : []);
    },
    updateEntry(state, { payload }: PayloadAction<Entry>) {
      const { id } = payload;
      const index = state.findIndex((e) => === id);
      if (index !== -1) {
        state.splice(index, 1, payload);
      }
    },
  },
});

export const { setEntries, updateEntry } = entries.actions;
export default entries.reducer;

The reducer functions here have logic similar to the previous slice’s reducer functions. The entries property is also an array, but it only holds entries for a single diary. In our app, this will be the diary currently in the user’s focus.

Finally, create a file named editorSlice.ts in src/features/entry and add the following to it:

import { createSlice, PayloadAction } from '@reduxjs/toolkit';
import { Entry } from '../../interfaces/entry.interface';

interface EditorState {
  canEdit: boolean;
  currentlyEditing: Entry | null;
  activeDiaryId: string | null;
}

const initialState: EditorState = {
  canEdit: false,
  currentlyEditing: null,
  activeDiaryId: null,
};

const editor = createSlice({
  name: 'editor',
  initialState,
  reducers: {
    setCanEdit(state, { payload }: PayloadAction<boolean>) {
      state.canEdit = payload != null ? payload : !state.canEdit;
    },
    setCurrentlyEditing(state, { payload }: PayloadAction<Entry | null>) {
      state.currentlyEditing = payload;
    },
    setActiveDiaryId(state, { payload }: PayloadAction<string>) {
      state.activeDiaryId = payload;
    },
  },
});

export const { setCanEdit, setCurrentlyEditing, setActiveDiaryId } = editor.actions;
export default editor.reducer;

Here, we have a slice for the editor property in state. We’ll be using the properties in this object to check if the user wants to switch to editing mode, which diary the edited entry belongs to, and what entry is going to be edited.

To put it all together, create a file named rootReducer.ts in the src directory with the following content:

import { combineReducers } from '@reduxjs/toolkit';
import authReducer from './features/auth/authSlice';
import userReducer from './features/auth/userSlice';
import diariesReducer from './features/diary/diariesSlice';
import entriesReducer from './features/entry/entriesSlice';
import editorReducer from './features/entry/editorSlice';

const rootReducer = combineReducers({
  auth: authReducer,
  diaries: diariesReducer,
  entries: entriesReducer,
  user: userReducer,
  editor: editorReducer,
});

export type RootState = ReturnType<typeof rootReducer>;

export default rootReducer;

In this file, we’ve combined our slice reducers into a single root reducer with the combineReducers() function. We’ve also exported the RootState type, which will be useful later when we’re selecting values from the store. We can now use the root reducer (the default export of this file) to set up our store.

Create a file named store.ts with the following contents:

import { configureStore } from '@reduxjs/toolkit';
import { useDispatch } from 'react-redux';
import rootReducer from './rootReducer';

const store = configureStore({
  reducer: rootReducer,
});

type AppDispatch = typeof store.dispatch;
export const useAppDispatch = () => useDispatch<AppDispatch>();

export default store;

With this, we’ve created a store using the configureStore() export from Redux toolkit. We’ve also exported a hook called useAppDispatch(), which simply returns a typed version of the useDispatch() hook.

Next, update the imports in index.tsx to look like the following:

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './app/App';
import * as serviceWorker from './serviceWorker';
import { setupServer } from './services/mirage/server';
import { Provider } from 'react-redux';
import store from './store';

// ...

Finally, make the store available to the app’s components by wrapping <App /> (the top-level component) with <Provider />:

ReactDOM.render( <React.StrictMode> <Provider store={store}> <App /> </Provider> </React.StrictMode>, document.getElementById('root') );

Now, if you start your app and you navigate to http://localhost:3000 with the Redux Dev Tools extension enabled, you should see the following in your app’s state:

Initial State in Redux Dev Tools Extension. (Large preview)

Great work so far, but we’re not quite finished yet. In the next section, we will design the app’s User Interface and add functionality using the store we’ve just created.

Designing The Application User Interface

To see Redux in action, we are going to build a demo app. In this section, we will connect our components to the store we’ve created and learn to dispatch actions and modify the state using reducer functions. We will also learn how to read values from the store. Here’s what our Redux-powered application will look like.

Home page showing an authenticated user’s diaries. (Large preview) Screenshots of final app. (Large preview) Setting up the Authentication Feature

To get started, move App.tsx and its related files from the src directory to its own directory like this:

# ~/diaries-app/src mkdir app mv App.tsx App.test.tsx app

You can delete the App.css and logo.svg files as we won’t be needing them.

Next, open the App.tsx file and replace its contents with the following:

import React, { FC, lazy, Suspense } from 'react'; import { BrowserRouter as Router, Switch, Route } from 'react-router-dom'; import { useSelector } from 'react-redux'; import { RootState } from '../rootReducer'; const Auth = lazy(() => import('../features/auth/Auth')); const Home = lazy(() => import('../features/home/Home')); const App: FC = () => { const isLoggedIn = useSelector( (state: RootState) => state.auth.isAuthenticated ); return ( <Router> <Switch> <Route path="/"> <Suspense fallback={<p>Loading...</p>}> {isLoggedIn ? <Home /> : <Auth />} </Suspense> </Route> </Switch> </Router> ); }; export default App;

Here we have set up our app to render an <Auth /> component if the user is unauthenticated, or otherwise render a <Home /> component. We haven’t created either of these components yet, so let’s fix that. Create a file named Auth.tsx under src/features/auth and add the following contents to the file:

import React, { FC, useState } from 'react'; import { useForm } from 'react-hook-form'; import { User } from '../../interfaces/user.interface'; import * as Yup from 'yup'; import http from '../../services/api'; import { saveToken, setAuthState } from './authSlice'; import { setUser } from './userSlice'; import { AuthResponse } from '../../services/mirage/routes/user'; import { useAppDispatch } from '../../store'; const schema = Yup.object().shape({ username: Yup.string() .required('What? No username?') .max(16, 'Username cannot be longer than 16 characters'), password: Yup.string().required('Without a password, "None shall pass!"'), email: Yup.string().email('Please provide a valid email address ([email protected])'), }); const Auth: FC = () => { const { handleSubmit, register, errors } = useForm<User>({ validationSchema: schema, }); const [isLogin, setIsLogin] = useState(true); const [loading, setLoading] = useState(false); const dispatch = useAppDispatch(); const submitForm = (data: User) => { const path = isLogin ? 
'/auth/login' : '/auth/signup'; http .post<User, AuthResponse>(path, data) .then((res) => { if (res) { const { user, token } = res; dispatch(saveToken(token)); dispatch(setUser(user)); dispatch(setAuthState(true)); } }) .catch((error) => { console.log(error); }) .finally(() => { setLoading(false); }); }; return ( <div className="auth"> <div className="card"> <form onSubmit={handleSubmit(submitForm)}> <div className="inputWrapper"> <input ref={register} name="username" placeholder="Username" /> {errors && errors.username && ( <p className="error">{errors.username.message}</p> )} </div> <div className="inputWrapper"> <input ref={register} name="password" type="password" placeholder="Password" /> {errors && errors.password && ( <p className="error">{errors.password.message}</p> )} </div> {!isLogin && ( <div className="inputWrapper"> <input ref={register} name="email" placeholder="Email (optional)" /> {errors && && ( <p className="error">{}</p> )} </div> )} <div className="inputWrapper"> <button type="submit" disabled={loading}> {isLogin ? 'Login' : 'Create account'} </button> </div> <p onClick={() => setIsLogin(!isLogin)} style={{ cursor: 'pointer', opacity: 0.7 }} > {isLogin ? 'No account? Create one' : 'Already have an account?'} </p> </form> </div> </div> ); }; export default Auth;

In this component, we have set up a form for users to log in, or to create an account. Our form fields are validated using Yup and, on successfully authenticating a user, we use our useAppDispatch hook to dispatch the relevant actions. You can see the dispatched actions and the changes made to your state in the Redux DevTools Extension:

Dispatched Actions with Changes Tracked in Redux Dev Tools Extensions. (Large preview)

Finally, create a file named Home.tsx under src/features/home and add the following code to the file:

import React, { FC } from 'react'; const Home: FC = () => { return ( <div> <p>Welcome user!</p> </div> ); }; export default Home;

For now, we are just displaying some text to the authenticated user. As we build the rest of our application, we will be updating this file.

Setting up the Editor

The next component we are going to build is the editor. Though basic, we will enable support for rendering markdown content using the markdown-to-jsx library we installed earlier.

First, create a file named Editor.tsx in the src/features/entry directory. Then, add the following code to the file:

import React, { FC, useState, useEffect } from 'react'; import { useSelector } from 'react-redux'; import { RootState } from '../../rootReducer'; import Markdown from 'markdown-to-jsx'; import http from '../../services/api'; import { Entry } from '../../interfaces/entry.interface'; import { Diary } from '../../interfaces/diary.interface'; import { setCurrentlyEditing, setCanEdit } from './editorSlice'; import { updateDiary } from '../diary/diariesSlice'; import { updateEntry } from './entriesSlice'; import { showAlert } from '../../util'; import { useAppDispatch } from '../../store'; const Editor: FC = () => { const { currentlyEditing: entry, canEdit, activeDiaryId } = useSelector( (state: RootState) => state.editor ); const [editedEntry, updateEditedEntry] = useState(entry); const dispatch = useAppDispatch(); const saveEntry = async () => { if (activeDiaryId == null) { return showAlert('Please select a diary.', 'warning'); } if (entry == null) { http .post<Entry, { diary: Diary; entry: Entry }>( `/diaries/entry/${activeDiaryId}`, editedEntry ) .then((data) => { if (data != null) { const { diary, entry: _entry } = data; dispatch(setCurrentlyEditing(_entry)); dispatch(updateDiary(diary)); } }); } else { http .put<Entry, Entry>(`diaries/entry/${}`, editedEntry) .then((_entry) => { if (_entry != null) { dispatch(setCurrentlyEditing(_entry)); dispatch(updateEntry(_entry)); } }); } dispatch(setCanEdit(false)); }; useEffect(() => { updateEditedEntry(entry); }, [entry]); return ( <div className="editor"> <header style={{ display: 'flex', flexWrap: 'wrap', alignItems: 'center', marginBottom: '0.2em', paddingBottom: '0.2em', borderBottom: '1px solid rgba(0,0,0,0.1)', }} > {entry && !canEdit ? ( <h4> {entry.title} <a href="#edit" onClick={(e) => { e.preventDefault(); if (entry != null) { dispatch(setCanEdit(true)); } }} style={{ marginLeft: '0.4em' }} > (Edit) </a> </h4> ) : ( <input value={editedEntry?.title ?? 
''} disabled={!canEdit} onChange={(e) => { if (editedEntry) { updateEditedEntry({ ...editedEntry, title:, }); } else { updateEditedEntry({ title:, content: '', }); } }} /> )} </header> {entry && !canEdit ? ( <Markdown>{entry.content}</Markdown> ) : ( <> <textarea disabled={!canEdit} placeholder="Supports markdown!" value={editedEntry?.content ?? ''} onChange={(e) => { if (editedEntry) { updateEditedEntry({ ...editedEntry, content:, }); } else { updateEditedEntry({ title: '', content:, }); } }} /> <button onClick={saveEntry} disabled={!canEdit}> Save </button> </> )} </div> ); }; export default Editor;

Let’s break down what’s happening in the Editor component.

First, we are picking some values (with correctly inferred types) from the app’s state using the useSelector() hook from react-redux. In the next line, we have a stateful value called editedEntry whose initial value is set to the editor.currentlyEditing property we’ve selected from the store.

Next, we have the saveEntry function which updates or creates a new entry in the API, and dispatches the respective Redux action.

Finally, we have a useEffect that is fired when the editor.currentlyEditing property changes. Our editor’s UI (in the component’s return function) has been set up to respond to changes in the state. For example, rendering the entry’s content as JSX elements when the user isn’t editing.

With that, the app’s Entry feature should be completely set up. In the next section, we will finish building the Diary feature and then import the main components in the Home component we created earlier.

Final Steps

To finish up our app, we will first create components for the Diary feature. Then, we will update the Home component with the primary exports from the Diary and Entry features. Finally, we will add some styling to give our app the required pizzazz!

Let’s start by creating a file in src/features/diary named DiaryTile.tsx. This component will present information about a diary and its entries, and allow the user to edit the diary’s title. Add the following code to the file:

import React, { FC, useState } from 'react'; import { Diary } from '../../interfaces/diary.interface'; import http from '../../services/api'; import { updateDiary } from './diariesSlice'; import { setCanEdit, setActiveDiaryId, setCurrentlyEditing } from '../entry/editorSlice'; import { showAlert } from '../../util'; import { Link } from 'react-router-dom'; import { useAppDispatch } from '../../store'; interface Props { diary: Diary; } const buttonStyle: React.CSSProperties = { fontSize: '0.7em', margin: '0 0.5em', }; const DiaryTile: FC<Props> = (props) => { const [diary, setDiary] = useState(props.diary); const [isEditing, setIsEditing] = useState(false); const dispatch = useAppDispatch(); const totalEntries = props.diary?.entryIds?.length; const saveChanges = () => { http .put<Diary, Diary>(`/diaries/${}`, diary) .then((diary) => { if (diary) { dispatch(updateDiary(diary)); showAlert('Saved!', 'success'); } }) .finally(() => { setIsEditing(false); }); }; return ( <div className="diary-tile"> <h2 className="title" title="Click to edit" onClick={() => setIsEditing(true)} style={{ cursor: 'pointer', }} > {isEditing ? ( <input value={diary.title} onChange={(e) => { setDiary({ ...diary, title:, }); }} onKeyUp={(e) => { if (e.key === 'Enter') { saveChanges(); } }} /> ) : ( <span>{diary.title}</span> )} </h2> <p className="subtitle">{totalEntries ?? '0'} saved entries</p> <div style={{ display: 'flex' }}> <button style={buttonStyle} onClick={() => { dispatch(setCanEdit(true)); dispatch(setActiveDiaryId( as string)); dispatch(setCurrentlyEditing(null)); }} > Add New Entry </button> <Link to={`diary/${}`} style={{ width: '100%' }}> <button className="secondary" style={buttonStyle}> View all → </button> </Link> </div> </div> ); }; export default DiaryTile;

In this file, we receive a diary object as a prop and display the data in our component. Notice that we use local state and component props for our data display here. That’s because you don’t have to manage all your app’s state using Redux. Sharing data using props, and maintaining local state in your components is acceptable and encouraged in some cases.

Next, let’s create a component that will display a list of a diary’s entries, with the last updated entries at the top of the list. Ensure you are in the src/features/diary directory, then create a file named DiaryEntriesList.tsx and add the following code to the file:

import React, { FC, useEffect } from 'react'; import { useParams, Link } from 'react-router-dom'; import { useSelector } from 'react-redux'; import { RootState } from '../../rootReducer'; import http from '../../services/api'; import { Entry } from '../../interfaces/entry.interface'; import { setEntries } from '../entry/entriesSlice'; import { setCurrentlyEditing, setCanEdit } from '../entry/editorSlice'; import dayjs from 'dayjs'; import { useAppDispatch } from '../../store'; const DiaryEntriesList: FC = () => { const { entries } = useSelector((state: RootState) => state); const dispatch = useAppDispatch(); const { id } = useParams(); useEffect(() => { if (id != null) { http .get<null, { entries: Entry[] }>(`/diaries/entries/${id}`) .then(({ entries: _entries }) => { if (_entries) { const sortByLastUpdated = _entries.sort((a, b) => { return dayjs(b.updatedAt).unix() - dayjs(a.updatedAt).unix(); }); dispatch(setEntries(sortByLastUpdated)); } }); } }, [id, dispatch]); return ( <div className="entries"> <header> <Link to="/"> <h3>← Go Back</h3> </Link> </header> <ul> { => ( <li key={} onClick={() => { dispatch(setCurrentlyEditing(entry)); dispatch(setCanEdit(true)); }} > {entry.title} </li> ))} </ul> </div> ); }; export default DiaryEntriesList;

Here, we subscribe to the entries property of our app’s state, and have our effect fetch a diary’s entry only run when a property, id, changes. This property’s value is gotten from our URL as a path parameter using the useParams() hook from react-router. In the next step, we will create a component that will enable users to create and view diaries, as well as render a diary’s entries when it is in focus.

Create a file named Diaries.tsx while still in the same directory, and add the following code to the file:

import React, { FC, useEffect } from 'react'; import { useSelector } from 'react-redux'; import { RootState } from '../../rootReducer'; import http from '../../services/api'; import { Diary } from '../../interfaces/diary.interface'; import { addDiary } from './diariesSlice'; import Swal from 'sweetalert2'; import { setUser } from '../auth/userSlice'; import DiaryTile from './DiaryTile'; import { User } from '../../interfaces/user.interface'; import { Route, Switch } from 'react-router-dom'; import DiaryEntriesList from './DiaryEntriesList'; import { useAppDispatch } from '../../store'; import dayjs from 'dayjs'; const Diaries: FC = () => { const dispatch = useAppDispatch(); const diaries = useSelector((state: RootState) => state.diaries); const user = useSelector((state: RootState) => state.user); useEffect(() => { const fetchDiaries = async () => { if (user) { http.get<null, Diary[]>(`diaries/${}`).then((data) => { if (data && data.length > 0) { const sortedByUpdatedAt = data.sort((a, b) => { return dayjs(b.updatedAt).unix() - dayjs(a.updatedAt).unix(); }); dispatch(addDiary(sortedByUpdatedAt)); } }); } }; fetchDiaries(); }, [dispatch, user]); const createDiary = async () => { const result = await Swal.mixin({ input: 'text', confirmButtonText: 'Next →', showCancelButton: true, progressSteps: ['1', '2'], }).queue([ { titleText: 'Diary title', input: 'text', }, { titleText: 'Private or public diary?', input: 'radio', inputOptions: { private: 'Private', public: 'Public', }, inputValue: 'private', }, ]); if (result.value) { const { value } = result; const { diary, user: _user, } = await<Partial<Diary>, { diary: Diary; user: User }>('/diaries/', { title: value[0], type: value[1], userId: user?.id, }); if (diary && user) { dispatch(addDiary([diary] as Diary[])); dispatch(addDiary([diary] as Diary[])); dispatch(setUser(_user)); return{ titleText: 'All done!', confirmButtonText: 'OK!', }); } }{ titleText: 'Cancelled', }); }; return ( <div style={{ padding: '1em 0.4em' }}> 
<Switch> <Route path="/diary/:id"> <DiaryEntriesList /> </Route> <Route path="/"> <button onClick={createDiary}>Create New</button> {, idx) => ( <DiaryTile key={idx} diary={diary} /> ))} </Route> </Switch> </div> ); }; export default Diaries;

In this component, we have a function to fetch the user’s diaries inside a useEffect hook, and a function to create a new diary. We also render our components in react-router’s <Route /> component, rendering a diary’s entries if its id matches the path param in the route /diary/:id, or otherwise rendering a list of the user’s diaries.

To wrap things up, let’s update the Home.tsx component. First, update the imports to look like the following:

import React, { FC } from 'react'; import Diaries from '../diary/Diaries'; import Editor from '../entry/Editor';

Then, change the component’s return statement to the following:

return ( <div className="two-cols"> <div className="left"> <Diaries /> </div> <div className="right"> <Editor /> </div> </div>

Finally, replace the contents of the index.css file in your app’s src directory with the following code:

:root { --primary-color: #778899; --error-color: #f85032; --text-color: #0d0d0d; --transition: all ease-in-out 0.3s; } body { margin: 0; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen', 'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue', sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; } html, body, #root { height: 100%; } *, *:before, *:after { box-sizing: border-box; } .auth { display: flex; align-items: center; height: 100%; } .card { background: #fff; padding: 3rem; text-align: center; box-shadow: 2px 8px 12px rgba(0, 0, 0, 0.1); max-width: 450px; width: 90%; margin: 0 auto; } .inputWrapper { margin: 1rem auto; width: 100%; } input:not([type='checkbox']), button { border-radius: 0.5rem; width: 100%; } input:not([type='checkbox']), textarea { border: 2px solid rgba(0, 0, 0, 0.1); padding: 1em; color: var(--text-color); transition: var(--transition); } input:not([type='checkbox']):focus, textarea:focus { outline: none; border-color: var(--primary-color); } button { appearance: none; border: 1px solid var(--primary-color); color: #fff; background-color: var(--primary-color); text-transform: uppercase; font-weight: bold; outline: none; cursor: pointer; padding: 1em; box-shadow: 1px 4px 6px rgba(0, 0, 0, 0.1); transition: var(--transition); } button.secondary { color: var(--primary-color); background-color: #fff; border-color: #fff; } button:hover, button:focus { box-shadow: 1px 6px 8px rgba(0, 0, 0, 0.1); } .error { margin: 0; margin-top: 0.2em; font-size: 0.8em; color: var(--error-color); animation: 0.3s ease-in-out forwards fadeIn; } .two-cols { display: flex; flex-wrap: wrap; height: 100vh; } .two-cols .left { border-right: 1px solid rgba(0, 0, 0, 0.1); height: 100%; overflow-y: scroll; } .two-cols .right { overflow-y: auto; } .title { font-size: 1.3rem; } .subtitle { font-size: 0.9rem; opacity: 0.85; } .title, .subtitle { margin: 0; } .diary-tile { border-bottom: 1px solid 
rgba(0, 0, 0, 0.1); padding: 1em; } .editor { height: 100%; padding: 1em; } .editor input { width: 100%; } .editor textarea { width: 100%; height: calc(100vh - 160px); } .entries ul { list-style: none; padding: 0; } .entries li { border-top: 1px solid rgba(0, 0, 0, 0.1); padding: 0.5em; cursor: pointer; } .entries li:nth-child(even) { background: rgba(0, 0, 0, 0.1); } @media (min-width: 768px) { .two-cols .left { width: 25%; } .two-cols .right { width: 75%; } } @keyframes fadeIn { 0% { opacity: 0; } 100% { opacity: 0.8; } }

That’s it! You can now run npm start or yarn start and check out the final app at http://localhost:3000.

Final App Home Screen (Unauthenticated User). (Large preview) Conclusion

In this guide, you have learned how to rapidly develop applications using Redux. You also learned about good practices to follow when working with Redux and React, in order to make debugging and extending your applications easier. This guide is by no means extensive as there are still ongoing discussions surrounding Redux and some of its concepts. Please check out the Redux and React-Redux docs if you’d like to learn more about using Redux in your React projects.

References (yk)
Categories: Design

How To Create A Porsche 911 With Sketch (Part 2)

Fri, 07/31/2020 - 03:30
How To Create A Porsche 911 With Sketch (Part 2) How To Create A Porsche 911 With Sketch (Part 2) Nikola Lazarević 2020-07-31T10:30:00+00:00 2020-07-31T14:10:20+00:00

Are you ready to push Sketch to its limits once again? As noted in the previous part, this tutorial is geared more towards experienced illustrators, but if you’re new to Sketch then you should also be able to profit from it since all of the steps are explained in great detail.

After finished off the tail lights, let’s continue with the design of the car windows.

7. Rubber Seals Around The Windows

In this step, we will add rubber seals around the windows. Start first with the side window. Switch to the Vector tool (V) and draw a shape around the the side window, like on the image below.

Note: Before you continue, remember that we’re still drawing inside the bodywork group!

Draw a rubber seal shape around the side window. (Large preview)

Turn off Borders and set Fills to #000000, and add a Shadows effect:

  • Color: #FFFFFF
  • Alpha: 90%
  • X: 0; Y: 0; Blur: 3; Spread: 1
The rubber seal around the side window is now complete. (Large preview)

Next, let’s add a rubber seal around the front windshield. Draw a shape around the front window, turn off Borders, set Color to #000000 and apply Shadows:

  • Color: #FFFFFF
  • Alpha: 90%
  • X: 0; Y: 0; Blur: 3; Spread: 1
The front windshield’s rubber seal. (Large preview)

Now, let’s add a trim on top of the rubber seal. To do that, duplicate the seal shape, turn off Fills and Shadows, turn on Borders, set Color to #E0E0E0, border position to Inside and Width to 1.5px. Double-click on the shape to enter vector editing mode and then select and move the points until you have something like on the image below. Be patient, it may require some time!

Note: While usually I’d suggest avoiding half-pixels in your vector illustrations as much as possible, in some cases these might actually work well. After quite some trial and error while working on the trim on top of the windshield’s rubber seal, I’ve discovered that 1.5px gives the best visual results.

Create the trim. (Large preview)

Tip: Change point types as needed while working on this shape.

At the end of this step, we need also to add a seal around the rear windshield. Draw a shape around it, turn off Border, set Fills to #000000 and apply Shadows with the same parameters like we did for the previous seals.

The rear windshield’s rubber seal. (Large preview) 8. Door Handle

Pick up the Oval tool (O) and draw an ellipse. Set Border color to #949494, position to Center with a Width of 1px. For the Fills use a Linear Gradient:

  1. #787878
  2. #C9C9C9
  3. #A5A5A5

And add Inner Shadows:

  • Color: #000000
  • Alpha: 50%
  • X: 0; Y: 2; Blur: 2; Spread: 0
Draw an ellipse for the door handle. (Large preview)

Create a rectangle on the left and on the right side of the ellipse by using the Rectangle tool (R). Make the outer corners rounded by using the Radius property in the Inspector panel. Turn off Borders and set Fills to #333333.

Create the rectangles on the left and on the right side of the door ellipse. (Large preview)

We will now use Inner Shadows and Shadows to make it look slightly raised.

Select left side rectangle and add a light Inner Shadows effect with the following properties:

  • Color: #FFFFFF
  • Alpha: 20%
  • X: 2; Y: -2; Blur: 1; Spread: 0

Then, apply a Shadows effect:

  • Color: #000000
  • Alpha: 50%
  • X: 0; Y: 0; Blur: 2; Spread: 0
Apply the effects to the left side rectangle. (Large preview)

Next, select right side rectangle and apply Inner Shadows effect:

  • Color: #FFFFFF
  • Alpha: 20%
  • X: -2; Y: -2; Blur: 1; Spread: 0

Apply a Shadows effect:

  • Color: #000000
  • Alpha: 50%
  • X: 0; Y: 0; Blur: 2; Spread: 0
Apply the effects to the right side rectangle. Still not there but we’re getting closer! (Large preview)

Let’s move on to handle. We will build our handle out of three shapes.

First, create two rectangles by using the Rectangle tool (R) and make the sides rounded with a help of the Radius property set from the Inspector panel.

Start working on the handle details. (Large preview)

Then, use the Vector tool (V) to draw a shape between the rectangles.

With the Vector tool, draw a shape between these two rectangles. (Large preview)

Now select the rectangles and the shape we have just created and perform a Union operation (from the top Sketch toolbar) to create one object. Name this object handleshape. Change the Color to #E3E3E3 and add an Inner Shadows effect:

  • Color: #000000
  • Alpha: 50%
  • X: 0; Y: -2; Blur: 5; Spread: 0
Create the object and apply the styles. (Large preview)

Let’s add a subtle shadow to the handle. Zoom in and draw a shape like on the image below. Don’t worry if the bottom part goes out of handle area, we will fix this later with a masking operation. Turn off Borders and set Fills to #3D3D3D.

Add a shadow to the door handle. (Large preview)

Let’s fit the shadow inside the handle. Select the handle and the shadow shape, and click on Mask in the top toolbar. The result of this masking operation will automatically be placed in a new group in the Layers panel list. Change the name of this group to handle.

Tip: *Don’t forget to check if Sketch turned off Inner Shadows for the masking layer. If that’s the case, just turn them back on.*

The ‘handle’ group is complete. (Large preview)

Now, let’s add a key lock to the door handle.

Draw a small circle. Add a Center Border with a Width of 1px and the Color set to #000000. Change Fills to Linear Gradient, and adjust the gradient with the following parameters:

  1. #888888
  2. #DFDFDF
  3. #CACACA

Apply a Shadow effect with the Color set to #000000 at 90% alpha, Blur to 3, the X and Y positions and Spread set to 0.

Create a key lock. (Large preview)

Create a keyhole by drawing a tiny black rectangle without Borders in the middle of the circle. Group both shapes (circle and rectangle) into a key-lock group.

Create the keyhole. (Large preview)

The only thing left to do is to create the handle’s shadow which should be placed inside the ellipse (see the next screenshot). Find the handleshape object in the Layers panel list, click on the caret in front of the layer name to reveal its content (the shapes), select the bridge between the rectangles and press Cmd + C to copy this shape.

Select the bridge between the rectangles then copy it. (Large preview)

Select the ellipse that is below the handle, paste (Cmd + V) over the shape that we’ve just copied, set the Color to #505050, push it down 2px and apply a Gaussian Blur with an Amount of 2. Then select this shape along with the ellipse and group them together (Cmd + G).

Paste, move, apply the styles, then group. (Large preview)

Inside this group, select the ellipse, right-click on it and choose Mask from the menu, to make sure that the shadow will stay inside the ellipse.

The handle shadow is now complete. (Large preview)

Select all the elements that we created in this step and place them into a group named door handle.

9. Bumpers

Let’s create the front bumper first. Switch to the Vector tool (V) and draw the shape. Change the Fills Opacity to 0%, make sure that Borders are turned off and apply light and dark Inner Shadows effect.

First add a light Inner Shadows effect with the following properties:

  • Color: #FFFFFF
  • Alpha: 50%
  • X: 0; Y: 5; Blur: 6; Spread: 0

Then, add a dark Inner Shadows effect:

  • Color: #000000
  • Alpha: 50%
  • X: -2; Y: -5; Blur: 6; Spread: 0
The front bumper. (Large preview)

Do the same for the rear bumper, but instead use these parameters for the dark Inner Shadow effect:

  • Color: #000000
  • Alpha: 50%
  • X: 3; Y: -5; Blur: 6; Spread: 0
The rear bumper. (Large preview)

Name these shapes front bumper and rear bumper.

Let’s move on to the next element on the case. Now we will create the decoration on the front bumper. Grab the Rounded Rectangle tool (U) and draw a rounded rectangle (174px by 14px). Make sure it is outside of the bodywork group and give it the name bumper deco base.

Turn off Borders and then click on Fills, choose Linear Gradient, and add a gradient. Use #E4E4E4 with 100% alpha for the first color stop and #858585 with alpha 100% for the last color stop. Now, add another point with a click on the gradient axis in the color dialog, and move it to the exact middle by pressing 5 on the keyboard. Give it 100% alpha, and make sure its color is #E4E4E4. Add another one to the right, and also move it to the center. Change the color of this stop to #858585 with 100% alpha.

The front bumper deco element. (Large preview)

Duplicate the shape (Cmd + D), change the name to front bumper deco shadow and using the Layers panel list, drag it inside the bodywork group just above the front bumper shape, and add two Shadows effects.

Add the first Shadows effect with the following properties:

  • Color: #000000
  • Alpha: 80%
  • X: 0; Y: 2; Blur: 2; Spread: 2

Then, add the second Shadows effect:

  • Color: #000000
  • Alpha: 80%
  • X: 0; Y: -2; Blur: 2; Spread: 1
The ‘front bumper deco’ shadow. (Large preview)

Let’s add a rubber element in the middle of the bumper deco. Select the bumper deco base, duplicate it and give this shape the name of rubber. Change the Fills to #303030 Solid Color, and change the Height to the half size, then align it to the middle with bumper deco base, using the Inspector panel.

The front bumper ‘rubber’ shape. (Large preview)

Add the following effects to the rubber shape.

First, a light Inner Shadow:

  • Color: #FFFFFF
  • Alpha: 30%
  • X: 0; Y: 2; Blur: 2; Spread: 0

Then, a dark Inner Shadow:

  • Color: #000000
  • Alpha: 100%
  • X: 0; Y: -4; Blur: 1; Spread: 0

After that, a dark Shadow:

  • Color: #000000
  • Alpha: 100%
  • X: 0; Y: -1; Blur: 2; Spread: 0

And lastly, a light Shadow:

  • Color: #FFFFFF
  • Alpha: 50%
  • X: 0; Y: 2; Blur: 2; Spread: 0
Apply all the styles to the front bumper ‘rubber’ element. (Large preview)

Finally, select the bumper deco base and the rubber shapes and perform a Mask operation so that none of the rubber shadows go outside of the bumper deco base. Name the resulting group front bumper deco.

The front bumper ‘rubber’ shape is now complete. (Large preview)

Now, using the same method as explained above, create the rear bumper deco element.

When it’s ready, the rear bumper deco should look like this. (Large preview)

Switch to the Vector tool (V) and draw a basic shape for the rear bumper guard. Add a Linear Gradient with the following properties:

  1. #EEEEEE
  2. #C9C9C9
  3. #939393
  4. #6C6C6C
Create the rear bumper guard. (Large preview)

Duplicate this shape, place it behind (right-click on the shape and choose Move Backward from the context menu), apply #2D2D2D Solid Color, push it a couple of pixels to the right and resize the height down a bit using the resize handles. Name this shape rubber buffer. Add an Inner Shadows effect with the Color set to #FFFFFF at 30% alpha. Set Y and Blur to 2, and X and Spread to 0.

The ‘rubber buffer’. (Large preview)

Select again the shape on top, duplicate it one more time, and use the ← key to move it a few pixels to the left. Modify the Linear Gradient (delete the two middle points, change the colors of the top and bottom points to #8E8E8E and #DEDEDE then move the top point down a bit). Finally, apply a Gaussian Blur effect with the Amount of 0.6.

Continue working on the rear bumper guard. (Large preview)

Select this shape and the shape below this one and perform a Mask operation. Name the resulting group bumper guard base, then select the resulting group and the rubber buffer shape and group them into a group rear bumper guard. Place this group just below the bodywork group in the Layers panel list.

The rear bumper guard — now finished. (Large preview)

Using the Rectangle tool, create two rectangles like on the image below (use Radius in the Inspector panel to control the roundness of the points). To combine both shapes into one object, select them and perform a Union operation from the top toolbar in Sketch. Move this new object inside the rear bumper guard group, directly into the bumper guard base group, on top. Change Color to #000000, turn off Borders and add a Gaussian Blur with the Amount set to 1.

Create the shadow from the bumper inside the bumper guard. (Large preview)

Here’s a preview of what we’ve done so far.

The Porsche 911 — getting there bit by bit... (Large preview)

10. Windshields

Side Windows

Remember those side window 1 and side window 2 copies that we have created at the beginning of the tutorial, in Part 1?

Well it’s time to use them! Locate these copies in the Layers panel list and un-hide them. Make sure that Fills is turned off and add 5px Width Borders with a #72BD20 color, positioned Inside.

Time to un-hide the ‘side window 1’ and ‘side window 2’ copies! (Large preview)

First, we will create the window frames using these shapes.

To do that, we need to convert the shape borders into shapes themselves: we want to apply Inner Shadows to the window frames, and there’s no option to apply Inner Shadows to Borders.

To outline the borders, select both shapes and go to Layer → Convert to Outlines (or press Alt + Cmd + O on the keyboard).

Note: Converting the shapes to outlines has turned each shape into two separate combined shape layers. That’s because an outlined stroke is a combined path consisting of two shapes:

  • one that determines the outer boundaries, and
  • another that determines the inner boundaries, creating the appearance of a stroke.
The outline borders. (Large preview)

Select and copy (Cmd + C) the inner shapes, then deselect the shapes by pressing Esc on the keyboard and finally paste them (Cmd + V); note that Sketch will place the copies on top. We will use these shapes as windshields later, so give them the names of side windshield 1 and side windshield 2 and hide them for now.

Let’s continue with the window frames. Draw two shapes using the Vector tool (V), select those newly created shapes and the side window 1 shape and perform a Union operation to create one shape. Change Fills to #DCDCDC and add Inner Shadows with the Color set to #000000 with 50% Alpha and Blur set to 2.

The ‘side window 1’. (Large preview)

Apply the same styles — Fills and Inner Shadows — to the side window 2.

Un-hide the side windshields and place them below the bodywork group in the Layers panel list.

Tip: Since the windshields are basically transparent, I suggest you temporarily add a background color to the artboard so you can actually see what we are going to do. To do that, select the artboard, turn on Background color in the Inspector panel, and set Color to something like #434343.

Now back to the side windshields: select the first one (the one on the left), turn off Borders and set Fills to Linear Gradient:

  1. Color: #FFFFFF, Alpha: 0%
  2. Color: #FFFFFF, Alpha: 22%
  3. Color: #FFFFFF, Alpha: 50%
  4. Color: #FFFFFF, Alpha: 27%
  5. Color: #FFFFFF, Alpha: 30%
The ‘side windshield 1’. (Large preview)
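Gradients with alpha stops like this one also map cleanly to CSS via `rgba()` colors. Again only a sketch (the selector is a hypothetical placeholder), assuming a top-to-bottom direction and the same five stops listed above:

```css
/* Semi-transparent white gradient — the five windshield stops. */
.side-windshield {
  background: linear-gradient(
    to bottom,
    rgba(255, 255, 255, 0),
    rgba(255, 255, 255, 0.22),
    rgba(255, 255, 255, 0.5),
    rgba(255, 255, 255, 0.27),
    rgba(255, 255, 255, 0.3)
  );
}
```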

Do the same for the other windshield.

The ‘side windshield 2’. (Large preview)

Tip: You can use Sketch’s Copy Style feature: copy the style from the first windshield (right-click and choose ‘Copy Style’), then paste it onto the second windshield (right-click and choose ‘Paste Style’). After that, you may only need to slightly move the gradient points to match the previous one, since the shapes are not the same height.

Front Windshield

Switch to the Vector tool and draw a shape for the front windshield. Apply a Linear Gradient with the following parameters:

  1. Color: #F3F2F0, Alpha: 40%
  2. Color: #FFFFFF, Alpha: 50%
  3. Color: #F3F2F0, Alpha: 20%
  4. Color: #F3F2F0, Alpha: 10%

Then add Inner Shadows with the Color set to #000000 with 10% Alpha. Set Y position to 2 and Blur to 8. Name it front windshield.

The ‘front windshield’ element. (Large preview)

Rear Windshield

Draw a rear windshield with the Vector tool, and apply the same style (Linear Gradient and Inner Shadows) as for the front windshield.

The ‘rear windshield’ element. (Large preview)

Name this shape rear windshield, then select all the windshield shapes, group them into a windshields group and make sure that this group is below the bodywork group in the Layers panel list.

Note: You can now turn off the Artboard’s background color in the Inspector panel.

11. Headlight

For the headlight, switch to the Vector tool and draw the shape that will be headlight glass. Use Solid Color #E4E4E4, turn off Borders and add Inner Shadows effect:

  • Color: #000000
  • Alpha: 10%
  • X: 5; Y: -2; Blur: 2; Spread: 0
Let’s create the headlight glass. (Large preview)

Next, draw a black (#000000) shape over the headlight glass. Duplicate this shape (Cmd + D), push it 1px to the left and apply a Linear Gradient with the following parameters, from top to bottom:

  1. #EEEEEE
  2. #F5F5F5
  3. #828282
  4. #484848
Create the next part of the headlight. (Large preview)

Select all the shapes and group them (Cmd + G) into a headlight group. Then we need to rotate it a bit (by 25 degrees) and place it above the bodywork group.

The ‘headlight’ group is now complete. (Large preview)

12. Rear Engine Grille

In this step we will create a grille over the rear engine lid. Once again, pick up the Vector tool (V) and draw a shape. Change Fills to #000000 and add Inner Shadows — for the Color use #FFFFFF with 80% Alpha, and set X position to -2.

Create the grille element. (Large preview)

Duplicate this shape, move it to the left and down a bit, zoom in close enough, switch to vector editing mode and move the points so they touch the edge of the rear engine lid. Use the image below as a reference.

Start building the engine grille using the grille element. (Large preview)

Repeat this eight more times to form a grille over the engine lid. Then draw a line using the Line tool (L). For the Color use #CCCCCC, set Width to 1px and choose Round cap for the Border ends. Apply black (#000000) Shadows effect with 100% Alpha and Blur of 2.

The engine grille is now complete. (Large preview)

Select all of the grille layers, and place them inside the group rear engine grille.

13. Side Mirror

Let’s move on to the other details on the car. The side mirrors!

Using the Vector tool, create a shape which will be the base for the side mirror, turn off Borders and use Linear Gradient for the Fills:

  1. #E5E5E5
  2. #D5D5D5
  3. #878787
  4. #6A6463

Then add Inner Shadows:

  • Color: #000000
  • Alpha: 50%
  • X: 2; Y: -2; Blur: 6; Spread: 0
Shaping the ‘side mirror base’ shape. (Large preview)

Name this shape side mirror base.

Draw another shape, which will be the mirror cover; once again turn off Borders and change Fills to a Linear Gradient:

  1. #CCCACB
  2. #FEFEFE
  3. #A1A5A4
  4. #4A413F
The ‘mirror cover base’. (Large preview)

Give this shape the name of mirror cover base. Duplicate the shape and push it 4px to the left using the ← key on the keyboard. Change Color to #C4C4C4 and add two Inner Shadows.

For the first Inner Shadow use:

  • Color: #000000
  • Alpha: 60%
  • X: 5; Y: 0; Blur: 1; Spread: 0

For the second Inner Shadow use following properties:

  • Color: #000000
  • Alpha: 50%
  • X: -4; Y: 5; Blur: 6; Spread: 0

Then select both shapes and perform a Mask operation, so the top shape does not extend past the mirror cover (the bottom shape). Name the resulting group mirror cover.

The ‘mirror cover’ group. (Large preview)

Select side mirror base and add one more Inner Shadows effect, to add shadow from the mirror cover. For the Color use #000000 with 50% Alpha, set X position to -1 and Blur to 1.

Add a shadow from the mirror cover. (Large preview)

We will finish this step by creating a shadow from the side mirror.

Grab the Vector tool and draw a shape like on the image below. Place it below the side mirror base, push it a bit up so it is really behind it, and add a Linear Gradient for the Fills. For the top stop use #000000 with 40% Alpha and for the bottom stop also use #000000 but with 0% Alpha. Don’t forget to turn off Borders.

Create a shadow from the side mirror. (Large preview)

Name this shape side mirror shadow, then select all shapes created in this step and group them into a side mirror group.

14. Exhaust Pipe

It’s time to create the exhaust pipe. First, find the floor layer in the Layers panel list (remember, the one we created at the beginning of the tutorial, in Step 2) and un-hide it. Switch to the Rectangle tool (R) and draw a rectangle with the Radius set to 2. This rectangle shape will represent the exhaust pipe.

Turn off Borders and set Fills to a Linear Gradient:

  1. #E2E2E2
  2. #E3E3E3
  3. #A0A0A0
  4. #2C2C2C
Draw the exhaust pipe shape. (Large preview)

Duplicate the rectangle, make it smaller in width, switch to the vector editing mode, select the points on the right side and set their Radius to 0, then modify the existing Linear Gradient to:

  1. #1E1E1E
  2. #3A3A3A
  3. #2A2A2A
  4. #111111
Draw another part of the exhaust pipe. (Large preview)

Select both rectangles, group them into an exhaust pipe group and place the group just above the rear bumper guard in the Layers panel list.

The exhaust pipe, now finished. (Large preview)

15. Car Interior

Select side window 1 and side window 2, duplicate them (Cmd + D), change Color to #000000 and turn off the Inner Shadows.

Start working on the car interior. (Large preview)

Place these duplicates below the rear bumper guard in the Layers panel list, and then, using the arrow keys on the keyboard, shift them 5px down and 2px to the right.

Move behind. (Large preview)

Draw a shape, which will represent the visible part of the car’s dashboard, turn off Borders and set the Fills to #2A2A2A.

Draw the visible part of the dashboard. (Large preview)

Next, let’s create the steering wheel.

Create a rectangle using the Rounded Rectangle tool (U), turn off Borders and change Fills to horizontal Linear Gradient with the following parameters:

  1. #000000
  2. #676767
  3. #292929
  4. #090909

Then we need to rotate the rectangle -24 degrees and move it to the left a bit.

Create the steering wheel. (Large preview)

Now let’s continue with other details of the car interior. Select the Vector tool and create a shape like on the image below. Turn off Borders, set Color to #000000, and apply Inner Shadows effect:

  • Color: #FFFFFF
  • Alpha: 30%
  • X: -12; Y: -6; Blur: 8; Spread: 0
Continue adding elements to the car interior. (Large preview)

Use the Oval tool (O) to draw a small ellipse. For the Color use #717171 and turn the Borders off.

Add another element to the car interior. (Large preview)

Now let’s create the visible part of the driver’s seat. Create a shape with the Vector tool. Turn off Borders and use a Linear Gradient; for the top color stop use color #6D6D6D and for the bottom #171717. And add an Inner Shadows effect — Color is #000000 with 50% Alpha, X position is 2 and Blur is 7.

Draw the driver’s seat. (Large preview)

Duplicate this shape, push it 5px to the right and 1px up by using the arrow keys. Then modify the existing Linear Gradient — change the bottom color stop to #000000. And modify the Inner Shadows effect — change the Color to #FFFFFF with 10% Alpha; set X and Y positions to 5, and Blur also to 5.

Continue working on refining the seat’s details. (Large preview)

Now let’s add stitches to the seat.

Duplicate this shape, push it 5px to the right and 3px down. Then, turn off Fills and Inner Shadows, bring back Borders and for the Color choose Linear Gradient — for the top color stop use #696969 and for the bottom #000000. Add Shadow effect — for the Color use #000000 with 50% Alpha and set Blur to 2. Then select this shape and the layer below it and perform a Mask operation, so the stitches do not go outside the seat’s boundaries.

Add stitches to the driver’s seat. (Large preview)

Almost ready with the car interior!

Next, select all the layers and groups that we’ve created in this step and that are above the car body, and position them just above side window 1 copy and side window 2 copy in the Layers panel list. Add those two shapes (side window 1 copy and side window 2 copy) to the selection as well and create a group (Cmd + G) named interior.

The car interior is now complete. (Large preview)

Let’s take a look at the big picture again.

Final image 2/3: Let’s take a look at our Porsche 911 car — we’re more than half-way there! (Large preview)

It’s not bad, right?

But, before we conclude this part of the tutorial, let’s add one more small detail to the car body, so pick up the Line tool (L) and draw a line. For the Color use #E5E5E5, set Width to 2px and choose Round cap for the Border ends. Then apply Shadows — set Color to #000000 at 80% Alpha, Y position to 2 and Blur to 3. Finally, place this line inside the bodywork group.

The car’s body is now finished — one more final detail added. (Large preview)

Conclusion

The body of the car is now ready, as well as the windows, bumpers, headlights and taillights, and the interior — dashboard, the steering wheel, and the seat. In the next (and final) part of the tutorial, we’ll create the wheels (rims and tires), and we’ll add all the final touches, including the racing decals on the car’s body.

(mb, ra, yk, il)
Categories: Design

Making Memories To Last (August 2020 Wallpapers Edition)

Fri, 07/31/2020 - 01:00
Making Memories To Last (August 2020 Wallpapers Edition) Making Memories To Last (August 2020 Wallpapers Edition) Cosima Mielke 2020-07-31T08:00:00+00:00 2020-07-31T11:06:19+00:00

Everybody loves a beautiful wallpaper to freshen up their desktops and home screens, right? To cater for new and unique artworks on a regular basis, we started our monthly wallpapers challenge more than nine years ago, and from the early days on to today, artists and designers from all across the globe have accepted the challenge and submitted their designs to it. It wasn’t any different this time around, of course.

In this post, you’ll find their wallpapers for August 2020. All of them are available in versions with and without a calendar — to help you count down the days to a big deadline (or a few days off, maybe?) or continue to use your favorite even after the month has ended. You decide. A big thank-you to everyone who shared their designs with us — we sincerely appreciate it!

As a little bonus goodie, we also added some “oldies” at the end of this post, timeless wallpaper treasures that we rediscovered way down in our archives and that are just too good to gather dust. Now there’s only one question left to be answered: Which one will accompany you through the new month?

  • All images can be clicked on and lead to the preview of the wallpaper,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
Submit your wallpaper

Did you know that you could get featured in one of our upcoming wallpapers posts, too? We are always looking for creative talent, so if you have an idea for a wallpaper for September, please don’t hesitate to submit it. We’d love to see what you’ll come up with. Join in! →

We’re All The Same

Designed by LibraFire from Serbia.

Women’s Equality Day

“Women’s Equality Day is a US celebration of the prohibition denying voting rights on the basis of sex. It’s a celebration of those trailblazers and suffragists who fought for their right to vote. So this 26th of August let’s honor them for the vitally important change they brought about back in 1920.” — Designed by Ever Increasing Circles from the United Kingdom.

August Days

“August is the month when we get sunburnt doing agricultural works, hike across the wild and untamed forests, and explore the depths of sun-kissed oceans and seas. The wind is mild and warm, the nights are filled with chatter, smiles, and music. August days are scattered around everywhere, and we try to catch and use up each and every one of them until summer gets lost in the face of murky autumn. In August, we chase laughs, sunsets, and memories.” — Designed by PopArt Studio from Serbia.

Live In The Moment

“My dog Sami inspired me for this one. He lives in the moment and enjoys every second with a big smile on his face. I wish we could learn to enjoy life like he does! Happy August everyone!” — Designed by Westie Vibes from Portugal.

In A Sunbeam

“A lot of summer activities and holiday travels had to be canceled this month, but our pets don’t know that - they’re perfectly happy to lie around at home and soak up the sun. I wanted to make some calming art that would channel that energy, to help people recharge between all the stressors you have to face these days.” — Designed by Erin Ptah from the United States.

Creating Buzz

Designed by Ricardo Gimenes from Sweden.

August And A Half

“Somewhere it’s summer, somewhere else it’s winter time.” — Designed by Dan Di from Italy.

Remembering The Freedom Struggle

“Freedom is the highest value. August tells the tales of sacrifices and courage of a nation that fought its way through a very difficult path for independence.” — Designed by AufaitUX from India.

Oldies But Goodies

A crackling fire, a well-deserved nap on a hot summer day, a cornfield glowing golden in the midday sun — a lot of things have inspired the design community to create an August wallpaper in the last few years. Here are some favorites from our archives. (Please note that these designs don’t come with a calendar.)

Colorful Summer

“‘Always keep mint on your windowsill in August, to ensure that the buzzing flies will stay outside where they belong. Don’t think summer is over, even when roses droop and turn brown and the stars shift position in the sky. Never presume August is a safe or reliable time of the year.’ (Alice Hoffman)” — Designed by Lívi from Hungary.

Smoky Mountain Bigfoot Conference

“Headed towards Smoky Mountain Bigfoot Conference this summer? Oh, they say it’s gonna be a big one! Get yourself out there well-prepared, armed with patience and ready to have loads of fun with fellow Bigfoot researchers. Looking forward to those campsite nights under the starry sky, with electrifying energy of expectations filling up the air? Lucky you!” — Designed by Pop Art Studio from Serbia.

Childhood Memories

Designed by Francesco Paratici from Australia.

Handwritten August

“I love typography in a handwritten style.” — Designed by Chalermkiat Oncharoen from Thailand.

Bee Happy!

“August means that fall is just around the corner, so I designed this wallpaper to remind everyone to ‘bee happy’ even though summer is almost over. Sweeter things are ahead!” — Designed by Emily Haines from the United States.

Estonian Summer Sun

“This is a moment from Southern Estonia that shows amazing summer nights.” — Designed by Erkki Pung / Sviiter from Estonia.

Coffee Break Time

Designed by Ricardo Gimenes from Sweden.

Purple Haze

“Meet Lucy: she lives in California, loves summer and sunbathing at the beach. This is our Jimi Hendrix Experience tribute. Have a lovely summer!” — Designed by PopArt Web Design from Serbia.

Psst, It’s Camping Time…

“August is one of my favorite months, when the nights are long and deep and crackling fire makes you think of many things at once and nothing at all at the same time. It’s about heat and cold which allow you to touch the eternity for a few moments.” — Designed by Igor Izhik from Canada.

Hello Again

“In Melbourne it is the last month of quite a cool winter so we are looking forward to some warmer days to come.” — Designed by Tazi from Australia.

Liqiu And Orange Daylily Season

“Liqiu signifies the beginning of autumn in East Asian cultures. After entering the Liqiu, the mountains in Eastern Taiwan’s East Rift Valley are covered in a sea of golden flowers, very beautiful. The production season for high-mountain daylilies is in August. Chihke Mountain, in Yuli Township, and Sixty-Stone Mountain, in Fuli Township, which are both located in Hualien County, are two of the country’s three high-mountain daylily production areas.” — Designed by Hong, Zi-Qing from Taiwan.

Summer Nap

Designed by Dorvan Davoudi from Canada.

Happy Janmashtami

“Janmashtami, the day that Lord Krishna was born, is an important Hindu Festival which is celebrated worldwide. The idea was to create the Lord Krishna’s flute-playing persona, in a minimalist design form.” — Designed by Damn Perfect from Jaipur, India.

A Bloom Of Jellyfish

“I love going to aquariums – the colors, patterns and array of blue hues attract the nature lover in me while still appeasing my design eye. One of the highlights is always the jellyfish tanks. They usually have some kind of light show in them, which makes the jellyfish fade from an intense magenta to a deep purple – and it literally tickles me pink. On a recent trip to uShaka Marine World, we discovered that the collective noun for jellyfish is a bloom and, well, it was love-at-first-collective-noun all over again. I’ve used some intense colours to warm up your desktop and hopefully transport you into the depths of your own aquarium.” — Designed by Wonderland Collective from South Africa.

Saturn Among The Stars

“This summer I have a telescope. Every night I look to the sky and I look into the stars. Fortunately, I can see Saturn.” — Designed by Verónica Valenzuela from Spain.


Designed by André Presser from Germany.

I Love Summer

“I love the summer nights and the sounds of the sea, the crickets, the music of some nice party.” — Designed by Maria Karapaunova from Bulgaria.

Categories: Design

The Renaissance Of No-Code For Web Designers

Thu, 07/30/2020 - 06:00
The Renaissance Of No-Code For Web Designers The Renaissance Of No-Code For Web Designers Uri Paz 2020-07-30T13:00:00+00:00 2020-07-30T17:05:15+00:00

The word Renaissance — which means “rebirth” in French — was given to a tremendous period of philosophical and artistic achievements that began in the 14th century.

During this time, there were a wide range of developments, including:

  • Use of oil paints, rather than tempera, which made the painting process easier.
  • Use of fabric, rather than wooden boards, which reduced the expenses of painting.
  • Translation of classical texts in architecture, anatomy, philosophy, and more, making knowledge more accessible to the general public.

These developments and more made the Renaissance one of the most productive artistic eras in history, dramatically reducing the creative barrier and attracting a large audience rather than just a small group of elites.

‘Every block of stone has a statue inside it, and it is the task of the sculptor to discover it.’ — Michelangelo. Some people see a block of stone, while other people see a source of creation. The tools available to us at any given time can bring out our maximum potential. (Large preview)

Just like the Renaissance era, today’s web design field is exploring its potential through no-code development platforms (NCDPs). These tools allow non-programmers to create application software through graphical user interfaces and configuration, instead of traditional computer programming.

The Designer/Developer Mental Model

Taken from 'The Singularity Is Here: Human 2.0' by Amit Maman. Part of his final project at Shenkar College of Engineering and Design, Maman created this triptych to show his vision of the singularity and the turning point in human history that it represents. His work is inspired by principles from the Renaissance era. (Large preview)

In 2000, usability expert Jakob Nielsen introduced “Jakob’s Law,” the idea that users develop mental models of the products they interact with based on their previous experience. The more users can focus on their goal without challenging this mental model, the easier it is for them to achieve that goal.

“CSS is closer to painting than Python.”
— Chris Coyier, co-founder at CodePen

Design and development skills are rooted in different types of thinking and require different types of tools. While designers use WYSIWYG editors like Figma, Sketch, and Photoshop to place elements on the canvas, developers work with IDEs like VSCode, Webstorm, and Brackets. In order to remain productive, designers and developers need to be able to make changes and receive instant feedback, according to their mental model.

So, using drag and drop builders may actually interfere with developers who want to debug fast, but working only with a text editor may be inappropriate for designers who want to test composition.

Designers And Code

Many designers understand the functional differences between a mockup and a working product. In order to understand the possibilities of the medium, where to draw the boundaries and how to deal with the constraints, many designers are willing to “get their hands dirty” when it comes to learning code — but they have difficulties.

One of the main reasons designers are not coders is because there is a large gap between the designer’s mental model and the conceptual model of many code editors. Design and development take two very different modes of thought. This mismatch leads to a difficult and frustrating learning curve for designers that they might not be able to overcome.

Code Abstraction (Large preview)

Abstraction is a core concept of computer science. Languages, frameworks, and libraries are built on different abstraction layers of complexity to facilitate, optimize, and guarantee productivity.

“Visual programming tools abstract code away from the creator, making them significantly more accessible. The real magic of these tools, however, is how they integrate all of the underlying layers of software into end products, providing useful functionality through modular components that can be harnessed through intuitive visual interfaces.”
— Jeremy Q. Ho, No Code is New Programming

When working with abstraction layers, there are tools such as Editor X and Studio for websites/web applications, Draftbit and Kodika for mobile apps, and Modulz for design systems, which enable a visual representation of code, in addition to code capabilities.

By adopting a familiar visual medium, the learning curve becomes easier for designers.

If Chris Wanstrath, the co-founder and former CEO of GitHub, said “the future of coding is no coding at all,” then surely no-code is a legitimate way to develop — despite the perception that these tools don’t offer the flexibility to write your own code, line by line.

Indeed, we see that interest in the term “nocode” is growing:

Search for the term 'nocode' in the last 5 years on Google Trends. (Large preview)

Difference Between Imperative And Declarative Programming

In order to understand the development of no-code tools for designers, you need to know the distinction between two types of programming:

  1. Imperative Programming
    Deconstruct the result into a sequence of imperatives, i.e. explicit control flow. For example: JavaScript, Python, C++.
  2. Declarative Programming
    Declare the result, i.e. implicit control flow. For example: SQL, HTML, CSS.

Declarative languages are often domain-specific languages, or DSLs, which means they’re used for a specific purpose, in a specific domain.

For example, SQL is a DSL for working with databases, HTML is a DSL for adding semantic structure and meaning to content on a web page, and CSS is a DSL for adding style.

“There are too many variables to consider. The point of CSS is to make it so you don’t have to worry about them all. Define some constraints. Let the language work out the details.”
— Keith J. Grant, Resilient, Declarative, Contextual

Imperative programming sets specific, step-by-step instructions to the browser to get the desired result, while declarative programming states the desired result and the browser does the work by itself.
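To make the distinction concrete, here is a small declarative sketch in CSS. Centering an element imperatively would mean measuring the container, measuring the element, and recomputing coordinates in script on every resize; declaratively, you state the desired result and the browser works out the details (the class name below is illustrative):

```css
/* Declarative: describe the result, not the steps.
   The browser recomputes the layout automatically on every resize. */
.container {
  display: flex;
  justify-content: center; /* horizontal centering */
  align-items: center;     /* vertical centering   */
}
```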

The Middle Ages

The effort to create a visual interface tool for web design development started in the 1990s through groundbreaking attempts like InContext Spider, Netscape Navigator Gold, Microsoft FrontPage, and of course, Dreamweaver.

Dreamweaver MX, Foundation Dreamweaver MX. (Large preview)

During this period, the common terminology included: visual HTML authoring tool, WYSIWYG web page compositor, or simply HTML editor. The term “no-code” was popular in the 1990s — but for a different reason. In 1996, the American rock band Pearl Jam released their fourth studio album, No Code.

While these no-code tools dramatically reduced the creative barrier and attracted a large audience, the Internet wasn’t ready for these types of tools at the time.

This effort was limited for the following reasons:

1. Layout

When the inventor of the World Wide Web Tim Berners-Lee launched his creation in 1989, he didn’t offer a way to design a website.

This came along in October 1994, after different people had made a series of suggestions on how to design for the Web — including Håkon Wium Lie, whose proposal attracted everyone’s attention. Lie believed in a declarative style that would let browsers handle the processing; it was called Cascading Style Sheets, or simply CSS.

“CSS stood out because it was simple, especially compared to some of its earliest competitors.”
— Jason Hoffman, A Look Back at the History of CSS

For a long time after, CSS provided design solutions for a single object — but it didn’t adequately address the relationships between objects.

Methods to address this were effectively hacks, and they couldn’t handle a great deal of complexity. As sites evolved from simple documents into complex applications, web layouts became difficult to assemble. Instead of using styles declaratively as Lie designed, web developers were forced to resort to imperative programming.

A grid system based on the rules of Swiss designer Josef Müller-Brockmann, customary in print since the 1940s, seemed like a distant dream when considering anything related to the Web.

Posters by Josef Müller-Brockmann. (Large preview)

Because of these layout limitations, no-code platforms were forced to add an abstract layer to perform backstage calculations. This layer causes a range of problems, including losing the semantic value of the objects, performance issues, bulky code, a complex learning curve, unscalability, and accessibility issues.

2. Browser Alignment

In the early days, browser makers were the ones who decided how to build the Internet. This led to the Web becoming a manipulative commodity. Competition between browsers led to unique “design features”, which forced developers to rebuild the same site several times so it could be accessed from multiple browsers.

“Developers in the 90s would often have to make three or four versions of every website they built, so that it would be compatible with each of the browsers available at the time.”
— Amy Dickens, Web Standards: The What, The Why, And The How

To offset the need to build websites that fit specific browsers, the World Wide Web Consortium (W3C) was established at MIT in 1994. The W3C is an international community working to develop functional, accessible, and cross-compatible web standards.

When the standards were introduced, browser makers were encouraged to stick to one way of doing things — thus preventing several versions of the same site from being built. Despite the W3C’s recommendations, it took a long time for browsers to meet the same standards.

Due to a lack of alignment between the browsers (Internet Explorer, I’m looking at you), CSS was stuck for a time and no new capabilities were added. When a declarative language doesn’t support something, you have to lean on all kinds of imperative hacks to achieve the same goal.

3. Data Binding

In the early years of the Web, sites were developed as a collection of static pages with no semantic meaning. When Web 2.0 arrived, it received the description “the web as a platform,” which led to a significant change — pages had dynamic content, which affected the connection to the data, and of course the semantic meaning.

“Sites in the 1990s were usually either brochure-ware (static HTML pages with insipid content) or they were interactive in a flashy, animated, JavaScript kind of way.”
— Joshua Porter, Web 2.0 for Designers

Indeed, connecting to data using a no-code approach has existed for a long time — but the user experience was difficult. Additionally, the transition to semantic markup, so that content could be detected in no-code tools, was difficult because of the mixing of declarative and imperative programming.

No-code tools didn’t mesh with those core tasks.

(Large preview)

Proto-Renaissance

On June 29, 2007, the nature of the Internet was changed dramatically. This was the day when Steve Jobs introduced the iPhone — a combination of mobile phone and media player that connected to the Internet and enabled multi-touch navigation.

“When the iPhone was introduced in 2007, it was a turning point for web design. All of a sudden web designers lost control of the canvas on which we designed websites. Previously, websites only had to work on monitor screens, which varied in size, but not all that much. How were we supposed to make our websites work on these tiny little screens?”
— Clarissa Peterson, Learning Responsive Web Design

This created new challenges for web design development. Mainly, how to build a site that can be used on multiple types of devices. Many “hack” approaches to layout design simply fell apart — they caused more problems than they solved.

Everything needed to be reevaluated.

The No-Code Renaissance

(Large preview)

Browsers that support W3C standards (such as Chrome and Firefox) hold huge market share today, which has pushed the remaining browsers to support the standards too. The fact that all of the browsers support the same standards enables sites to be built one way and ensures these capabilities will continue to work as standards and browsers evolve.

Methods such as media queries, flexbox, and grid — which are natively available in browsers for layout design — have paved the way for flexible layouts, even when element sizes are dynamic.

“When CSS Grid shipped in March 2017, our toolbox reached a tipping point. At last we have technology powerful enough to let us really get creative with layout. We can use the power of graphic design to convey meaning through our use of layout—creating unique layouts for each project, each section, each type of content, each page.”
— Rachel Andrew, The New CSS Layout

In this way, HTML became cleaner and it was able to achieve its original purpose: a semantic description of the content.

Finally, thanks to alignment between the browsers and new capabilities, no-code tools are backed by powerful, uniform technology. These changes created a clearer distinction between declarative and imperative. New possibilities were created to solve old problems.

“Simplicity is the ultimate sophistication.”
— Leonardo da Vinci

The Effect Of No-code On Designers

Editor X | David’s photo by Igor Ferreira on Unsplash. (Large preview)

The development of the Internet over the years has led to a situation where the abstraction between design and code is constantly improving. This has implications for the way web designers plan and implement their designs.

1. Design Planning

While popular design tools use static content for dynamic web design, no-code tools allow designers to work with the web’s own materials.

“Photoshop is the most effective way to show your clients what their website will never look like.”
— Stephen Hay, author of Responsive Design Workflow

If we have a complex design with different states, micro-interactions, animations, and responsive breakpoints, no-code tools let us work with it in a more tangible way.

Additionally, the development of the web enables no-code tools to clearly separate content from the design (which allows designers to visually manage real content). Reflecting the dynamic content in the design (e.g. text, images, videos, and audio), gives designers a clearer understanding of how it will appear.

The advantage of working in the no-code workspace is that interactions appear immediately. This allows designers to quickly test their design choices and see if they work.

2. Design Implementation

Traditionally, after perfecting a design, designers have to explain their visual and conceptual decisions to developers through prototypes. Prototypes not only take time to prepare, but their designs are also often implemented incorrectly due to misinterpretation.

With no-code tools, designers are able to place objects on their display and handle their visibility and behavior with ease and speed. In other words, they can design the end result without depending on anyone else.

To use myself as an example, when the Coronavirus pandemic hit, I worked with a small team on a project to help connect young volunteers to isolated seniors. In just three days, another designer and I built the website and connected user registration data to a database, while the team’s developer worked to integrate data from the site into a separate mobile app.

The Effect Of No-code On Developers

Will no-code tools completely replace developers? The short answer: No. The significant change is in the way designers and developers can work together to create websites.

Alongside the development of CSS, JavaScript has evolved in parallel, perhaps even more so. The idea that front-end developers need to own every one of these capabilities no longer makes sense. And yet, the development of no-code over the years has enabled designers to build their own designs.

It’s a win-win situation, in which developers can focus on developing logic, and designers have more control over the user experience and styling.

The Effort Is Not Yet Complete

I don’t want to leave you with the impression that designers have complete freedom to design with no-code tools. There are still some missing style capabilities that CSS has not yet solved, and these still require imperative development.

Unlike in the Middle Ages, when art was considered a handicraft without a theoretical basis, the developments of the Renaissance changed the status of the artist, who was suddenly considered a polymath.

No-code tools remove bottlenecks, which allows designers to gain more ownership, influence, and control over the experiences they design.

We’ve come a long way from the days when designers weren’t able to bring their designs to life. As the Internet evolves, browsers align, capabilities are added and the accessibility of technology becomes easier — designers are faced with new opportunities to create, think, and change their status with no-code tools.

The no-code movement affects not only how things are done, but by whom.

Credits: Yoav Avrahami and Jeremy Hoover contributed to this article.


Understanding Client-Side GraphQL With Apollo Client In React Apps

Wed, 07/29/2020 - 03:30
Blessing Krofegha 2020-07-29T10:30:00+00:00

According to State of JavaScript 2019, 38.7% of developers would like to use GraphQL, while 50.8% of developers would like to learn GraphQL.

Being a query language, GraphQL simplifies the workflow of building a client application. It removes the complexity of managing API endpoints in client-side apps because it exposes a single HTTP endpoint to fetch the required data. Hence, it eliminates overfetching and underfetching of data, as in the case of REST.

But GraphQL is just a query language. In order to use it easily, we need a platform that does the heavy lifting for us. One such platform is Apollo.

The Apollo platform is an implementation of GraphQL that transfers data between the cloud (the server) and the UI of your app. When you use Apollo Client, all of the logic for retrieving data, tracking loading and error states, and updating the UI is encapsulated by the useQuery hook (as in the case of React). Hence, data fetching is declarative. It also has zero-configuration caching: just by setting up Apollo Client in your app, you get an intelligent cache out of the box, with no additional configuration required.

Apollo Client is also interoperable with other frameworks, such as Angular, Vue.js, and React.

Note: This tutorial will benefit those who have worked with RESTful or other forms of APIs in the past on the client-side and want to see whether GraphQL is worth taking a shot at. This means you should have worked with an API before; only then will you be able to understand how beneficial GraphQL could be to you. While we will be covering a few basics of GraphQL and Apollo Client, a good knowledge of JavaScript and React Hooks will come in handy.

GraphQL Basics

This article isn’t a complete introduction to GraphQL, but we will define a few conventions before continuing.

What Is GraphQL?

GraphQL is a specification that describes a declarative query language that your clients can use to ask an API for the exact data they want. This is achieved by creating a strongly typed schema for your API, with ultimate flexibility. It also ensures that the API resolves data and that client queries are validated against the schema. This means that GraphQL is a declarative query language with a statically typed API (much like TypeScript’s type system), making it possible for the client to leverage those types to ask the API for exactly the data it wants.

So, if we created some types with some fields in them, then, from the client-side, we could say, “Give us this data with these exact fields”. Then the API will respond with that exact shape, just as if we were using a type system in a strongly typed language. You can learn more in my TypeScript article.
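To make this concrete, here is a minimal sketch in plain JavaScript (not Apollo or GraphQL itself) of the “ask for exact fields” idea. The `pickFields` helper and the `pet` record are illustrative assumptions, not part of any library:

```javascript
// A toy resolver: given a record and a list of requested fields,
// return only those fields -- the core idea behind "no overfetching".
// `pickFields` is an illustrative helper, not part of any GraphQL library.
function pickFields(record, fields) {
  const result = {};
  for (const field of fields) {
    if (field in record) result[field] = record[field];
  }
  return result;
}

const pet = { id: '1', name: 'Rex', type: 'DOG', img: '/rex.png', internalNotes: 'bites' };

// The client asks only for what the UI needs:
const shaped = pickFields(pet, ['id', 'name', 'type']);
console.log(shaped); // { id: '1', name: 'Rex', type: 'DOG' }
```

The server never ships `internalNotes` or `img`, because the client never asked for them.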

Let’s look at some conventions of GraphQL that will help us as we continue.

The Basics
  • Operations
    In GraphQL, every action performed is called an operation. There are a few operations, namely:
    • Query
      This operation is concerned with fetching data from the server. You could also call it a read-only fetch.
    • Mutation
      This operation involves creating, updating, and deleting data from a server. It is popularly called a CUD (create, update, and delete) operation.
    • Subscriptions
      This operation in GraphQL involves sending data from a server to its clients when specific events take place. They are usually implemented with WebSockets.

In this article, we will be dealing only with query and mutation operations.

  • Operation names
    There are unique names for your client-side query and mutation operations.
  • Variables and arguments
    Operations can define arguments, very much like a function in most programming languages. Those variables can then be passed to query or mutation calls inside the operation as arguments. Variables are expected to be given at runtime during the execution of an operation from your client.
  • Aliasing
    This is a convention in client-side GraphQL that involves renaming verbose or vague field names with simple and readable field names for the UI. Aliasing is necessary in use cases where you don’t want to have conflicting field names.
GraphQL basic conventions. (Large preview)

What Is Client-Side GraphQL?

When a front-end engineer builds UI components using any framework, like Vue.js or (in our case) React, those components are modeled and designed from a certain pattern on the client to suit the data that will be fetched from the server.

One of the most common problems with RESTful APIs is overfetching and underfetching. This happens because the only way for a client to download data is by hitting endpoints that return fixed data structures. Overfetching in this context means that a client downloads more information than is required by the app.

In GraphQL, on the other hand, you’d simply send a single query to the GraphQL server that includes the required data. The server would then respond with a JSON object of the exact data you’ve requested — hence, no overfetching. Sebastian Eschweiler explains the differences between RESTful APIs and GraphQL.

Client-side GraphQL is a client-side infrastructure that interfaces with data from a GraphQL server to perform the following functions:

  • It manages data by sending queries and mutating data without you having to construct HTTP requests all by yourself. You can spend less time plumbing data and more time building the actual application.
  • It manages the complexity of a cache for you. So, you can store and retrieve the data fetched from the server, without any third-party interference, and easily avoid refetching duplicate resources. Thus, it identifies when two resources are the same, which is great for a complex app.
  • It keeps your UI consistent with Optimistic UI, a convention that simulates the results of a mutation (i.e. the created data) and updates the UI even before receiving a response from the server. Once the response is received from the server, the optimistic result is thrown away and replaced with the actual result.

For further information about client-side GraphQL, spare an hour with the cocreator of GraphQL and other cool folks on GraphQL Radio.

What Is Apollo Client?

Apollo Client is an interoperable, ultra-flexible, community-driven GraphQL client for JavaScript and native platforms. Its impressive features include a robust state-management tool (Apollo Link), a zero-config caching system, a declarative approach to fetching data, easy-to-implement pagination, and the Optimistic UI for your client-side application.

Apollo Client stores not only the state from the data fetched from the server, but also the state that it has created locally on your client; hence, it manages state for both API data and local data.

It’s also important to note that you can use Apollo Client alongside other state-management tools, like Redux, without conflict. Plus, it’s possible to migrate your state management from, say, Redux to Apollo Client (which is beyond the scope of this article). Ultimately, the main purpose of Apollo Client is to enable engineers to query data in an API seamlessly.

Features of Apollo Client

Apollo Client has won over so many engineers and companies because of its extremely helpful features that make building modern robust applications a breeze. The following features come baked in:

  • Caching
    Apollo Client supports caching on the fly.
  • Optimistic UI
    Apollo Client has cool support for the Optimistic UI. It involves temporarily displaying the final state of an operation (mutation) while the operation is in progress. Once the operation is complete, the real data replaces the optimistic data.
  • Pagination
    Apollo Client has built-in functionality that makes it quite easy to implement pagination in your application. It takes care of most of the technical headaches of fetching a list of data, either in patches or at once, using the fetchMore function, which comes with the useQuery hook.
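The merge step at the heart of fetchMore-style pagination can be sketched in plain JavaScript. `mergePage` is an illustrative helper (not an Apollo API), mimicking the kind of update function you would hand to fetchMore:

```javascript
// A minimal sketch of the merge step behind fetchMore-style pagination:
// append the next page of results to what is already rendered.
// `mergePage` is an illustrative helper, not an Apollo API.
function mergePage(previous, fetchMoreResult) {
  if (!fetchMoreResult) return previous;
  return { pets: [...previous.pets, ...fetchMoreResult.pets] };
}

const pageOne = { pets: [{ id: '1', name: 'Rex' }] };
const pageTwo = { pets: [{ id: '2', name: 'Milo' }] };

const merged = mergePage(pageOne, pageTwo);
console.log(merged.pets.length); // 2
```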

In this article, we will look at a selection of these features.

Enough of the theory. Tighten your seat belt and grab a cup of coffee to go with your pancakes, as we get our hands dirty.

Building Our Web App

This project is inspired by Scott Moss.

We will be building a simple pet shop web app, whose features include:

  • fetching our pets from the server-side;
  • creating a pet (which involves creating the name, type of pet, and image);
  • using the Optimistic UI;
  • using pagination to segment our data.

To begin, clone the repository, ensuring that the starter branch is what you’ve cloned.

Getting Started
  • Install the Apollo Client Developer Tools extension for Chrome.
  • Using the command-line interface (CLI), navigate to the directory of the cloned repository, and run the command to get all dependencies: npm install.
  • Run the command npm run app to start the app.
  • While still in the root folder, run the command npm run server. This will start our back-end server for us, which we’ll use as we proceed.

The app should open up in a configured port. Mine is http://localhost:1234/; yours is probably something else.

If everything worked well, your app should look like this:

Cloned starter branch UI. (Large preview)

You’ll notice that we’ve got no pets to display. That’s because we haven’t created such functionality yet.

If you’ve installed Apollo Client Developer Tools correctly, open up the developer tools and click on the tray icon. You’ll see “Apollo” and something like this:

Apollo Client Developer Tools. (Large preview)

Like the Redux and React developer tools, we will be using Apollo Client Developer Tools to write and test our queries and mutations. The extension comes with the GraphQL Playground.

Fetching Pets

Let’s add the functionality that fetches pets. Move over to client/src/client.js. We’ll be writing Apollo Client, linking it to an API, exporting it as a default client, and writing a new query.

Copy the following code and paste it in client.js:

```javascript
import { ApolloClient } from 'apollo-client'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { HttpLink } from 'apollo-link-http'

const link = new HttpLink({ uri: 'http://localhost:4000/' })
const cache = new InMemoryCache()

const client = new ApolloClient({
  link,
  cache
})

export default client
```

Here’s an explanation of what is happening above:

  • ApolloClient
    This will be the function that wraps our app and, thus, interfaces with the HTTP, caches the data, and updates the UI.
  • InMemoryCache
    This is the normalized data store in Apollo Client that helps with manipulating the cache in our application.
  • HttpLink
    This is a standard network interface for modifying the control flow of GraphQL requests and fetching GraphQL results. It acts as middleware, fetching results from the GraphQL server each time the link is fired. Plus, it’s a good substitute for other options, like Axios and window.fetch.
  • We declare a link variable that is assigned to an instance of HttpLink. It takes a uri property whose value points to our server: http://localhost:4000/.
  • Next is a cache variable that holds the new instance of InMemoryCache.
  • The client variable also takes an instance of ApolloClient and wraps the link and cache.
  • Lastly, we export the client so that we can use it across the application.

Before we get to see this in action, we’ve got to make sure that our entire app is exposed to Apollo, so that it can receive data fetched from the server and mutate that data.

To achieve this, let’s head over to client/src/index.js:

```javascript
import React from 'react'
import ReactDOM from 'react-dom'
import { BrowserRouter } from 'react-router-dom'
import { ApolloProvider } from '@apollo/react-hooks'
import App from './components/App'
import client from './client'
import './index.css'

const Root = () => (
  <BrowserRouter>
    <ApolloProvider client={client}>
      <App />
    </ApolloProvider>
  </BrowserRouter>
);

ReactDOM.render(<Root />, document.getElementById('app'))

// standard Parcel hot-module-replacement guard
if (module.hot) {
  module.hot.accept()
}
```

As you’ll notice in the code above, we’ve wrapped the App component in ApolloProvider and passed our client instance as the client prop. ApolloProvider is similar to React’s Context.Provider. It wraps your React app and places the client in context, which allows you to access it from anywhere in your component tree.

To fetch our pets from the server, we need to write queries that request the exact fields that we want. Head over to client/src/pages/Pets.js, and copy and paste the following code into it:

```javascript
import React, { useState } from 'react'
import gql from 'graphql-tag'
import { useQuery, useMutation } from '@apollo/react-hooks'
import PetsList from '../components/PetsList'
import NewPetModal from '../components/NewPetModal'
import Loader from '../components/Loader'

const GET_PETS = gql`
  query getPets {
    pets {
      id
      name
      type
      img
    }
  }
`;

export default function Pets () {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);

  if (loading) return <Loader />;
  if (error) return <p>An error occurred!</p>;

  const onSubmit = input => {
    setModal(false)
  }

  if (modal) {
    return <NewPetModal onSubmit={onSubmit} onCancel={() => setModal(false)} />
  }

  return (
    <div className="page pets-page">
      <section>
        <div className="row betwee-xs middle-xs">
          <div className="col-xs-10">
            <h1>Pets</h1>
          </div>
          <div className="col-xs-2">
            <button onClick={() => setModal(true)}>new pet</button>
          </div>
        </div>
      </section>
      <section>
        <PetsList pets={data.pets}/>
      </section>
    </div>
  )
}
```

With a few bits of code, we are able to fetch the pets from the server.

What Is gql?

It’s important to note that operations in GraphQL are generally written as template literals (wrapped in backticks) and parsed with graphql-tag.

gql tags are JavaScript template literal tags that parse GraphQL query strings into the GraphQL AST (abstract syntax tree).
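Template literal tags are ordinary JavaScript functions. The toy `identityTag` below (an illustrative stand-in, not gql) shows the mechanism: the tag receives the string pieces and interpolated values, where gql would go on to parse the result into an AST:

```javascript
// Template literal tags are plain functions. gql works the same way,
// except it parses the string into an AST instead of returning it.
// `identityTag` here is a toy stand-in for gql.
function identityTag(strings, ...values) {
  return strings.reduce((out, str, i) => out + str + (values[i] ?? ''), '');
}

const id = 42;
const query = identityTag`query getPet { pet(id: ${id}) { name } }`;
console.log(query); // query getPet { pet(id: 42) { name } }
```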

  • Query operations
    In order to fetch our pets from the server, we need to perform a query operation.
    • Because we’re making a query operation, we needed to specify the type of operation before naming it.
    • The name of our query operation is getPets (assigned to the GET_PETS constant). It’s a naming convention of GraphQL to use camelCase for operation and field names.
    • The field we query is pets. Hence, we specify the exact subfields that we need from the server (id, name, type, img).
    • useQuery is a React hook that is the basis for executing queries in an Apollo application. To perform a query operation in our React component, we call the useQuery hook, which was initially imported from @apollo/react-hooks. Next, we pass it a GraphQL query string, which is GET_PETS in our case.
  • When our component renders, useQuery returns an object response from Apollo Client that contains loading, error, and data properties. Thus, they are destructured, so that we can use them to render the UI.
  • useQuery is awesome. We don’t have to include async-await. It’s already taken care of in the background. Pretty cool, isn’t it?
    • loading
      This property helps us handle the loading state of the application. In our case, we return a Loader component while our application loads. By default, loading is false.
    • error
      Just in case, we use this property to handle any error that might occur.
    • data
      This contains our actual data from the server.
    • Lastly, in our PetsList component, we pass the pets props, with data.pets as an object value.

At this point, we have successfully queried our server.
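The branching driven by useQuery’s response object can be sketched in plain JavaScript. `renderState` is illustrative only; in the component above, the same branching returns a Loader, an error message, or the PetsList:

```javascript
// A toy version of the render logic driven by useQuery's response object.
// `renderState` is an illustrative helper, not part of Apollo.
function renderState({ loading, error, data }) {
  if (loading) return 'Loader';
  if (error) return 'An error occurred!';
  return `PetsList with ${data.pets.length} pets`;
}

console.log(renderState({ loading: true }));                 // Loader
console.log(renderState({ error: new Error('oops') }));      // An error occurred!
console.log(renderState({ data: { pets: [{ id: '1' }] } })); // PetsList with 1 pets
```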

To start our application, let’s run the following command:

  • Start the client app. Run the command npm run app in your CLI.
  • Start the server. Run the command npm run server in another CLI.
VScode CLI partitioned to start both the client and the server. (Large preview)

If all went well, you should see this:

Pets queried from the server.

Mutating Data

Mutating data or creating data in Apollo Client is almost the same as querying data, with very slight changes.

Still in client/src/pages/Pets.js, let’s copy and paste the highlighted code:

```javascript
// ...

const GET_PETS = gql`
  query getPets {
    pets {
      id
      name
      type
      img
    }
  }
`;

const NEW_PETS = gql`
  mutation CreateAPet($newPet: NewPetInput!) {
    addPet(input: $newPet) {
      id
      name
      type
      img
    }
  }
`;

const Pets = () => {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);
  const [createPet, newPet] = useMutation(NEW_PETS);

  const onSubmit = input => {
    setModal(false)
    createPet({ variables: { newPet: input } });
  }

  if (loading || newPet.loading) return <Loader />;
  if (error || newPet.error) return <p>An error occurred</p>;

  if (modal) {
    return <NewPetModal onSubmit={onSubmit} onCancel={() => setModal(false)} />
  }

  return (
    <div className="page pets-page">
      <section>
        <div className="row betwee-xs middle-xs">
          <div className="col-xs-10">
            <h1>Pets</h1>
          </div>
          <div className="col-xs-2">
            <button onClick={() => setModal(true)}>new pet</button>
          </div>
        </div>
      </section>
      <section>
        <PetsList pets={data.pets}/>
      </section>
    </div>
  )
}

export default Pets
```

To create a mutation, we would take the following steps.

1. mutation

To create, update, or delete, we need to perform the mutation operation. The mutation operation has a CreateAPet name, with one argument. This argument has a $newPet variable, with a type of NewPetInput. The ! means that the variable is required; thus, GraphQL won’t execute the operation unless we pass a newPet variable whose type is NewPetInput.
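What the ! (non-null) marker enforces can be sketched at runtime in plain JavaScript. `validateVariables` is an illustrative stand-in for the server’s variable validation, not GraphQL’s actual implementation:

```javascript
// A sketch of what the ! (non-null) marker enforces:
// the operation is rejected unless every required variable is supplied.
// `validateVariables` is an illustrative stand-in for server-side validation.
function validateVariables(required, variables) {
  for (const name of required) {
    if (variables[name] === undefined || variables[name] === null) {
      throw new Error(`Variable "$${name}" of required type was not provided.`);
    }
  }
  return true;
}

console.log(validateVariables(['newPet'], { newPet: { name: 'Rex', type: 'DOG' } })); // true

try {
  validateVariables(['newPet'], {});
} catch (e) {
  console.log(e.message); // Variable "$newPet" of required type was not provided.
}
```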

2. addPet

The addPet function, which is inside the mutation operation, takes an argument of input and is set to our $newPet variable. The field sets specified in our addPet function must be equal to the field sets in our query. The field sets in our operation are:

  • id
  • name
  • type
  • img
3. useMutation

The useMutation React hook is the primary API for executing mutations in an Apollo application. When we need to mutate data, we call useMutation in a React component and pass it a GraphQL string (in our case, NEW_PETS).

When our component renders, useMutation returns a tuple (an ordered pair, expressed as a two-element array) that includes:

  • a mutate function that we can call at any time to execute the mutation;
  • an object with fields that represent the current status of the mutation’s execution.

The useMutation hook is passed a GraphQL mutation string (which is NEW_PETS in our case). We destructure the tuple into the function that will mutate the data (createPet) and the object representing the mutation’s current state (newPet).

4. createPet

In our onSubmit function, shortly after the setModal call, we invoke createPet. This function takes a variables object whose newPet property is set to input. The input represents the various input fields in our form (such as name, type, etc.).

With that done, the outcome should look like this:

Mutation without instant update.

If you observe the GIF closely, you’ll notice that our created pet doesn’t show up instantly, only when the page is refreshed. However, it has been updated on the server.

The big question is, why doesn’t our pet update instantly? Let’s find out in the next section.

Caching In Apollo Client

The reason our app doesn’t update automatically is that our newly created data doesn’t match the cache data in Apollo Client. So, there is a conflict as to what exactly it needs to be updated from the cache.

Simply put, if we perform a mutation that creates, updates, or deletes entries (nodes), then we are responsible for updating any queries referencing those nodes, so that our cached data matches the modifications that the mutation makes to our back-end data.

Keeping Cache In Sync

There are a few ways to keep our cache in sync each time we perform a mutation operation.

The first is by refetching matching queries after a mutation, using the refetchQueries object property (the simplest way).

Note: If we were to use this method, it would take an object property in our createPet function called refetchQueries, and it would contain an array of objects with a value of the query: refetchQueries: [{ query: GET_PETS }].

Because our focus in this section isn’t just to update our created pets in the UI, but to manipulate the cache, we won’t be using this method.

The second approach is to use the update function. In Apollo Client, there’s an update helper function that helps modify the cache data, so that it syncs with the modifications that a mutation makes to our back-end data. Using this function, we can read and write to the cache.
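The read-prepend-write cycle behind the update function can be sketched with a plain in-memory stand-in. `fakeCache` and `updateAfterMutation` are illustrative assumptions; the real Apollo cache is normalized and far more capable:

```javascript
// A minimal in-memory stand-in for the cache read/write cycle described
// above. The real Apollo cache is normalized; this toy version only shows
// the "read, prepend, write back" flow.
const fakeCache = {
  store: { GET_PETS: { pets: [{ id: '1', name: 'Rex' }] } },
  readQuery({ query }) { return this.store[query]; },
  writeQuery({ query, data }) { this.store[query] = data; },
};

function updateAfterMutation(cache, addPet) {
  const data = cache.readQuery({ query: 'GET_PETS' });
  cache.writeQuery({
    query: 'GET_PETS',
    data: { pets: [addPet, ...data.pets] },
  });
}

updateAfterMutation(fakeCache, { id: '2', name: 'Milo' });
console.log(fakeCache.readQuery({ query: 'GET_PETS' }).pets.length); // 2
```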

Updating The Cache

Copy the following highlighted code, and paste it in client/src/pages/Pets.js:

```javascript
// ...

const Pets = () => {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);
  const [createPet, newPet] = useMutation(NEW_PETS, {
    update(cache, { data: { addPet } }) {
      const data = cache.readQuery({ query: GET_PETS });
      cache.writeQuery({
        query: GET_PETS,
        data: { pets: [addPet, ...data.pets] },
      });
    },
  });

// ...
```

The update function receives two arguments:

  • The first argument is the cache from Apollo Client.
  • The second is the exact mutation response from the server. We destructure its data property to get our mutation result (addPet).

Next, inside the update function, we determine which query needs to be updated (in our case, the GET_PETS query) and read it from the cache.

Secondly, we write back to the query that was read, so that the cache knows it has been updated. We do so by passing an object that contains a query property, with the value set to our query operation (GET_PETS), and a data property whose pets value is an array containing the addPet result followed by a copy of the existing pets data.

If you followed these steps carefully, you should see your pets update automatically as you create them. Let’s take a look at the changes:

Pets update instantly.

Optimistic UI

A lot of people are big fans of loaders and spinners. There’s nothing wrong with using a loader; there are perfect use cases where a loader is the best option. I’ve written about loaders versus spinners and their best use cases.

Loaders and spinners indeed play an important role in UI and UX design, but the arrival of Optimistic UI has stolen the spotlight.

What Is Optimistic UI?

Optimistic UI is a convention that simulates the results of a mutation (created data) and updates the UI before receiving a response from the server. Once the response is received from the server, the optimistic result is thrown away and replaced with the actual result.

In the end, an optimistic UI is nothing more than a way to manage perceived performance and avoid loading states.

Apollo Client has a very interesting way of integrating the Optimistic UI. It gives us a simple option that allows us to write an expected result to the local cache while a mutation is in flight. Let’s see how it works!

Step 1

Head over to client/src/client.js, and add only the highlighted code.

```javascript
import { ApolloClient } from 'apollo-client'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { HttpLink } from 'apollo-link-http'
import { setContext } from 'apollo-link-context'
import { ApolloLink } from 'apollo-link'

const http = new HttpLink({ uri: "http://localhost:4000/" });

const delay = setContext(
  request =>
    new Promise((success, fail) => {
      setTimeout(() => {
        success()
      }, 800)
    })
)

const link = ApolloLink.from([
  delay,
  http
])

const cache = new InMemoryCache()
const client = new ApolloClient({ link, cache })

export default client
```

The first step involves the following:

  • We import setContext from apollo-link-context. The setContext function takes a callback and returns a promise that resolves after an 800ms setTimeout, creating an artificial delay whenever an operation is performed.
  • The ApolloLink.from method composes our links into a chain, so every network request to our API over HTTP passes through the delay link first.
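Conceptually, composing links works like chaining middleware: each link receives the operation plus a forward function for the next link in the chain. This is a rough stand-in using plain functions, not the real ApolloLink API:

```javascript
// Each "link" receives the operation and a forward() function.
const logLink = (operation, forward) => {
  operation.trace = (operation.trace || []).concat('log');
  return forward(operation);
};

// A terminating link, like the HTTP link: it never calls forward().
const httpLink = (operation) => {
  operation.trace = (operation.trace || []).concat('http');
  return { data: { ok: true }, trace: operation.trace };
};

// from([a, b]) chains a before b, like ApolloLink.from([delay, http]).
const from = (links) => (operation) =>
  links.reduceRight(
    (next, link) => (op) => link(op, next),
    () => { throw new Error('end of chain'); }
  )(operation);

const result = from([logLink, httpLink])({ query: '{ pets }' });
// result.trace records the order the links ran in: ['log', 'http']
```

This is why the order of the array passed to ApolloLink.from matters: the delay link must come before the HTTP link for the delay to apply to requests.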
Step 2

The next step is using the Optimistic UI hook. Slide back to client/src/pages/Pets.js, and add only the highlighted code below.

.....
const Pets = () => {
  const [modal, setModal] = useState(false)
  const { loading, error, data } = useQuery(GET_PETS);
  const [createPet, newPet] = useMutation(NEW_PETS, {
    update(cache, { data: { addPet } }) {
      const data = cache.readQuery({ query: GET_PETS });
      cache.writeQuery({
        query: GET_PETS,
        data: { pets: [addPet, ...data.pets] },
      });
    },
  });

  const onSubmit = input => {
    setModal(false)
    createPet({
      variables: { newPet: input },
      optimisticResponse: {
        __typename: 'Mutation',
        addPet: {
          __typename: 'Pet',
          id: Math.floor(Math.random() * 10000) + '',
          name: input.name,
          type: input.type,
          img: ''
        }
      }
    });
  }
.....

The optimisticResponse object is used if we want the UI to update immediately when we create a pet, instead of waiting for the server response.

The code snippets above include the following:

  • __typename is injected by Apollo into the query to fetch the type of the queried entities. Those types are used by Apollo Client to build the id property (which is a symbol) for caching purposes in apollo-cache. So, __typename is a valid property of the query response.
  • The mutation is set as the __typename of optimisticResponse.
  • Just as earlier defined, our mutation’s name is addPet, and the __typename is Pet.
  • Next are the fields of our mutation that we want the optimistic response to update:
    • id
      Because we don’t know what the ID from the server will be, we made one up using Math.floor.
    • name
      This value is set to input.name.
    • type
      The type’s value is input.type.
    • img
      Now, because our server generates images for us, we used a placeholder to mimic our image from the server.
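As an aside, the made-up ID only needs two properties: it should be unlikely to collide with anything already in the cache, and it should be a string, like the IDs the server returns. A tiny illustrative helper:

```javascript
// Generate a temporary, client-only id for the optimistic entry.
// The server will replace it, so it only needs to avoid collisions.
// Concatenating '' converts the number to a string, matching the
// id type the cache expects.
function tempId() {
  return Math.floor(Math.random() * 10000) + '';
}

const id = tempId(); // e.g. "8317"
```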

This was indeed a long ride. If you got to the end, don’t hesitate to take a break from your chair with your cup of coffee.

Let’s take a look at our outcome. The supporting repository for this project is on GitHub. Clone and experiment with it.

Final result of our app.

Conclusion

The amazing features of Apollo Client, such as the Optimistic UI and pagination, make building client-side apps a reality.

While Apollo Client also works very well with other frameworks, such as Vue.js and Angular, React developers get the added convenience of Apollo Client Hooks, which make building a great app all the more enjoyable.

In this article, we’ve only scratched the surface. Mastering Apollo Client demands constant practice. So, go ahead and clone the repository, add pagination, and play around with the other features it offers.

Please do share your feedback and experience in the comments section below. We can also discuss your progress on Twitter. Cheers!

(ks, ra, al, yk, il)
Categories: Design

Inspired Design Decisions With Emmett McBain: Art Direction As Social Equity

Tue, 07/28/2020 - 02:30
Andrew Clarke 2020-07-28T09:30:00+00:00 2020-07-28T10:37:44+00:00

Along with advertising, selling is a skill that people often frown on. It’s true: no one likes someone coercing or misleading them, and nobody enjoys being interrupted.

But being sold to well — by a salesperson who understands your aspirations, motivations, and needs — can be an experience that benefits buyers and sellers.

Learning how to sell was one of the best things I did early on in my working life. Back then, I sold photographic equipment, and although I never enjoyed the stress which came from meeting sales targets, I always enjoyed meeting with photographers.

Finding new customers often meant cold-calling, knocking on a studio door, and frequently being rejected. I spent time talking about someone’s work before I mentioned the products my company paid me to sell. I was genuinely interested in photography, but also I’d learned that understanding someone’s problems was as crucial as explaining how my products could help solve them.

What I learned has served me immeasurably well since I stopped selling cameras and started selling my talent. It’s helped me deal with people, not least in presenting (read: selling) my ideas to clients.

It’s a fact of life that the best idea or the best execution doesn’t always win a pitch or presentation. Often, the winner is the idea being sold by the best salesperson.

Selling ideas should become one of your best skills, so learn to sell. Learn how to talk about your work so that the person you’re selling to understands your ideas and why they should buy them from you. Learn to inspire people with your words as well as your work. Make them feel like they’re so much a part of your ideas that they simply must buy from you.

As a Black American graphic designer who worked in advertising during the 1950s, ’60s, and ’70s, Emmett McBain not only had incredible talent, he also understood how to sell to other African Americans.

He knew that to sell his clients’ products, his designs needed to resonate with Black audiences, by showing images they related to and language which was familiar to them.

As a grey-bearded Englishman, it’s not easy for me to understand cultural perspectives which are different from mine. But, I’ve learned the value of making designs that speak to people whatever they look like and wherever they live. Not only to sell my clients’ products to them but so that everyone feels their needs are being listened to and their importance understood.

Born in Chicago in 1935, Emmett McBain was an African American graphic designer whose work had a remarkable impact on the representation of African Americans in advertising.

McBain studied at several art schools and graduated after studying commercial art at the American Academy of Art in Chicago.

Vince Cullers and Associates, the first African American-owned full-service advertising agency in the USA, was founded in 1958. Cullers believed that “selling Black” needed “thinking Black” if advertisers were to reach African American consumers. He not only sold to African Americans but helped to educate them in advertising and employ them at his agency. One of those employees was the newly graduated Emmett McBain.

From left: Shirley & Lee, Let The Good Times Roll, 1956. Basso-Valdambrini Quintet — Exciting 6, 1967. Davis, Miles — Blue Haze by Tom Hannan, 1956. Bird — Diz — Bud — Max by Burt Goldblatt, 1954. (Large preview)

With two years of commercial experience behind him, McBain left Vince Cullers and moved to Playboy Records as an assistant art editor. But, he didn’t stay in a junior role for long and quickly became Playboy’s promotion art director. McBain carved out a niche as a cover artist, and in 1958 his Playboy Jazz All-Stars album art was named Billboard Magazine’s Album Cover of the Week.

In 1959, McBain moved on from Playboy, but he didn’t leave behind his work on album covers. His newly-founded design studio McBain Associates regularly worked with Mercury Records, and he designed over 75 album covers by the time he was 24.

Several record cover designs by McBain Associates during the 1960s. (Large preview)

McBain returned to Vince Cullers Advertising as its creative director in 1968 and made some of his most important contributions to advertising for Black Americans.

Before the 1960s, Black consumers were largely ignored by brand-name manufacturers and the mainstream advertising industry which served them. Advertising to African Americans was limited mainly to newspapers specific to Black audiences.

White clients were reluctant to spend money selling to African Americans, as advertisers saw Black consumers as having little disposable income. In the politically charged atmosphere of the time, companies were also afraid to associate their brands with African Americans.

African Americans were underrepresented in the advertising industry too, and the number of Black people working in advertising was tiny. But, in the mid-1960s, advertising agencies began to recruit African Americans. These agencies hoped their experiences would make clients’ messages more relatable to African American audiences who, by then, spent almost $30 billion each year.

Left: Where the flavor is, advertisement for Philip Morris by Burrell-McBain Inc. Center: True Two, an advertisement for Lorillard Tobacco Company by Vince Cullers Advertising, Inc. in 1968. Right: Black is Beautiful, an advertisement for Vince Cullers Advertising, Inc., creative direction by Emmett McBain in 1968. (Large preview)

McBain’s work featured positive messages for African Americans and the Black community. He used images of everyday people in usual surroundings for clients who included Newport’s menthol cigarettes, Philip Morris’ Marlboro, and SkinFood Cosmetics’ beauty products specifically for Black skin. Like Vince Cullers, McBain knew that selling to Black consumers meant understanding their different needs. He understood that — as his future partner, copywriter Thomas Burrell said — “Black people are not dark-skinned white people.”

In 1971, Emmett McBain partnered with Burrell to form Burrell-McBain Inc., which they described in an advertisement as “An Advertising Agency for the Black Commercial Market.” Rather than exploit Black Americans, Burrell and McBain aimed to form authentic and respectful relationships with Black audiences.

Before Burrell and McBain, the iconic white cowboy was the face of Marlboro cigarettes. But, McBain’s Marlboro man was more relatable to African American smokers. Whereas Marlboro’s cowboy was shown in an idealized version of the American West, McBain’s Black characters were seen smoking in everyday surroundings.

Their Marlboro campaign was a huge success and Burrell and McBain went on to win Coca-Cola and McDonald’s as clients, helping them become the largest Black-owned advertising agency in America.

McBain left the agency he co-founded in 1974 and set out on a career as an artist. He opened his art gallery, The Black Eye, and formed a consultancy — also called The Black Eye — which helped agencies to better connect with the African American community.

Emmett McBain died of cancer in 2012 and since then has been recognized by AIGA, the Society of Typographic Arts, and the Art Directors Clubs of Chicago and Detroit.

Sadly, there hasn’t been a book published about Emmett McBain and his contribution to advertising and design. I haven’t heard his name mentioned at design conferences or seen him referenced in articles relating to modern-day design and particularly the web.

McBain’s later work had a profound impact on advertising from the 1960s onwards, but I’m especially fond of his record cover designs. They burst with energy, reflecting the jazz music McBain loved. His colors are exciting and vibrant. His choice of typefaces and the ways he deconstructed and rebuilt type are inspiring. There’s plenty to inspire us in the work of Emmett McBain.

Aligning Vertical Content

Whichever graphic style I choose, the HTML needed to implement this first McBain-inspired design is identical. I need three structural elements: a header which contains my SVG logo and headlines, a main element, and an aside which includes a table of Citroën DS production numbers:

<header>
  <svg>…</svg>
  <div>
    <svg>…</svg>
    <svg>…</svg>
  </div>
</header>
<main>
  <p>…</p>
</main>
<aside>
  <table>…</table>
</aside>

The vertical direction and circles in this first design were inspired by Emmett McBain’s Guitars Woodwind & Bongos record cover, 1960. (Large preview)

For scalability across screen sizes, I use SVGs for the two headlines in my header. Using SVG provides an extra degree of consistency for the second headline’s stroked text, but I mustn’t forget accessibility.

In issue 8, I explained how to help people who use assistive technology by adding ARIA to SVGs. I add an ARIA role attribute, plus a level attribute which replaces the missing semantics. Adding a title element also helps assistive technology understand the difference between several blocks of SVG, but browsers won’t display this title:

<svg role="heading" aria-level="1" aria-label="Citroën DS">
  <title>Citroën DS</title>
  <path>…</path>
</svg>

When someone reads numerical tabular data, their eyes scan down columns and then across rows. (Large preview)

To begin this design, I add basic foundation styles for every screen size starting with foreground and background colours:

body { background-color: #fff; color: #262626; }

I add pixel precise dimensions to the SVG elements in my header, then use auto horizontal margins to centre the Citroën logo:

header > svg { margin: 0 auto; width: 80px; }
header div svg { max-height: 80px; }

In his inspiring design, Emmett McBain included vertical black stripes to add structure to his layout. To achieve a similar effect without adding extra elements to my HTML, I add dark borders to both the left and right sides of my main paragraph:

main p { padding: .75rem 0; border-right: 5px solid #262626; border-left: 5px solid #262626; }

The same technique adds a similar effect to my table of Citroën DS production numbers. I add the two outer borders to my table:

aside table { width: 100%; border-right: 5px solid #262626; border-left: 5px solid #262626; border-collapse: collapse; }

Then, I add a third rule to the right of my table headers:

aside th { padding-right: .75rem; padding-left: .75rem; border-right: 5px solid #262626; }

By ensuring every cell fills half the width of my table, this vertical stripe runs down the centre, from top to bottom:

aside th, aside td { width: 50%; box-sizing: border-box; }

When someone reads numerical tabular data like these pairs of years and production numbers, their eyes scan down the year column. Then, they follow across to see how many cars Citroën manufactured during that year. People might also compare production numbers looking for either high or low numbers.

To make their comparisons easier, I align production numbers to the right:

aside td { text-align: right; }

Depending on the OpenType features available in the font you’ve chosen, you can also improve tabular data readability by specifying lining rather than old-style numerals. Some old-style numerals, including 3, 4, 7, and 9, have descenders which can drop below the baseline. These make longer strings of numbers more difficult to read. Lining numerals, on the other hand, include numbers that sit on the baseline.


OpenType features also control the width of numerals which makes comparing strings of numbers in a table easier. Whereas proportional numbers can be different sizes, tabular numerals are all the same width so tens, hundreds, and thousands will be more precisely aligned:

aside td { font-variant-numeric: lining-nums tabular-nums; }

Finally, I introduce the circle motif to the bottom of this small screen design. I don’t want to include these circular images in my HTML, so I use a CSS generated content data URI where the image file is encoded into a string:

aside:after { content: url("data:image/svg+xml…"); }

I’m frequently surprised at how few changes I need to make to develop designs for multiple screen sizes. Switching from small screens to medium-size designs often requires no more than minor changes to type sizes and introducing simple layout styles.

I start by horizontally aligning the Citroën logo and SVG headlines in my header. On medium and large screens, this logo comes first in my HTML, and the headlines come second. But visually the elements are reversed. Flexbox is the ideal tool for making this switch, simply by changing the default flex-direction value from row to flex-direction: row-reverse:

@media (min-width: 48em) {
  header {
    display: flex;
    flex-direction: row-reverse;
    align-items: flex-start;
  }
}

Earlier, I gave my logo a precise width. But, I want the headlines to fill all the remaining horizontal space, so I give their parent division a flex-grow value of 1. Then, I add a viewport-based margin to keep the headlines and logo apart:

header div { flex-grow: 1; margin-right: 2vw; }

For this medium-size design, I developed the layout using a symmetrical three-column grid, which I apply to both main and aside elements:

main, aside { display: grid; grid-template-columns: repeat(3, 1fr); gap: 1rem; }

Then, using the same technique I used for the aside element previously, I generate two images for the main element and place them into the first and third columns in my grid:

main:before { grid-column: 1; content: url("data:image/svg+xml…"); }
main:after { grid-column: 3; content: url("data:image/svg+xml…"); }

I repeat the process for the aside element, with this new :after content replacing the generated image I added for small screens:

aside:before { grid-column: 1; content: url("data:image/svg+xml…"); }
aside:after { grid-column: 3; content: url("data:image/svg+xml…"); }

The extra space available on medium-size screens allows me to introduce more of the vertical stripe motif, which is inspired by Emmett McBain’s original design. The vertical borders on the left and right of the main paragraph are already in place, so all that remains is for me to change its writing-mode to vertical-rl and rotate it by 180 degrees:

main p { justify-self: center; writing-mode: vertical-rl; transform: rotate(180deg); }

Some browsers respect grid properties and will stretch a table to the full height of the grid row without help. Others need a little help, so for them, I give my production numbers table an explicit height which adds an even amount of space between its rows:

aside table { height: 100%; }

The full effect of this McBain-inspired design comes when screens are wide enough to display main and aside elements side-by-side. I apply a simple two-column symmetrical grid:

@media (min-width: 64em) {
  body {
    display: grid;
    grid-template-columns: 1fr 1fr;
    gap: 1rem;
  }
}

Then, I place the main and aside elements using line numbers, with the header spanning the full-width of my layout:

header { grid-column: 1 / -1; }
main { grid-column: 1; }
aside { grid-column: 2; }

Left: Circular motifs in this version of my design. Right: Colourful portraits of the iconic Citroën DS replace the original circles. (Large preview)

Looking Unstructured

The bright colours and irregular shapes of blocks in this next design are as unexpected as the jazz which inspired Emmett McBain’s original. While this layout might look unstructured, the code I need to develop it certainly isn’t. In fact, there are just two structural elements, header and main:

<header>
  <svg id="logo">…</svg>
  <h1>…</h1>
  <p>…</p>
  <svg>…</svg>
</header>
<main>
  <small>…</small>
  <h2>…</h2>
  <p>…</p>
</main>

The bright colours and irregular shapes in this design were inspired by Emmett McBain’s The Legend of Bix record cover, 1959. (Large preview)

I start by applying background and foreground colours, plus a generous amount of padding to allow someone’s eyes to roam around and through spaces in the design:

body { padding: 2rem; background-color: #fff; color: #262626; }

Those brightly coloured blocks would dominate the limited space available on a small screen. Instead, I add the same bright colours to my header:

header { padding: 2rem 2rem 4rem; background-color: #262626; }
header h1 { color: #c2ce46; }
header p { color: #fc88dc; }

Irregular shapes are an aspect of this design which I want visible at every screen size, so I use a polygon path to clip the header. Only areas inside the clip area remain visible, everything else turns transparent:

header { -webkit-clip-path: polygon(…); clip-path: polygon(…); }

Attention to even the smallest details of typography lets people know that every aspect of a design has been carefully considered. A horizontal line in the small element at the start of my main content changes length alongside the text.

I don’t want to add a presentational horizontal rule to my HTML, and instead opt for a combination of Flexbox and pseudo-elements in my CSS. First, I style the small element’s text:

main small { font-size: .8em; letter-spacing: .15em; line-height: .8; text-transform: uppercase; }

Then, I add an :after pseudo-element with a thin bottom border which matches the colour of my text:

main small:after {
  content: "";
  display: block;
  margin-left: .75rem;
  border-bottom: 1px solid #262626;
}

Colourful content brings this small-screen design to life. (Large preview)

Adding flex properties aligns the text and my pseudo-element to the bottom of the small element. Giving the pseudo-element a flex-grow value of 1 allows it to change its width to complement longer and shorter strings of text:

main small { display: flex; align-items: flex-end; }
main small:after { flex-grow: 1; }

I enjoy surprises, and there’s more to my second-level “Champion de France” headline than meets the eye.

Almost ten years ago, Dave Rupert released Lettering.js, a jQuery plugin which uses JavaScript to wrap individual letters, lines, and words of text with span elements. Those separate elements can then be styled in any number of ways. With just one multi-coloured element in this design, I apply the same technique without serving a script:
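If writing the spans by hand feels tedious, a few lines of string manipulation can produce the same markup that Lettering.js generates at runtime. This hypothetical `wrapLetters` helper wraps each non-space character in a span:

```javascript
// Wrap every non-space character of a string in its own <span>,
// mimicking what Lettering.js does at runtime. Spaces are left
// unwrapped so word spacing is preserved.
function wrapLetters(text) {
  return text
    .split('')
    .map(ch => (ch === ' ' ? ch : `<span>${ch}</span>`))
    .join('');
}

wrapLetters('de France');
// → "<span>d</span><span>e</span> <span>F</span><span>r</span><span>a</span><span>n</span><span>c</span><span>e</span>"
```

Running this once at build time, or pasting its output into the HTML, keeps the page free of any runtime script while allowing each letter to be styled individually.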

<h2>Champion <span>d</span><span>e</span> <span>F</span><span>r</span><span>a</span><span>n</span><span>c</span><span>e</span></h2>

Then, I give each selected letter its own colour:

h2 span:nth-of-type(1) { color: #c43d56; }
h2 span:nth-of-type(2) { color: #905dd8; }
h2 span:nth-of-type(3) { color: #377dab; }

I’ve always viewed the challenge of responsive design as an opportunity to be creative and to make the most of every screen size. The extra space available on medium and large screens allows me to introduce the large, irregularly shaped blocks of color, which makes this design unexpected.

First, I apply grid properties and an eight-column symmetrical grid to the body element:

@media (min-width: 48em) {
  body {
    display: grid;
    grid-template-columns: repeat(8, 1fr);
  }
}

Then, I place my header into three of those columns. With the coloured blocks now visible, I change the header’s background colour to a dark grey:

header { grid-column: 4 / 7; background-color: #262626; }

Centring content both horizontally and vertically was a challenge before Flexbox, but now aligning and justifying my header content is simple:

header { display: flex; flex-direction: column; align-items: center; justify-content: center; }

I change the colour of my header’s text elements:

header h1 { color: #fed36e; }
header p { color: #fff; }

Then, I apply negative horizontal margins, so my header overlaps elements close to it:

header { margin-right: 1.5vw; margin-left: -1.5vw; }

My main element needs no extra styling, and I place it onto my grid using line numbers:

main { grid-column: 7 / -1; }

Elements needed to develop a design needn’t be in HTML. Pseudo-elements created in CSS can take their place, which keeps HTML free from any presentation. I use a :before pseudo-element applied to the body:

body:before { display: block; content: ""; }

Then, I add a data URI background image which will cover the entire pseudo-element regardless of its size:

body:before { background-image: url("data:image/svg+xml…"); background-position: 0 0; background-repeat: no-repeat; background-size: cover; }

CSS Grid treats pseudo-elements just like any other, allowing me to place those colourful blocks into my grid using line numbers:

body:before { grid-column: 1 / 4; }

Whereas developers mostly use media query breakpoints to introduce significant changes to a layout, sometimes, only minor changes are needed to tweak a design. Jeremy Keith calls these moments “tweakpoints.”

This medium-size McBain-inspired design works well at larger sizes, but I want to tweak its layout and add more detail to the very largest screens. I start by adding four extra columns to my grid:

@media (min-width: 82em) {
  body {
    grid-template-columns: repeat(12, 1fr);
  }
}

Then I reposition the generated colour blocks, header, and main elements using new line numbers:

body:before { grid-column: 1 / 8; }
header { grid-column: 7 / 10; }
main { grid-column: 9 / -1; }

These elements now overlap, so to prevent them from forming new rows in my grid, I give them all the same grid-row value:

body:before, header, main { grid-row: 1; }

This tweak to my design adds another block of colour between the header and main. To preserve the semantics of my HTML, I add a pseudo-element and a data URI image before my main content:

main:before {
  display: block;
  content: url("data:image/svg+xml…");
  float: left;
  margin-right: 2vw;
  width: 10vw;
}

The monochrome version (left) has an entirely different feeling from the brightly coloured blocks in my chosen design (right). (Large preview)

Deconstructing Type-images

Early in his career, Emmett McBain’s record cover designs showed he had a flair for typography. He was often playful with type, deconstructing, and rebuilding it to form unexpected shapes. This control over type has never been easy online, but SVG makes almost everything possible.

Deconstructing and rebuilding it to form unexpected shapes adds character to even the smallest screens. (Large preview)

This next McBain-inspired design relies on SVG and just two structural HTML elements: a header which contains the large type-based graphic, and a main element for my content:

<header>
  <h1>…</h1>
  <p>…</p>
  <svg>…</svg>
</header>
<main>
  <h2>…</h2>
  <div>…</div>
  <svg>…</svg>
</main>

I need very few foundation styles to start developing this design. First, I add background and foreground colours and padding inside my two elements:

body { background-color: #fff; color: #262626; }
header, main { padding: 2rem; }

Second, I define styles for my type which includes both headings and the paragraph of text which follows them:

h1, h2, h1 + p { letter-spacing: .05em; line-height: 1.4; text-transform: uppercase; }

I give my main content a rich purple background which matches the Citroën’s colour in the panel opposite:

main { background-color: #814672; color: #fff; }

This design is dominated by a large graphic that includes a profile of the Citroën DS and a stylized type-image of the words “Champion de France.” The arrangement of its letters would be tricky to accomplish using CSS positioning and transforms, making SVG the perfect choice.

This SVG contains three groups of paths. The first includes outlines of the words “Champion de:”

<svg>
  <g id="champion-de">
    <path>…</path>
  </g>
</svg>

The next group includes paths for the brightly coloured arrangement of letters. I give each letter a unique id attribute to make it possible to style them individually:

<g id="france">
  <path id="letter-f">…</path>
  <path id="letter-r">…</path>
  <path id="letter-a">…</path>
  <path id="letter-n">…</path>
  <path id="letter-c">…</path>
  <path id="letter-e">…</path>
</g>

Medium-size screens allow me to transform the type-image and introduce columns to my main content. (Large preview)

Then, I add class attributes to the group of paths which make up the Citroën DS profile. With these attributes in place, I can adjust the car’s colours to complement different colour themes and even change them across media query breakpoints:

<g id="citroen">
  <path class="car-paint">…</path>
  <path class="car-tyres">…</path>
  <path class="car-wheels">…</path>
  <path class="car-shadow">…</path>
  <path class="car-lights">…</path>
  <path class="car-stroke">…</path>
</g>

Medium-size screens allow me to tweak the positions of my Citroën DS profile and type-image:

@media (min-width: 48em) {
  header svg {
    margin-bottom: -6rem;
    transform: scale(.85) translateY(-4rem) rotate(-20deg);
  }
}

The order of these transforms is important, as various combinations of rotate, scale, and translate give subtly different results. Then, I add columns to my main content:

main div { column-width: 14em; column-gap: 2rem; }

Until now, this main content comes after my header in the document flow. For larger screens, I want those elements to sit side-by-side, so I apply grid properties and twelve columns to the body:

@media (min-width: 48em) {
  body {
    display: grid;
    grid-template-columns: repeat(12, 1fr);
  }
}

I place the header and main into my grid using line numbers. The header spans seven columns, while the main content spans only five, producing an asymmetrical layout from a symmetrical grid:

header { grid-column: 1 / 8; }
main { grid-column: 8 / -1; }

The type-image in this design was inspired by Emmett McBain’s Caravan record cover for Eddie Layton and his organ. (Large preview)

Scaling Graphical Text

The more I use SVG in my work, the more the distinction between SVG and HTML blurs. SVG is an XML-based format and is entirely at home when it’s incorporated into HTML. This final McBain-inspired design relies on SVG in HTML not just for its striking imagery, but also for text.

Most of my styling is visible to people who use even the smallest screens. (Large preview)

To develop this striking red and black design, I need four structural HTML elements. A header contains an image of the iconic Citroën DS. The banner division includes a large headline developed using SVG text. The main element includes my running text, and finally an aside for supplementary content:

<svg>…</svg>
<header>
  <svg>…</svg>
</header>
<div id="banner">
  <svg>…</svg>
</div>
<main>
  <div id="heading">
    <svg role="heading" aria-level="1">…</svg>
  </div>
  <div class="content">
    <p class="dropcap">…</p>
    <p>…</p>
  </div>
</main>
<aside>
  <small>…</small>
  <svg role="heading" aria-level="2">…</svg>
  <p>…</p>
  <figure>…</figure>
  <svg role="heading" aria-level="2">…</svg>
  <p>…</p>
</aside>

I used to think using SVG to render text was as inappropriate as setting text within images, but having used SVG more, I realize I was wrong.

In issue 8, I explained how, like HTML text, SVG text is accessible and selectable. It also has the advantage of being infinitely style-able using clipping paths, gradient fills, filters, masks, and strokes.

The banner division’s headline includes two text elements. The first contains the large word “Champion,” the second contains “de France.” Pairs of x and y coordinates on each tspan element place those words precisely where I want them to develop a solid slab of text:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 850 360">
  <title>Champion de France</title>
  <g fill="#fff" fill-rule="evenodd">
    <text>
      <tspan class="title__dropcap" x="0" y="240">C</tspan>
      <tspan class="title" x="180" y="160">hampion</tspan>
    </text>
    <text>
      <tspan class="title__small" x="600" y="260">de France</tspan>
    </text>
  </g>
</svg>

Whether I choose to incorporate this SVG into my HTML or link to it as an external image, I can use CSS to define its style. This headline is a linked image, so I add my styles to the SVG file:

<svg>
  <style type="text/css">
    <![CDATA[
      text { color: #fff; }

      .title {
        font-family: Clarendon URW;
        font-size: 150px;
      }

      .title__dropcap {
        font-family: Clarendon URW;
        font-size: 300px;
        text-transform: lowercase;
      }

      .title__small {
        font-family: Obviously;
        font-size: 85px;
        text-transform: uppercase;
      }
    ]]>
  </style>
</svg>

I start by adding foundation colour and typography styles. I’ve chosen to indent the start of each new paragraph, so I remove all bottom margins and add a 2ch wide indent to every subsequent paragraph:

body { background-color: #a73448; color: #fff; }
.content p { margin-bottom: 0; }
.content p + p { text-indent: 2ch; }

The dark grey background and red text of my aside element are opposite to those elsewhere in my design. Increasing lightness and saturation makes colours appear more vibrant against dark backgrounds:

aside { background-color: #262626; color: #d33c56; }

Medium-size screens allow me to tweak the design of my content to get the most from the extra space available. (Large preview)

Medium-size screens allow me to tweak the design of my content to get the most from the extra space available. I use two different multiple-column layout properties: first, I specify two columns of variable width for my content division; then, any number of columns which will each be 16em wide:

@media (min-width: 48em) {
  .content {
    column-count: 2;
    column-gap: 2rem;
  }

  aside {
    column-width: 16em;
    column-gap: 2rem;
  }
}

The typography in this design was inspired by Emmett McBain’s Bill Harris, Jazz Guitar record cover, 1960.

Most of my styling is visible to people who use even the smallest screens, so developing a large-screen layout involves applying grid properties and twelve columns to the body element:

@media (min-width: 64em) {
  body {
    display: grid;
    grid-template-columns: repeat(12, 1fr);
  }
}

I place the Citroën logo into the first column:

body > svg { grid-column: 1; }

Then, the header which contains an image of the iconic DS spans four columns:

header { grid-column: 3 / span 4; }

Both the banner division with its stylish SVG headline and my main content’s running text occupy eight columns:

#banner, main { grid-column: 1 / span 8; }

And finally, the reversed-theme aside element occupies three columns on the right of my design. To ensure this content spans every row from the top to bottom of my layout, I place it using row line numbers:

aside {
  grid-column: 10 / -1;
  grid-row: 1 / 6;
}

Even a limited colour palette like this one offers plenty of creative options.

NB: Smashing members have access to a beautifully designed PDF of Andy’s Inspired Design Decisions magazine and full code examples from this article. You can also buy the PDF and examples from this, and every issue from Andy’s website.

(ra, yk, il)

Smashing Podcast Episode 21 With Chris Ferdinandi: Are Modern Best Practices Bad For The Web?

Mon, 07/27/2020 - 22:00
Drew McLellan 2020-07-28T05:00:00+00:00 2020-07-28T07:33:45+00:00

Today, we’re asking if modern best practices are bad for the web? Are modern frameworks taking us down the wrong path? I speak to Lean Web expert Chris Ferdinandi to find out.

Transcript

Drew McLellan: He’s the author of the Vanilla JS Pocket Guide series, creator of the Vanilla JS Academy training program, and host of the Vanilla JS Podcast. His developer tips newsletter is read by nearly 10,000 developers each weekday. He’s taught developers at organizations like Chobani and The Boston Globe. And his JavaScript plugins have been used by organizations like Apple and Harvard Business School. Most of all, he loves to help people learn Vanilla JavaScript. So we know he’d rather pick Vanilla JavaScript over a framework, but did you know he was once picked out in a police lineup as being the person least likely to have committed the crime? My Smashing friends, please welcome Chris Ferdinandi. Hello, Chris. How are you?

Chris Ferdinandi: Oh, I’m smashing. Thanks for having me.

Drew: I wanted to talk to you today about this concept of a Lean Web, which is something of a passion for you, isn’t it?

Chris: Yes, very much so.

Drew: Why don’t you set the scene for us? When we talk about a Lean Web, what is the problem we are trying to solve?

Chris: Yeah, great question. Just as a caveat for all the listeners, this episode might get a little old man yells at cloud. I’m going to try to avoid that. When I look at the way we build for the web today, it feels a little bit like a bloated over-engineered mess. I’ve come to believe that a lot of what we think of as best practices today might actually be making the web worse.

Chris: The Lean Web is an approach to web development that is focused on simplicity, on performance, and the developer experience over… I’m sorry, not the developer experience. The user experience rather, over the developer experience, and the ease of building things from a team perspective, which is what I think where we put a lot of focus today and as we’ll probably get into in our conversation.

Chris: I’ve actually come to find that a lot of these things we think of as improving the developer experience do so for a subset of developers, but not necessarily everybody who’s working on the thing you’re building. So there’s a whole bunch of issues with that too, that we can dig into. But really, the Lean Web is about focusing on simplicity and performance for the user and really prioritizing or putting the focus on the people who use the things we make rather than us, the people who are making it.

Drew: As the web matures as a development platform, there seems to be this ever increasing drive towards specialization.

Chris: Yes.

Drew: People who used to cover Full Stack, and then we split into front-end and back-end. And then that front-end split into people who do CSS or JavaScript development. And then increasingly within JavaScript, it becomes more specialized. Somebody might consider themselves and market themselves as a React developer or an Angular developer, and their entire identity and outlook is based around a particular framework that they are highly skilled in. Is this dependency on frameworks, the core of our work on the web, a bad thing?

Chris: It’s nuanced. I used to be very strongly in the yes, always camp. I think broadly, I still feel like yes, our obsession as an industry with frameworks and tools in general really, is potentially a little bit to our detriment. I don’t think frameworks are inherently bad. I think they’re useful for a very narrow subset of use cases. And we end up using them for almost everything, including lots of situations where they’re really not necessarily the best choice for you or for the project.

Chris: When I think about a lot of the issues that we have on the web today, the core of those issues really starts with our over-reliance on frameworks. Everything else that comes after that is, in many ways, because we throw so much, not just frameworks but JavaScript in general, at the web. I say that as someone who professionally teaches people how to write and use JavaScript. That’s how I make my money. And I’m here saying that I think we use too much JavaScript, which is sometimes a little bit of an odd thing.

Drew: In the time before the big frameworks sprouted up, we used to build user interfaces and things with jQuery or whatever. And then frameworks came along and they gave us more of this concept of a state-based UI.

Chris: Yes.

Drew: Now, that’s a fairly complex bit of engineering that you’re required to get in place. Does working with less JavaScript exclude using something like that, or do you have to re-implement it yourself? Are you just creating a load of boilerplate?

Chris: A lot of it depends on what you’re doing. If you have a changing interface, you can build a state-based UI with… I don’t know, maybe a dozen or so lines of code. If you have a non-changing interface, I would honestly probably say state-based UI is not necessarily the right approach. There’s probably other things you can do instead: think of static site generators, some pre-rendered markup, even an old-school WordPress or PHP-driven site.

Chris: But where this starts to get interesting is when you get into more dynamic and interactive interfaces. Not just apps. I know people love to draw this line between websites and apps, and I think there’s this weird blend between the two of them and the line is not always as clear. When you start to get into more the user does stuff, something changes. State-based UI becomes a little bit more important. But you still don’t need tons of code to make that happen.
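The "dozen or so lines" of state-based UI Chris has in mind can be sketched roughly like this. All the names here (state, render, setState) are illustrative rather than from any library, and a browser version would assign the markup to an element's innerHTML instead of returning it:

```javascript
// Minimal state-based UI sketch: one state object, one render function
// that maps state to markup, and a setState helper that re-renders.
let state = { count: 0 };

function render(s) {
  // Map state to a markup string. In a browser you would write this
  // into the page, ideally with a DOM-diffing step.
  return `<button>Clicked ${s.count} time${s.count === 1 ? '' : 's'}</button>`;
}

function setState(updates) {
  // Merge updates into state, then re-render from the new state.
  state = Object.assign({}, state, updates);
  return render(state); // browser version: target.innerHTML = render(state);
}
```

Every UI change then goes through `setState`, so the markup is always a pure function of the current state.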

Chris: I look at something like React or Vue, which are absolutely amazing pieces of engineering. I don’t want to take away from the people who work on those. I ended up as a learning exercise, building my own mini-framework just to get a better sense for how these things work under the hood. It is really hard to account for all of the different moving pieces. So I have tremendous respect for the people who build and work on these tools. But React and Vue are both about 30 kilobytes after minifying and gzipping. So once you unpack them, they’re substantially bigger than that.

Chris: Not all of it, but a good chunk of that weight is devoted to this thing called the virtual DOM. There are alternatives that use similar APIs or approaches. For React, you have Preact, which is 2.5 kilobytes or about 3 kilobytes instead of 30. It’s a tenth of the size. For Vue, you have Alpine JS instead, which is about 7 kilobytes. Still, substantially smaller. The big difference between those and their big brother counterparts, is that they’ve shed the virtual DOM. They may drop a method or two. But generally, it’s the same approach and the same kind of API in the way of working with code, and they’re substantially smaller.

Chris: I think a lot of the tools we use are potentially overpowered for the things we’re trying to do. If you need a state-based UI and you need reactive data and these dynamic interfaces, that’s great. I think one of the big things I try and talk about today is not… never use tools. For me, Vanilla JS is not you’re handwriting every single line of code and every single project you do. That’s madness. I couldn’t do that, I don’t do that. But it’s being more selective about the tools we use. We always go for the multi-tool, the Swiss Army knife that has all these things in it. And sometimes, all you really need is a pair of scissors. You don’t need all the other stuff, but we have it anyways.

Chris: Which is a really long way to… I’m sorry, of answering your question. Which is sometimes it’s more code than you could or would want to write yourself, but it’s not nearly as much code as I think we think it requires. When I say you don’t need a framework, I get a lot of push-back around this idea that, “Well, if you don’t use a framework, you’re basically writing your own.” Then that comes with its own problems. I think there’s a place in between using React or Vue and writing your own framework, and it’s maybe picking something that’s a little bit smaller. There are sometimes reasons where writing your own framework might actually be the right call, depending on what you’re trying to do. It’s all very fluid and subjective.

Drew: It’s quite interesting that as a learning exercise, you implemented your own state-based framework. I remember in the past, I used to think that every time I reached for a library or something, I liked to think that I could implement it myself.

Chris: Sure, sure.

Drew: But reaching for the library saved me the hassle of doing that. I knew if I had to write this code myself, I knew what approach I’d take to do it. And that was true all the way up to using things like jQuery and things. These days, I think if I had to write my own version of Vue or React, I have almost no idea what’s happening now in that library, in all that code.

Drew: For those of us who are not familiar, when you say something like Preact drops the virtual DOM and makes everything a lot smaller, what’s that virtual DOM giving us?

Chris: To answer that question, I want to take just a slight step back. One of the nicest things that frameworks and other state-based libraries give you is DOM diffing. If you’re not really updating the UI that much, you could get by with saying, “Here’s a string of what the markup is supposed to look like. In HTML, I’ll just put all this markup in this element.” When you need to change something, you do it again. That is a little expensive for browsers, because it triggers a repaint.

Chris: But I think potentially more importantly than that, one of the issues with doing that is if you have any sort of interactive element in there, a form field, a link that someone has focused on, that focus is lost, because the element… even though you have a new thing that looks similar, it’s not the same literal element. And when focus is lost, it can create confusion for people using screen readers. There’s just a whole bunch of problems with that.

Chris: Any state-based UI thing worth its weight is going to implement some form of DOM diffing, where they say, “Here’s what the UI should look like. Here’s what it looks like right now in the DOM. What’s different?” And it’s going to go and change those things. It’s effectively doing the stuff you would do just manually updating the UI yourself, but it’s doing it for you under the hood. So you can just say, “Here’s what I want it to look like,” and then the library or framework figures it out.
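The diffing idea Chris describes can be illustrated with a toy version that compares plain objects instead of real DOM nodes. Real libraries walk nested trees and patch attributes and event listeners too, so treat this flat, string-only version purely as a sketch of the concept:

```javascript
// Toy diff over flat lists of "virtual nodes", each shaped { tag, text }.
// Instead of replacing the whole UI, it returns only the changes needed.
function diff(current, next) {
  const changes = [];
  next.forEach((node, i) => {
    const old = current[i];
    if (!old) {
      // Node exists in the desired UI but not the current one: add it.
      changes.push({ type: 'add', index: i, node });
    } else if (old.tag !== node.tag) {
      // Different element type: the node has to be replaced outright.
      changes.push({ type: 'replace', index: i, node });
    } else if (old.text !== node.text) {
      // Same element, different content: only the text needs updating.
      changes.push({ type: 'text', index: i, text: node.text });
    }
  });
  // Anything left over in the current UI gets removed.
  for (let i = next.length; i < current.length; i++) {
    changes.push({ type: 'remove', index: i });
  }
  return changes;
}
```

Applying only that change list is what preserves focus and screen-reader state on the untouched elements.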

Chris: The smaller things like Preact or Alpine, they’re actually doing that directly. They’re converting the string you provide them of what the UI should look like into HTML elements. And then they’re comparing each element to its corresponding piece in the literal DOM. As you end up with UIs that get bigger and bigger and bigger, that can have a performance implication because querying the DOM over and over again becomes expensive. If you want to get a sense for the type of interface where this becomes a problem, right-click and inspect element on the “Favorite” button on Twitter, or the “Like” button on Facebook. And take a look at the nesting of divs that gets you to that element. It’s very, very deep. It’s like a dozen or so divs, nested one inside the other just for that single tweet.

Chris: When you start going that many layers down, it starts to really impact performance. What the virtual DOM does is instead of checking the real DOM every time, it creates an object-based map of what the UI looks like in JavaScript. And then it does the same thing for the UI you want to replace the existing one with, and it compares those two. That’s a lot more performant, in theory, than doing that in the real DOM. Once it gets a list of the things it needs to change, it just runs off and makes those changes, but it only has to touch the DOM once. It’s not checking every single element, every single time. If you have interfaces like Twitter’s or Facebook’s or QuickBooks or something like that, a virtual DOM probably makes a lot of sense.

Chris: The challenge with it is… the difference between Preact and React is 27 kilobytes before you unpack the whole thing into its actual code weight. The raw download, unpacking, and compiling time on that alone can add quite a bit of latency to the initial load of a UI. That becomes even more pronounced if your users are not on the latest devices from Apple. Even an Android device from a couple of years ago or a feature phone, or if you have people in developing countries, it’s just really… the time to get going is slower. And then on top of that, the actual interactive time is slower because of the extra abstraction.

Chris: So it’s not just that you load it and they’re comparable in speed. Each micro-interaction that someone makes, and the changes that need to happen, can also be slightly slower just because of all that extra code in there. If you have a really, really complex UI with lots of nested elements and lots of data, then the virtual DOM’s performance gains outweigh that extra code weight. But for the typical UI of the typical apps I see developers using React or Vue for, the benefit you get from the virtual DOM just isn’t there, and they’d be better off without it. Even if you want to keep the same convenience of React, use Preact. It’s a fraction of the size, it’ll work exactly the same way, and it’ll be more performant. This is the kind of thing that I tend to argue for.

Chris: We need to be better about picking the right tool for the job. If you go with an approach like that, if you get to a point where a virtual DOM actually makes sense, it’s much easier to port Preact into React than if you rolled your own. That’s the situation. If you’re really worried about that, you get some future-proofing built in too.

Drew: Some might make the argument that these frameworks, things like Vue and React, are so highly optimized for performance that you get so much benefit there; surely just pairing that with a package manager and a bundler, to make sure you’re only sending down the code that you want, puts you way ahead already?

Chris: Yeah. I don’t agree. I don’t really have much more to elaborate on that other than… I guess maybe, but not really. Even with a bundler, you still need that React core. Even with the bundling, that’s still going to be bigger than using something like Preact.

Chris: Drew, I really appreciate you leading the question on this. Because one of the other things I do talk about in my book, The Lean Web, and my talk of the same name, is how these tools… You mentioned the bundling, for example. One of the things we do to get around the performance hit that we take from using all this JavaScript is we throw even more JavaScript at the front-end to account for it. One of the ways we do that is package managers and module bundlers.

Chris: As you alluded to… for those of you who don’t know, these are tools that will compile all of your little individual JavaScript bits into one big file. The newer ones and the more… I don’t want to call them thoughtful. But the newer ones will use a feature called tree shaking, where they get rid of any code that isn’t actually needed. If that code has some dependencies that aren’t used for the thing you’ve actually done, they’ll drop some of that stuff out to make your packages as small as possible. It’s actually not a terrible idea, but it results in this thing I typically call dependency hell, where you have this really delicate house of cards of dependencies on top of dependencies on top of dependencies.

Chris: Getting your process set up takes time. And anybody who has ever run an NPM install and then discovered that a bunch of dependencies were out of date and had to spend an hour trying to figure out which ones needed to be fixed and oh, it’s actually a dependency in a dependency and you don’t have the ability to go fix it yourself. It’s a whole thing. Maybe it works for you, but I’d rather spend my time not messing around trying to get my dependencies together.

Chris: I’ve started collecting tweets from people where they complain about time wasted on their workflow. One of my favorites, from Brad Frost a couple of years ago, was about the depressing realization that the thing you’ve been slogging through in modern JS could have taken you 10 minutes in jQuery. I’m not really a jQuery fan, but I feel that pain when it comes to working with frameworks.

Chris: The other issue with a lot of these tools is they start to become gatekeepers. I don’t know how much you really want to dive into this or not, Drew. But one of my big push-backs against JS, all the things in a workflow. Especially when you start to then say, “Well, if we’re already using JS for the HTML, why not use it for CSS too?” You start to exclude a lot of really talented people from the development process. It’s just a really big loss for the project for the community as a whole.

Drew: Well, I certainly am… I started picking up React at the beginning of 2020, adding that to my skillset. I’ve been doing it now for nearly seven months. I’ve got to say one part I’m least confident in is setting up the tooling around getting a project started.

Drew: It seems like there’s an awful lot of work to get something to Hello World, and there’s even more that you’ve got to know to get it to be production ready. That has to make development more difficult to get started with if this is being put forward as what you should be doing in 2020 to learn to be a web developer. Somebody coming in new to it is going to have a real problem. It’s going to be a real barrier to entry, isn’t it?

Chris: Absolutely. The other piece here is that JavaScript developers aren’t always the only people working on a code base or contributing in a meaningful way to that code base. The more we make knowing JavaScript a requirement for working with a code base, the less likely those people are to be able to actually participate in the project.

Chris: An example of that I like to talk about is WordPress, which has been recently… I shouldn’t say recently at this point. It’s been a couple of years now. But they’ve been shifting their back-end stack more and more to JavaScript, away from PHP, which is not inherently a bad thing. Their new editor, Gutenberg, is built on React.

Chris: In 2018, WordPress’s lead accessibility consultant, Rian Rietveld, whose name I almost certainly butchered… she very publicly resigned from her position and documented why in a really detailed article. The core of the problem was that she and her team were tasked with auditing this editor to make sure that it was going to be accessible. WordPress now comprises 30% of the web. Their goal is to democratize publishing, so accessibility is a really important part of that. It should be an important part of any web project. But for them in particular, it is acutely important. Her team’s whole job was to make sure that this editor would be accessible. But because neither she nor anyone on her team had React experience, and because they couldn’t find volunteers in the accessibility community with that experience, it made it really difficult for her and her team to do their work.

Chris: Historically, they could identify errors and then go in and fix them. But with the new React based workflow, they were reduced to identifying bugs and then filing tickets. They got added to a backlog along with all the other feature development requests that the JavaScript developers were working on. A lot of stuff that could have been easily fixed didn’t make it into the first release.

Chris: In May of 2019, about a year after Rian resigned, there was a detailed accessibility audit done on Gutenberg. The full report was 329 pages. The executive summary alone was 34 pages. It documented 91 accessibility-related bugs in quite a bit of detail. Many of these were really… I don’t want to call them simple low-hanging fruit, but a lot of them were basic things that Rian’s team could have fixed, and it would have freed up the developers to focus on feature development. That’s ultimately what it seems like they ended up doing: spending a lot of time on feature development and pushing this stuff off til later. But that team is super important to the project, and they suddenly got locked out of the process because of a technical choice.

Chris: Alex Russell is a developer on the Chrome team. He wrote an article a couple of years ago called The Developer Experience Bait-and-Switch, where he talked about the straw man argument of frameworks: this idea that these tools let you move faster, and because of that, you can iterate faster and deliver better experiences. It’s this idea that a better developer experience means a better user experience. I think this is a very clear example of how that argument doesn’t always play out the way people believe it does. It’s a better experience for maybe some people, not for everyone.

Chris: CSS developers, people working on design systems, their ability to create tools that others can use sometimes gets limited by these tool choices too. In my experience, I used to be better at CSS. It’s changed a lot in the last few years and I am nowhere near as good as I used to be. In my experience, the people who are really advanced CSS developers… and I do mean that in the truest sense. People who work on CSS are proper web developers working on a programming language. It’s a very special, or can be a very specialized thing. The people who are exceptionally good at it, in my experience, are not always also very good at JavaScript because you end up diving really deep into one thing and you slide a little bit on some other stuff. Their ability to work with these technologies gets hindered as well, which is too bad because it used to not be the case.

Chris: When the stack was simpler, it was easier for people from multiple disciplines to participate in the development process. I think that’s to the detriment of both the things we build and the community at large, when that’s no longer the case.

Drew: I found recently in a project, researching ways to deal with some of the CSS problems we were having, workflow problems: multiple people working on the project, the bundle size increasing, and old CSS never getting removed. It was a React project, so we were looking at some of these CSS-in-JavaScript approaches to see which would be best for us to use to solve the problems that we had. What became very quickly apparent is there’s not only one solution to do this. There are dozens of different ways you could do this.

Drew: CSS-in-JS is a general approach, but you might go from one project to the next and find it implemented in a completely different way. Even if you’re a CSS person and you learn a particular approach on a project, those skills may not be transferable anyway. It’s very difficult to see why somebody should invest too much time in learning that if they’re not particularly comfortable with JavaScript.

Chris: Yeah. The other interesting thing that I think you just got at a little bit: when I talk about this, one of the biggest pieces of push-back I get is that frameworks enforce conventions. If you’re going with Vanilla JavaScript, you’ve got this green-field, blue-sky, do-anything-you-want kind of project. With React, there’s a React way to do things.

Chris: But one of the things I have found is that there are Reacty approaches, but there’s not a strict one right way to do things with React. It’s one of the things people love about it. It’s a little bit more flexible, you can do things a couple of different ways if you want. Same with Vue. You can use that HTML based thing where you’ve got your properties right in the HTML and Vue replaces them, but you can also use a more JSX-like templating kind of syntax if you’d prefer.

Chris: I’ve heard multiple folks complain about when they’re learning React, one of the big problems is you Google how to do X in React and you get a dozen articles telling you a dozen different ways that you could do that thing. That’s not to say they don’t put some guardrails on a project. I don’t think it’s as clearcut as, “I’ve chosen a framework. Now this is the way I build with it.” To be honest, I don’t know that that’s necessarily something I’d want either. I don’t think you’d want those tightly confined… some people do, maybe. But if you’re touting that as a benefit, I don’t think it’s quite as pronounced as people sometimes make it out to be.

Chris: You got onto an interesting thing just there, though, where you were mentioning the CSS that is no longer needed. I think that is one of the legitimately interesting things that something like CSS-in-JS, or tying your CSS to JavaScript components in some way, can get you: if that component drops out, the CSS also, in theory, goes away with it. A lot of this to me feels like throwing engineering at people problems. Invariably, you’re still dependent on people somewhere along the line. That’s not to say never use those approaches, but they’re certainly not the only way to get at this problem.

Chris: There are tools you can use to audit your HTML and pull out the styles that aren’t being used, even without using CSS-in-JS. You can write CSS “the old-fashioned way” and still do the linting of unused styles. There are authoring approaches to CSS that give you a CSS-in-JS-like output and keep your style sheet small without spitting out these gibberish, human-unreadable class names that you get with CSS-in-JS sometimes. Like X2354ABC, or just these nonsense words that get spit out.
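The auditing idea Chris mentions can be shown with a deliberately crude sketch: collect the class names a page actually uses, then flag selectors that match none of them. Real tools parse the CSS and HTML properly; this string-based function and its name are purely illustrative:

```javascript
// Crude "unused styles" audit: scan an HTML string for class attributes,
// then report which of the given CSS class names never appear.
function findUnusedClasses(html, cssClassNames) {
  const used = new Set();
  const re = /class="([^"]*)"/g;
  let match;
  while ((match = re.exec(html)) !== null) {
    // A class attribute can hold several space-separated names.
    match[1].split(/\s+/).forEach((name) => used.add(name));
  }
  // Anything declared in the stylesheet but never used is a candidate
  // for removal.
  return cssClassNames.filter((name) => !used.has(name));
}
```

Run against your rendered markup as a build step, this keeps the audit outside your CSS authoring workflow entirely.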

Chris: This is where I start to get really into the old man yells at cloud kind of thing. I’m really showing my developer age here. But yeah, it’s not necessarily that these things are bad, and they’re built to solve legitimate problems. I sometimes feel like there’s a little bit of an “if it’s good enough for Facebook, it’s good enough for us” kind of thing that happens with these. Well, Facebook uses this tool. If we’re a real engineering program… team, department, organization, we should too. I don’t necessarily think that’s the right way to think about it, because Facebook deals with problems that you don’t, and vice-versa. The size and scale of what they work on is just different, and that’s okay.

Drew: Exactly. I see some of these things, like CSS-in-JS, as being almost like a polyfill. We’ve got legitimate problems; we need a way to solve them, and the technology isn’t providing us a way to solve them yet. Maybe whilst we wait for the web platform to evolve and get around to addressing that problem, this thing that we do right now with JavaScript will see us through and we’ll be okay. We know it’s not a great solution, but it helps us right now. And hopefully, after a while, we can pull it out and use the native solution.

Chris: It’s really funny you bring this up. Because literally last night, I was watching a talk from Jeremy Keith from last year about progressive web apps. He was talking about how a couple of decades ago, he was trying to convince people to use JavaScript. Which seems ridiculous now, but JavaScript was this new thing at the time. He showed people how you could do cool things like change the color of a link on hover with this new… It seems absurd that you would need JavaScript for that now, because that’s what CSS does for you. But things like the focus attribute or property just didn’t exist at the time.

Chris: One of the things he said in the talk that I think really resonated with me is that JavaScript in many ways, paves those cow paths. It’s this very flexible and open language that we can use to create or bolt in features that don’t exist yet. And then eventually, browsers catch up and implement these features in a more native way. But it takes time. I completely understand what you’re saying about this. It’s not the perfect solution, but it’s what we have right now.

Chris: I think for me, the biggest difference between polyfills and some of these solutions is polyfills are designed to be ripped out. If I have a feature I want to implement that the browser doesn’t support yet, but there’s some sort of specification for it and I write a polyfill… once browsers catch up, I can rip that polyfill out and I don’t have to change anything. But when you go down the path of using some of these tools, ripping them out means rewriting your whole code base. That’s really expensive and problematic. That’s not to say never use them, but I feel really strongly that we should be giving thought to picking tools that can easily be pulled out later. If we no longer need them or the platform catches up, it doesn’t require a huge rewrite to pull them out.
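The "designed to be ripped out" quality Chris describes comes from the standard polyfill pattern: feature-test first, so the code is a no-op once browsers support the feature natively, and the file can later be deleted without touching anything else. Array.prototype.flat here stands in for any missing feature:

```javascript
// Classic polyfill pattern: only define the feature if it's missing.
// Once every target browser ships Array.prototype.flat natively, this
// whole file can be deleted with no other code changes.
if (!Array.prototype.flat) {
  Array.prototype.flat = function (depth) {
    depth = depth === undefined ? 1 : depth; // spec default depth is 1
    return this.reduce(function (acc, item) {
      if (Array.isArray(item) && depth > 0) {
        // Recurse into nested arrays, reducing the remaining depth.
        return acc.concat(item.flat(depth - 1));
      }
      return acc.concat(item);
    }, []);
  };
}
```

Code elsewhere just calls `.flat()` and never knows or cares whether the native method or the polyfill answered.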

Chris: So getting to that “we have a whole bunch of styles we don’t use anymore” thing, that’s why I would personally favor a build tool technology that audits your CSS against the rendered markup and pulls out the things you don’t need. Because down the road, if the platform catches up, you can pull that piece of the build tool out without having to change everything. It’s just augmenting what you already have, whereas CSS-in-JS doesn’t give you that same kind of benefit. I’m just picking on that one, but I think about a lot of these technologies more broadly.

Chris: I do feel things like React or Vue are probably paving some cow paths that browsers will eventually catch up with and probably use similar approaches if not the same, so there may be less rewriting involved there. A lot of the ecosystem stuff that gets plugged in around that may be less so.

Drew: I think it’s right, isn’t it, that the web platform moves slowly and carefully. Think back five years ago: we were all putting JavaScript carousels on our pages. They were everywhere. Everyone was implementing JavaScript carousels. If the web platform had jumped in and implemented a carousel solution to satisfy that need, it would now be sat there with nobody using it, because people aren’t doing that with carousels anymore. Because it was just a fad, a design trend. To counteract that and stop the platform itself becoming bloated and becoming a problem that needs solving, it does have to move at a much steadier pace. The upshot of that is HTML that I wrote in 1999 still works today because of that slow process.

Drew: One of the other areas where things seem to be bloating out on the web is… I guess it’s related to the framework conversation. But it’s the concept of a single-page app. I feel like that a lot of promises are made around single-page apps as to their performance, like you’re getting all this benefit because you’re not reloading the entire page framework. I feel like they don’t always make good on those performance promises. Would you agree with that?

Chris: Yes. Although I will confess, despite having a very lengthy chapter in my book on this and talking about that a lot in talks and conversations with people, I don’t think single-page apps are always a terrible thing. But I do think this idea that you need one for performance is overstated. You can oftentimes get that same level of performance with different approaches.

Chris: I think one of the bigger challenges with single-page apps is… For anybody who’s unfamiliar with those: with a traditional site, you have separate HTML files, or if you’re using something like a database-driven site like WordPress, even though you don’t have actual physical HTML files for each page in your WordPress site, WordPress is creating HTML files on the fly and sending them back to browsers when URLs are requested. For purposes of this conversation, instead of having separate HTML files for every view in your app, a single-page app has a single HTML file. That’s what makes it a single-page app. JavaScript handles everything: rendering the content, routing to different URL paths, fetching new content if it needs to from an API or something like that.

Chris: One of the stated benefits of these is that only the content on the page changes. You don’t have to re-download all the JS and the CSS. Oh, and you can do those fancy page transitions that designers sometimes love. In theory, this is more performant than having to reload the whole page.

Chris: The problem with this approach from my perspective is that it also breaks a bunch of stuff that the browser just gives you for free out-of-the-box, and then you need to recreate it with more JS. You have an app that’s slow because it has a lot of JS. So you throw even more JavaScript at it to improve that performance and in doing so, you break a bunch of browser features and then have to re-implement those with even more JS too.

Chris: For example, here are some things that the browser will do for free with a traditional website that you need to recreate with JavaScript when you go the single-page-app route. You need to intercept clicks on links and suppress them from actually firing, with your JavaScript. You then need to figure out what you actually need to show based on that URL, which is normally something that would get handled on the server or based on the file that goes with that URL path. You need to actually update the URL in the address bar without triggering a page reload. You need to listen for forward and back clicks on the browser and update content again, just like you would with clicks on links. You need to update the document title.

Chris: You also need to shift focus in a way that announces the change in page to people who are using screen readers and other devices so that they’re not confused about where they are or what’s going on. Because they can’t see the change that’s happening, they’re hearing it announced. If you don’t actually shift focus anywhere, that announcement doesn’t happen. These are all things that the browser would do for you that get broken with single-page apps.
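As a rough sketch (not from the episode), the routing bookkeeping alone looks something like this in vanilla JavaScript; the route table and view functions here are made up for illustration:

```javascript
// A hypothetical route table mapping URL paths to view functions.
const routes = {
  "/": () => "<h1>Home</h1>",
  "/about": () => "<h1>About</h1>",
};

// Resolve a path to its rendered HTML, with a fallback 404 view.
function render(path) {
  const view = routes[path];
  return view ? view() : "<h1>Not Found</h1>";
}

// In the browser, a single-page app also has to wire all of this up itself:
// document.addEventListener("click", handler)  // intercept link clicks
// history.pushState({}, "", path)              // update the address bar
// window.addEventListener("popstate", handler) // handle back/forward
// document.title = newTitle                    // update the document title
// mainElement.focus()                          // shift focus for screen readers
```

Every commented line at the bottom replaces something a traditional multi-page site gets from the browser for free.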

Chris: On top of that, because you have all this extra JavaScript, this is complicated. So most people use frameworks and libraries to handle this sort of thing. Because of all this extra JavaScript to support this approach, you end up with potentially slower initial page load than you would have otherwise. Depending on the content you have, this approach I think sometimes can make sense. If you have an app that is driven by API data where you don’t necessarily know what those URL paths are going to look like ahead of time.

Chris: Just an example here. You have an animal rescue where you have some adoptable animals, and that data comes from Petfinder, the animal adoption website. You have a bunch of animals there. Petfinder manages that, but you want to display them on your site with the Petfinder API. When your website’s being built, it doesn’t always necessarily have visibility to what pets are available in this exact moment and what kind of URL paths you’re going to need. A single-page app can help you there because it can dynamically on the fly, create these nice URLs that map with each dog or cat.

Chris: Something like Instagram with lots of user created content, maybe that also makes sense. But for a lot of things, we do know where those URLs are going to be ahead of time. Creating an HTML file that has the content on it already is going to be just as fast as… sometimes even faster than the JavaScript based single-page app approach, especially if you use some other techniques to keep your overall CSS and JavaScript size down. I use this approach on a course portal that I have. The page loads feel instantaneous because HTML is so easy for browsers to render compared to other parts of the stack. It feels like a single-page app, but it’s not.

Drew: Especially when you consider hosting solutions like a Jamstack approach of putting HTML files out in a CDN so it’s being served somewhere physically close to the user.

Chris: Yep.

Drew: Loading those pages can just be so, so quick.

Chris: Yes. Absolutely. Absolutely. One of the other arguments I think people used to make in favor of single-page apps is offline access. If someone loads it and then their network goes down, the app is already up and all the routes are handled just with the file that’s already there. So there’s no reloading, they don’t lose any work. That was true for a long time. Now with service workers and progressive web apps, that is I think less of a compelling argument, especially since service workers can fetch full HTML files and cache them ahead of time if needed.

Chris: You can literally have your whole app available offline before someone has even visited those pages if you want. It just happens in the background without the user having to do anything. It’s again, one of those technologies that maybe made sense for certain use cases a few years ago a little less compelling now.

Drew: It reminds me slightly of when we used to build websites in Flash.

Chris: Yes.

Drew: And you’d have just a rectangle embedded in an HTML page which is your Flash Player, and you’d build your entire site in that. You had to reimplement absolutely everything. There was no back button. If you wanted a back button, you had to create a back button, and then you had to create what the concept of a page was. You were writing code upon code, upon code to reimplement, just as you are saying, things that the browser already does for you. Does all this JavaScript that we’re putting into our pages to create this functionality… is this going to cause fragility in our front-ends?

Chris: Yes. This is almost certainly, to my mind, one of the biggest issues with our over-reliance on JavaScript. JavaScript, just by its nature as a scripting language, is the most fragile part of the front-end stack.

Chris: For example, if I write an HTML element that doesn’t exist, I spell article like arcitle instead of article and the browser runs across that, it’s going to be like, “Oh, I don’t know what this is. Whatever, I’ll just treat it like a div.” And it keeps going. If I mistype a CSS property… Let’s say I forget the B in bold, so I write old instead: font-weight: old. The browser’s going to be, “I don’t know what this is. Whatever, we’ll just keep going.” Your thing won’t be bold, but it will still be there.

Chris: With JavaScript, if you mistype a variable name or you try to use a property, you try to call a variable that doesn’t exist or a myriad of other things happen… your minifier messes up and pulls one line of code up to the one before it without a semicolon where it needs one, the whole app crashes. Everything from that line on stops working. Sometimes even stuff that happens before that doesn’t complete, depending on how your app is set up. You can very quickly go from an app that, with a different approach (one where you rely a lot more on HTML and CSS), would still work… it might not look exactly right, but it would still work… to one that doesn’t work at all.
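To make that difference concrete, here is a tiny contrived illustration: one mistyped name throws, and without the try/catch, no line after it would ever run.

```javascript
// A mistyped property in CSS is silently ignored; in JavaScript it throws,
// and without the try/catch below, every later line would simply never run.
let caughtReferenceError = false;
try {
  colsole.log("hello"); // typo: "colsole" instead of "console"
} catch (err) {
  // ReferenceError: colsole is not defined
  caughtReferenceError = err instanceof ReferenceError;
}
```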

Chris: There’s an argument to be made that in 2020, JavaScript is an integral and important part of the web and most people don’t disable it and most people are using devices that can actually handle modern JavaScript. That’s true, but that’s not the only reason why JavaScript doesn’t work right, even if you have a linter there for example and you catch bugs ahead of time and things. There’s plenty of reasons why JavaScript can go horribly awry on you. CDNs fail.

Chris: Back in July of last year, a year ago this month… at least, when we’re recording this… a bad deploy took down Cloudflare. Interestingly, as we’re recording this, I think a week or two ago, Cloudflare had another massive outage that broke a whole bunch of things, which is not a knock on Cloudflare. They’re an incredibly important service that powers a ton of the web. But CDNs do sometimes go down. They are a provider used by 10% of Fortune 1,000 companies. If your JS is served by that CDN and it breaks, the JavaScript file never loads. And if your content is dependent on that JS, your users get nothing instead of getting something just not styled the way you’d like.

Chris: Firewalls and ad blockers get overly aggressive with what they block. I used to work at a company that had a JavaScript white list because they were extremely security conscious, they worked with some government contract stuff. They had a list of allowed JavaScript, and if your site or if your URL wasn’t part of that, no JavaScript. You have these sites. I remember going to a site where it had one of the hamburger kind of style menus on every view whether it was desktop or mobile, and I could not access any page other than the homepage because no JavaScript, no hamburger, that was it.

Chris: Sometimes connections just time out for reasons. Either the file takes a while or someone’s on a spotty or slow connection. Ian Feather, an engineer at BuzzFeed, shared that about 1% of requests for JavaScript on the site fail, which is 13 million requests a month. Or it was last year, it’s probably even more now. That’s a lot of failed JavaScript. People commuting go through tunnels and lose the internet. There’s just all sorts of reasons why JavaScript can fail and when it does, it’s so catastrophic.

Chris: And so we built this web that should be faster than ever. It’s 2020, 5G is starting to become a thing. I thought 4G was amazing. 4G is about as fast as my home wifi network. 5G is even faster, which is just bonkers. Yet somehow, we have websites that are slower and less performant than they were 5 or 10 years ago, and that makes no sense to me. It doesn’t have to be that way.

Drew: How do we get out of this mess, Chris?

Chris: Great question. I want to be really clear. I know I’ve hammered on this a couple times. I’m not saying all the new stuff is bad, never use it. But what I do want to encourage is a little bit more thoughtfulness about how we build for the web.

Chris: I think the overarching theme here is that old doesn’t mean obsolete. It doesn’t mean never embrace new stuff, but don’t be so quick to jump on all the shiny new stuff just because it’s there. I know it’s one of the things that keeps this industry really exciting and makes it fun to work in, there’s always something new to learn. But when you pick these new things, do it because it’s the right tool for the job and not just because it’s the shiny new thing.

Chris: One of the other things we didn’t get into as much as I would have liked, but I think is really important, is that the platform has caught up in a really big way in the last few years. Embracing that as much as possible is going to result in a web experience for people that is faster, that is less fragile, and that is easier for you to build and maintain because it requires fewer dependencies: you use what the browser gives you out-of-the-box. We used to need jQuery to select things like classes. Now browsers have native ways to do that. People like JSX because it allows you to write HTML in JavaScript in a more seamless way. But we also have template literals in vanilla JavaScript that give you that same level of ease without the additional dependency. HTML itself can now replace a lot of things that used to require JavaScript, which is absolutely amazing.

Chris: We talked a little bit about… this is a CSS thing, but hovers over links and how that used to require JavaScript. But using things like the details and summary elements, you can create disclosure, like expand and collapse or accordion elements natively with no scripting needed. You can do auto complete inputs using just a… I shouldn’t say just, I hate that word. But using a humble input element and then a data list element that gets associated with it, with some options. If you’re curious about how any of this stuff works over at, I have a bunch of JavaScript stuff that the platform gives you. But I also have some used to require JavaScript and now doesn’t kind of things that might be interesting too if you want some code samples to go along with this.
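As a sketch of the two patterns mentioned here (the element content is illustrative), both of the following work with no JavaScript at all:

```html
<!-- Disclosure (expand/collapse) with no scripting -->
<details>
  <summary>Shipping details</summary>
  <p>Orders ship within two business days.</p>
</details>

<!-- Autocomplete input backed by a datalist -->
<input list="browsers" name="browser" />
<datalist id="browsers">
  <option value="Firefox"></option>
  <option value="Chrome"></option>
  <option value="Safari"></option>
</datalist>
```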

Chris: On the CSS side of things, my most popular vanilla JS plugin ever is this library that lets you animate scrolling down to anchor links. It is very big. It’s the hardest piece of code I’ve ever had to write. And it has now been completely replaced with a single line of CSS: scroll-behavior: smooth. It’s more performant. It’s easier to write. It’s easier to modify its behavior. It’s just a better overall solution.
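The whole library boils down to this one declaration:

```css
/* Smooth-scrolls jumps to #anchor links, replacing the JS plugin entirely */
html {
  scroll-behavior: smooth;
}
```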

Chris: One of the other things that I wish we did more is leaning on multi-page apps. I feel a little bit vindicated here, because I recently saw an article from someone at Google that actually pushes for this approach now too. I thought that was pretty interesting, given the huge Angular and then framework-all-the-things boom that Google started a few years back. Kind of cool to see them come back around to this. Using things like static site generators and awesome services like Netlify and CDN caching, you can create incredibly fast web experiences for people using individual HTML files for all of your different views. So kind of leaning on some of this out-of-the-box stuff.

Chris: In situations where that’s not realistic for you, where you do need more JavaScript, you do need some sort of library, maybe taking a look at the smaller and more modular approaches first instead of just going for the behemoths of the industry. Instead of React, would Preact work? Instead of angular… I mean, instead of Vue rather, would Alpine JS work? There’s also this really interesting pre-compiler out there now called Svelte, that gives you a framework-like experience and then compiles all your code into vanilla JavaScript. So you get these really tiny bundles that have just what you need and nothing else. Instead of CSS and JavaScript, could you bolt in some third-party CSS linter that will compare your HTML to your CSS and pull out the stuff that got left in there by accident? Would a different way of authoring your CSS, like object-oriented CSS by Nicole Sullivan, work instead? We didn’t really get to talk about that, but it’s a really cool thing people should check out.

Chris: And then I think maybe the third and most important piece here, even though it’s less of a specific approach and more just a thing I wish more people kept in mind, is that the web is for everyone. A lot of the tools that we use today work for people who have good internet connections and powerful devices. But they don’t work for people who are on older devices, have spotty internet connections. This is not just people in developing areas. This is also people in the U.K., in certain parts of the U.S. where we have absolutely abysmal internet connections. The middle of our country has very slow internet. I know there are places in parts of London where they can’t wire new broadband in for historical reasons, so you’re left with these old internet connections that are really bad. There are places like that all over the world. Last time I was in Italy, same thing. The internet there was horrible. I don’t know if it’s changed since then.

Chris: The things we build today don’t always work for everyone, and that’s too bad. Because the vision of the web, the thing I love about it, is that it is a platform for absolutely everyone.

Drew: If listeners want to find out more about this approach, you’ve gone into loads of detail on it in your book, The Lean Web. And that’s available online. Is it a physical book or a digital book?

Chris: It’s a little bit of both. Well, no. It’s definitely not a physical book. You go to You can read the whole thing for free online. You can also if you want, there’s EPUB and PDF versions available for a really small amount of money, I forget how much now. I haven’t looked at it in a while. The whole thing is free online if you want it. You can also watch a talk on this topic where I go into more details if you want.

Chris: But I’ve also put together a special page just for listeners of Smashing Podcast at, because I’m very creative with naming things. That includes a bunch of resources in addition to the book, around things that we talked about today. It links to a lot of the different techniques that we covered, other articles I’ve written that go deeper into some of these topics and expand on my thinking a little bit. If folks want to learn more, that would probably be the best place to start.

Drew: That’s terrific. Thank you. I’ve been learning all about the Lean Web. What have you been learning about lately, Chris?

Chris: Yeah, a couple of things. I alluded to this a little bit earlier with watching Jeremy’s video on progressive web apps. I have been putting off learning how to actually write my own progressive web app for a couple of years because I didn’t have a specific need on anything I was working with. I recently learned from one of my students who is in South Africa, that they have been dealing with rolling blackouts because of some stuff they have going on down there. As a result, she is not able to work on some of the projects we’ve been doing together regularly, because the power goes out and she can’t access the learning portal and follow along.

Chris: For me, building an experience that works even if someone doesn’t have internet has become a higher priority than I realized it was before, so I just started digging into that and hope to get it put together in the next few weeks. We’ll see. Jeremy Keith’s resources on this have been an absolute lifesaver though. I’m glad they exist.

Chris: I know, Drew, you mentioned one of the reasons you like to ask this question is to show that people no matter how seasoned they are, are always learning. Just a little related anecdote. I have been a web developer for I think, about eight years now. I still have to Google the CSS property to use for making things italic, literally every single time I use it. For some reason, my brain defaults to text-decoration even though that’s not the right one. I’ll try a couple of combinations of different things, and I always have one word wrong every time. I also sometimes write italics instead of italic. Yeah. If anybody out there is ever feeling like, oh, I’m never going to learn this stuff… just know that no matter how seasoned you are, there’s always some really basic thing that you Google over and over again.

Drew: I’ve been a web developer for 22, 23 years, and I have to Google the different properties for Flexbox still, every time. Although I’ve been using that for 23 years. But yeah, some things just… there’s probably going to more of those as I get older.

Chris: Yeah. Honestly, I ended up building a whole website of stuff I Google over and over again, just to have an easier copy-paste reference because that was easier than Googling.

Drew: That’s not a bad idea.

Chris: That’s the kind of lazy I am. I’ll build a whole website to save myself like three seconds of Googling.

Drew: If you, the listener, would like to hear more from Chris, you can find his book on the web at, and his developer tips newsletter and more at Chris is on Twitter at Chris Ferdinandi. And you can check out his podcast at or wherever you usually get your podcasts. Thanks for joining us today, Chris. Do you have any parting words?

Chris: No. Thank you so much for having me, Drew. I had an absolutely smashing time. This was heaps of fun. I really appreciate the opportunity to come chat.

Categories: Design

Create A Responsive Dashboard With Angular Material And ng2-Charts

Mon, 07/27/2020 - 03:00
Zara Cooper 2020-07-27T10:00:00+00:00

Creating a dashboard from scratch is often pretty complicated. You have to create tools to collect data on items of interest. Once collected, this data has to be presented in an easy-to-understand and meaningful way to your users. It involves intricate planning of what data to include and how to display it effectively. Once you have a plan, implementing the design is a massive task, especially since it involves building multiple components.

With Angular Material and ng2-charts, you can take advantage of schematics to cut down the effort and time you may spend building a dashboard. Angular Material ships with a number of schematics that you could use to generate a dashboard. Similarly, ng2-charts provides schematics for generating multiple chart components. In this article, I’ll illustrate how to use both ng2-charts and Angular Material to set up a dashboard fairly quickly.

An Example

To illustrate how to build a dashboard, we’ll take the example of an online store selling leather goods like bags, wallets, key holders, and so on. The store owner would like to track information such as where visitors to their online store come from, how their products sell, how traffic sources relate to sales, among other things.

We’ll build a dashboard to display this information and help the store owner analyze it. The dashboard will contain four small summary cards, four different kinds of charts, and a table listing most recent orders made. The four summary cards will display information such as total revenue from sales, average order value, the total number of orders, and number of returning customers. The charts will display the number of units sold for each product, sales by traffic source, online store sessions over time, and sales for the week.


To follow along, you’ll need to have Angular CLI installed. If you do not have it installed, you can find out how to get it at If you’re not starting from a pre-existing Angular project, you need to generate one by running ng new <your project name>. For instance, to create an admin panel for the aforementioned store, we’ll run:

ng new store-admin-panel

Your project also needs to have routes configured for it. If you’re starting from a new app, select yes when prompted on whether to add an Angular Routing module during your project setup above.

Add Angular Material And Ng2-Charts To Your Project

Angular Material ships with various schematics for generating a variety of useful components like address books, trees, tables, navigation, and so on. To add Angular Material to your project, run:

ng add @angular/material

Pick a theme from the options provided in subsequent prompts. Next, you’ll be prompted to choose whether to add Angular Material typography styles and browser animations. You do not need these and could just respond no.

Next, you’ll need to install ng2-charts. ng2-charts requires Chart.js as a dependency. To install ng2-charts, run:

npm install ng2-charts --save

Then install Chart.js:

npm install chart.js --save

To access the charts, add the ChartsModule to the AppModule’s imports.

// app.module.ts
import { ChartsModule } from 'ng2-charts';

@NgModule({
  imports: [
    …
    ChartsModule,
    …
  ]
})

Lastly, install ng2-charts schematics as a dev dependency because they do not ship with ng2-charts by default.

npm install --save-dev ng2-charts-schematics

Generating A Navigation Component

First off, we’ll need to add a navigation component to help users maneuver through the app comfortably. The navigation should contain links to the dashboard and other pages that will be part of the admin panel. Angular Material provides a schematic that generates a navigation component. We’ll name this component nav. Adding a side nav to the application is accomplished by running:

ng generate @angular/material:navigation nav

To link other routes in the navigation, use the routerLink directive and change the page name in the toolbar depending on what route a user is on.

// nav.component.ts
...
menuItems = ['dashboard', 'sales', 'orders', 'customers', 'products'];

<!--nav.component.html-->
...
<mat-nav-list>
  <a *ngFor="let item of menuItems" mat-list-item [routerLink]="'/'+item">
    {{item | titlecase}}
  </a>
...

To see this component, add it to app.component.html.

<!--app.component.html-->
<app-nav></app-nav>

This is what the NavComponent looks like.

Navigation component (Large preview)

Since the nav will be displayed alongside other components, adding a router-outlet to it would help switch between the other different components. In the nav.component.html template, just after the closing </mat-toolbar>, replace the <!-- Add Content Here --> comment with <router-outlet></router-outlet>.

<!--nav.component.html-->
<mat-sidenav-container>
  ...
  <mat-sidenav-content>
    <mat-toolbar>
      ...
    </mat-toolbar>
    <router-outlet></router-outlet>
  </mat-sidenav-content>
</mat-sidenav-container>

In the screenshots that follow in this article, this nav component will be omitted to better highlight the dashboard we’ll be generating for the sake of the tutorial. If you’re following along while building this dashboard, the nav will still appear as pictured above in your browser with the dashboard within it.

Generate The Dashboard

The most important part of the dashboard is its layout. It needs to hold all the components mentioned earlier and be responsive when displayed on different devices. To generate the dashboard layout, you’ll need to run the @angular/material:dashboard schematic. It will generate a responsive dashboard component. Pass the preferred name for your dashboard to the schematic. In this instance, let’s name it dash.

ng generate @angular/material:dashboard dash

To view the newly generated dashboard within the nav component, add a route for it to the router.

// app-routing.module.ts
import { DashComponent } from './dash/dash.component';

const routes: Routes = [{ path: 'dashboard', component: DashComponent }];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})

Once done, to see the results, run npm start and head on over to localhost:4200/dashboard. You should see this:

Generated dashboard component (Large preview)

The schematic generates four cards in the template and displays them in a responsive grid. The Angular Material CDK uses the Layout package to style this responsive card grid. The BreakpointObserver utility of the Layout package assesses media queries and makes UI changes based on them. There are various breakpoints available, but within the generated component only two categories are catered for: Breakpoints.Handset, and any query that does not match it. The Layout package specifies 14 breakpoint states that you can use to customize the responsiveness of your dashboard.

// dash.component.ts
...
cards = this.breakpointObserver.observe(Breakpoints.Handset).pipe(
  map(({ matches }) => {
    if (matches) {
      ...
    }
    ...
  })
);

Going back to the dashboard, since four summary cards, four charts, and a table will be on the dashboard, we need nine cards in total. Breakpoints.Handset and Breakpoints.Tablet matches will display in a one-column grid where:

  • The four summary cards will span one row.
  • The charts will span two rows.
  • The table will span four rows.

Non-Breakpoints.Handset and non-Breakpoints.Tablet matches will display in four columns where:

  • The four summary cards will span one row and one column.
  • The charts will span two rows and two columns.
  • The table will span four rows and four columns.

It should look something like the screenshot below in non-Breakpoints.Handset and non-Breakpoints.Tablet matches. On Breakpoints.Handset and Breakpoints.Tablet matches, everything will just display in one column.

Dashboard component with additional cards (Large preview)

Create A Card Component

In the dashboard component, all the cards are generated through iteration. To prevent repetition, when adding all the new components, we’ll create a reusable card component. The card component will accept a title as input and use ng-content to dynamically add the rest of the content. To create the card component, run:

ng g c card -m app --style css

From the dashboard component template, we’ll just take the markup enclosed within the <mat-card> tag and place it in the card template:

<!--card.component.html-->
<mat-card class="dashboard-card">
  <mat-card-header>
    <mat-card-title>
      {{title}}
      <button mat-icon-button class="more-button" [matMenuTriggerFor]="menu" aria-label="Toggle menu">
        <mat-icon>more_vert</mat-icon>
      </button>
      <mat-menu #menu="matMenu" xPosition="before">
        <button mat-menu-item>Expand</button>
        <button mat-menu-item>Remove</button>
      </mat-menu>
    </mat-card-title>
  </mat-card-header>
  <mat-card-content class="dashboard-card-content">
    <ng-content></ng-content>
  </mat-card-content>
</mat-card>

To add the title as input to the card:

// card.component.ts
import { Component, Input } from '@angular/core';
...
export class CardComponent {
  @Input() title: string;
  ...
}

To style the card:

/*card.component.css*/
.more-button {
  position: absolute;
  top: 5px;
  right: 10px;
}

.dashboard-card {
  position: absolute;
  top: 15px;
  left: 15px;
  right: 15px;
  bottom: 15px;
}

.dashboard-card-content {
  text-align: center;
  flex-grow: 1;
  display: flex;
  flex-direction: column;
  max-height: 100%;
  justify-content: center;
  align-items: stretch;
}

mat-card {
  display: flex;
  flex-direction: column;
}

Adding Cards To The Dashboard

Since the dashboard elements will be added individually and not through iteration, the dashboard component needs to be modified to account for this. In dash.component.ts, remove the cards property and replace it with a cardLayout property instead. The cardLayout variable will define the number of columns for the material grid list and how many rows and columns each of the dashboard cards will span. Breakpoints.Handset and Breakpoints.Tablet query matches will display in 1 column and those that do not match will display in 4 columns.

// dash.component.ts
...
cardLayout = this.breakpointObserver.observe(Breakpoints.Handset).pipe(
  map(({ matches }) => {
    if (matches) {
      return {
        columns: 1,
        miniCard: { cols: 1, rows: 1 },
        chart: { cols: 1, rows: 2 },
        table: { cols: 1, rows: 4 },
      };
    }
    return {
      columns: 4,
      miniCard: { cols: 1, rows: 1 },
      chart: { cols: 2, rows: 2 },
      table: { cols: 4, rows: 4 },
    };
  })
);
...
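Because the layout object depends only on whether the handset breakpoint matches, the mapping can be pulled out into a pure function and unit-tested in isolation. Here is a sketch of that idea; the layoutFor name and the interfaces are illustrative, not part of the generated code:

```typescript
// Illustrative helper: derives the grid layout from a breakpoint match.
// The returned shape mirrors the object built inside map() above.
interface CardSpan { cols: number; rows: number; }
interface DashboardLayout {
  columns: number;
  miniCard: CardSpan;
  chart: CardSpan;
  table: CardSpan;
}

function layoutFor(isHandset: boolean): DashboardLayout {
  if (isHandset) {
    // Single-column layout: every card spans the full width.
    return {
      columns: 1,
      miniCard: { cols: 1, rows: 1 },
      chart: { cols: 1, rows: 2 },
      table: { cols: 1, rows: 4 },
    };
  }
  // Four-column desktop layout.
  return {
    columns: 4,
    miniCard: { cols: 1, rows: 1 },
    chart: { cols: 2, rows: 2 },
    table: { cols: 4, rows: 4 },
  };
}
```

With such a helper, the observable reduces to this.breakpointObserver.observe(Breakpoints.Handset).pipe(map(({ matches }) => layoutFor(matches))).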

In the dash.component.html template, replace the colspan and rowspan values of mat-grid-tile elements and the cols property of the mat-grid-list element.

<!--dash.component.html-->
<div class="grid-container">
  <h1 class="mat-h1">Dashboard</h1>
  <mat-grid-list cols="{{ ( cardLayout | async )?.columns }}" rowHeight="200px">
    <!--Mini Cards-->
    <mat-grid-tile *ngFor="let i of [1, 2, 3, 4]" [colspan]="( cardLayout | async )?.miniCard.cols"
      [rowspan]="( cardLayout | async )?.miniCard.rows">
      <app-card title="Card {{i}}"><div>Mini Card Content Here</div></app-card>
    </mat-grid-tile>
    <!--Charts-->
    <mat-grid-tile *ngFor="let i of [5, 6, 7, 8]" [colspan]="( cardLayout | async )?.chart.cols"
      [rowspan]="( cardLayout | async )?.chart.rows">
      <app-card title="Card {{i}}"><div>Chart Content Here</div></app-card>
    </mat-grid-tile>
    <!--Table-->
    <mat-grid-tile [colspan]="( cardLayout | async )?.table.cols" [rowspan]="( cardLayout | async )?.table.rows">
      <app-card title="Card 9"><div>Table Content Here</div></app-card>
    </mat-grid-tile>
  </mat-grid-list>
</div>

The dashboard will end up looking exactly like the most recent screenshot linked above.

Generating The Charts

The four charts that we need for the dashboard are:

  • A radar chart of products by unit sold.
  • A pie chart of sales by traffic source.
  • A bar chart of online store sessions.
  • A line chart of sales across the year.

Similar to creating the dashboard, generating chart components involves running a schematic. Using the ng2-charts schematics, generate the four different charts. We’ll place them in a folder called charts. Run ng generate ng2-charts-schematics:<chart type> <chart name>.

ng generate ng2-charts-schematics:radar charts/product-sales-chart
ng generate ng2-charts-schematics:pie charts/sales-traffic-chart
ng generate ng2-charts-schematics:line charts/annual-sales-chart
ng generate ng2-charts-schematics:bar charts/store-sessions-chart

After running these commands, all four chart components are generated and are populated with sample data ready for display. Depending on what data you’d like to show, pick charts that most suit your data visualization needs. For each of the charts generated above, add the chartContainer class to the divs that enclose the canvas element in the chart templates.

<div class="chartContainer">
  <canvas baseChart width="400" height="400">
...

Next, add this styling to styles.css so that it is accessible to all the chart components.

/*styles.css*/
...
.chartContainer canvas {
  max-height: 250px;
  width: auto;
}

.chartContainer {
  height: 100%;
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
}

Adding Data To The Charts

The generated chart components come with sample data already plugged in. If you have pre-existing services that provide your own data, you can add data from them to the chart components. The charts take labels for the x-axis, data or data sets, a chart type, colors, a legend, as well as other customization options. To provide the data and labels to the charts, create a service that will fetch data from a source of your choice and return it in a form that the charts accept. For instance, the AnnualSalesChartComponent receives its dataset and labels from the SalesService’s getSalesByMonth method, which returns an array of sales for each month of the current year. You can find this service here and the data it returns here. Inject the service as a private property in the AnnualSalesChartComponent constructor, then call the method that returns the required chart data from the service within the ngOnInit lifecycle hook.
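The work done with the service’s response, turning monthly sales items into the parallel data and label arrays the chart consumes, is a pure transformation. Here is a sketch of it with illustrative names (MonthlySales and toChartSeries are not part of the article’s code):

```typescript
// Illustrative shape for the items the sales service is assumed to return.
interface MonthlySales {
  month: string;
  revenue: number;
}

// Splits a list of monthly sales into the two parallel arrays a line chart
// expects: y-values (revenue) and x-axis labels (month names).
function toChartSeries(items: MonthlySales[]): { data: number[]; labels: string[] } {
  return {
    data: items.map(item => item.revenue),
    labels: items.map(item => item.month),
  };
}
```

In the component, the subscription would then push the data array into salesChartData[0].data and the labels into salesChartLabels.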

// annual-sales-chart.component.ts
import { SalesService } from 'src/app/sales/sales.service';
...
export class AnnualSalesChartComponent implements OnInit {
  public salesChartData: ChartDataSets[] = [
    { data: [], label: 'Total Sales' },
  ];
  public salesChartLabels: Label[] = [];
  ...
  constructor(private salesService: SalesService) { }

  ngOnInit() {
    this.salesService.getSalesByMonth().subscribe({
      next: salesItems => {
        salesItems.forEach(li => {
          this.salesChartData[0].data.push(li.revenue);
          this.salesChartLabels.push(li.month);
        });
      },
      ...
    });
  }
}

Adding Charts To The Dashboard

The next step involves adding the charts to the dashboard, in dash.component.html. Here’s what that looks like:

<!--dash.component.html-->
...
<!--Charts-->
<mat-grid-tile [colspan]="( cardLayout | async )?.chart.cols" [rowspan]="( cardLayout | async )?.chart.rows">
  <app-card title="Monthly Revenue">
    <app-annual-sales-chart></app-annual-sales-chart>
  </app-card>
</mat-grid-tile>
<mat-grid-tile [colspan]="( cardLayout | async )?.chart.cols" [rowspan]="( cardLayout | async )?.chart.rows">
  <app-card title="Product Sales">
    <app-product-sales-chart></app-product-sales-chart>
  </app-card>
</mat-grid-tile>
<mat-grid-tile [colspan]="( cardLayout | async )?.chart.cols" [rowspan]="( cardLayout | async )?.chart.rows">
  <app-card title="Sales by Traffic Source">
    <app-sales-traffic-chart></app-sales-traffic-chart>
  </app-card>
</mat-grid-tile>
<mat-grid-tile [colspan]="( cardLayout | async )?.chart.cols" [rowspan]="( cardLayout | async )?.chart.rows">
  <app-card title="Online Store Sessions by Traffic Source">
    <app-store-sessions-chart></app-store-sessions-chart>
  </app-card>
</mat-grid-tile>
...

This is what the resultant responsive dashboard looks like.

Dashboard with charts (Large preview)

Generating A Table

We’ll add an orders table to give the shop owner an overview of the most recent orders placed and their status. To generate the orders table component, run the schematic:

ng generate @angular/material:table orders-table

This will generate a table component that will look like this.

Table generated by Angular Material schematic (Large preview)

Tables with many columns may be difficult to make responsive for handset and tablet views. When adding the table to a card, make it horizontally scrollable so that all the data can be viewed properly and is not obstructed. You can do this by adding the styling below to your table component:

<!--table.component.html-->
<div class="mat-elevation-z8 small-table">
  <table mat-table class="full-width-table" matSort aria-label="Elements">
...

/*table.component.css*/
...
.small-table {
  overflow-x: scroll !important;
}

To add the table to the dash component:

<!--dash.component.html-->
...
<mat-grid-tile [colspan]="( cardLayout | async )?.table.cols" [rowspan]="( cardLayout | async )?.table.rows">
  <app-card title="Latest Orders">
    <app-orders-table></app-orders-table>
  </app-card>
</mat-grid-tile>
...

Adding Data To The Table

Like with charts, you can add data to the table in the ngOnInit method from a service. Additionally, you will need to modify your table’s generated data source to consume data from the service. To start off, inject the service in the table’s class constructor. Let’s take the example of a table listing the latest orders for this dashboard. To get data for the table, let’s inject the OrderService in the OrdersTableComponent constructor, change the MatTable type assertion of the table view child, and amend the list of displayed columns to reflect an order interface. If you’re interested in the data being added to the table, you can find it here. The last thing involves getting the total length of the data items available to be used to set the total in the table’s <mat-paginator>.

// orders-table.component.ts
import { OrderService } from '../orders.service';
import { Order } from '../order';
...
export class OrdersTableComponent implements AfterViewInit, OnInit {
  ...
  @ViewChild(MatTable) table: MatTable<Order>;
  dataLength: number;
  displayedColumns = [
    "id",
    "date",
    "name",
    "status",
    "orderTotal",
    "paymentMode",
  ];
  ...
  constructor(private orderService: OrderService) {}

  ngOnInit() {
    this.dataSource = new OrdersTableDataSource(this.orderService);
    this.orderService.getOrderCount().subscribe({
      next: orderCount => {
        this.dataLength = orderCount;
      },
      ...
    });
  }
  ...
}

Next, we’ll need to modify the OrdersTableDataSource class to accept the OrderService as a parameter in its constructor. We’ll have to modify its connect and disconnect methods as well. The connect method connects the data source to the table and updates the table when new data items are emitted from the stream it returns, in this case, an orders array observable. The dataMutations constant combines the first data load, pagination, and sorting events into one stream for the table to consume. Pagination and sorting are handled server-side by the OrderService, so we need to pass the offset and page size from the paginator, and the active sort field and sort direction of the sort property, to the getOrders method of the OrderService. The disconnect method should be used to close any connections made and release resources held in the connect method.

// orders-table.datasource.ts
...
export class OrdersTableDataSource extends DataSource<Order> {
  paginator: MatPaginator;
  sort: MatSort;

  constructor(private orderService: OrderService) {
    super();
  }

  connect(): Observable<Order[]> {
    const dataMutations = [
      of('Initial load'),
      this.paginator.page,
      this.sort.sortChange
    ];

    return merge(...dataMutations).pipe(mergeMap(() => {
      return this.orderService.getOrders(
        this.paginator.pageIndex * this.paginator.pageSize,
        this.paginator.pageSize,
        this.sort.active,
        this.sort.direction
      );
    }));
  }

  disconnect() {}
}
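The arguments passed to getOrders are derived entirely from the paginator and sort state, so that derivation can be isolated and tested on its own. A sketch, with illustrative names (buildQueryArgs is not part of the article’s code):

```typescript
// Illustrative helper: computes the query arguments for a server-side
// paginated, sorted fetch from the current paginator and sort state.
interface QueryArgs {
  offset: number;        // index of the first row on the requested page
  pageSize: number;      // number of rows per page
  sortField: string;     // active sort column
  sortDirection: string; // 'asc', 'desc', or ''
}

function buildQueryArgs(
  pageIndex: number,
  pageSize: number,
  sortField: string,
  sortDirection: string
): QueryArgs {
  return {
    offset: pageIndex * pageSize, // e.g. page 2 of size 5 starts at row 10
    pageSize,
    sortField,
    sortDirection,
  };
}
```

Inside connect, the call would then read this.orderService.getOrders with the fields of buildQueryArgs(this.paginator.pageIndex, this.paginator.pageSize, this.sort.active, this.sort.direction).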

In the orders table template, insert the new columns and bind the length property of <mat-paginator> to the dataLength property. For the status column, use a <mat-chip> element for better visualization of the order status. To have access to <mat-chip>, add the MatChipsModule as an import to AppModule.

<!-- orders-table.component.html -->
<div class="mat-elevation-z8">
  <table mat-table class="full-width-table" matSort aria-label="Elements">
    <!-- Id Column -->
    <ng-container matColumnDef="id">
      <th mat-header-cell *matHeaderCellDef mat-sort-header>Id</th>
      <td mat-cell *matCellDef="let row">{{row.id}}</td>
    </ng-container>
    <!-- Date Column -->
    <ng-container matColumnDef="date">
      <th mat-header-cell *matHeaderCellDef mat-sort-header>Date</th>
      <td mat-cell *matCellDef="let row">{{row.date | date}}</td>
    </ng-container>
    <!-- Name Column -->
    <ng-container matColumnDef="name">
      <th mat-header-cell *matHeaderCellDef mat-sort-header>Name</th>
      <td mat-cell *matCellDef="let row">{{row.name}}</td>
    </ng-container>
    <!-- Order Total Column -->
    <ng-container matColumnDef="orderTotal">
      <th mat-header-cell *matHeaderCellDef mat-sort-header>Order Total</th>
      <td mat-cell *matCellDef="let row">{{row.orderTotal | currency}}</td>
    </ng-container>
    <!-- Payment Mode Column -->
    <ng-container matColumnDef="paymentMode">
      <th mat-header-cell *matHeaderCellDef mat-sort-header>Payment Mode</th>
      <td mat-cell *matCellDef="let row">{{row.paymentMode}}</td>
    </ng-container>
    <!-- Status Column -->
    <ng-container matColumnDef="status">
      <th mat-header-cell *matHeaderCellDef mat-sort-header>Status</th>
      <td mat-cell *matCellDef="let row">
        <mat-chip-list>
          <mat-chip color="{{ row.status == 'delivered' ? 'primary' : ( row.status == 'shipped' ? 'accent' : 'warn' ) }}" selected>
            {{row.status}}
          </mat-chip>
        </mat-chip-list>
      </td>
    </ng-container>
    <tr mat-header-row *matHeaderRowDef="displayedColumns"></tr>
    <tr mat-row *matRowDef="let row; columns: displayedColumns;"></tr>
  </table>
  <mat-paginator #paginator [length]="dataLength" [pageIndex]="0" [pageSize]="5" [pageSizeOptions]="[5, 10, 15, 20]">
  </mat-paginator>
</div>
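The nested ternary that picks the chip color is a good candidate for a small component helper, which also makes the mapping testable. A sketch (chipColor is illustrative; the article keeps the expression inline in the template):

```typescript
// Maps an order status to a Material theme palette name.
// Anything that is neither delivered nor shipped is flagged with 'warn'.
function chipColor(status: string): 'primary' | 'accent' | 'warn' {
  if (status === 'delivered') {
    return 'primary';
  }
  return status === 'shipped' ? 'accent' : 'warn';
}
```

Defined as a method on the component, the template binding would shrink to color="{{ chipColor(row.status) }}".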

Once data has been added to the table, this is what the dashboard will look like:

Dashboard with charts and table (Large preview)

Creating A Mini Card Component

All that’s left to complete the dashboard is to populate the four small cards that sit at the top. Having smaller summary cards as part of the dashboard makes it easy to highlight brief pieces of information that do not need whole charts or tables. In this example, the four mini cards will display total sales, average order value, the total number of orders, and the number of returning customers that visited the store for the day. This is just an example; unlike the navigation, dashboard layout, charts, and table, the mini cards have no schematics, so they cannot be generated. Below, we’ll briefly go through how to create them. Although we’re going to add data specific to the example, you can add whatever you want to them or do away with them altogether. To start off, generate the mini-card component by running:

ng g c mini-card -m app --style css

You can find the template for the component linked here and its styling here. This component has eight input properties; you can find out how to add them here. To get data to the mini card components, inject the service that provides their data in the DashComponent constructor. Assign the data received from the service to a property of the DashComponent. In this instance, we’ll get data from the StoreSummaryService and assign it to the miniCardData property. Here’s how:

// dash.component.ts
export class DashComponent implements OnInit {
  ...
  miniCardData: StoreSummary[];

  constructor(private breakpointObserver: BreakpointObserver, private summaryService: StoreSummaryService) {}

  ngOnInit() {
    this.summaryService.getStoreSummary().subscribe({
      next: summaryData => {
        this.miniCardData = summaryData;
      }
    });
  }
}

To add the mini-cards to the dash component and have them populated with data from the service:

<!--dash.component.html-->
...
<!--Mini Cards-->
<mat-grid-tile *ngFor="let mc of miniCardData" [colspan]="( cardLayout | async )?.miniCard.cols"
  [rowspan]="( cardLayout | async )?.miniCard.rows">
  <app-mini-card [title]="mc.title" [textValue]="mc.textValue" [value]="mc.value" [color]="mc.color"
    [percentValue]="mc.percentValue"></app-mini-card>
</mat-grid-tile>
...

The screenshot below is what the dashboard will look like with the mini cards populated.

Dashboard with charts, tables, and mini-cards. (Large preview)

Putting All Together

In the end, the dashboard component template should contain:

<!--dash.component.html-->
<div class="grid-container">
  <h1 class="mat-h1">Dashboard</h1>
  <mat-grid-list cols="{{ ( cardLayout | async )?.columns }}" rowHeight="200px">
    <!--Mini Cards-->
    <mat-grid-tile *ngFor="let mc of miniCardData" [colspan]="( cardLayout | async )?.miniCard.cols"
      [rowspan]="( cardLayout | async )?.miniCard.rows">
      <app-mini-card [icon]="mc.icon" [title]="mc.title" [value]="mc.value" [color]="mc.color"
        [isIncrease]="mc.isIncrease" duration="since last month" [percentValue]="mc.percentValue"
        [isCurrency]="mc.isCurrency"></app-mini-card>
    </mat-grid-tile>
    <!--Charts-->
    <mat-grid-tile [colspan]="( cardLayout | async )?.chart.cols" [rowspan]="( cardLayout | async )?.chart.rows">
      <app-card title="Monthly Revenue">
        <app-annual-sales-chart></app-annual-sales-chart>
      </app-card>
    </mat-grid-tile>
    <mat-grid-tile [colspan]="( cardLayout | async )?.chart.cols" [rowspan]="( cardLayout | async )?.chart.rows">
      <app-card title="Product Sales">
        <app-product-sales-chart></app-product-sales-chart>
      </app-card>
    </mat-grid-tile>
    <mat-grid-tile [colspan]="( cardLayout | async )?.chart.cols" [rowspan]="( cardLayout | async )?.chart.rows">
      <app-card title="Sales by Traffic Source">
        <app-sales-traffic-chart></app-sales-traffic-chart>
      </app-card>
    </mat-grid-tile>
    <mat-grid-tile [colspan]="( cardLayout | async )?.chart.cols" [rowspan]="( cardLayout | async )?.chart.rows">
      <app-card title="Online Store Sessions by Traffic Source">
        <app-store-sessions-chart></app-store-sessions-chart>
      </app-card>
    </mat-grid-tile>
    <!--Table-->
    <mat-grid-tile [colspan]="( cardLayout | async )?.table.cols" [rowspan]="( cardLayout | async )?.table.rows">
      <app-card title="Latest Orders">
        <app-orders-table></app-orders-table>
      </app-card>
    </mat-grid-tile>
  </mat-grid-list>
</div>

Here’s what the resultant dashboard contains.

Completed dashboard (Large preview)

Conclusion

Creating dashboards involves a fair amount of work and planning. One way to build them faster is to use the various schematics provided by Angular Material and ng2-charts. With these schematics, running a single command generates a complete component, which can get a dashboard up and running fairly quickly. This leaves you a lot more time to focus on creating data services and adding them to your dashboard components.

If you want to learn more about the schematics provided by Angular Material, visit the Angular Material documentation; for those provided by ng2-charts, visit their site linked here.

(ra, yk, il)
Categories: Design

How To Create A Porsche 911 With Sketch (Part 1)

Fri, 07/24/2020 - 02:00
How To Create A Porsche 911 With Sketch (Part 1) Nikola Lazarević 2020-07-24T09:00:00+00:00 2020-07-25T22:04:08+00:00

If you’re both a petrolhead (a.k.a. a big car enthusiast) with a special place in your heart for the legendary Porsche 911, and also a fan of the powerful Sketch app, then this tutorial is for you. Today, we’ll be pushing Sketch to its limits — step by step. You will learn how to create a very realistic and detailed vector illustration of a vintage Porsche 911 using basic shapes, layer styles and Sketch features (such as “Rotate Copies” and “Symbols”). You’ll learn how to master the Vector tool, apply multiple shadow effects and use gradients. I’ll also explain how you can rotate and duplicate objects with just a few special clicks. No bitmap images will be used, which means the final illustration could be scaled up to any size with no loss of detail.

This tutorial is geared more towards experienced illustrators but if you’re new to Sketch you should be able to profit from it too as all of the steps are explained in great detail.

Note: This is the first part of this tutorial in which we’ll focus on laying out the main “groundwork”, i.e. we’ll create and tweak the body of the car. In addition, we’ll also make the front signal lights and the tail lights.

The Porsche 911

But first, a bit of background about the car that we’ll be making.

The model 911 is a two-door sports car produced by Porsche from 1963 through 1989, when it was succeeded by a new model with the same name. The original 911 series is often cited as the most successful competition car ever, especially its variations optimized for racing. In September 1999, the original Porsche 911 took fifth place in the prestigious “Car of the Century” award.

The first 911 also had an almost unique concept for its time — rear-engine, rear-wheel-drive. (At a much later time, another company created a car with the same concept. It’s quite likely that you may have heard of this other car, too — this was the famous DeLorean DMC-12! The DeLorean became very popular in 1985 when Back to the Future was released in cinemas.)

Now buckle up and let’s go — as we have a long, narrow, windy (but fun) road ahead of us. Start the engine (Sketch app), shift into first gear (create a new file), and release the clutch (start drawing on the blank canvas)!

Note: I’ve written on the topic of using Sketch for vector illustration before. If you’re curious, do check my previous tutorial which is about designing a chronograph with Sketch: “Designing A Realistic Chronograph Watch In Sketch.”

Let’s Draw A Car!

To be able to better follow the steps in this tutorial, I will provide you with the original Sketch source file. This file will help you follow the process more easily but I encourage you to replicate the steps in a new file, starting with a blank canvas.

The final illustration of the Porsche 911 that we’ll be creating in this tutorial. (Large preview)

1. Artboard Settings

The first step is to create a new Sketch document. Name the document “Porsche911” and set up a new artboard with the same name, sized 1920px wide and 1080px high.

2. Tracing The Car With The Vector Tool

For this step, we need an image of a Porsche 911 that will serve as a reference to outline the car in Sketch.

Our reference image of a Porsche 911. (Large preview)

Download, copy and paste the image into the artboard. Right-click on the reference image in the list of layers in the Layers panel and choose Lock Layer to lock the layer with the reference image so that it doesn’t move accidentally.

Tip: The other way to lock a layer in Sketch is to hover the layer name while pressing Alt and clicking on the lock icon.

Lock the reference image layer. (Large preview)

We will use the Vector tool to outline the car body. The result of this operation will be a vector shape. Each shape is made up of points and Bézier handles. Bézier handles are used to add curvature to a shape.

Vector points and Bézier handles. (Large preview)

Know Your Points And Bézier Handles

For every point you add with the Vector tool, there are four point types to choose from: straight, mirrored, disconnected, and asymmetric. The point type describes how the Bézier handles should behave. You can cycle through these types by selecting a point and hitting 1, 2, 3, or 4 on your keyboard. You can find the point type for the selected point in the Inspector panel.

Point Types

1. Straight

Straight point type. (Large preview)

The “straight” option will give you a straight corner. This type also allows you to add a corner Radius via the Inspector panel on the right.

2. Mirrored

Mirrored point type. (Large preview)

“Mirrored” will add two Bézier handles that mirror each other so they are always parallel and the same length on both sides.

3. Disconnected

Disconnected point type. (Large preview)

This option will give you two Bézier handles that you can change individually. Perfect for sharp corners!

4. Asymmetric

Asymmetric point type. (Large preview)

“Asymmetric” is almost the same as “mirrored”, but it only keeps the Bézier handles parallel. You can change the length of the handles individually.

Now that you know more about points and handles, let’s continue.

Note: To learn more about the Bézier Curves in Sketch app, check the following tutorial by Peter Nowell: “Mastering the Bézier Curve in Sketch”.

Select the Vector tool by pressing V on the keyboard, zoom in (press Z and click to zoom in) and start outlining the car body. Click once to create a point, move some distance away, click to add a second point and, without releasing the mouse button, drag that point to create a curve, using the Bézier handles to control the curve.

Tip: I’d suggest you temporarily give the border a bright color and a thicker width (use the Inspector panel to change these) so you can trace the car shape more comfortably.

Start tracing the outline of the car. (Large preview)

Carry on doing this around the main shape of the car, but exclude the front and back windshields. Practice makes perfect, and with time you’ll get better at using the Vector tool. Once you’re done tracing the body of the car, the result should look like the screenshot below.

The tracing results should look somewhat like this. (Large preview)

Next we need to “cut out” the side windows from the car main body. Use the Vector tool to create two shapes over the side windows. Name the shapes side window 1 and side window 2.

Create two shapes over the side windows. (Large preview)

Duplicate these two shapes (Cmd + D) and hide the copies for now. (We will use the copies later for the side windows.) Select the car body shape and the two visible side windows shapes, and apply a Subtract operation from the top Sketch toolbar. Name the resulting shape car body.

The completed ‘car body’ shape. (Large preview)

Next, create the bottom part of the car with the Vector tool. You can trace it, but it doesn’t need to be 100% accurate; the parts of the shape that won’t be visible don’t particularly matter. Name this shape floor, then move it in the Layers panel below the car body.

The ‘floor’ vector shape. (Large preview)

There is only one more thing to do before we complete this step — to draw a wheel. Pick the Oval tool by pressing O on the keyboard and create a circle the same size as the wheel in the reference image.

Hold Shift and Alt as you drag to make a perfect circle from the center out. Give this shape the name of wheel base and make sure that its position is above the floor and below the car body layers in the Layers panel list.

Create the basic wheel shape. (Large preview)

3. Add Color, Shadows, And Reflections To The Car Body

Next, we will focus on the car body, so for now we can hide the reference image, as well as the wheel base and floor layers. What I usually like to do at this point is to unlock the reference image, move it outside the artboard and place it above it (or whatever other place works for you — below or on the left or right side of the artboard), so I can still use it for reference.

Or, alternatively, you can unlock the reference image, make a copy (Cmd + D), move the copy outside the artboard so it could be used for reference, and hide the original reference image inside the artboard.

Tip: Click on the eye icon next to the layer’s name in the Layers panel list to hide it. To unlock the layer, click with the right mouse button in the Layers panel and choose Unlock Layer, or just click on the lock icon next to the layer’s name.

First let’s set the basic color for our car. Select car body, uncheck Borders and for the Fills Color use #E9E9E7.

Tip: Use F on the keyboard to quickly toggle Fills on and off, and use B to quickly toggle Borders on or off.

Set the basic color for our car. (Large preview)

Note: New to Sketch? Check first this very detailed Sketch help page about working with Fills: “Styling — Fills”.

Next we will continue with the shadows (the darker parts of the car body). Use the Vector tool to draw a shape like on the image below.

Draw a ‘shadow’ shape. (Large preview)

As you can see, the shape is longer than the car body, so we will fix that right now. Select both shapes (car body and the shape we’ve just created) and perform a Mask operation from the top toolbar. Sketch will place the result automatically into a group. Give this resulting group the name bodywork.

Fit the ‘shadow’ shape inside the car body. (Large preview)

Now select again the shape that we’ve created, turn off Borders, set the Fills Color to #E1E1E1 and apply a Gaussian Blur with an Amount of 4.

The ‘shadow’ shape when it’s ready. (Large preview)

Draw another shape with the Vector tool. Use the image below as a reference.

Draw another shape. (Large preview)

Use the Layers panel to move this shape into the group bodywork. Turn off Borders, and apply a Linear Gradient with the following parameters:

  1. #E4E4E4
  2. #C5C5C5
Apply a gradient. (Large preview)

Apply a Gaussian Blur with an Amount of 6 to soften its edges a bit, and add a Shadow:

  • Color: #FFFFFF
  • Alpha: 90%
  • X: 0; Y: -8; Blur: 10
The second ‘shadow’ shape is now finished. (Large preview)

Next, to add a shadow at the bottom of the carrosserie, draw a shape using the Vector tool, set Fills to #4E4E4E, place it inside the bodywork group and apply a Gaussian Blur with an Amount of 12. Use the image below as a reference.

The ‘shadow’ at the bottom of the carrosserie. (Large preview)

To finish with the shadows, draw a small shape using the Vector tool, like on the image below, fill it with #D8D8D8 and give it a Gaussian Blur effect with an Amount of 5. Don’t forget to place it inside the bodywork group.

Draw the last ‘shadow’ shape. (Large preview)

To add light reflections we will create three shapes using the Vector tool and fill them with the following colors:

  1. #F9F9F9
  2. #F1F1F1
  3. #F1F1F1
Draw the light reflections. (Large preview)

Move those layers inside the bodywork group, turn off Borders and apply a Gaussian Blur with an Amount of 6.

The light reflections completed. (Large preview)

Finish this step by drawing two shapes using the Vector tool. Name these shapes front fender and rear fender. Set the color to #393939, remove the Borders, again move these inside the group and give them a Gaussian Blur effect with Amount of 2, and set Opacity to 50%. Use the image below as a reference.

The front and rear fenders. (Large preview)

Note: From now on, everything we create needs to be placed inside the bodywork group.

4. Creating The Door (And All Sorts Of Lids)

This step is pretty straightforward and will take only a couple of minutes to complete. We will add a bunch of lids and a door in this step.

Select the Vector tool (V) and start drawing the lids. You don’t have to close the shapes, just leave them open, because we don’t actually need closed shapes — just the lines. To do that, press Esc key when you are satisfied with each line. Set the border Color to black (#000000) and Width to 1px. Use the image below as a reference.

Create the lids: fuel tank, front trunk, engine, and fog light. (Large preview)

Select the Fuel Tank, Front Trunk and Rear Engine lids layers and add Shadows effects to them with the following parameters:

  • Color: #FFFFFF
  • Alpha: 90%
  • X: 0; Y: 2; Blur: 2; Spread: 0;

Next, select the Fog Light Place Lid layer and apply slightly different Shadows:

  • Color: #FFFFFF
  • Alpha: 20%
  • X: 2; Y: 0; Blur: 2; Spread: 2;

Pick up the Oval tool (O) and create a small circle that will represent the Jack Point cover. Turn off Fills and add an Outside border, with a Width of 1px and the Color set to #000000. Apply Shadows, with the Color set to #FFFFFF at 30% alpha and the Blur and Spread set to 2.

Create the ‘Jack Point Cover’ element. (Large preview)

Next, we will draw a door with the Vector tool (V), the same way as we drew all the lids.

Make the reference image in the background visible, set the bodywork layer to 50% Opacity and trace the door lines from the photo.

Trace the door lines. (Large preview)

When you are done, hide the reference image again, set bodywork layer Opacity to 100% and style the door shape.

Set the door’s shape border Color to black (#000000), Width to 2px and apply Shadows:

  • Color: #FFFFFF
  • Alpha: 40%
  • X: 2; Y: 2; Blur: 2; Spread: 2;
Style the door shape. (Large preview)

Tip: Don’t forget to give appropriate names to the shapes/layers. Proper naming of each shape/layer may help you later on, as your Sketch file becomes more and more complex!

Draw two tiny rectangles using the Vector tool (V). Press and hold Shift while drawing to make the lines straight. It’s important to align the bottom of the rectangles like on the image below. Fill both rectangles with black color and turn off Borders.

Create two rectangles. (Large preview)

Tip: Alternatively, you can draw these two tiny rectangles using the Rectangle tool (R), enter Vector Editing mode by pressing Enter on the keyboard, select the bottom two points of each rectangle and align them properly.

Finally, draw a new shape using the Vector tool again. Set Fills to black, turn off Borders and apply Shadows with the Color set to #FFFFFF at 60% alpha and the Y and Blur set to 2. Give this shape a name of engine lid. Use the image below for reference.

The engine lid shape. (Large preview)

5. Front Signal Lights And Horn

To start with the making of the signal lights, switch to the Rectangle tool (R) and draw a rectangle. Fill it with black Color, turn off Borders and apply Shadows:

  • Color: #FFFFFF
  • Alpha: 30%
  • X: 2; Y: -3; Blur: 2; Spread: 2;
Create a black rectangle. (Large preview)

Enter Vector Editing mode by double-clicking on the rectangle shape (or by pressing Enter), select the top right point, move it to the left 15px using the ← arrow on the keyboard and set the Radius to 9px. Press Enter again to exit Vector Editing mode.

Modify the rectangle. (Large preview)

Duplicate (Cmd + D) this shape, turn off Shadows, and add a Linear Gradient fill; use #ECECEC for the first color stop and #7F7F7F for the last color stop.

Duplicate the shape and apply a Linear Gradient. (Large preview)

Move this shape 2px to the left using the left arrow key on the keyboard, then enter Vector Editing mode (double-click on the shape), select the top two points and push them down by 2px.

Modify and move the shape. (Large preview)

Duplicate this shape (Cmd + D), change Color from Linear Gradient to Solid Color and pick any color you want. I will use yellow, but this is just temporary. Next, double-click on the shape to enter Vector Editing mode, select the top two points and move them down 3px, select the bottom two points and move them up 3px, select the right two points and move them to the left 3px, and finally select the bottom right point and move it to the left 3px so the right edge becomes parallel with the right edge of the shape below.

Duplicate and modify. (Large preview)

We need to split this shape into two parts. One shape will be used for the space for the horn and the other for the turn signal light. Let’s make it simple, without some fancy Boolean operations: duplicate the shape, name the original horn space and the copy turn-signal, and then hide the turn-signal shape because we will use it later.

First we need to modify the horn space shape. Select the shape, enter Vector Editing mode, select the top right point, set Radius back to 0 (using the Inspector panel on the right), move this point to the right until it’s aligned with the bottom right point (a vertical red line will appear), and then select both points on the right and move them to the left to create a small shape that we will use for the horn. Use the image below as a reference.

Tip: Hold Shift while dragging the points to maintain a straight path.

Create the ‘horn space’ shape. (Large preview)

Next, un-hide turn-signal, double-click it to enter Vector Editing mode, select the two points on the left and drag them to the right until there’s a small gap between shapes.

Create the ‘turn signal’ shape. (Large preview)

Back to the horn space shape. Double-click to enter Vector Editing mode, hold Shift and click on the right segment to add a point in the exact middle. Now, double-click on that newly added point to turn it into a Mirrored point type, and using the ← arrow on the keyboard move it 4px to the left. Then, select the bottom right point and move it 2px to the left.

Modify the ‘horn space’ shape. (Large preview)

We will modify the turn-signal in a similar fashion. Select the turn-signal shape, press Enter to access Vector Editing mode, add a point in the exact middle of the left segment, turn it into Mirrored type using the Inspector panel, and push it 3px to the left using the left arrow key on the keyboard.

Modify the ‘turn-signal’ shape. (Large preview)

Horn

Let’s complete the horn first. Select the horn space shape and apply a Linear Gradient — use #1D1D1D for the top color stop and #D0D0D0 for the bottom color stop, then drag the top stop to the right and the bottom stop to the left to adjust the gradient angle.

Add a Linear Gradient. (Large preview)

Now, duplicate this shape (Cmd + D), switch Color to Solid Color and set to #131313, switch to Vector Editing mode, select the two left points and drag them a bit to the right.

Duplicate and modify. (Large preview)

Select the top left point, push it a bit to the right, add a point in the middle of the left segment, turn it into a Mirrored point, and move it 2px to the left.

Continue tweaking the shape. (Large preview)

Let’s add a grille over the horn space.

Pick up the Rectangle tool (R) and create a tiny rectangle shape over the horn space, with a height of 2px, with the Fills set to #9A9A9A and the Borders turned off. Duplicate it, change the height to 1px, change the color to #000000, move it down so it’s below the grey rectangle, switch to Vector Editing mode, select the bottom left point and move it 2px to the right. Select both shapes and place them inside a group (Cmd + G). We will use this element to build the grille. Give it a name of grille element.

Create the basic grille element. (Large preview)

Duplicate this group and move it 7px up and 2px right, then duplicate it again and push it 7px up and 3px right.

Build the horn grille. (Large preview)

Our grille now extends past the horn space, so we need to fix it. Select all the elements that are part of the horn and perform a Mask operation so that none of the created elements go outside of the horn space.

Sketch will place the result automatically into a group. Give this resulting group the name of horn.

The horn completed. (Large preview)

Turn Signal Light

Select the turn-signal shape and add a Linear Gradient fill. Set the gradient to a horizontal position with the right-pointing arrow in the color dialog and use the following colors:

  1. #FFA137
  2. #B23821
  3. #B23821
Add a horizontal Linear Gradient. (Large preview)

Add an Inner Shadows effect with the following properties:

  • Color: #000000
  • Alpha: 40%
  • X: 0; Y: 0; Blur: 5; Spread: 0

And apply a Shadows effect:

  • Color: #FFFFFF
  • Alpha: 50%
  • X: 0; Y: 0; Blur: 2; Spread: 0

It’s time to add the light bulbs. First, use the Oval tool (O) to draw a circle like on the image below. Turn off Borders, set Fills Opacity to 0% and apply Inner Shadows:

  • Color: #000000
  • Alpha: 12%
  • X: -9; Y: 0; Blur: 9; Spread: 0
Create the first light bulb shape. (Large preview)

Then, draw a small rectangle with the Rectangle tool (R) and use Radius (Round Corners) in the Inspector panel to create a rounded rectangle that will serve as a light bulb in our car illustration. Turn off Borders, and set Fills to Linear Gradient:

  1. #C06D25
  2. #DE8D55
  3. #BC4E08
  4. #A64A15
Continue tweaking the light bulb. (Large preview)

Finally select both — the circle and the rounded rectangle — and perform a Mask operation to place the rectangle inside the circle. Name the resulting group light1.

Tip: Sketch may turn off Inner Shadows on the masking shape (in this case, light1) while performing a Mask operation, so select the masking shape and check. If Inner Shadows are turned off, turn them back on using the Inspector panel. It’s a good idea to check for this every time when performing a Mask operation.

The ‘light1’ group completed. (Large preview)

We will add a second light bulb in a similar way. Draw a circle, turn off Borders, set Fills Opacity to 0% and add Inner Shadows:

  • Color: #000000
  • Alpha: 18%
  • X: 0; Y: 12; Blur: 5; Spread: 0
Create the second light bulb shape. (Large preview)

Duplicate this circle and scale it down. Modify the existing Inner Shadow:

  • Color: #000000
  • Alpha: 28%
  • X: 0; Y: -5; Blur: 5; Spread: 0

And add another one on top of it:

  • Color: #000000
  • Alpha: 50%
  • X: 0; Y: 0; Blur: 2; Spread: 0

Then select both and group them into a light2 group.

The ‘light2’ group completed. (Large preview)

In the Layers panel list select turn-signal, light1 and light2 and apply a Mask operation. This way light1 and light2 will be inside turn-signal. Name the resulting group turn signal light.

The ‘turn signal light’ when finished. (Large preview)

To complete the turn signal light, we need to add a tiny screw on the right side of it. We will construct our screw using a circle, so grab the Oval tool (O), and draw a small circle on the right, close to the edge of the signal light. Set the Fill Opacity to 0%, set Borders Width to 1px, position Inside, and color to #B3B3B3 with alpha 30%; and add an Inner Shadows effect:

  • Color: #000000
  • Alpha: 50%
  • X: 0; Y: 2; Blur: 2; Spread: 0
Start designing the little screw. (Large preview)

Duplicate this circle, scale it down, turn off Borders, set Fills to #B2CBDF with Opacity back to 100% and add the following Shadows and Inner Shadows.

First Inner Shadow:

  • Color: #FFFFFF
  • Alpha: 80%
  • X: 0; Y: 0; Blur: 1; Spread: 0

Second Inner Shadow:

  • Color: #000000
  • Alpha: 50%
  • X: 0; Y: 0; Blur: 1; Spread: 0

And at the end, a Shadows effect:

  • Color: #000000
  • Alpha: 100%
  • X: 0; Y: 0; Blur: 2; Spread: 0
Duplicate, scale down, and apply the styles. (Large preview)

We need one more circle for the screw, so again, duplicate the previous circle, scale it down, set Fills to #303030, and turn off Shadows and Inner Shadows.

Duplicate, scale down, and apply the styles. Rinse and repeat! (Large preview)

Tip: At this point, you may end up with a 1px circle which still looks a little bigger than what you can see in the screenshot above, and you may also have some trouble aligning it properly. If this happens, check whether Pixel Fitting is checked in Sketch Preferences, and if it is, it might be a good idea (at least temporarily) to disable it: go to Preferences → Layers → un-check the Pixel Fitting checkbox.

Sketch Preferences → Layers → Pixel Fitting. (Large preview)

Select all circles that we used to create the screw and group them into a screw group, then move this resulting group inside the turn signal light group on top.

Now it’s time to use the Create Symbol feature in Sketch and create a new Symbol out of the screw group. Later, we could use this symbol in our illustration as many times as we need it.

Tip: Symbols are created for those elements that you expect to reuse. When you use them right, Symbols can become a very powerful feature; they can speed up your workflow by giving you a way to save and reuse common elements across your illustrations and designs. When you make changes to a Symbol, those changes will be automatically applied to all the instances of this Symbol in your designs.

To create a Symbol, select the screw group in the Layers panel list, right-click on it, and choose Create Symbol from the menu. The dialog box Create New Symbol will appear; give a name to the symbol (screw in this case) and click OK.

Create a Symbol out of the ‘screw’ group. (Large preview)

There is one more small detail to add. Zoom in close enough (i.e., 3200%) and draw a tiny rectangle. Turn off Borders and set Fills to #131313.

Create one more detail. (Large preview)

6. Tail Light

We are going to build the tail lights the same way as we did in the previous step. Let’s quickly go through this step.

Draw the rectangle. Fill it with black color, turn off Borders and apply Shadows:

  • Color: #FFFFFF
  • Alpha: 30%
  • X: -2; Y: -3; Blur: 2; Spread: 2;

Enter Vector Editing mode, move the top left corner 15px to the right and set Radius to 9px.

Draw the tail light rectangle at the rear end of the car body. (Large preview)

Duplicate the rectangle, turn off Shadows and add a Linear Gradient fill; use #ECECEC for the first color stop and #7F7F7F for the last color stop. Then, move it 2px to the right, enter Vector Editing mode, select the top two points and push them down 2px.

Duplicate, apply the styles, and modify. (Large preview)

Duplicate this shape (Cmd + D), change Color from Linear Gradient to Solid Color and pick any color you want. Next, switch to Vector Editing mode, select the top two points and move them down 3px, select the bottom two points and move them up 3px, select the left two points and move them to the right 3px, and finally select the bottom left point and move it to the right 3px so the left edge becomes parallel with the left edge of the shape below.

Duplicate again, apply the styles, and move the points. (Large preview)

Now, change Fills to Linear Gradient. Set the gradient to a horizontal position with the right-pointing arrow in the color dialog and use the following colors:

  1. #5D1720
  2. #621822
  3. #662423
  4. #B04643
  5. #C25F56
Apply a horizontal Linear Gradient. (Large preview)

Add an Inner Shadows effect with the following properties:

  • Color: #000000
  • Alpha: 50%
  • X: 0; Y: 0; Blur: 5; Spread: 0

And apply a Shadows effect:

  • Color: #FFFFFF
  • Alpha: 50%
  • X: 0; Y: 0; Blur: 2; Spread: 0
Add the effects. (Large preview)

Let’s now move to the design of the tail light’s light bulbs.

Use the Rectangle tool (R) to draw a rectangle like on the image below. Turn off Borders, set Fills Opacity to 0% and apply Inner Shadows:

  • Color: #000000
  • Alpha: 40%
  • X: -2; Y: 0; Blur: 5; Spread: 0
Draw a rectangle and apply the layer styles. (Large preview)

Then, draw a small rectangle with the Rectangle tool (R) and use Radius (Round Corners) in the Inspector panel to create a rounded rectangle that will serve the purpose of a light bulb. Turn off Borders, and set Fills to Linear Gradient:

  1. #B75D61
  2. #6B2224
Create the first light bulb. (Large preview)

Finally, select both rectangles and perform a Mask operation to place the rounded rectangle inside the other rectangle. Name the resulting group tail-light1.

Tip: Again, remember that Sketch may turn off Inner Shadows on the masking shape while performing a Mask operation, so select the masking shape and check. If Inner Shadows are turned off, turn them back on using the Inspector panel.

The ‘tail-light1’ is ready. (Large preview)

Draw a rectangle, turn off Borders, set Fills Opacity to 0% and add Shadows:

  • Color: #000000
  • Alpha: 30%
  • X: -2; Y: 0; Blur: 2; Spread: 0
Create another rectangle. (Large preview)

Draw a small circle, turn off Borders, set Fills Opacity to 0% and apply the following Inner Shadows.

First Inner Shadow:

  • Color: #000000
  • Alpha: 40%
  • X: 0; Y: -2; Blur: 5; Spread: 0

Second Inner Shadow:

  • Color: #000000
  • Alpha: 30%
  • X: 0; Y: 0; Blur: 5; Spread: 0
Create the other light bulb for the tail light. (Large preview)

Select the rectangle and circle that we’ve just created and place them inside the group (Cmd + G) tail-light2.

Finish this step by adding the screw symbol instance. Go to Insert → Document, choose screw, click over the tail-light2 to insert the symbol and then position it to the correct spot. Use the image below as a reference.

It’s time to save some time: insert the ‘screw’ symbol which we created earlier. (Large preview)

Let’s take a look at the bigger picture and check what we did so far!

Final image 1/3: The Porsche 911 car should look very similar to this now. (Large preview)

Conclusion

Good job! The main body of the car is now ready; we have the door shape, the lids, the front turn signal light and the tail lights.

In the next part of the tutorial, we’ll continue with the windows, bumpers, headlights, the interior, and a few other elements of the car. Stay tuned!

(mb, ra, yk, il)
Categories: Design

How To Use Styled-Components In React

Thu, 07/23/2020 - 03:30
How To Use Styled-Components In React Adebiyi Adedotun 2020-07-23T10:30:00+00:00 2020-07-25T22:04:08+00:00

Styled components are a CSS-in-JS tool that bridges the gap between components and styling, offering numerous features to get you up and running in styling components in a functional and reusable way. In this article, you’ll learn the basics of styled components and how to properly apply them to your React applications. You should have worked with React before going through this tutorial. If you’re looking for various options in styling React components, you can check out our previous post on the subject.

At the core of CSS is the capability to target any HTML element — globally — no matter its position in the DOM tree. This can be a hindrance when working with components, because components demand, to a reasonable extent, colocation: keeping assets such as state and styling close to where they’re used.

In the words of the styled-components documentation, styled components are “visual primitives for components”, and their goal is to give us a flexible way to style components. The result is a tight coupling between components and their styles.

Note: Styled components are available both for React and React Native, and while you should definitely check out the React Native guide, our focus here will be on styled components for React.

Why Styled Components?

Apart from helping you to scope styles, styled components include the following features:

  • Automatic vendor prefixing
    You can use standard CSS properties, and styled components will add vendor prefixes should they be needed.
  • Unique class names
    Styled components are independent of each other, and you do not have to worry about their names because the library handles that for you.
  • Elimination of dead styles
    Styled components remove unused styles, even if they’re declared in your code.
  • and many more.

Installing styled components is easy. You can do it through a CDN or with a package manager such as Yarn…

yarn add styled-components

… or npm:

npm i styled-components

Our demo uses create-react-app.

Starting Out

Perhaps the first thing you’ll notice about styled components is their syntax, which can be daunting if you don’t understand the magic behind styled components. To put it briefly, styled components use JavaScript’s template literals to bridge the gap between components and styles. So, when you create a styled component, what you’re actually creating is a React component with styles. It looks like this:

import styled from "styled-components";

// Styled component named StyledButton
const StyledButton = styled.button`
  background-color: black;
  font-size: 32px;
  color: white;
`;

function Component() {
  // Use it like any other component.
  return <StyledButton> Login </StyledButton>;
}

Here, StyledButton is the styled component, and it will be rendered as an HTML button with the contained styles. styled is an internal utility method that transforms the styling from JavaScript into actual CSS.

In raw HTML and CSS, we would have this:

button {
  background-color: black;
  font-size: 32px;
  color: white;
}

<button> Login </button>

If styled components are React components, can we use props? Yes, we can.

Adapting Based on Props

Styled components are functional, so we can easily style elements dynamically. Let’s assume we have two types of buttons on our page, one with a black background, and the other blue. We do not have to create two styled components for them; we can adapt their styling based on their props.

import styled from "styled-components";

const StyledButton = styled.button`
  min-width: 200px;
  border: none;
  font-size: 18px;
  padding: 7px 10px;
  /* The resulting background color will be based on the bg props. */
  background-color: ${(props) => ( === "black" ? "black" : "blue")};
`;

function Profile() {
  return (
    <div>
      <StyledButton bg="black">Button A</StyledButton>
      <StyledButton bg="blue">Button B</StyledButton>
    </div>
  );
}

Because StyledButton is a React component that accepts props, we can assign a different background color based on the existence or value of the bg prop.

You’ll notice, though, that we haven’t given our button a type. Let’s do that:

function Profile() {
  return (
    <>
      <StyledButton bg="black" type="button">
        Button A
      </StyledButton>
      <StyledButton bg="blue" type="submit" onClick={() => alert("clicked")}>
        Button B
      </StyledButton>
    </>
  );
}

Styled components can differentiate between the types of props they receive. They know that type is an HTML attribute, so they actually render <button type="button">Button A</button>, while using the bg prop in their own processing. Notice how we attached an event handler, too?

Speaking of attributes, an extended syntax lets us manage props using the attrs constructor. Check this out:

const StyledContainer = styled.section.attrs((props) => ({
  width: props.width || "100%",
  hasPadding: props.hasPadding || false,
}))`
  --container-padding: 20px;
  width: ${(props) => props.width}; // Falls back to 100%
  padding: ${(props) =>
    (props.hasPadding && "var(--container-padding)") || "none"};
`;

Notice how we don’t need a ternary when setting the width? That’s because we’ve already set a default for it with width: props.width || "100%". Also, we used CSS custom properties because we can!

Note: If styled components are React components, and we can pass props, then can we also use states? The library’s GitHub account has an issue addressing this very matter.

Extending Styles

Let’s say you’re working on a landing page, and you’ve set your container to a certain max-width to keep things centered. You have a StyledContainer for that:

const StyledContainer = styled.section`
  max-width: 1024px;
  padding: 0 20px;
  margin: 0 auto;
`;

Then, you discover that you need a smaller container, with padding of 10 pixels on both sides, instead of 20 pixels. Your first thought might be to create another styled component, and you’d be right, but it wouldn’t take any time before you realize that you are duplicating styles.

const StyledContainer = styled.section`
  max-width: 1024px;
  padding: 0 20px;
  margin: 0 auto;
`;

const StyledSmallContainer = styled.section`
  max-width: 1024px;
  padding: 0 10px;
  margin: 0 auto;
`;

Before you go ahead and create StyledSmallContainer, like in the snippet above, let’s learn the way to reuse and inherit styles. It’s more or less like how the spread operator works:

const StyledContainer = styled.section`
  max-width: 1024px;
  padding: 0 20px;
  margin: 0 auto;
`;

// Inherit StyledContainer in StyledSmallContainer
const StyledSmallContainer = styled(StyledContainer)`
  padding: 0 10px;
`;

function Home() {
  return (
    <StyledContainer>
      <h1>The secret is to be happy</h1>
    </StyledContainer>
  );
}

function Contact() {
  return (
    <StyledSmallContainer>
      <h1>The road goes on and on</h1>
    </StyledSmallContainer>
  );
}

In your StyledSmallContainer, you’ll get all of the styles from StyledContainer, but the padding will be overridden. Keep in mind that, ordinarily, you’ll get a section element rendered for StyledSmallContainer, because that’s what StyledContainer renders. But that doesn’t mean it’s carved in stone or unchangeable.

The “as” Polymorphic Prop

With the as polymorphic prop, you can swap the end element that gets rendered. One use case is when you inherit styles (as in the last example). If, for example, you’d prefer a div to a section for StyledSmallContainer, you can pass the as prop to your styled component with the value of your preferred element, like so:

function Home() {
  return (
    <StyledContainer>
      <h1>It’s business, not personal</h1>
    </StyledContainer>
  );
}

function Contact() {
  return (
    <StyledSmallContainer as="div">
      <h1>Never dribble when you can pass</h1>
    </StyledSmallContainer>
  );
}

Now, StyledSmallContainer will be rendered as a div. You could even have a custom component as your value:

function Home() {
  return (
    <StyledContainer>
      <h1>It’s business, not personal</h1>
    </StyledContainer>
  );
}

function Contact() {
  return (
    <StyledSmallContainer as={StyledContainer}>
      <h1>Never dribble when you can pass</h1>
    </StyledSmallContainer>
  );
}

Don’t take it for granted.

SCSS-Like Syntax

The CSS preprocessor Stylis enables styled components to support SCSS-like syntax, such as nesting:

const StyledProfileCard = styled.div`
  border: 1px solid black;

  > .username {
    font-size: 20px;
    color: black;
    transition: 0.2s;

    &:hover {
      color: red;
    }

    + .dob {
      color: grey;
    }
  }
`;

function ProfileCard() {
  return (
    <StyledProfileCard>
      <h1 className="username">John Doe</h1>
      <p className="dob">
        Date: <span>12th October, 2013</span>
      </p>
      <p className="gender">Male</p>
    </StyledProfileCard>
  );
}

Animation

Styled components have a keyframes helper that assists with constructing (reusable) animation keyframes. The advantage here is that the keyframes will be detached from the styled components and can be exported and reused wherever needed.

import styled, { keyframes } from "styled-components";

const slideIn = keyframes`
  from {
    opacity: 0;
  }
  to {
    opacity: 1;
  }
`;

const Toast = styled.div`
  animation: ${slideIn} 0.5s cubic-bezier(0.4, 0, 0.2, 1) both;
  border-radius: 5px;
  padding: 20px;
  position: fixed;
`;

Global Styling

While the original goal of CSS-in-JS and, by extension, styled components is scoping of styles, we can also leverage styled components’ global styling. Because we’re mostly working with scoped styles, you might think that’s an invariable factory setting, but you’d be wrong. Think about it: What really is scoping? It’s technically possible for us — in the name of global styling — to do something similar to this:

ReactDOM.render(
  <StyledApp>
    <App />
  </StyledApp>,
  document.getElementById("root")
);

But we already have a helper function — createGlobalStyle — whose sole reason for existence is global styling. So, why deny it its responsibility?

One thing we can use createGlobalStyle for is to normalize the CSS:

import { createGlobalStyle } from "styled-components";

const GlobalStyle = createGlobalStyle`
  /* Your css reset here */
`;

// Use your GlobalStyle
function App() {
  return (
    <div>
      <GlobalStyle />
      <Routes />
    </div>
  );
}

Note: Styles created with createGlobalStyle do not accept any children. Learn more in the documentation.

At this point, you might be wondering why we should bother using createGlobalStyle at all. Here are a few reasons:

  • We can’t target anything outside of the root render without it (for example, html, body, etc.).
  • createGlobalStyle injects styles but does not render any actual elements. If you look at the last example closely, you’ll notice we didn’t specify any HTML element to render. This is cool because we might not actually need the element. After all, we’re concerned with global styles. We are targeting selectors at large, not specific elements.
  • createGlobalStyle is not scoped and can be rendered anywhere in our app and will be applicable as long as it’s in the DOM. Think about the concept, not the structure.
import { createGlobalStyle } from "styled-components";

const GlobalStyle = createGlobalStyle`
  /* Your css reset here */
  .app-title {
    font-size: 40px;
  }
`;

const StyledNav = styled.nav`
  /* Your styles here */
`;

function Nav({ children }) {
  return (
    <StyledNav>
      <GlobalStyle />
      {children}
    </StyledNav>
  );
}

function App() {
  return (
    <div>
      <Nav>
        <h1 className="app-title">STYLED COMPONENTS</h1>
      </Nav>
      <Main />
      <Footer />
    </div>
  );
}

If you think about the structure, then app-title should not be styled as set in GlobalStyle. But it doesn’t work that way. Wherever you choose to render your GlobalStyle, it will be injected when your component is rendered.

Be careful: styles created with createGlobalStyle will only be applied if and when the component is rendered in the DOM.

CSS Helper

Already we’ve seen how to adapt styles based on props. What if we wanted to go a little further? The CSS helper function helps to achieve this. Let’s assume we have two text-input fields with states: empty and active, each with a different color. We can do this:

const StyledTextField = styled.input`
  color: ${(props) => (props.isEmpty ? "none" : "black")};
`;

All’s well. Subsequently, if we need to add another state of filled, we’d have to modify our styles:

const StyledTextField = styled.input`
  color: ${(props) =>
    props.isEmpty ? "none" : ? "purple" : "blue"};
`;

Now the ternary operation is growing in complexity. What if we add another state to our text-input fields later on? Or what if we want to give each state additional styles, other than color? Can you imagine cramming all of those styles into the ternary operation? The css helper comes in handy.

import styled, { css } from "styled-components";

const StyledTextField = styled.input`
  width: 100%;
  height: 40px;

  ${(props) =>
    (props.empty &&
      css`
        color: none;
        background-color: white;
      `) ||
    ( &&
      css`
        color: black;
        background-color: whitesmoke;
      `)}
`;

What we’ve done is sort of expanded our ternary syntax to accommodate more styles, and with a more understandable and organized syntax. If the previous statement seems wrong, it’s because the code is trying to do too much. So, let’s step back and refine:

const StyledTextField = styled.input`
  width: 100%;
  height: 40px;

  // 1. Empty state
  ${(props) =>
    props.empty &&
    css`
      color: none;
      background-color: white;
    `}

  // 2. Active state
  ${(props) => &&
    css`
      color: black;
      background-color: whitesmoke;
    `}

  // 3. Filled state
  ${(props) =>
    props.filled &&
    css`
      color: black;
      background-color: white;
      border: 1px solid green;
    `}
`;

Our refinement splits the styling into three different manageable and easy-to-understand chunks. It’s a win.


StyleSheetManager

Like the CSS helper, StyleSheetManager is a helper component for modifying how styles are processed. It takes certain props — like disableVendorPrefixes (you can check out the full list) — that let you opt out of vendor prefixes for its subtree.

import styled, { StyleSheetManager } from "styled-components";

const StyledCard = styled.div`
  width: 200px;
  background-color: white;
`;

const StyledNav = styled.div`
  width: calc(100% - var(--side-nav-width));
`;

function Profile() {
  return (
    <div>
      <StyledNav />
      <StyleSheetManager disableVendorPrefixes>
        <StyledCard> This is a card </StyledCard>
      </StyleSheetManager>
    </div>
  );
}

disableVendorPrefixes is passed as a prop to <StyleSheetManager>. So, vendor prefixing is disabled for the styled components wrapped by <StyleSheetManager> (here, StyledCard), but not for StyledNav.

Easier Debugging

When introducing styled components to one of my colleagues, one of their complaints was that it’s hard to locate a rendered element in the DOM — or in React Developer Tools, for that matter. This is one of the drawbacks of styled components: In trying to provide unique class names, the library assigns cryptic hashes to elements. Fortunately, it can also make the displayName readable for easier debugging.

import React from "react";
import styled from "styled-components";
import "./App.css";

const LoginButton = styled.button`
  background-color: white;
  color: black;
  border: 1px solid red;
`;

function App() {
  return (
    <div className="App">
      <LoginButton>Login</LoginButton>
    </div>
  );
}

By default, styled components render LoginButton as <button class="LoginButton-xxxx xxxx">Login</button> in the DOM, and as LoginButton in React Developer Tools, which makes debugging easier. We can toggle the displayName boolean if we don’t want this behavior. This requires a Babel configuration.

Note: In the documentation, the package babel-plugin-styled-components is specified, as well as a .babelrc configuration file. The issue with this is that, because we’re using create-react-app, we can’t configure a lot of things unless we eject. This is where Babel macros come in.

We’ll need to install babel-plugin-macros with npm or Yarn, and then create a babel-plugin-macros.config.js at the root of our application, with the content:

module.exports = {
  styledComponents: {
    displayName: true,
    fileName: false,
  },
};

If we set fileName to true instead, the displayName will be prefixed with the file name for even more unique precision.

We also now need to import from the macro:

// Before
import styled from "styled-components";

// After
import styled from "styled-components/macro";

Conclusion

Now that you can programmatically compose your CSS, do not abuse the freedom. For what it’s worth, do your very best to maintain sanity in your styled components. Don’t try to compose heavy conditionals, nor suppose that everything should be a styled component. Also, do not over-abstract by creating nascent styled components for use cases that you are only guessing are somewhere around the corner.

Further Resources
  1. Documentation, Styled Components
  2. “Building a Reusable Component System with React.js and styled-components”, Lukas Gisder-Dubé
  3. Usage with Next.js
  4. Usage with Gatsby
(ks, ra, yk, al, il)
Categories: Design

Modern CSS Techniques To Improve Legibility

Wed, 07/22/2020 - 03:30
Modern CSS Techniques To Improve Legibility Edoardo Cavazza 2020-07-22T10:30:00+00:00 2020-07-25T22:04:08+00:00

We can read in many ways, and there are many different types of readers, each with their own needs, skills, language, and, above all, habits. Reading a novel at home is different from reading it on the train, just as reading a newspaper is different from browsing its online version. Reading, like any other activity, requires practice for someone to become fast and efficient. Basically, we read better those things that we are used to reading the most.

Which aspects should we take into consideration when designing and developing for reading? How can we create accessible, comfortable, inclusive experiences for all readers, including the most challenged and those affected by dyslexia?

Articles Dedicated To Accessibility

At Smashing, we believe a good website is an accessible website, one which is available to everyone, no matter how they browse the web. We’ve highlighted just some of the many articles that we’re sure will help you create more accessible sites and web apps. Explore more articles →

Units For Spaces, Words, Sentences, And Paragraphs

On a web page, many units are available for us to adjust the font size of text. Understanding which unit to use is essential to setting the structure of a whole reading section. The reflowable nature of the web requires us to consider several aspects, such as the size of the viewport and the user’s reading preferences.

For this reason, the most suitable choices are generally em and rem, which are font-specific units. For example, setting the margins between paragraphs using ems helps to preserve the vertical rhythm as the text size changes. However, this can be a problem when a serif font is alternated with a sans-serif within a section. In fact, at the same font size, fonts can appear optically very different. Traditionally, the height of the lowercase “x” character (the x-height) is the reference for determining the apparent size of a character.

At the same font size, characters will optically appear very different. (Large preview)

Using the font-size-adjust property, we can, therefore, optically render fonts at the same size, because the property matches the heights of the lowercase letters. Unfortunately, this property is currently available only in Firefox, and in Chrome and Edge behind a flag, but it can be used as a progressive enhancement with an @supports check:

@supports (font-size-adjust: 1) {
  article {
    font-size-adjust: 0.5;
  }
}

It also helps with the swap from the fallback font to the one loaded remotely (for example, using Google Fonts).

The first example shows how switching the font works normally. In the second one, we are using font-size-adjust to make the swap more comfortable. (Large preview)

Optimal Line Height

We think typography is black and white. Typography is really white [...] It is the space between the blacks that really makes it.

— Massimo Vignelli, Helvetica, 2007

Because typography is more a matter of “whites” than “blacks”, when we apply this notion to the design of a website or web application, we must take into account special features such as line height, margins between paragraphs, and line breaks.

Setting the font size by relying on the x-height helps with optimizing the line height. The default line height in browsers is 1.2 (a unitless value is relative to the font size), which is the optimal value for Times New Roman but not for other fonts. We must also consider that line spacing does not grow linearly with the font size and that it depends on various factors, such as the type of text. By testing some common fonts for long-form reading, combined with sizes from 8 to 14 points, we were able to deduce that, on paper, the ratio between the x-height and the optimal line spacing is about 37.6%.

Acceptable line-spacing ranges. (Large preview)

Compared to reading on paper, screen reading generally requires more spacing between lines. Therefore, we should adjust the ratio to 32% for digital environments. In CSS, this empirical value can be translated into the following rule:

p { line-height: calc(1ex / 0.32); }

In the right reading contexts, this rule sets an optimal line height for both serif and sans-serif fonts, even when typographical tools are not available or when a user has set a font that overwrites the one chosen by the designer.
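If you want to experiment with other ratios, the arithmetic behind calc(1ex / 0.32) is easy to reproduce. Here is a minimal JavaScript sketch; the x-height fraction passed in is an assumed font metric (many text faces sit somewhere around 0.5 of the em square), not something the browser exposes directly:

```javascript
// Sketch: derive a unitless line-height from an x-height ratio.
// xHeightPerEm is the font's x-height as a fraction of the em square
// (an assumed, font-specific metric); targetRatio is the desired
// x-height-to-line-height ratio (0.32 for screen reading, per the article).
function optimalLineHeight(xHeightPerEm, targetRatio = 0.32) {
  // line-height (in em) such that xHeight / lineHeight === targetRatio
  return xHeightPerEm / targetRatio;
}
```

For a hypothetical face with an x-height of 0.52 em, this yields a unitless line-height of 1.625, which is what `calc(1ex / 0.32)` resolves to for that font.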

Define The Scale

Now that we have adjusted the font size and used the ex unit to calculate the line height, we need to define the typographical scale in order to correctly set the spacing between paragraphs and to provide a good rhythm to the reading. As said before, line spacing does not grow linearly but varies according to the type of text. For titles with a large font size, for example, we should consider a higher ratio for the line height.
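Hypothetically, such a scale can also be generated rather than written by hand. The following JavaScript sketch builds the CSS from a table of sizes and x-height ratios; the selector prefix and the table values are illustrative assumptions:

```javascript
// Sketch: generate a typographic scale as CSS text from a table of
// font sizes and x-height-to-line-height ratios (illustrative values).
const scale = {
  h1: { fontSize: "2.5em", ratio: 0.42 },
  h2: { fontSize: "2em", ratio: 0.42 },
  h3: { fontSize: "1.75em", ratio: 0.38 },
  h4: { fontSize: "1.5em", ratio: 0.37 },
  p: { fontSize: "1em", ratio: 0.32 },
};

function scaleToCSS(selectorPrefix = "article") {
  return Object.entries(scale)
    .map(
      ([tag, { fontSize, ratio }]) =>
        `${selectorPrefix} ${tag} { font-size: ${fontSize}; ` +
        `line-height: calc(1ex / ${ratio}); margin: calc(1ex / ${ratio}) 0; }`
    )
    .join("\n");
}
```

Keeping the ratios in one data structure makes it easier to tweak the whole scale consistently, which is harder to do when each rule is edited by hand.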

article h1 {
  font-size: 2.5em;
  line-height: calc(1ex / 0.42);
  margin: calc(1ex / 0.42) 0;
}

article h2 {
  font-size: 2em;
  line-height: calc(1ex / 0.42);
  margin: calc(1ex / 0.42) 0;
}

article h3 {
  font-size: 1.75em;
  line-height: calc(1ex / 0.38);
  margin: calc(1ex / 0.38) 0;
}

article h4 {
  font-size: 1.5em;
  line-height: calc(1ex / 0.37);
  margin: calc(1ex / 0.37) 0;
}

article p {
  font-size: 1em;
  line-height: calc(1ex / 0.32);
  margin: calc(1ex / 0.32) 0;
}

Letter And Word Spacing

When working on legibility, we must also consider readers who are challenged, such as those with dyslexia and learning disabilities. Developmental dyslexia affects reading, and discussion and research regarding the causes are still ongoing. It is important to make use of scientific studies to understand the effects that visual and typographic variables have on reading.

For example, in a study that my company followed (“Testing Text Readability of Dyslexia-Friendly Fonts”), there was clear evidence that the glyph shapes of high-legibility fonts do not really assist reading, but wider spacing between characters (tracking) does. This finding was confirmed by another study on the effectiveness of increased spacing (“How the Visual Aspects Can Be Crucial in Reading Acquisition: The Intriguing Case of Crowding and Developmental Dyslexia”).

These studies suggest that we should exploit the dynamism and responsiveness of web pages by offering more effective tools, such as controls for handling spacing. A common technique when enlarging the size of characters is to adjust the spacing between letters and words through CSS properties such as letter-spacing and word-spacing.
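One way to wire up such a control is to map a single user-adjustable "spacing" level onto both properties. A hypothetical sketch follows; the per-level multipliers are assumptions for illustration, not recommendations, and the values are in em so they scale with the font size:

```javascript
// Sketch: map a user "spacing" preference (0 = browser default) onto
// letter-spacing and word-spacing values, expressed in em so they
// scale with the font size. Multipliers are illustrative assumptions.
function spacingStyles(level) {
  return {
    letterSpacing: `${(level * 0.01).toFixed(2)}em`,
    wordSpacing: `${(level * 0.05).toFixed(2)}em`,
  };
}
```

The resulting object could then be applied to an element's style, e.g. `Object.assign(article.style, spacingStyles(2))`.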


See the Pen Letter and word spacing by Edoardo Cavazza.

The problem with this is that letter-spacing acts unconditionally and breaks the kerning of the font, causing the page to render suboptimal spacing.

Alternatively, we can use variable fonts to gain more control over font rendering. Font designers can parameterize spacing in a variable and non-linear way, and can determine how the weight and shape of a glyph can better adapt to the habits of the reader. In the following example, using the Amstelvar font, we are able to increase the optical size as well as spacing and contrast, as intended by the designer.


See the Pen The optical size in variable fonts by Edoardo Cavazza.

The article “Introduction to Variable Fonts on the Web” has more detail on what variable fonts are and how to use them. And check out the Variable Fonts tool to see how they work.

Width And Alignment

To optimize reading flow, we also have to work on the width of the paragraph, which is the number of characters and spaces on a line. While reading, our eye focuses on about eight letters in a foveation (i.e. the operation that is activated when we look at an object), and it is able to handle only a few consecutive repetitions. Therefore, line breaks are crucial: The moment of moving one’s focus from the end of a line to the beginning of the next is one of the most complex operations in reading and must be facilitated by keeping the right number of characters per type of text. For a basic paragraph, a common length is about 60 to 70 characters per line. This value can be easily set with the ch unit by assigning a width to the paragraph:

p { width: 60ch; max-width: 100%; }

Justification also plays an important role in reading across lines. Hyphenation support for languages is not always optimal in the various browsers; therefore, it must be checked. In any case, avoid justified text in the absence of hyphenation because the horizontal spacing that would be created would be an obstacle to reading.

/* The browser correctly supports hyphenation */
p[lang="en"] {
  text-align: justify;
  hyphens: auto;
}

/* The browser does NOT correctly support hyphenation */
p[lang="it"] {
  text-align: left;
  hyphens: none;
}

Manual hyphenation can be used for languages that do not have native support. There are several algorithms (both server- and client-side) that can inject the soft hyphen entity (&shy;) within words, to instruct browsers where a word can be broken. This character is invisible, unless it is located at the end of the line, whereupon it renders as a hyphen. To activate this behavior, we need to set the hyphens: manual CSS rule.
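As a rough client-side sketch, injecting soft hyphens amounts to splicing U+00AD (the character the &shy; entity produces) into a word at break points computed by some hyphenation algorithm. The break points below are hard-coded for illustration; a real implementation would take them from a hyphenation dictionary:

```javascript
// Sketch: insert soft hyphens (U+00AD) into a word at the given
// break indices. Break points would normally come from a hyphenation
// algorithm; here they are supplied by the caller for illustration.
const SOFT_HYPHEN = "\u00AD"; // what the &shy; entity produces

function insertSoftHyphens(word, breakPoints) {
  // Insert from the end backwards so earlier indices stay valid.
  return breakPoints
    .slice()
    .sort((a, b) => b - a)
    .reduce((w, i) => w.slice(0, i) + SOFT_HYPHEN + w.slice(i), word);
}
```

For example, `insertSoftHyphens("typography", [4, 7])` marks the word as breakable after "typo" and "typogra"; the marks remain invisible until the browser actually wraps there.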

Foreground Contrast

The contrast of characters and words with the background is fundamental to legibility. The WCAG defines guidelines and constraints for the contrast between text and background at different conformance levels (A, AA, AAA). Contrast can be calculated with different tools, in both design and development environments. Keep in mind that automated validators are support tools and do not guarantee the same quality as a real test.

By using CSS variables and a calc statement, we can dynamically calculate the color that offers the best contrast with the background. In this way, we can offer the user different types of contrast (sepia, light gray, night mode, etc.), by converting the whole theme according to the background color.

article {
  --red: 230;
  --green: 230;
  --blue: 230;
  --aa-brightness: (
    (var(--red) * 299) +
    (var(--green) * 587) +
    (var(--blue) * 114)
  ) / 1000;
  --aa-color: calc((var(--aa-brightness) - 128) * -1000);
  background: rgb(var(--red), var(--green), var(--blue));
  color: rgb(var(--aa-color), var(--aa-color), var(--aa-color));
}
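The same perceived-brightness formula (the YIQ-style 299/587/114 weighting used in the CSS custom properties above) can be checked in plain JavaScript. This sketch returns black or white text for a given background, using the same 128 threshold:

```javascript
// Sketch: pick black or white text for a background color using the
// perceived-brightness formula (r*299 + g*587 + b*114) / 1000,
// with 128 as the light/dark threshold, as in the CSS above.
function contrastColor(r, g, b) {
  const brightness = (r * 299 + g * 587 + b * 114) / 1000;
  return brightness >= 128 ? "rgb(0, 0, 0)" : "rgb(255, 255, 255)";
}
```

Running it against the two themes in the article, a light gray background (230, 230, 230) yields black text, while a dark background (30, 30, 30) yields white text.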


See the Pen Automatic text contrast by Edoardo Cavazza.

In addition, with the introduction and cross-browser support of the prefers-color-scheme media query, it becomes even easier to manage the switch from light to dark theme, according to user preference.

@media (prefers-color-scheme: dark) {
  article {
    --red: 30;
    --green: 30;
    --blue: 30;
  }
}

Going Forward

Designing and developing for optimal reading require a lot of knowledge and the work of many professionals. The more this knowledge is spread across the team, the better off users will be. Below are some points to lead us to good results.

For Designers
  • Consider semantic structure as being part of the project, rather than a technical detail;
  • Document layout and font metrics, especially the why’s and how’s of your choices. They will help developers to correctly implement the design;
  • Reduce typographic variables as much as possible (fewer families, styles, and variants).
For Developers
  • Learn the principles of typography in order to understand the design decisions made and how to implement them;
  • Use units relative to font size to implement responsive layouts (paddings, margins, gaps) that scale to user preferences;
  • Avoid unrestrained manipulation of font metrics. Legibility might suffer when font constraints are not respected.
For Teams
  • Read and understand the principles of the WCAG;
  • Consider inclusion and accessibility as part of the project (rather than separate issues).

Reading is a complex activity. Despite the many resources on web typography and the academic papers that identify areas for improvements, there is no magical recipe for achieving good legibility. The number of variables to consider might seem overwhelming, but many of them are manageable.

We can set the optimal line height of a paragraph using the ex unit, as well as set a paragraph’s width using the ch unit, in order to respect the user’s preferred browser settings for font size and family. We can use variable fonts to adjust the spacing between letters and words, and we can manipulate the stroke of glyphs to increase contrast, helping readers with visual impairments and dyslexia. We can even automatically adjust text contrast using CSS variables, giving the user their preferred theme.

All of these help us to build a dynamic web page whose legibility is optimized according to the user’s needs and preferences. Finally, given that every little implementation or technological detail can make a huge difference, it is still essential to test users’ reading performance using the final artifact.

Related Resources

(ra, yk, al, il)
Categories: Design