
15 Questions To Ask Your Next Potential Employer

By Robert Hoekman Jr · September 20, 2019

In my book “Experience Required”, I encourage in-house UX professionals to leave companies that refuse to advance their UX intelligence and capability. There are far too many companies these days that understand the value of UX to waste your time being a martyr for one that will only frustrate you. Your best chance of doing a good job is to avoid a bad position.

Smartly, during a recent Q&A about the book, an audience member asked how we can avoid taking these jobs in the first place. What kinds of questions, he wondered, can you ask during an interview to spot red flags before the company stabs the whole flagpole into your sacred UX heart?

Know What You Want To Know

There’s the usual stuff, sure, such as asking why the position you’re applying for is currently open. What the company’s turnover rate is like. Why that turnover rate is so low or high. A little Googling will easily enough net you a decent list of broad questions you can ask any employer.

But what you really want is to get UX-specific. You want to home in on precisely what your life might be like should you take the position.

Your best chance of doing a good job is to avoid a bad position.

Sadly, I lacked a great answer at the time to the question about interview questions, so I let it eat at me until I woke up at three a.m. two days later and started writing notes. That morning, I emailed my reply to the moderator.

Ask A Great Question, Then Shut Up

To devise the list below, I considered what kinds of things I’d wish a company knew and understood about UX prior to working with them. I can operate in all kinds of situations—as a UX and process innovation consultant, this has been my job, and pleasure, for nearly 13 years now—but I want to know from the start, every time, that the effort will be set up for success. These questions aim to uncover the dirty details that will tell me what I’m walking into.

Much like a good validation session or user interview, these questions are open-ended and designed to draw out thoughtful, long-winded responses. (One-word answers are useless.) I strongly recommend that when and if you ask them, you follow each question with a long, stealthy vow of silence. People will tell you all about who they are if you just shut up long enough to hear them do it. Stay quiet for at least ten seconds longer than you think is reasonable and you’ll get the world.

People will tell you all about who they are if you just shut up long enough to hear them do it.

I’d ask these questions of as many individuals as possible. Given that tech interviews are often hours-long and involve many interviewers, you should be able to grab yourself a wealth of good answers before you head out the door to process and sleep.

If, on the contrary, you are given too little time to ask all these questions, prioritize the ones you’re personally most concerned about, and then consider that insufficient interview time might be a red flag.

Important: The key to the answers you receive is to read between the lines. Listen to what is said, note what is not said, and decide how to interpret the answers you get. I’ve included some red flags to watch out for along with each question below.

The Questions

Let’s get right to it.

1. How does this company define UX? As in, what do you believe is the purpose, scope, and result of good UX work?

Intent

Literally every person on Earth who is asked this question will give a slightly, or wildly, different answer than you expect or hope for. At the very least, the person interviewing you should have an opinion. They should have a sense of how the company views UX, what the various UX roles have to offer, and what effect they should have.

Red Flag(s)

The UX team has a very limited role, has no real influence, and the team, for the most part, is stretched so thin you could put them on a cracker.

2. How do the non-UX people on your product team currently participate in UX decisions?

Follow-ups: Describe a recent example of this kind of participation. What was the UX objective? How was that objective vetted as a real need? What did you do to achieve the objective, step-by-step? How did it turn out? What did you learn?

Intent

Find out how the entire product team approaches UX and how collaborative and supportive they might be in acquiring and acting on good research insights.

Red Flag(s)

They don’t participate in UX decisions.

3. What UX roles exist in the organization, and what do they do?

Intent

Determine where you’ll fit in, and how difficult it might be for you to gain influence, experience, or mentorship (depending on what you’re after). Also, build on the previous question about who does what and how.

Red Flag(s)

UX people at the company are heavily skilled in graphic design, and not so skilled in strategy. The current team members have limited influence. Your role will be similar. Strategy is handled by someone else, and it trickles down to the UX team for execution.

4. Who is your most experienced UX person and in what ways does that experience separate them from others?

Intent

Determine the range of UX intelligence on the team from highest to lowest. Is the person at the top whip-smart and a fantastic leader? Does that person mentor the others and make them better?

Red Flag(s)

The interviewer cannot articulate what makes that person better or more compelling than others. If they can’t answer this question, you’re speaking to someone who has no business making a UX hiring decision. Ask to speak to someone with more inside knowledge.

Noteworthy, but not necessarily a red flag: If you learn that the most experienced person on the team is actually someone with a very slight skill set, this can mean either there’s room for you to become an influencer, or the company puts so little value on UX that they’ve selected only employees with a small view of UX. The latter could mean you’ll spend all your time trying to prove the value of bigger UX involvement and more strategic work. You may like that sort of thing. I do. This would not be a red flag for me. It might be for you.

5. What are the company’s plans for UX long-term? (Expand it? Reduce it? How so, and why? Is there a budget for its expansion? Who controls it and how is it determined?)

Intent

Map out your road for the next couple of years. Can you rise into the role you want? Or will you be stuck in a cul-de-sac with zero chance of professional growth?

Red Flag(s)

We plan to keep doing exactly what we do now, and what we do now is pretty boring or weak. Also, we have no budget—like, ever—so if you want to bring in a consultant, attend a seminar, hire another person, or run a comprehensive usability study with outside customers, well, good luck with that.

6. How do UX professionals here communicate their recommendations?

Follow-up: How could they improve?

Intent

Learn how they do it now, and more importantly, whether or not it works.

Red Flag(s)

The interviewer has no answer, or—far worse—has an anti-answer that involves lots of arm-waving and ideas falling on deaf ears. The former can, again, mean the interviewer has no business interviewing a UX candidate. The latter can mean the UX team is terrible at communicating and selling its ideas. While this can be overcome with your much better communication skills, it will almost certainly mean the company has some baggage to wade through. Poor experiences in the past will put other product team members on the defensive. You’ll have to play some politics and work extra hard on building rapport to get anywhere.

7. Who tends to offer the most resistance to UX recommendations and methods and why?

Follow-up: And how much power does that person have?

Intent

This person will either give you the most grief or will give you a great opportunity to improve your communication skills (remember: design is communication!). Knowing up front who it is and how that person operates can tell you what the experience will be like.

Red Flag(s)

Executives, because they distrust UX. If you lack support at the top, it will be a daily struggle to achieve anything substantive.

8. What do UX practitioners here do to advance their values and methods beyond project work? Please be specific.

Intent

See how motivated the UX team is to perpetuate UX values to the rest of the company and improve how the team works.

Red Flag(s)

They don’t.

9. What do you think they should do differently? Why?

Intent

Discover how your interviewer feels about UX. This is, after all, a person who has a say in hiring you. Presumably, this person will be a big factor in your success.

Red Flag(s)

Keep their noses out of product development, stop telling the engineers what to do (speaks to perception of pushy UX people).

10. Describe a typical project process. (How does it start? What happens first? Next? And then?)

Intent

Find out if there is a process, what it looks like, and how well it aligns with your beliefs as a UX professional.

Red Flag(s)

You’ll be assigned projects from the top. You’ll research them, design a bunch of stuff in a vacuum with no way to validate and without any iteration method, and then you’ll hand all your work to the Engineering team, who will then have a thousand questions because you never spoke to each other until just now.

Bonus Question

How and when does the team try to improve on its process? (If it doesn’t, let’s call that a potential red flag as well.)

11. How has your company learned from its past decisions, and what have you done with those learnings?

Intent

UX is an everlasting experiment. Find out if this company understands it’s supposed to learn from the work and become smarter as a result.

Red Flag(s)

No examples, no thoughts.

12. If this is an agency that produces work for clients: What kind of support or backup does this agency provide for its UX recommendations, and how much power does the UX group have to push back against wrongheaded client ideas?

Follow-ups: How does the team go about challenging those ideas? Provide a recent example.

Intent

Find out how often you’ll be thrown under the proverbial bus when a client pushes back against what you know to be the right approach to a given problem. Your job will be to make intelligence-based recommendations; don’t torture yourself by working with people who refuse to hear them.

Red Flag(s)

The interviewer says the agency does whatever the clients demand. You will be a glorified wireframe monkey with no real power to change the world for the better.

13. How does the company support the UX group’s work and methods?

Intent

Determine how the company as a whole thinks about UX, both as a team and a practice. Is UX the strange alien in the corner of the room, or is it embraced and participated in by every product team member?

Red Flag(s)

UX is a strange alien. Good luck getting anyone to listen to you.

14. What design tools (software) does your team use and why?

Follow-ups: How receptive are people to trying new tools? How does evolution happen?

Intent

Know what software you should be familiar with, why the team uses it, and how you might go about introducing new tools that could be better in some situations.

Red Flag(s)

Gain insight into how the team thinks about the UI portion of the design process. Does it start with loose ideas drawn on napkins and gradually move toward higher-quality? Or does it attempt to start with perfection and end up throwing out a lot of work? (See the next question for more on this.)

15. Does a digital design start lo-fi or hi-fi, and what is the thinking behind this approach?

Follow-up: If you start lo-fi, how does a design progress?

Intent

You can waste a lot of hours on pixel-perfect work you end up throwing out. A company that burns through money like that is also going to be the first one to cut staff when things get tight. No idea should be carried through to its pixel-perfect end until it’s been collaborated on and vetted somehow, so you want to know that the company is smart enough to start lo-fidelity and move gradually to hi-fidelity. Hi-fi work should be the result of validation and iteration, not the start of it. A lo-fi > hi-fi process mitigates risk.

Red Flag(s)

All design work starts and ends in Photoshop or Sketch, and is expected to be 100% flawless and final before anyone sees what you’ve produced.

Running The Interview

In an unrelated Q&A years ago, a hiring manager asked how to spot a good UX professional during an interview. I answered that he should look for the person asking all the questions. I repeated this advice in Experience Required.

Now you can be the one asking all the questions.

And in doing so, not only will you increase your odds of being offered the gig, you’ll know long before the offer shows up whether to accept it.

If you, dear reader, have more ideas on how to scavenger-hunt a company’s red flags, we’re all ears. Tell us about it in the comments below.


How AI Is Helping Solve Climate Change

By Nicholas Farmen · September 19, 2019

Have you heard of the French artist Marcel Duchamp? One of his most famous works is the “Fountain”, which was created from an ordinary bathroom urinal. In simply renaming this common object, Duchamp successfully birthed a completely new style of art.

The same can be done with AI. Why should humans use this powerful invention only to solve business-related issues? Why can’t we think a little more like Duchamp and use this ‘all-powerful’ technology to solve one of the scariest problems that mankind has ever faced?

The Global Threat Of Climate Change

If you’ve read any recent reports and predictions about the future of our climate, you’ve probably realized that mankind is running out of time to find a solution for the global threat of climate change. In fact, a recent Australian policy paper proposed a 2050 scenario where, well, we all die.

For those who aren’t scared of water levels rising 25 meters by 2050, other studies suggest that human hardships are right around the corner. In March of 2012, the World Water Assessment Programme predicted that by 2025, 1.8 billion people on earth will be living in regions with absolute water scarcity.

So what data and research are leading scientists to believe there will be a water or food apocalypse in the future?

According to NASA, the main cause of climate change is the rising amount of greenhouse gases in our atmosphere. And sadly, ‘mother earth’ is not doing this all by herself.

In 1830, humans began engaging in activities that release greenhouse gases, contributing to the rising temperatures we are feeling today. These activities include the burning of fossil fuels, the pollution of oceans, and deforestation. However, even the mass production of beef is contributing to climate change.

Now, you may be wondering how humans could combat and limit our greenhouse gas emissions. Obviously, we should be limiting all of the activities that I alluded to above. This would mean limiting our electricity, coal, and oil usage, planting trees, and sadly for many, giving up steak dinners altogether.

But would all of this be enough to undo centuries of atmospheric pollution? Is all of this even accomplishable before humans are forced to face the extinction of their species? I don’t know. Humans haven’t even been able to cease the production of beef, let alone give up our oil-guzzling automobiles and airplanes.

If only there was a very intelligent software that could run some emissions numbers, and tell us if all of these efforts would be enough to prevent future disaster scenarios...

Melting icebergs, a symbol of global warming. (Image source: Unsplash)

AI Approaches And Environmental Use Cases

Solving any problem takes time. With climate change, it took scientists about 40 years to gain any sort of understanding of the problem. And that’s fair — humans had to first study the climate to make sure climate change existed, then study the causes of climate change to see the role humans have played. But where are we today after all of this study? Still studying.

And the problem with climate change is that time is not on our side — mankind has to find and implement some solutions relatively fast. That’s where AI could help.

To date, there are two different approaches to AI: rules-based and learning-based. Both AI approaches have valid use cases when it comes to studying the environment and solving climate change.

Rules-based AI consists of coded algorithms of if-then statements that are meant to solve simple problems. When it comes to the climate, a rules-based AI could be useful in helping scientists crunch numbers or compile data, saving humans a lot of time in manual labor.

But a rules-based AI can only do so much. It has no memory capabilities — it’s focused on providing a solution to a problem that’s defined by a human. That’s why learning-based AI was created.

Learning-based AI is more advanced than rules-based AI because it diagnoses problems by interacting with the problem. Basically, learning-based AI has the capacity for memory, whereas rules-based AI does not.

Here’s an example: let’s say you asked a rules-based AI for a shirt. That AI would find you a shirt in the right size and color, but only if you told it your size and preferences. If you asked a learning AI for a shirt, it would assess all of the previous shirt purchases you’ve made over the past year, then find you the perfect shirt for the current season. See the difference?
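To make that difference concrete, here’s a toy JavaScript sketch. Everything in it is hypothetical — the data, the function names, and the “learning”, which is reduced to a naive frequency count over past purchases standing in for a real trained model:

// Rules-based: can only act on what it is explicitly told.
function findShirtRulesBased(size, color, inventory) {
  return inventory.find(shirt => shirt.size === size && shirt.color === color)
}

// Learning-based: infers size and color preferences from memory
// (past purchases), then reuses the simple rule to fetch a match.
function findShirtLearningBased(purchaseHistory, inventory) {
  const counts = {}
  for (const p of purchaseHistory) {
    const key = `${p.size}|${p.color}`
    counts[key] = (counts[key] || 0) + 1
  }
  // Pick the most frequently bought size/color combination.
  const [bestKey] = Object.entries(counts).sort((a, b) => b[1] - a[1])[0]
  const [size, color] = bestKey.split('|')
  return findShirtRulesBased(size, color, inventory)
}

const inventory = [{size: 'M', color: 'blue'}, {size: 'L', color: 'red'}]
const history = [{size: 'M', color: 'blue'}, {size: 'M', color: 'blue'}, {size: 'L', color: 'red'}]

findShirtRulesBased('M', 'blue', inventory) // you had to supply the preferences
findShirtLearningBased(history, inventory)  // {size: 'M', color: 'blue'} — inferred from memory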

When it comes to helping solve climate change, a learning-based AI could essentially do more than just crunch CO2 emission numbers. A learning-based AI could actually record those numbers, study causes and solutions, and then recommend the best solution — in theory.

AI Impacting Climate Change, Today

To most, AI is a buzzword used to describe interesting tech software. But to the companies below, AI is starting to be seen as a secret weapon.

SilviaTerra

Forests are important for our climate. The carbon dioxide that’s emitted by many human activities is actually absorbed by trees. So if only we had more trees…

This is why SilviaTerra was brought to life.

Powered by the funds and technology of Microsoft, SilviaTerra uses AI and satellite imaging to predict the sizes, species, and health of forest trees. Why is this important? It means that conservationists are saved countless hours of manual fieldwork. It also means that we can help trees grow bigger, stronger, and healthier, so they can continue to help our climate.

DeepMind

Sometimes, we may ask ourselves, “What can’t Google do?” Well, it turns out Google can’t really do everything.

Looking to cut its costs (and potentially its carbon footprint), Google turned to a company called DeepMind. Together, the two companies developed an AI that would teach itself how to use only the bare minimum amount of energy necessary to cool Google’s data centers.

The result? Google was able to cut the amount of energy they use to cool their data centers by 35%. But that may not even be the coolest part! DeepMind’s co-founder, Mustafa Suleyman, said that their AI algorithms are general enough to where the two companies may be able to use them for other energy-saving applications in the future.

Although AI is still very controversial among many, it’s difficult not to agree on how it can help to improve sales, productivity, and even customer service. (Image source: Unsplash)

Green Horizon Project

All of you data-lovers out there know that it’s hard to say you’re impacting something if you’re unable to measure your impact. This is why the Green Horizon Project came about.

IBM’s Green Horizon Project is an AI that creates self-configuring weather and pollution forecasts. IBM created the project with the hope that it could one day help cities become more efficient.

Their aspirations became a reality in China. Between 2012 and 2017, IBM’s Green Horizon Project helped the city of Beijing decrease their average smog levels by 35%.

CycleGANs

So here’s a term you may never have heard of in your life: “GAN.” It stands for Generative Adversarial Network. Basically, it’s a pair of neural networks trained in competition with each other, so that one network learns to generate realistic data without you having to hand-craft it.

Why is the term important? Because automation is important when you have limited time and resources to solve a problem.

Researchers at Cornell University used GANs to train an AI to produce images that portray geographical locations before and after extreme weather events. The visuals produced by this AI could help scientists predict the impacts of certain climate changes, helping humans prioritize our combative efforts.

Software With The Potential To Impact Climate Change

Looking at the AI that is already being used to make a positive impact on climate change, you may be thinking that we don’t need any new software. And maybe you’re right: why not repurpose the software we do have?

With that being said, here are a few pieces of software with the potential to become secret weapons:

Airlitix

Airlitix is an AI and machine-learning software that is currently being used in drones. While it was originally developed to automate greenhouse management processes, it could quite easily be used to manage the health of national forests. Airlitix has the capacity to not only collect temperature, humidity, and carbon dioxide data, but the AI can also analyze soil and crop health.

But with humans needing to plant over 1.2 trillion trees to combat climate change, we should consider automating our efforts further. Instead of taking the time to tend to national parks, the Airlitix software could be built upon so that drones could plant our trees, release plant nutrients, or even deter forest arsonists.

Too many times, drones have proved to be useful during times of natural disasters. (Image source: Unsplash)

Google Ads

Both Google and Facebook have very powerful AI software that they currently use to create relevant consumer ads using consumer browsing data. In fact, Google’s AI ‘Google Ads’ has helped their company earn hundreds of billions in revenue.

While revenue is cool, the Google Ads algorithm currently promotes consumer purchases relatively objectively. Imagine if the AI could be rewritten to prioritize the ads of companies that are offering sustainable products and services.

Nowadays, there isn’t much competition for Google. There’s Bing, Yahoo, DuckDuckGo, and AOL. (Out of the people I know, I don’t know any that use AOL.) If you’re feeling fearless, maybe you could develop a new search engine that helps connect consumers with environmentally-friendly companies.

Sure, it would be hard to compete with companies as large as Google, but you don’t have to compete forever to make a profit. There’s always a chance your startup gets acquired, and then you ride off into the sunset.

AlphaGo

While AlphaGo is an AI software that could help scientists find the next ‘wonder drug,’ it was originally created by DeepMind to teach itself how to master the game of Go. After beating the world’s best Go players, the AlphaGo family of AIs has since moved on to conquer other games, with successors such as AlphaZero mastering chess and shogi.

But what do board games have to do with climate change? Well, if the AlphaGo AI can outsmart humans in a game of Go, maybe it can outsmart us in coming up with creative ways to limit and reduce the amount of greenhouse gases in our atmosphere.

Future Outlook For AI And Climate

As I see it, the purpose of AI is to assist mankind in solving problems. Climate change has proven to be a complex problem that humans are becoming great at studying, but I have yet to see a very positive future-outlook from environmentalists in the news.

If not to help humans influence climate change directly, couldn’t we use AI to portray doomsday scenarios that scare the world into coming together? Could we use AI to portray positive potential outlooks that would be possible if people were to do more in their daily lives to help triage climate issues?

Even with the latest Amazon fires, I didn’t see any tweets about the idea of using drones to combat the spread of flames. It’s clear to me that even with all of the impressive AI software and tech available to humans today, environmental use cases are still not widespread knowledge.

So my advice to readers is to try the ‘Duchamp approach’ — today. Consider the AI and tech that you use or develop regularly, and see if there’s a way to reimagine it. Who knows, you may be the one to solve a problem that has stumped some of the best climatologists and scientists of our time.




My Design Process Of The Cover Design For Smashing Magazine Print Issue #1

By Veerle Pieters · September 18, 2019

Back in 2016, Vitaly Friedman asked me to design the cover and layout for a printed version of Smashing Magazine, a magazine for web designers and developers. The design I created back then for the cover and inside template layout, however, was shelved for a while as the project was paused for about two years owing to other priorities. Later, after Smashing Magazine launched its new website, a new style was born, and the design I had come up with didn’t really match anymore. So it was dropped.


Around mid 2018, the project was reignited, and I was asked to design a new layout template for the magazine. Later, around the beginning of this year, I also redesigned the cover. Now, the pilot issue of a shiny new Smashing Magazine Print has been launched.

Old cover designs created back in 2016 for Smashing Magazine Print.

I’m very happy they chose my initial design of the table of contents, as I was really fond of it myself. The version I created later (see the above image to the right) was very different, since I went for something closer to the current design style.


In my first design back in 2016, I could choose the typefaces, and I had total freedom over the design style. It was totally different — very geometric and more modernistic. So I was very happy to see that some of the designs were adopted in the magazine’s final layout, like the table of contents and this page design for the introduction.

Reshape to Fit the New Design Style

The challenge now was to reshape the design to fit the current style of orange-red roundness, and cartoon cats. The answer was, of course, very simple: start from scratch.

Brainstorming and Sketching

Fortunately, the theme of the first edition had been identified, which made it easier for me to think about a suitable illustration. Smashing Print #1 would be about ethics and privacy. My first idea in terms of the overall design concept was to try out something along the direction of Noma Bar’s negative space design style. That’s easier said than done, of course, but I thought it would be awesome if I could pull it off and come up with something clever like that.


After writing down a few keywords (spying, watching, tracing), things like an eye, a keyhole and a magnifying glass came to mind as suitable subjects to use in my illustration. As for “tracing” I thought of a trail of digital data, which I saw in the shape of a perfect curvy line with ones and zeros. So I doodled a couple of basic ideas.

Inspiration Browsing

While designing this cover I did a lot of browsing around. Here are a couple of images that inspired me. The bottom-left one inspired me purely in terms of the layout. In the top-right one I really like the rounded shapes, plus its simplicity and contrasting colors. The middle-top and bottom-right ones use cute figures and a fun, vertical 2D approach. The top-left one has nice smooth shapes and colors, and I like its strong image. There were more images, for sure, but these five did it for me.

Images that inspired me for the cover design.

First Design

Choosing Colors

I often start a design by choosing my color palette first. The colors I picked here were chosen purely because I felt they go well together. I wasn’t sure I would use all of them, but somehow I’m used to having a color palette in circles placed above my artboard. Then I use the color picker tool to select the color fill I want to apply, or I select them all and make them global swatches.

Selecting a color palette.

Then I worked with the doodle of the magnifying glass as an eye in Illustrator and played around with a bit of color and composition. I thought adding some colored bars at the bottom would give the illustration an eye-catching touch. They represent digital data gathered from users, converted into analytical graphs.


I ended up with the design shown to the left. (Ignore the name of the magazine, as this was changed later on.) I wasn’t sure how much of the Smashing orange-red I should use, so I tried out a version with a lot of orange as well, even though I preferred the other one.

While I did like the result, the idea of doing something with a trail also appealed to me as a second concept. I visualized a person walking around with a smartphone, leaving a literal trail of all their interactions. That trail was then picked up, zoomed into, saved, and analyzed. At the beginning of the trail I added a magnifying glass. I would have also mixed in some graph bars, but at this point I didn’t know where or how exactly I would incorporate them into my composition, though I was already playing with the idea of using some sort of rounded shape background, combined with some subtle patterns.


Typically, I don’t sketch out my entire design. I only quickly doodle the idea and sketch out the elements I need in more detail, like the person with the phone. Once I had the concept fixed in my mind, I started out designing in Adobe Illustrator. First, I created a grid of guides to be used for the background shapes, and also for positioning the trail and figure. There were a couple of steps to get to this final design.

Final Design

Setting Up a Grid

The inspiration image at the bottom left encouraged me to go for a layout with a lot of white space at the top for the title and some white space at the bottom to add three key articles. As for the illustration itself, I envisioned using a square grid, perhaps going all the way over the spine and back.

Final cover design shown in Adobe Illustrator with grid guides and layers panel.

I created this square grid and placed the guides in a separate layer. Once this was set up, I started with the walking man and his smartphone, positioning him somewhere at the top-left.

Next came the curvy path. I just drew an angled line on top of the grid and used the corner widget to convert these into perfect rounded corners. I was thinking of using ones and zeros on the trail, because that’s how I visualize digital data. I turned the curvy path into a fine dotted line with a very wide gap to use as a guide to place the numbers. Once I started to place the numbers on each dot, it looked way too busy, so I decided to place one tiny dot between each number.

The next thing in the process was the creation of the background. I only had a vague idea in my head: a composition of geometrical vertical shapes with rounded corners in different colors from the palette. During this phase, I did a lot of experimenting. I moved and recolored shapes over and over. Once I had the flat colored shapes finished, I started adding in patterns on top. I tried out tiny dot grids that I randomly shaped in length and width, and applied color to. This was all a matter of intuition, to be honest, trying out something, then trying out something else, comparing both and choosing what worked best: changing color, changing the transparency mode, opacity value, and so on.

The bar graphs and icons were created in the last phase, together with the magnifying glass, and the spine and back. I just kept the idea at the back of my head, and waited till I had the man and the background shapes ready. Finally, I added in some basic icons to refer to the type of action made on the data, such as geolocation.

Back Cover

As for the back cover, I had already envisioned the background composition going all the way around, only much lighter. That’s how I came up with the idea of using a light area in the center with a couple of intersecting colored lines there.


In the final printed version, text is added in the center space, nicely framed in a rounded box with a yellow border, so the composition of the lines you see here has been removed and doesn’t match the printed version.

Spine

For the spine, I’d had the fun idea earlier of having the Smashing logo build up with each release (see image at the top of the article), but the tricky thing here is that each edition needs to have the exact same thickness or the whole concept falls apart. It wasn’t realistic since I wasn’t sure each edition would have exactly the same page count. I had to remember that the width of the spine could vary. So I came up with the idea of using some sort of pattern combinations that can vary in width, but still have the magazines connected.


The general idea was also to use a different theme pattern for each issue. The pilot issue uses fine dots in combination with a capsules pattern. In the spine, I use a couple of others. The idea is to achieve a coherent composition when you place or stack the issues in the right order, which also serves as motivation to buy all the issues.


A Pain-Free Workflow For Issue Reporting And Resolution

By Suzanne Scacca · September 17, 2019

(This is a sponsored post.) Errors, bugs and other issues are bound to arise in web development. Even if they aren’t outright errors, clients often have feedback about how something was designed, where it was placed or how certain elements work. It’s just part of the gig.

It can be a very painful part of the gig, too.

Take this scenario, for instance:

Email #1 from client: “I can’t see the button anymore. Can you please put it back on the home page?”

Email #2 from you: “Which button are you referring to? Can you send me a screenshot?”

You try to call the client, but get their voicemail instead.

Email #3 from the client: “The button to book a demo.”

You look at the attached screenshot and see that the Book a Demo section is intact, but the button doesn’t show. You pull up the website on Chrome and Safari and see it in both browsers: a big blue button that says “Schedule Demo”. You pull it up on your iPhone and see it there, too.

Email #4 from you: “Can you tell me which device and browser you’re seeing the issue on?”

Email #5 from client: “My phone.”

You know how this chain of messages will go and it’s only going to lead to frustration on both ends. Not to mention the cost to your business every time you have to pause from work to try to interpret a bug report and then work through it.

Then, there’s the cost of bugs to your clients you have to think about. When something goes wrong after launch and your client is actively trying to send traffic to the website, a bug could hurt their sales.

When that happens, who do you think they’re going to come after?

A Pain-Free Workflow For Issue Reporting And Repairs

It doesn’t matter what the size of the bug or issue is. When it’s detected and reported, it needs to be dealt with. There are a number of reasons why.

For starters, it’s the only way you’re going to get your client to sign off on a project as complete. Plus, swift and immediate resolution of bugs leads to better relations with your client who sees how invested you are in creating an impressive (and error-free) website for their business. And, of course, the more efficiently you resolve errors, the quicker you can get back to finishing this job and moving on to others!

So, here’s what you need to do to more effectively and painlessly tackle these issues.

  1. Assign Someone To Triage
  2. Use An Issue Resolution Workflow
  3. Give Your Users A Bug Reporting Tool
  4. Give Your Triage Manager A Tracking Platform
  5. Work In A Local Testing Platform
  6. Always Close The Loop
1. Assign Someone To Triage

The first thing to do is decide who’s going to triage issues.

If you work on your own, then that responsibility is yours to own. If you work on a team, it should go to a project manager or dev lead that can manage reported issues just as effectively as they would manage the team’s workload.

This person will then be in charge of:

  • Monitoring for reported issues.
  • Adding the bugs to the queue.
  • Ushering them through the resolution workflow.
  • Resolving and closing up bug reports.
  • Analyzing trends and revising your processes to reduce the likelihood that recurring bugs appear again.

Once you know who will manage the process, it’s time to design your workflow and build a series of tools around it.

2. Use An Issue Resolution Workflow

Your triage manager can’t do this alone. They’re going to need a process they can closely follow to take each issue from Point A (detection) to Point B (resolution).

To ensure you’ve covered every step, use a visualization tool like Lucidchart to lay out the steps or stages of your workflow.

Here’s an example of how your flow chart might look:

An example of an issue reporting workflow built in Lucidchart. (Source: Lucidchart)

Let’s break it down:

You’ll start by identifying where the issue was detected and through which channel it was reported. This example doesn’t get too specific, but let’s say the new issue detected was the one mentioned before: the Book a Demo button is missing on the home page.

What happens when an issue is detected on a website. (Source: Lucidchart)

The next thing to do is to answer the question: “Who found it?” In most cases, this will be feedback submitted by your client from your bug-tracking software (more on that shortly).

Next, you’re going to get into the various stages your issues will go through:

An example of how to move tickets through an issue tracking system. (Source: Lucidchart)

This is the part of the process where the triage manager will determine how severe the issue of a missing Book a Demo button is (which is “Severe” since it will cost the client conversions). They’ll then pass it on to the developer to verify it.

Depending on how many developers or subject matter experts are available to resolve the issue, you might also want to break up this stage based on the type of bug (e.g. broken functionality vs. design updates).

Regardless, once the bug has been verified, and under what context (like if it were only on iPhone 7 or earlier), the ticket is moved to “In Progress”.

Finally, your flow chart should break out the subsequent steps for issues that can be resolved:

A sample workflow of how to resolve website issues and bugs. (Source: Lucidchart)

You can name these steps however you choose. In the example above, each step very specifically explains what needs to happen:

  • New Issue
  • In Progress
  • Test
  • Fix
  • Verify
  • Resolve
  • Close the Loop.

To simplify things, you could instead use a resolution flow like this:

  • New Issue
  • Todo
  • Doing
  • Done
  • Archive.

However you choose to set up your patch workflow, just make sure that the bug patch is tested and verified before you close up the ticket.
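To make the idea concrete, here’s a minimal JavaScript sketch of a ticket moving through the simplified stages above. This is not BugHerd’s API — the stage names, fields, and functions are hypothetical:

// Hypothetical stages, mirroring the simplified flow above.
const STAGES = ['New Issue', 'Todo', 'Doing', 'Done', 'Archive']

function advance(ticket) {
  const i = STAGES.indexOf(ticket.stage)
  if (i === -1 || i === STAGES.length - 1) {
    throw new Error(`Cannot advance a ticket from "${ticket.stage}"`)
  }
  // Guard: nothing reaches "Done" until the patch is tested and verified.
  if (STAGES[i + 1] === 'Done' && !ticket.fixVerified) {
    throw new Error('Test and verify the patch before closing the ticket')
  }
  return {...ticket, stage: STAGES[i + 1]}
}

let ticket = {id: 42, summary: 'Book a Demo button missing', stage: 'New Issue', fixVerified: false}
ticket = advance(ticket)   // 'Todo'
ticket = advance(ticket)   // 'Doing'
ticket.fixVerified = true  // after reproducing and fixing on a local copy
ticket = advance(ticket)   // 'Done'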

3. Give Your Users A Bug Reporting Tool

When it comes to choosing a bug reporting tool for your website, you want one that will make it easy for your team and clients to leave feedback and even easier for you to process it.

One such tool that does this well is called BugHerd.

Basically, BugHerd is a simple way for non-technical people to report issues to you visually and contextually. Since there’s no need to train users on how to get into the bug reporting tool or to use it, it’s one less thing you have to spend your time on in this process.

What’s more, BugHerd spares you the trouble of having to deal with the incessant back-and-forth that takes place when feedback is communicated verbally and out of context.

With BugHerd, though, users drop feedback onto the website just as easily as they’d leave a sticky note on your desk. What’s more, the feedback is pinned into place on the exact spot where the bug exists.

Let me show you how it works:

When you first add your client’s website to BugHerd (it’s the very first step), you’ll be asked to install the BugHerd browser extension. This is what allows BugHerd to pin the feedback bar to the website.

It looks like this:

How the BugHerd sidebar appears to clients and team members with the extension installed. (Source: BugHerd)

This pinned feedback bar makes it incredibly easy for clients to leave feedback without actually altering the live website.

This is what the bug tracker pop-up looks like:

BugHerd makes error collection from clients very easy. (Source: BugHerd)

As you can see, it’s a very simple form. And, really, all your clients need to do is select the element on the page that contains the bug, then enter the details. The rest can be populated by your triage manager.

As new feedback is added, comments are pinned to the spot on the page where they were left. For example:

Review all reported bugs from the BugHerd sidebar. (Source: BugHerd)

You’ll also notice in the screenshot above that tasks that have been assigned a severity level are marked as such. They’re also listed from top to bottom by how critical they are.

On your side of things, you have a choice as to where you view your feedback. You can open the site and review the notes pinned to each page. Or you can go into the BugHerd app and review the comments from your Kanban board:

This is the BugHerd dashboard your developers and triage manager can use. (Source: BugHerd)

By default, all bugs enter the Backlog to start. It’s your triage manager’s job to populate each bug with missing details, assign to a developer and move it through the steps to resolution.

That said, BugHerd takes on a lot of the more tedious work of capturing bug reports for you. For example, when you click on any of the reported bugs in your kanban board, this “Task Details” sidebar will appear:

A place to review all of the details about captured bugs. (Source: BugHerd)

This panel provides extra details about the issue, shows a screenshot of where it exists on the site, and also lets you know who left the comment.

What’s more, BugHerd captures “Additional Info”:

Click on 'Additional info' to reveal details about the browser, OS and code. (Source: BugHerd)

This way, you don’t have to worry about the client not providing you with the full context of the issue. These details tell you what device and browser they were on, how big the screen was, and what color depth they were viewing it in.

You also get a look at the code of the buggy element. If there’s something actually broken or improperly coded, you might be able to spot it from here.

All in all, BugHerd is a great tool to simplify how much everyone has to do from all sides and ensure each request is tackled in a timely manner.

4. Give Your Triage Manager A Tracking Platform

If you want to keep this workflow as simple as possible, you can use the BugHerd dashboard to track and manage your requests:

An example of the BugHerd dashboard when it’s in use. (Source: BugHerd)

Your triage manager and dev team will probably want to use something to complement the bug reporting capabilities of BugHerd. But good luck asking your client to use a platform like Jira to help you manage bugs.

In that case, I’d recommend adding another tool to this workflow.

Luckily for you, BugHerd seamlessly integrates with issue tracking and helpdesk software like Jira, Zendesk and Basecamp, so you don’t have to worry about using multiple tools to manage different parts of the same process. Once the connection is made between your two platforms, any task created in BugHerd will automatically be copied to your issue resolution center.

Now, if there’s a tool your team is already using, but that BugHerd doesn’t directly integrate with, that’s okay. You can use Zapier to help you connect with even more platforms.

For example, this is how easy it is to instantly create a “zap” that copies new BugHerd tasks to your Trello cards. And it all takes place from within BugHerd!

BugHerd helps users quickly integrate other apps like Zapier and Trello. (Source: BugHerd)

Once the connection is made, your triage manager can start working from the task management or issue tracking platform of their choosing. In this case, this is what happens when Zapier connects BugHerd and Trello:

This is a new task that was just created in BugHerd. (Source: BugHerd)

This is a new task I just created in BugHerd. Within seconds, the card was placed into the exact Trello project and list that I configured the zap for:

The BugHerd Zapier integration instantly copies new bug reports to Trello. (Source: Trello)

This will make your triage manager’s job much easier as they won’t be limited by the stages available in BugHerd while also still having all the same information readily at their fingertips.

5. Work In A Local Testing Platform

When bugs are reported, you don’t want to test and implement the assumed fixes on the live website. That’s too risky.

Instead, work on resolving issues from a local testing platform. This article has some great suggestions on local development tools for WordPress you can use for this.

These tools enable you to:

  • Quickly make a copy of your website.
  • Reproduce the bug with the same server conditions.
  • Test possible fixes until you find one that works.

Only then should you work on patching the bug on the website.

6. Always Close The Loop

Finally, it’s up to your triage manager to bring each issue to a formal close.

First, they should inform the client (or visitor) who originally reported the issue that it has been resolved. This kind of transparency and accountability will give your agency a more polished look while helping you build trust with clients who might be unnerved by discovering bugs in the first place.

Once things are closed client-side, the triage manager can then archive the bug report.

It shouldn’t end there though.

Like traditional project managers, a triage manager should regularly track trends as well as the overall severity of bugs found on their websites. The data might reveal that there’s a deeper issue at play. That way, your team can focus on resolving the underlying problem and stop spending so much time repairing the same kinds of bugs and issues.

Wrapping Up

Think about all of the ways in which issues and bugs may be reported: through a contact form, by email, over the phone, through chat or, worse, in a public forum like social media.

Now, think about all of the different people who might report these issues to you: your team, the client, a customer of your client, a person who randomly found it while looking at the website and so on.

There are just too many variables in this equation, which makes it easy to lose sight of open issues. Worse, when feedback comes through vague, subjective, or stripped of any context, it becomes too difficult to resolve issues completely or in a timely fashion.

With the right system of reporting, tracking and organizing feedback in place, though, you can bring order to this chaos and more effectively wipe out bugs found on your website.


Monthly Web Development Update 9/2019: Embracing Basic And Why Simple Is Hard

By Anselm Hannemann · September 13, 2019

Editor’s note: Please note that this is the last Monthly Web Development Update in the series. You can still follow the Web Development Reading List on Anselm’s site at https://wdrl.info. Watch out for a new roundup post format next month here on Smashing Magazine. A big thank-you to Anselm for sharing his findings and his thoughts with us during the past four years.

Do we make our lives too complex, too busy, and too rich? More and more people working with digital technology realize over time that a simple craft and nature are very valuable. The constant hunt to do more and be more productive, with even the leisure activities that are meant to help us refuel turning into a competition, doesn’t seem to be a good idea, yet this is currently a trend in our modern world. After work, we feel we need to do two hours of yoga and master the most complex of poses, keep up a hobby, binge-watch series on Netflix, and a lot more. That’s why this week I want to encourage you to embrace a basic lifestyle.

“To live a life in which one purely subsists on the airy cream puffs of ideas seems enviably privileged: the ability to make a living merely off of one’s thoughts, rather than manual or skilled labor.”

Nadia Eghbal in “Basic”

What does basic stand for? Keep it real, don’t constantly do extra hours, don’t try to pack your workday with even more tasks or find more techniques to make it more efficient. Don’t try to hack your productivity, your sleep, let alone your meditation, yoga, or other wellness and sports activities. Do what you need to do and enjoy the silence and doing nothing when you’re finished. Living a basic life is a virtue, and it becomes more relevant again now that we have more money to spend on unnecessary goods and more technology that intercepts our basic human thoughts.

News

General
  • Chris Coyier asks the question if a website should work without JavaScript in 2019. It breaks down to a couple of thoughts that mainly conclude with progressive enhancement being more important than making a website work for users who actively turned off JavaScript.
Privacy

UI/UX
  • In our modern world, it’s easy to junk things up. We’re quick to add more questions to research surveys, more buttons to a digital interface, more burdens to people. Simple is hard.
Web Performance
  • So many users these days use the Internet with a battery-driven device. The WebKit team shares how web content can affect power usage and how to improve the performance of your web application and save battery.
Scrolling a page with complex rendering and video playback has a significant impact on battery life. But how can you make your pages more power efficient?

JavaScript

Tooling
  • There’s a new tool in town if you want to have a status page for your web service: The great people from Oh Dear now also provide status pages.
  • Bastian Allgeier shares his thoughts on simplicity on the web, where we started, and where we are now. Call it nostalgic or not, the times when we simply uploaded a file via FTP and it was live on servers were easy days. Now with all the CI/CD tooling around, we have gotten many advantages in terms of security, version management, and testability. However, a simple solution looks different.
Accessibility
  • Adrian Roselli shares why we shouldn’t under-engineer text form fields and why the default CSS that comes with the browser usually isn’t enough. A pretty good summary of what’s possible, what’s necessary, and how to make forms better for everyone visiting our websites. It even includes high-contrast mode, dark mode, print styles, and internationalization.
Work & Life

Going Beyond…

—Anselm


A Re-Introduction To Destructuring Assignment

By Laurie Barth · September 12, 2019

If you write JavaScript you’re likely familiar with ES2015 and all the new language features it introduced. One such feature that has seen incredible popularity is destructuring assignment: the ability to “dive into” an array or object and reference something inside of it more directly. It usually goes something like this.

const response = {
  status: 200,
  data: {}
}

// instead of response.data we get...
const {data} = response
// now data references the data object directly

const objectList = [
  { key: 'value' },
  { key: 'value' },
  { key: 'value' }
]

// instead of objectList[0], objectList[1], etc. we get...
const [obj, obj1, obj2] = objectList
// now each object can be referenced directly

However, destructuring assignment is such a powerful piece of syntax that many developers, even those who have been using it since it was first released, forget some of the things it can do. In this post, we’ll go through five real-world examples for both object and array destructuring, sometimes both! And just for fun, I’ll include a wonky example I came across just the other day.

1. Nested Destructuring

Being able to access a top-level key inside an object, or the first element of an array is powerful, but it’s also somewhat limiting. It only removes one level of complexity and we still end up with a series of dots or [0] references to access what we’re really after.

As it turns out, destructuring can work beyond the top level. And there can be valid reasons for doing so. Take this example of an object response from an HTTP request. We want to go beyond the data object and access just the user. So long as we know the keys we’re looking for, that isn’t a problem.

const response = {
  status: 200,
  data: {
    user: {
      name: 'Rachel',
      title: 'Editor in Chief'
    },
    account: {},
    company: 'Smashing Magazine'
  }
}

const {data: {user}} = response
// user is { name: 'Rachel', title: 'Editor in Chief' }

The same can be done with nested arrays. In this case, you don’t need to know the key since there is none. What you need to know is the position of what you’re looking for. You’ll need to provide a reference variable (or comma placeholder) for each element up and until the one you’re looking for (we’ll get to that later). The variable can be named anything since it is not attempting to match a value inside the array.

const smashingContributors = [
  ['rachel', ['writer', 'editor', 'reader']],
  ['laurie', ['writer', 'reader']]
]

const [[rachel, roles]] = smashingContributors
// rachel is 'rachel'
// roles is [ 'writer', 'editor', 'reader' ]

Keep in mind that these features should be used judiciously, as with any tool. Recognize your use case and the audience of your code base. Consider readability and ease of change down the road. For example, if you’re looking to access a subarray only, perhaps a map would be a better fit.

2. Object And Array Destructuring

Objects and arrays are common data structures. So common, in fact, that one often appears inside the other. Beyond nested destructuring, we can access nested properties even if they are in a different type of data structure than the external one we’re accessing.

Take this example of an array inside an object.

const organization = {
  users: ['rachel', 'laurie', 'eric', 'suzanne'],
  name: 'Smashing Magazine',
  site: 'https://www.smashingmagazine.com/'
}

const {users: [rachel]} = organization
// rachel is 'rachel'

The opposite use case is also valid. An array of objects.

const users = [
  {name: 'rachel', title: 'editor'},
  {name: 'laurie', title: 'contributor'}
]

const [{name}] = users
// name is 'rachel'

As it turns out, we have a bit of a problem in this example. We can only access the name of the first user; otherwise, we’ll attempt to use ‘name’ to reference two different strings, which is invalid. Our next destructuring scenario should sort this out.

3. Aliases

As we saw in the above example, when we have repeating keys inside different objects that we want to pull out, we can’t do so in the “typical” way. Variable names can’t repeat within the same scope (that’s the simplest way of explaining it; it’s obviously more complicated than that). Instead, we can use aliases to give each extracted value a variable name of our choosing.

const users = [
  { name: 'rachel', title: 'editor' },
  { name: 'laurie', title: 'contributor' }
]

const [{ name: rachel }, { name: laurie }] = users
// rachel is 'rachel' and laurie is 'laurie'

Aliasing is only applicable to objects. That’s because arrays can use any variable name the developer chooses, instead of having to match an existing object key.

4. Default Values

Destructuring often assumes that the value it’s referencing is there, but what if it isn’t? It’s never pleasant to litter code with undefined values. That’s when default values come in handy.

Let’s look at how they work for objects.

const user = { name: 'Luke', organization: 'Acme Publishing' }
const { name = 'Brian', role = 'publisher' } = user
// name is 'Luke'
// role is 'publisher'

If the referenced key already has a value, the default is ignored. If the key does not exist in the object, then the default is used.
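Defaults also compose with the aliases we saw in the previous section. Here’s a quick sketch (not from the examples above; the variable names are purely illustrative):

const user = { name: 'Luke' }
const { name: userName = 'Brian', role: userRole = 'publisher' } = user
// userName is 'Luke' (the key exists, so the default is ignored)
// userRole is 'publisher' (the key is missing, so the default kicks in)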

We can do something similar for arrays.

const roleCounts = [2]
const [editors = 1, contributors = 100] = roleCounts
// editors is 2
// contributors is 100

As with the objects example, if the value exists then the default is ignored. Looking at the above example you may notice that we’re destructuring more elements than exist in the array. What about destructuring fewer elements?

5. Ignoring Values

One of the best parts of destructuring is that it allows you to access values that are part of a larger data structure. This includes isolating those values and ignoring the rest of the content, if you so choose.

We actually saw an example of this earlier, but let’s isolate the concept we’re talking about.

const user = { name: 'Luke', organization: 'Acme Publishing' }
const { name } = user
// name is 'Luke'

In this example, we never destructure organization and that’s perfectly ok. It’s still available for reference inside the user object, like so.

user.organization

For arrays, there are actually two ways to “ignore” elements. In the objects example we’re specifically referencing internal values by using the associated key name. When arrays are destructured, the variable name is assigned by position. Let’s start with ignoring elements at the end of the array.

const roleCounts = [2, 100, 100000]
const [editors, contributors] = roleCounts
// editors is 2
// contributors is 100

We destructure the first and second elements in the array and the rest are irrelevant. But how about later elements? If it’s position based, don’t we have to destructure each element up until we hit the one we want?

As it turns out, we do not. Instead, we use commas to imply the existence of those elements, but without reference variables they’re ignored.

const roleCounts = [2, 100, 100000]
const [, contributors, readers] = roleCounts
// contributors is 100
// readers is 100000

And we can do both at the same time, skipping elements wherever we want by using the comma placeholder. And again, as with the object example, the “ignored” elements are still available for reference within the roleCounts array.
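For instance, here’s a sketch that extends roleCounts with a hypothetical fourth value and skips elements at both the start and the middle:

const roleCounts = [2, 100, 100000, 5]
const [, contributors, , admins] = roleCounts
// contributors is 100
// admins is 5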

Wonky Example

The power and versatility of destructuring also means you can do some truly bizarre things. Whether they’ll come in handy or not is hard to say, but worth knowing it’s an option!

One such example is that you can use destructuring to make shallow copies.

const obj = { key: 'value', arr: [1, 2, 3, 4] }
const { arr, arr: copy } = obj
// arr and copy are both [1, 2, 3, 4]

Another thing destructuring can be used for is dereferencing.

const obj = { node: { example: 'thing' } }
const { node, node: { example } } = obj
// node is { example: 'thing' }
// example is 'thing'

As always, readability is of the utmost importance, and all of these examples should be used judiciously. But knowing all of your options helps you pick the best one.

Conclusion

JavaScript is full of complex objects and arrays. Whether it’s the response from an HTTP request, or static data sets, being able to access the embedded content efficiently is important. Using destructuring assignment is a great way to do that. It not only handles multiple levels of nesting, but it allows for focused access and provides defaults in the case of undefined references.

Even if you’ve used destructuring for years, there are so many details hidden in the spec. I hope that this article acted as a reminder of the tools the language gives you. Next time you’re writing code, maybe one of them will come in handy!

(dm, yk, il)
Categories: Design

Moving Your JavaScript Development To Bash On Windows

Wed, 09/11/2019 - 03:30
Moving Your JavaScript Development To Bash On Windows Moving Your JavaScript Development To Bash On Windows Burke Holland 2019-09-11T12:30:59+02:00 2019-09-11T15:06:07+00:00

I’m one of those people who can’t live without their Bash terminal. This sole fact has made it difficult for me to do frontend work on Windows. I work at Microsoft and I’m on a Mac. It wasn’t until the new Surface hardware line came out a few years ago that I realized: I gotta have one of those.

So I got one. A Surface Book 2 running Windows 10 to be exact. I’m drafting this article on it right now. And what of my sweet, sweet Bash prompt? Well, I brought it along with me, of course.

In this article, I’m going to take an in-depth look at how new technology in Windows 10 enables you to run a full Linux terminal on Windows. I’ll also show you my amazing terminal setup (which was named “best ever” by “me”) and how you too can set up your very own Windows/Linux development machine.

If you’ve been craving some of that Surface hardware but can’t live without a Linux terminal, you’ve come to the right place.

Note: At the time of this writing, a lot of the items in this article will require you to use or switch to “preview” or “insiders” builds of various items, including Windows. Most of these things will be in the main Windows build at some point in the future.

Windows Subsystem For Linux (WSL)

The Windows Subsystem for Linux, or, “WSL” is what enables you to run Linux on Windows. But what exactly is this mad science?

The WSL, in its current incarnation, is a translation layer that converts Linux system calls into Windows system calls. Linux runs on top of the WSL. That means that in order to get Linux on Windows, you need to do three things:

  1. Enable the WSL,
  2. Install Linux,
  3. Always include three items in a list.

As it turns out, that translation layer is a tad on the slow side — kind of like me trying to remember if I need splice or slice. This is especially true when the WSL is reading and writing to the file system. That’s kind of a big problem for web developers since any proper npm install will copy thousands of files to your machine. I mean, I don’t know about you, but I’m not going to left-pad my own strings.

Version 2 of the WSL is a different story. It is considerably faster than the current version because it leverages a virtualization core in Windows instead of using the translation layer. When I say it’s “considerably faster”, I mean way, way faster. Like as fast as me Googling “splice vs slice”.

For that reason, I’m going to show how to install the WSL 2. At the time of writing, that is going to require you to be on the “Insider” build of Windows.

First things first: follow this short guide to enable the WSL on Windows 10 and check your Windows version number.

Once you have it installed, hit the Windows key and type “windows insider”. Then choose “Windows Insider Program Settings”.

(Large preview)

You’ll have a couple of different options as to which “ring” you want to be on. A lot of people I know are on the fast ring. I’m a cautious guy, though. When I was a kid I would go down the slide at the playground on my stomach holding on to the sides. Which is why I stay on the slow ring. I’ve been on it for several months now, and I find it to be no more disruptive or unstable than regular Windows.

It’s a good option if you want the WSL 2, but you don’t want to die on the slide.

(Large preview)

Next, you need to enable the “Virtual Machine Platform” feature in Windows, which is required by the WSL version 2. To get to this screen, press the Windows key and type “windows features”. Then select “Turn Windows Features on or off”. Select “Virtual Machine Platform”. The “Windows Subsystem for Linux” option should already be enabled.

(Large preview)

Now that the WSL is enabled, you can install Linux. You do this, ironically enough, directly from the Windows Store. Only in 2019 would I suggest that you “install Linux from the Windows store”.

There are several different distributions to choose from, but Ubuntu is going to be the most supported across all the tools we’ll configure later on — including VS Code. All of the instructions from here on out assume an Ubuntu install. If you install a different distro, all bets are off.

Search for “Ubuntu” from the Windows Store. There will be three to choose from: Ubuntu, Ubuntu 18.04, and Ubuntu 16.04. Ubuntu really likes that 04 minor version number, don’t they?

(Large preview)

The “Ubuntu” distro (the first one in this screenshot) is the “meta version”, or rather a placeholder that just points to the latest version. As of right now, that’s 18.04.

I went with the meta version because later on I’ll show you how to browse the Linux file system with Windows Explorer and it’s kinda messy to have “Ubuntu 18.04” as a drive name vs just “Ubuntu”.

This install is pretty quick, depending on your internet connection. It’s only about 215 megabytes, but I am on a gigabit connection over here. And how do you know if someone is on a gigabit connection? Don’t worry, they’ll tell you.

Once installed, you’ll now have an “Ubuntu” app in your start menu.

(Large preview)

If you click on that, you’ll get a Bash terminal!

(Large preview)

Take a moment to bask in the miracle of technology.

By default, you’ll be running in the WSL version 1. To upgrade to version 2, you’ll need to open a PowerShell terminal and run a command.

Hit the “Windows” key and type “Powershell”.

(Large preview)

From the PowerShell terminal, you can see which version of the WSL you have by executing wsl --list --verbose.

(Large preview)

If you’re showing version 1, you’ll need to execute the --set-version command and specify the name of the instance (Ubuntu) and the version you want (2).

wsl --set-version Ubuntu 2

(Large preview)

This is going to take a bit, depending on how much meat your machine has. Mine took “some minutes” give or take. When it’s done, you’ll be on the latest and greatest version of the WSL.

This Is Your Brain On Linux… On Windows

Linux is not Windows. WSL is not a bash prompt on top of a Windows operating system. It is a full operating system unto itself with its own folder structure and installed applications. If you install Node with the Windows installer, typing node in Linux is going to fail because Node is not installed in Linux. It’s installed on Windows.

The true magic of the WSL, though, lies in the way it seamlessly connects Windows and Linux so that they appear as one file system on your machine.

File And Folder Navigation

By default, the Ubuntu terminal drops you into your Linux home directory (or /home/your-user-name). You can move onto the Windows side by going to /mnt/c.

(Large preview)

Notice that some permissions are denied here. I would have to right-click the Ubuntu icon and click “Run as Administrator” to get access to these files. This is how Windows does elevated permissions. There is no sudo on Windows.

Launching Applications

You can launch any Windows application from the Ubuntu terminal. For instance, I can open Windows Explorer from the Ubuntu terminal.

(Large preview)

This also works in reverse. You can execute any application installed on the Linux side. Here I am executing “fortune” installed in Linux from the Windows command line. (Because it ain’t a proper Linux install without random, meaningless fortunes.)

(Large preview)

Two different operating systems. Two different file systems. Two different sets of installed applications. See how this could get confusing?

In order to keep everything straight, I recommend that you keep all your JavaScript development files and tools installed on the Linux side of things. That said, the ability to move between Windows and Linux and access files from both systems is the core magic of the WSL. Don’t forget it, because it’s what makes this whole setup better than just a standard Linux box.

Setting Up Your Development Environment

From here on out, I’m going to give you a list of opinionated items for what I think makes a killer Linux on Windows setup. Just remember: my opinions are just that. Opinions. It just happens that, like all my opinions, they are 100% correct.

Getting A Better Terminal

Yes, you got a terminal when you installed Ubuntu. It’s actually the Windows Console connected to your Linux distro. It’s not a bad console. You can resize it and turn on copy/paste (in settings). But you can’t do things like use tabs or open new windows. Just like a lot of people use replacement terminal programs on Mac (I use Hyper), there are other options for Windows as well. The Awesome WSL list on Github contains a pretty exhaustive list.

Those are all fine emulators, but there is a new option that is built by people who know Windows pretty well.

Microsoft has been working on a new application called “Windows Terminal”.

(Large preview)

Windows Terminal can be installed from the Windows Store and is currently in preview mode. I’ve been using it for quite a while now, and it has enough features and is stable enough for me to give it a full-throated endorsement.

The new Windows Terminal features a full tab interface, copy/paste, multiple profiles, transparent backgrounds, background images — even transparent background images. It’s a field day if you like to customize your terminal, and I came to win this sack race.

Here is my current terminal. We’ll take a walk through some of the important tweaks here.

(Large preview)

Windows Terminal is quite customizable. Clicking the “˅” (down arrow) at the top left (next to the “+” sign) gives you access to “Settings”. This will open a JSON file.

Bind Copy/Paste

At the top of the file are all of the key bindings. The first thing that I did was map “copy” to Ctrl + C and paste to Ctrl + V. How else am I going to copy and paste in commands from Stack Overflow that I don’t understand?

{ "command": "copy", "keys": ["ctrl+c"] }, { "command": "paste", "keys": ["ctrl+v"] },

The problem is that Ctrl + C is already mapped to SIGINT, the interrupt/kill command on Linux. There are a lot of terminals out there for Windows that handle this by mapping Copy/Paste to Ctrl + Shift + C and Ctrl + Shift + V respectively. The problem is that copy/paste is Ctrl + C / Ctrl + V in every single other place in Windows. I just kept pressing Ctrl + C in the terminal over and over again trying to copy things. I could not stop doing it.

The Windows Terminal handles this differently. If you have text highlighted and you press Ctrl + C, it will copy the text. If there is a running process, it still sends the SIGINT command down and interrupts it. This means that you can safely map Ctrl + C / Ctrl + V to Copy/Paste in the Windows Terminal and it won’t interfere with your ability to interrupt processes.

Whoever thought Copy/Paste could cause so much heartache?

Change The Default Profile

The default profile is what comes up when a new tab is opened. By default, that’s Powershell. You’ll want to scroll down and find the Linux profile. This is the one that opens wsl.exe -d Ubuntu. Copy its GUID and paste it into the defaultProfile setting.
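For reference, here’s a rough sketch of the relevant bits of the settings file (the GUID is a placeholder; copy the real one from your own Ubuntu profile, and note that the exact shape of this file has changed between preview releases):

// the profile whose GUID is listed here opens in every new tab
"defaultProfile": "{00000000-0000-0000-0000-000000000000}",

"profiles": [
    {
        "guid": "{00000000-0000-0000-0000-000000000000}",
        "name": "Ubuntu",
        "commandline": "wsl.exe -d Ubuntu"
    }
]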

I’ve moved these two settings so they are right next to each other to make it easier to see:

(Large preview)

Set The Background

I like my background to be a dark solid color with a flat-ish logo in the right-hand corner. I do this because I want the logo to be bright and visible, but not in the way of the text. This one I made myself, but there is a great collection of flat images to pick from at Simple Desktops.

The background is set with the backgroundImage property:

"backgroundImage": "c:/Users/YourUserName/Pictures/earth.png" (Large preview)

You’ll also notice a setting called “acrylic”. This is what enables you to adjust the opacity of the background. If you have a solid background color, this is pretty straightforward.

"background": "#336699", "useAcrylic": true, "acrylicOpacity": 0.5 (Large preview)

You can pull this off with a background image as well, by combining the acrylicOpacity setting with the backgroundImageOpacity:

"backgroundImage": "c:/Users/username/Pictures/earth-and-stars.png", "useAcrylic": true, "acrylicOpacity": 0.5 (Large preview)

For my theme, transparency makes everything look muted, so I keep the useAcrylic set to false.

Change The Font

The team building the Windows Terminal is also working on a new font called “Cascadia Code”. It’s not available as of the time of this writing, so you get the default Windows font instead.

The default font in the Windows Terminal is “Consolas”. This is the same font that the Windows command line uses. If you want that true Ubuntu feel, Chris Hoffman points out how you can install the official Ubuntu Mono font.

Here’s a before and after so you can see the difference:

"fontFace": "Ubuntu Mono" (Large preview)

They look pretty similar; the main difference being in the spacing of Ubuntu Mono which makes the terminal just a bit tighter and cleaner.

Color Schemes

The color schemes are all located at the bottom of the settings file. I copied the “Campbell” color scheme as a baseline. I try to match colors with their names, but I’m not afraid to go rogue either. I’ll map “#ffffff” to “blue” — I don’t even care.

(Large preview)

If you like this particular scheme which I’ve named “Earth”, I’ve put together this gist so you don’t have to manually copy all of this mess out of a screenshot.

Note: The color previews come by virtue of the “Color Highlight” extension for VS Code.

Change The Default Starting Directory

By default, the WSL profile drops you into your home directory on the Windows side. Based on the setup that I am recommending in this article, it would be preferable to be dropped into your Linux home folder instead. To do that, alter the startingDirectory setting in your “Ubuntu” profile:

"startingDirectory": "\\\\wsl$\\Ubuntu\\home\\burkeholland"

Note the path there. You can use this path (minus the extra escape slashes) to access the WSL from the Windows command line.

(Large preview)

Install Zsh/Oh-My-Zsh

If you’ve never used Zsh and Oh-My-Zsh before, you’re in for a real treat. Zsh (or “Z Shell”) is a replacement shell for Linux. It expands on the basic capabilities of Bash with things like implied directory switching (no need to type cd), better theming support, better prompts, and much more.

To install Zsh, grab it with the apt package manager, which comes out of the box with your Linux install:

sudo apt install zsh

Install oh-my-zsh using curl. Oh-my-zsh is a set of configurations for zsh that improve the shell experience even further with plugins, themes and a myriad of keyboard shortcuts.

sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"

Then it will ask you if you want to change your default shell to Zsh. You do, so answer in the affirmative and you are now up and running with Zsh and Oh-My-Zsh.

(Large preview)

You’ll notice that the prompt is a lot cleaner now. You can change the look of that prompt by changing the theme in the ~/.zshrc file.

Open it with nano, which is kind of like VIM, but you can edit things and exit when you need to.

nano ~/.zshrc

Change the line that sets the theme. There is a URL above it with an entire list of themes. I think the “cloud” one is nice. And cute.
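The change itself is a one-liner. For example, to use the “cloud” theme, the relevant line in ~/.zshrc would look like this:

ZSH_THEME="cloud"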

(Large preview)

To get changes to the .zshrc picked up, you’ll need to source it:

source ~/.zshrc

(Large preview)

Note: If you pick a theme like “agnoster” which requires glyphs, you’ll need a powerline infused version of Ubuntu Mono that has… glyphs. Otherwise, your terminal will just be full of weird characters like you mashed your face on the keyboard. Nerd Fonts offers one that seems to work pretty well.

Now you can do things like changing directories just by entering the directory name. No cd required. Want to go back up a directory? Just type “..”. You don’t even have to type the whole directory name; just type the first few letters and hit tab. Zsh will give you a list of all of the files/directories that match your search and you can tab through them.
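For example, all of these work once Oh-My-Zsh has switched on auto_cd (the directory name here is hypothetical):

# the long way
cd dev

# the zsh way: just name the directory
dev

# go back up a directory
..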

(Large preview)

Installing Node

As a web developer, you’re probably going to want to install Node. I suppose you don’t have to install Node to do web development, but it sure feels like it in 2019!

Your first instinct might be to install node with apt, which you can do, but you would regret it for two reasons:

  1. The version of Node on apt is dolorously out of date;
  2. You should install Node with a version manager so that you don’t run into permissions issues.

The best way to solve both of these issues is to install nvm (Node Version Manager). Since you’ve installed zsh, you can just add the nvm plugin in your zshrc file and zsh takes care of the rest.

First, install the plugin by cloning in the zsh-nvm repo. (Don’t worry, Git comes standard on your Ubuntu install.)

git clone https://github.com/lukechilds/zsh-nvm ~/.oh-my-zsh/custom/plugins/zsh-nvm

Then add it as a plugin in the ~/.zshrc file.

nano ~/.zshrc

# in the ~/.zshrc file
plugins=(zsh-nvm git)

(Large preview)

Remember to source the zshrc file again with source ~/.zshrc and you’ll see nvm being installed.

(Large preview)

Now you can install node with nvm. It makes it easy to install multiple side-by-side versions of node, and switch between them effortlessly. Also, no permissions errors when you do global npm installs!

nvm install --lts

I recommend this over the standard nvm install because the plugin gives you the ability to easily upgrade nvm. This is kind of a pain with the standard “curl” install. It’s one command with the plugin.

nvm upgrade

Utilize Auto Suggestions

One of my very favorite plugins for zsh is zsh-autosuggestions. It remembers things you have typed in the terminal before, and then recognizes them when you start to type them again as well as “auto-suggests” the line you might need. This plugin has come in handy more times than I can remember — specifically when it comes to long CLI commands that I have used in the past, but can’t ever remember.

Clone the repo into the zsh extensions folder:

git clone https://github.com/zsh-users/zsh-autosuggestions ~/.oh-my-zsh/custom/plugins/zsh-autosuggestions

Then add it to your zsh plugins and source the zshrc file:

nano ~/.zshrc

# in the .zshrc file
plugins=(zsh-nvm zsh-autosuggestions git)

source ~/.zshrc

The plugin reads your zsh history, so start typing some command you’ve typed before and watch the magic. Try typing the first part of that long clone command above.

(Large preview)

If you hit ↹, it will autocomplete the command. If you keep hitting ↹, it will cycle through any of the commands in your history that could be a match.

Important Keyboard Shortcuts

There are a few terminal shortcuts that I use all the time. I find this with all of my tools — including VS Code. Trying to learn all the shortcuts is a waste of time because you won’t use them enough to remember them.

Here are a few that I use regularly:

  • Ctrl + L: Clears the terminal and puts you back to the top. It’s the equivalent of typing “clear”.
  • Ctrl + U: Clears out the current line only.
  • Ctrl + A: Sends the cursor to the beginning of the command line.
  • Ctrl + E: Moves the cursor to the end of the line.
  • Ctrl + K: Deletes all the characters after the cursor.

That’s it! Everything else I’ve probably learned and then forgotten because it never gets any use.

Configuring Git(Hub/Lab/Whatevs)

Git comes on Ubuntu, so there is no install required. You can follow the instructions at your source control hoster of choice to get your ssh keys created and working.

Note that in the Github instructions, it tells you to use a “copy” utility to copy your ssh key. Ubuntu has the “xclip” command, but it’s not going to work here because there is no interop between Linux and Windows in terms of a clipboard.

Instead, you can just use the Windows Clipboard executable and call it directly from the terminal. You need to get the text first with cat, and then pipe that to the Windows clipboard.

cat ~/.ssh/id_rsa.pub | clip.exe

The Github docs tell you to make sure that the ssh-agent is running. It’s not. You’ll see this when you try and add your key to the agent:

(Large preview)

You can start the agent, but the next time you reboot Windows or the WSL is stopped, you’ll have to start it again. This is because there is no initialization system in the WSL. There is no systemd or another process that starts all of your services when the WSL starts. WSL is still in preview, and the team is working on a solution for this.

In the meantime, believe it or not, there’s a zsh plugin for this, too. It’s called ssh-agent, and it comes installed with oh-my-zsh, so all you need to do is reference it in the .zshrc file.

plugins=(zsh-nvm zsh-autosuggestions ssh-agent git)

This will start the ssh-agent automatically if it’s not running the first time that you fire up the WSL. The downside is that it’s going to ask you for your passphrase every time WSL is started fresh. That means essentially anytime you reboot your computer.

(Large preview)

VS Code And The WSL

The WSL has no GUI, so you can’t install a visual tool like VS Code. That needs to be installed on the Windows side. This presents a problem because you have a program running on the Windows side accessing files on the Linux side, and this can result in all manner of quirks and “permission denied” issues. As a general rule of thumb, Microsoft recommends that you not alter files on the WSL side with Windows programs.

To resolve this, there is an extension for VS Code called “Remote WSL”. This extension is made by Microsoft, and allows you to develop within the WSL, but from inside of VS Code.

Once the extension is installed, you can attach VS Code directly to the Ubuntu side by opening the Command Palette (Ctrl + Shift + P) and select “Remote-WSL: New Window”.

(Large preview)

This opens a new instance of VS Code that allows you to work as if you were fully on the Linux side of things. Doing “File/Open” browses the Ubuntu file system instead of the Windows one.

(Large preview)

The integrated terminal in VS Code opens your beautifully customized zsh setup. Everything “just works” like it should when you have the Remote WSL extension installed.

If you open code from your terminal with code ., VS Code will automatically detect that it was opened from the WSL, and will auto-attach the Remote WSL extension.

VS Code Extensions With Remote WSL

The Remote WSL extension for VS Code works by setting up a little server on the Linux side, and then connecting to that from VS Code on the Windows side. That being the case, the extensions that you have installed in VS Code won’t automatically show up when you open a project from the WSL.

For instance, I have a Vue project open in VS Code. Even though I have all of the right Vue extensions installed for syntax highlighting, formatting and the like, VS Code acts like it’s never seen a .vue file before.

(Large preview)

All of the extensions that you have installed can be enabled in the WSL. Just find the extension that you want in the WSL, and click the “Install in WSL” button.

(Large preview)

All of the extensions installed in the WSL will show up in their own section in the Extensions Explorer view. If you have a lot of extensions, it could be slightly annoying to install each one individually. If you want to just install every extension you’ve got in the WSL, click the little cloud-download icon at the top of the ‘Local - Installed’ section.

(Large preview)

How To Set Up Your Dev Directories

This is already an opinionated article, so here’s one you didn’t ask for on how I think you should structure your projects on your file system.

I keep all my projects on the Linux side. I don’t put my projects in “My Documents” and then try and work with them from the WSL. My brain can’t handle that.

I create a folder called /dev that I put in the root of my /home folder in Linux. Inside that folder, I create another one that is the same name as my Github repo: /burkeholland. That folder is where all of my projects go — even the ones that aren’t pushed to Github.

If I clone a repo from a different Github account (e.g. “microsoft”), I’ll create a new folder in “dev” called /microsoft. I then clone the repo into a folder inside of that.

Basically, I’m mimicking the same structure as source control on my local machine. I find it far easier to reason about where projects are and what repos they are attached to just by virtue of their location. It’s simple, but it is highly effective at helping me keep everything organized. And I need all the help I can get.
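In other words, the tree looks something like this (the project names are made up for illustration):

/home/burkeholland
└── dev
    ├── burkeholland
    │   ├── my-blog
    │   └── side-project
    └── microsoft
        └── some-cloned-repo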

(Large preview)

Browsing Files From Windows Explorer

There are times when you need to get at a file in Linux from the Windows side. The beautiful thing about the WSL is that you can still do that.

One way is to access the WSL just like a mapped drive. Access it with a \\wsl$ directly from the explorer bar:

\\wsl$

(Large preview)

You might do this for a number of different reasons. For instance, just today I needed a Chrome extension that isn’t in the web store. So I cloned the repo in WSL, then navigated to it as an “Unpacked Extension” and loaded it into Edge.

One thing that I do with some frequency in Linux is to open the directory that contains a file directly from the terminal. You can do this in the WSL, too, by directly calling explorer.exe. For instance, this command opens the current directory in Windows Explorer.

$ explorer.exe .

This command is a bit cumbersome, though. On a Mac, it’s just open .. We can make that same magic by creating an alias in the ~/.zshrc.

alias open="explorer.exe" Docker

When I said all tooling should be on the Linux side, I meant that. That includes Docker.

This is where the rubber really starts to meet the road. What we need here is Docker, running inside of Linux running inside of Windows. It’s a bit of a Russian Nesting Doll when you write it down in a blog post. In reality, it’s pretty straightforward.

You’ll need the correct version of Docker for Windows. As of the time of this writing, that’s the WSL 2 Tech Preview.

When you run the installer, it will ask you if you want to use Windows containers instead of Linux containers. You definitely do. Otherwise, you won’t get the option to run Docker in the WSL.

(Large preview)

You can now enable Docker in the WSL by clicking on the item in the system tray and selecting “WSL 2 Tech Preview”:

(Large preview)

After you start the service, you can use Docker within the WSL just as you would expect to be able to. Running Docker in the WSL provides a pretty big performance boost, as well as a boost in cold start time on containers.

Might I also recommend that you install the Docker extension for VS Code? It puts a visual interface on your Docker setup and generally just makes it easier to work with Docker because you don’t have to remember all those command-line flags and options.

Get More Bash On Windows

At this point, you should get the idea about how to put Bash on Windows, and how it works once you get it there. You can customize your terminal endlessly and there are all sorts of rad programs that you can add in to do things like automatically set PATH variables, create aliases, get an ASCII cow in your terminal, and much more.

Running Bash on Windows opened up an entirely new universe for me. I’m able to combine Windows which I love for the productivity side, and Linux which I depend on as a developer. Best of all, I can build apps for both platforms now with one machine.

Further Reading

You can read more about Bash on Windows over here:

Special thanks to Brian Ketelsen, Matt Hernandez, Rich Turner, and Craig Loewen for their patience, help, and guidance with this article.

(rb, dm, il)
Categories: Design

Webflow: The Web Development Platform Of The Future

Tue, 09/10/2019 - 03:30
Webflow: The Web Development Platform Of The Future Webflow: The Web Development Platform Of The Future Nick Babich 2019-09-10T12:30:59+02:00 2019-09-10T11:35:42+00:00

(This is a sponsored article.) Time-to-market plays a crucial role in modern web design. Most product teams want to minimize the time required to go from the idea to a ready-to-use product without sacrificing the quality of the design along the way.

When it comes to creating a website, teams often use a few different tools: one tool for graphics and visual design, another for prototyping, and another for coding. Webflow attempts to simplify the process of web design by enabling you to design and develop at the same time.

Typical Problems That Web Designers Face

It’s important to start with understanding what challenges web design teams face when they create websites:

  • A disconnection between visual design and coding.
    Visual designers create mocks/prototypes in a visual tool (like Sketch) and hand them off to developers who need to code them. This creates an extra round of back-and-forth, since developers have to go through an additional iteration of coding.
  • It’s hard to code complex interactions (especially animated transitions).
    Designers can introduce beautiful effects in hi-fi prototypes, but developers will have a hard time reproducing the same layout or effect in code.
  • Optimizing designs for various screens.
    Your designs should be responsive right from the start.
What Is Webflow?

Webflow is an in-browser design tool that gives you the power to design, build, and launch responsive websites visually. It’s basically an all-in-one design platform that you can use to go from the initial idea to ready-to-use product.

Here are a few things that make Webflow different:

  • The visual design and code are not separated.
    What you create in the visual editor is powered by HTML, CSS, and JavaScript.
  • It allows you to reuse CSS classes.
    Once defined, you can use a class for any elements that should have the same styling or use it as a starting point for a variation (base class).
  • It is a platform and as such, it offers hosting plans.
    For $12 per month, it allows you to connect a custom domain and host your HTML site. And for an additional $4 per month, you can use the Webflow CMS.
Building A One-Page Website Using Webflow

The best way to understand what the tool is capable of is to build a real product with it. For this review, I will use Webflow to create a simple landing page for a fictional smart speaker device.

Define The Structure Of The Future Page

While it’s possible to use Webflow to create a structure of your layout, it’s better to use another tool for that. Why? Because you need to experiment and try various approaches before finding the one that you think is the best. It’s better to use a sheet of paper or any prototyping tool to define the bones of your page.

It’s also crucial to have a clear understanding of what you’re trying to achieve. Find an example of what you want and sketch it on paper or in your favorite design tool.

Tip: You don’t need to create a high-fidelity design all of the time. In many cases, it’s possible to use lo-fi wireframes. The idea is to use a sketch/prototype as a reference when you work on your website.

(Large preview)

For our website, we will need the following structure:

  • A hero section with a large product image, copy, and a call-to-action button.
  • A section with the benefits of using our product. We will use a zig-zag layout (this layout pairs images with text sections).
  • A section with quick voice commands which will provide a better sense of how to interact with a device.
  • A section with contact information. To make contact inquiries easier for visitors, we’ll provide a contact form instead of a regular email address.
Create A New Project In Webflow

When you open the Webflow dashboard for the first time, you immediately notice a funny illustration with a short but helpful line of text. It is an excellent example of an empty state that is used to guide users and create the right mood from the start. It’s hard to resist the temptation to click “New Project.”

(Large preview)

When you click “New Project,” Webflow will offer you a few options to start with: a blank site, three common presets, and an impressive list of ready-to-use templates. Some of the templates that you find on this page are integrated with the CMS which means that you can create CMS-based content in Webflow.

(Large preview)

Templates are great when you want to get up and running very quickly, but since our goal is to learn how to create the design ourselves, we will choose “Blank Site.”

As soon as you create a new project, you will see Webflow’s front-end design interface. Webflow provides a series of quick how-to videos, which are handy for anyone using Webflow for the first time.

(Large preview)

Once you’ve finished going through the introduction videos, you will see a blank canvas with menus on both sides of the canvas. The left panel contains elements that will help you define your layout’s structure and add functional elements. The right panel contains styling settings for the elements.

(Large preview)

Let’s define the structure of our page first. The top left button with a plus (+) sign is used to add elements or symbols to the canvas. All we have to do to introduce an element/visual block is to drag the proper item to the canvas.

(Large preview)

While elements should be familiar for anyone who builds websites, Symbols can still be a new concept for many people. Symbols are analogous to features of other popular design tools, like the components in Figma and XD. Symbols turn any element (including its children) into a reusable component. Anytime you change one instance of a Symbol, the other instances will update too. Symbols are great if you have something like a navigation menu that you want to reuse constantly through the site.

Webflow provides a few elements that allow us to define the structure of the layout:

  • Sections. Sections divide up distinct parts of your page. When we design a page, we usually tend to think in terms of sections. For instance, you can use Sections for a hero area, for a body area, and a footer area.
  • Grid, columns, div block, and containers are used to divide the areas within Sections.
  • Components. Some elements (e.g. navigation bar) are provided in ready-to-use components.

Let’s add a top menu using the premade component Navbar which contains three navigation options and placeholders for the site’s logo:

(Large preview)

Let’s create a Symbol for our navigation menu so we can reuse it. We can do that by going to “Symbols” and clicking “Create New Symbol.” We will give it the name “Navigation.”

Notice that the section color turned to green. We also see how many times it’s used in a project (1 instance). Now when we need a menu on a newly created page, we can go to the Symbols panel and select a ready-to-use “Navigation.” If we decide to introduce a change to the Symbol (i.e., rename a menu option), all instances will have this change automatically.

(Large preview)

Next, we need to define the structure of our hero section. Let’s use Grid for that. Webflow has a very powerful Grid editor that simplifies the process of creating the right grid — you can customize the number of columns and rows, as well as a gap between every cell. Webflow also supports nested grid structure, i.e. one grid inside the other. We will use a nested grid for a hero section: a parent grid will define the image, while the child grid will be used for the Heading, text paragraph, and call-to-action button.

(Large preview)

Now let’s place the elements in the cells. We need to use Heading, Paragraph, Button, and Image elements. By default, the elements will automatically fill out the available cells as you drag and drop them into the grid.

(Large preview)

While it’s possible to customize the styling for text and images and add real content instead of dummy placeholders, we will skip this step and move to the other parts of the layout: the zig-zag layout.

For this layout, we will use a 2×3 grid (2 columns × 3 rows) in which every cell that contains text will be divided into 3 rows. We can easily create the first cell with a 3-row grid, but when it comes to using the same structure for the third cell of the master grid, we have a problem. Since Webflow automatically fills the empty cells with a new element, it will try to apply the 3-row child grid to the third element. To change this behavior, we need to use Manual. After setting the grid selection to Manual, we will be able to create the correct layout.

(Large preview)

Similar to the hero section, we will add the dummy context to the grid sections. We will change the data after we finish with the visual layout.

(Large preview)

Now we need to define a section with voice commands. To save space, we will use a carousel. Webflow has a special element for that purpose: Slider.

(Large preview)

Once we have all the required elements in place, we can create a vertical rhythm by adjusting the position of every item that we use. First, we need to adjust the spacing of elements in grids. Change the margins and paddings, and set Align self for the image in order to place it in the center of the cell.

(Large preview)

Now it’s time to replace the dummy content with real content. To start adding images, we’ll need to click on the gear icon for the Image element and select the image of our choice.

(Large preview)

Notice that Webflow stores all images in a special area called Assets. Any media we add, whether it’s a video or image, goes straight to that area.

(Large preview)

After we introduce an image to the layout, we need to modify the Heading and Text sections.

(Large preview)

Webflow provides a visual style for every element we use in our design. Let’s take a Heading section as an example: It’s possible to play with font color, font, weight, spacing, shadows, and other visual properties of this object. Here is what we will have when adding real copy and playing with font color.

(Large preview)

Once we have a nice and clean hero section, we can add content to our zig-zag layout.

Notice that every time we style something, we give it a Selector (a class), so Webflow will know that the style should be applied specifically for this element. We can use the same class to style other elements. In our case, we need the same style for images, headings, descriptions, and links that we have in the zig-zag layout.

Applying the same “benefit” style for all images in the zig-zag section. (Large preview)

Webflow also allows creating combo classes — when one class is used as a base class, and another class is used to override the styling options of the base class. In the example below, we override the default font color of the Heading using the class “Zig-Heading-Second.” Combo classes can save you a lot of time because you won’t need to create a style from scratch.

Using a combo class for the Heading. The orange indicator is used to highlight the properties that were inherited from the base class. (Large preview)

Here is how our layout will look after the changes:

(Large preview)

Webflow provides a very helpful feature for aligning content named “guide overlay” which can be located in the left menu panel. When you enable the guide, you will see the elements that are breaking the grid.

(Large preview)

After finishing with a zig-zag layout, we need to add information on voice commands in the Slider. Add a Heading section in a relevant slide and change the visual styling options of this object.

(Large preview)

It’s that simple!

Last but not least, we need to add a contact form to our website. Let’s add a section right underneath the Slider.

There are two ways we can add a form to the page. First, Webflow has a special element for web forms called Form Block. A form created using Form Block has three elements: Name, Email Address, and a Submit button. For our form, we will need a Message field. We can easily create one by duplicating the element Email Address and renaming it. By default, the Form Block has 100% width alignment, meaning it will take the entire width of the container. We will use the Grid settings to adjust the form width.

(Large preview)

Secondly, Webflow allows integrating custom code right in the page. This means we can create a form in a tool like Typeform, copy the embed code it provides, and paste it into the Embed component that we placed in the section. Note that embeds will only appear once the site has been published or exported, not while you’re designing the site.

(Large preview)

Once all elements are in place, we need to optimize our design for mobile. Almost half of all users (globally) access websites on mobile. What you can do within Webflow is resize the browser window so that you can see how your design looks at different breakpoints.

Let’s change our view to Mobile by clicking on the Mobile - Portrait icon.

(Large preview)

As you can see, the design looks bad on mobile. But it’s relatively easy to optimize the design using Webflow: It allows you to change the order of elements, the spacing between elements, as well as other visual settings to make the design look great on mobile.

(Large preview)

After we’re done making changes to our design, we have two options: we can export the design and use it on our own web hosting (i.e., integrate it into your existing CMS) or we can use Webflow’s own hosting provided. If we decide to use the second option, we need to click the Publish button and select the relevant publishing options, i.e. either publish it on the webflow.io domain or on a custom domain.

(Large preview)

If you decide to export the code, Webflow will prepare a full zip with HTML, CSS, and all the assets you’ve used to create your design. The exported code will help you build a solid foundation for your product.

Conclusion

Webflow is an excellent tool for building high-fidelity prototypes and inviting feedback from team members and stakeholders. People who will review your prototype won’t need to imagine how the finished product will behave and look — they can experience it instead!

The tool simplifies the transition from a prototype into a fully finished UI because you’re designing products with real code, as opposed to creating clickable mocks in Sketch or any other prototyping tool. You won’t waste time by using one piece of software to build prototypes and another to turn those prototypes into real products. Webflow solves this problem for you.

(ms, ra, il)
Categories: Design

Machine Learning For Front-End Developers With Tensorflow.js

Mon, 09/09/2019 - 05:00
Machine Learning For Front-End Developers With Tensorflow.js Machine Learning For Front-End Developers With Tensorflow.js Charlie Gerard 2019-09-09T14:00:59+02:00 2019-09-09T17:06:23+00:00

Machine learning often feels like it belongs to the realm of data scientists and Python developers. However, over the past couple of years, open-source frameworks have been created to make it more accessible in different programming languages, including JavaScript. In this article, we will use Tensorflow.js to explore the different possibilities of using machine learning in the browser through a few example projects.

What Is Machine Learning?

Before we start diving into some code, let’s talk briefly about what machine learning is as well as some core concepts and terminology.

Definition

A common definition is that it is the ability for computers to learn from data without being explicitly programmed.

If we compare it to traditional programming, it means that we let computers identify patterns in data and generate predictions without us having to tell them exactly what to look for.

Let’s take the example of fraud detection. There are no set criteria for what makes a transaction fraudulent; frauds can be executed in any country, on any account, targeting any customer, at any time, and so on. It would be pretty much impossible to track all of this manually.

However, using previous data around fraudulent expenses gathered over the years, we can train a machine-learning algorithm to understand patterns in this data to generate a model that can be given any new transaction and predict the probability of it being fraud or not, without telling it exactly what to look for.

Core Concepts

To understand the following code samples, we need to cover a few common terms first.

Model

When you train a machine-learning algorithm with a dataset, the model is the output of this training process. It’s a bit like a function that takes new data as input and produces a prediction as output.

Labels And Features

Labels and features relate to the data that you feed an algorithm in the training process.

A label represents how you would classify each entry in your dataset and how you would label it. For example, if our dataset was a CSV file describing different animals, our labels could be words like “cat”, “dog” or “snake” (depending on what each animal represents).

Features, on the other hand, are the characteristics of each entry in your data set. For our animals example, it could be things like “whiskers, meows”, “playful, barks”, “reptile, rampant”, and so on.

Using this, a machine-learning algorithm will be able to find some correlation between features and their label that it will use for future predictions.
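If it helps to picture it, here’s a toy sketch of what such a dataset could look like in JavaScript (the field names are purely illustrative):

// each entry pairs its features with the label we assigned to it
const trainingData = [
  { features: ['whiskers', 'meows'], label: 'cat' },
  { features: ['playful', 'barks'], label: 'dog' },
  { features: ['reptile', 'rampant'], label: 'snake' }
];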

Neural Networks

Neural networks are a set of machine-learning algorithms that try to mimic the way the brain works by using layers of artificial neurons.

We don’t need to go in-depth about how they work in this article, but if you want to learn more, here’s a really good video:

Now that we’ve defined a few terms commonly used in machine learning, let’s talk about what can be done using JavaScript and the Tensorflow.js framework.

Features

Three features are currently available:

  1. Using a pre-trained model,
  2. Transfer learning,
  3. Defining, running, and using your own model.

Let’s start with the simplest one.

1. Using A Pre-Trained Model

Depending on the problem you are trying to solve, there might be a model already trained with a specific data set and for a specific purpose which you can leverage and import in your code.

For example, let’s say we are building a website to predict if an image is a picture of a cat. A popular image classification model is called MobileNet and is available as a pre-trained model with Tensorflow.js.

The code for this would look something like this:

<html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Cat detection</title> <script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]"> </script> <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/[email protected]"> </script> </head> <body> <img id="image" alt="cat laying down" src="cat.jpeg"/> <script> const img = document.getElementById('image'); const predictImage = async () => { console.log("Model loading..."); const model = await mobilenet.load(); console.log("Model is loaded!") const predictions = await model.classify(img); console.log('Predictions: ', predictions); } predictImage(); </script> </body> </html>

We start by importing Tensorflow.js and the MobileNet model in the head of our HTML:

<script src="https://cdnjs.cloudflare.com/ajax/libs/tensorflow/1.0.1/tf.js"> </script> <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/[email protected]"> </script>

Then, inside the body, we have an image element that will be used for predictions:

<img id="image" alt="cat laying down" src="cat.jpeg"/>

And finally, inside the script tag, we have the JavaScript code that loads the pre-trained MobileNet model and classifies the image found in the image tag. It returns an array of 3 predictions which are ordered by probability score (the first element being the best prediction).

const predictImage = async () => {
  console.log("Model loading...");
  const model = await mobilenet.load();
  console.log("Model is loaded!");
  const predictions = await model.classify(img);
  console.log('Predictions: ', predictions);
};

predictImage();

And that’s it! This is the way you can use a pre-trained model in the browser with Tensorflow.js!

Note: If you want to have a look at what else the MobileNet model can classify, you can find a list of the different classes available on Github.

An important thing to know is that loading a pre-trained model in the browser can take some time (sometimes up to 10s), so you will probably want to preload the model or adapt your interface so that users are not impacted.
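One way to handle this is to kick off the load as soon as the page starts and hold on to the promise. A minimal sketch:

// start loading early and reuse the same promise for every prediction
const modelPromise = mobilenet.load();

const predictImage = async (img) => {
  // awaiting the promise is instant once the initial load has completed
  const model = await modelPromise;
  return model.classify(img);
};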

If you prefer using Tensorflow.js as an NPM module, you can do so by importing the module this way:

import * as mobilenet from '@tensorflow-models/mobilenet';

Feel free to play around with this example on CodeSandbox.

Now that we’ve seen how to use a pre-trained model, let’s look at the second feature available: transfer learning.

2. Transfer Learning

Transfer learning is the ability to combine a pre-trained model with custom training data. What this means is that you can leverage the functionality of a model and add your own samples without having to create everything from scratch.

For example, an algorithm has been trained with thousands of images to create an image classification model, and instead of creating your own, transfer learning allows you to combine new custom image samples with the pre-trained model to create a new image classifier. This feature makes it really fast and easy to have a more customized classifier.

To provide an example of what this would look like in code, let’s repurpose our previous example and modify it so we can classify new images.

Note: The end result is the experiment below that you can try live here.

(Live demo) (Large preview)

Below are a few code samples of the most important part of this setup, but if you need to have a look at the whole code, you can find it on this CodeSandbox.

We still need to start by importing Tensorflow.js and MobileNet, but this time we also need to add a KNN (k-nearest neighbor) classifier:

<!-- Load TensorFlow.js -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<!-- Load MobileNet -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet"></script>
<!-- Load KNN Classifier -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/knn-classifier"></script>

The reason we need a classifier is that, instead of only using the MobileNet module, we’re adding custom samples it has never seen before; the KNN classifier allows us to combine everything together and run predictions on the combined data.

Then, we can replace the image of the cat with a video tag to use images from the camera feed.

<video autoplay id="webcam" width="227" height="227"></video>

Finally, we’ll need to add a few buttons on the page that we will use as labels to record some video samples and start the predictions.

<section>
  <button class="button">Left</button>
  <button class="button">Right</button>
  <button class="test-predictions">Test</button>
</section>

Now, let’s move to the JavaScript file where we’re going to start by setting up a few important variables:

// Number of classes to classify
const NUM_CLASSES = 2;
// Labels for our classes
const classes = ["Left", "Right"];
// Webcam Image size. Must be 227.
const IMAGE_SIZE = 227;
// K value for KNN
const TOPK = 10;

const video = document.getElementById("webcam");

In this particular example, we want to be able to classify the webcam input between our head tilting to the left or the right, so we need two classes labeled left and right.

The image size is set to 227 because that is the size of the video element in pixels. Based on the Tensorflow.js examples, this value needs to be 227 to match the format of the data the MobileNet model was trained with: for the model to be able to classify our new data, that data needs to fit the same format.

If you really need it to be larger, it is possible but you will have to transform and resize the data before feeding it to the KNN classifier.
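As a rough sketch of that transformation, Tensorflow.js provides a resize operation you could run on each frame before handing it to the classifier (the variable names mirror the ones above):

// Downscale a larger frame to the 227x227 input MobileNet expects.
const raw = tf.browser.fromPixels(video);
const resized = tf.image.resizeBilinear(raw, [IMAGE_SIZE, IMAGE_SIZE]);
raw.dispose(); // free the intermediate tensor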

Then, we’re setting the value of K to 10. The K value in the KNN algorithm is important because it represents the number of instances that we take into account when determining the class of our new input.

In this case, the value of 10 means that, when predicting the label for some new data, we will look at the 10 nearest neighbors from the training data to determine how to classify our new input.

Finally, we’re getting the video element. For the logic, let’s start by loading the model and classifier:

async load() {
  const knn = knnClassifier.create();
  const mobilenetModule = await mobilenet.load();
  console.log("model loaded");
}

Then, let’s access the video feed:

navigator.mediaDevices
  .getUserMedia({ video: true, audio: false })
  .then(stream => {
    video.srcObject = stream;
    video.width = IMAGE_SIZE;
    video.height = IMAGE_SIZE;
  });

Following that, let’s set up some button events to record our sample data:

setupButtonEvents() {
  for (let i = 0; i < NUM_CLASSES; i++) {
    let button = document.getElementsByClassName("button")[i];
    button.onmousedown = () => {
      this.training = i;
      this.recordSamples = true;
    };
    button.onmouseup = () => (this.training = -1);
  }
}

Let’s write the function that will take the webcam image samples, reformat them, and combine them with the MobileNet module:

// Get image data from the video element
const image = tf.browser.fromPixels(video);

let logits;
// 'conv_preds' is the logits activation of MobileNet.
const infer = () => this.mobilenetModule.infer(image, "conv_preds");

// Train a class if one of the buttons is held down
if (this.training != -1) {
  logits = infer();
  // Add the current image to the classifier
  this.knn.addExample(logits, this.training);
}

Once we have gathered some webcam images, we can test our predictions with the following code:

logits = infer();
const res = await this.knn.predictClass(logits, TOPK);
const prediction = classes[res.classIndex];

Lastly, we can dispose of the webcam data once we no longer need it:

// Dispose of the image when done
image.dispose();
if (logits != null) {
  logits.dispose();
}

Once again, if you want to have a look at the full code, you can find it in the CodeSandbox mentioned earlier.

3. Training A Model In The Browser

The last feature is to define, train and run a model entirely in the browser. To illustrate this, we’re going to build the classic example of recognizing Irises.

For this, we’ll create a neural network that can classify Irises into three categories (Setosa, Virginica, and Versicolor), based on an open-source dataset.

Before we start, here’s a link to the live demo and here’s the CodeSandbox if you want to play around with the full code.

At the core of every machine learning project is a dataset. One of the first steps we need to undertake is to split this dataset into a training set and a test set.

The reason for this is that we’re going to use our training set to train our algorithm and our test set to check the accuracy of our predictions, to validate if our model is ready to be used or needs to be tweaked.
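If you ever need to do the split yourself, a minimal sketch (assuming a dataset array of iris objects, and an illustrative 90/10 ratio) could look like this:

// Fisher-Yates shuffle, then slice into training and test sets.
const shuffled = [...dataset];
for (let i = shuffled.length - 1; i > 0; i--) {
  const j = Math.floor(Math.random() * (i + 1));
  [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
}
const splitIndex = Math.floor(shuffled.length * 0.9); // 90/10 split
const trainingSet = shuffled.slice(0, splitIndex);
const testSet = shuffled.slice(splitIndex);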

Note: To make it easier, I already split the training set and test set into two JSON files you can find in the CodeSanbox.

The training set contains 130 items and the test set 14. If you have a look at what this data looks like, you’ll see something like this:

{ "sepal_length": 5.1, "sepal_width": 3.5, "petal_length": 1.4, "petal_width": 0.2, "species": "setosa" }

What we can see is four different features for the length and width of the sepal and petal, as well as a label for the species.

To be able to use this with Tensorflow.js, we need to shape the data into a format that the framework will understand. In this case, the training data will have the shape [130, 4]: 130 samples with four features per iris.

import * as trainingSet from "training.json";
import * as testSet from "testing.json";

const trainingData = tf.tensor2d(
  trainingSet.map(item => [
    item.sepal_length,
    item.sepal_width,
    item.petal_length,
    item.petal_width
  ]),
  [130, 4]
);

const testData = tf.tensor2d(
  testSet.map(item => [
    item.sepal_length,
    item.sepal_width,
    item.petal_length,
    item.petal_width
  ]),
  [14, 4]
);

Next, we need to shape our output data as well:

const outputData = tf.tensor2d(
  trainingSet.map(item => [
    item.species === "setosa" ? 1 : 0,
    item.species === "virginica" ? 1 : 0,
    item.species === "versicolor" ? 1 : 0
  ]),
  [130, 3]
);

Then, once our data is ready, we can move on to creating the model:

const model = tf.sequential();

model.add(tf.layers.dense({
  inputShape: [4], // four features per sample
  activation: "sigmoid",
  units: 10
}));

model.add(tf.layers.dense({
  units: 3, // one output unit per class
  activation: "softmax"
}));

In the code sample above, we start by instantiating a sequential model, then add an input layer and an output layer.

The parameters you can see used inside (inputShape, activation, and units) are out of the scope of this post as they can vary depending on the model you are creating, the type of data used, and so on.

Once our model is ready, we can train it with our data:

async function train_data() {
  for (let i = 0; i < 15; i++) {
    const res = await model.fit(trainingData, outputData, { epochs: 40 });
  }
}

async function main() {
  await train_data();
  model.predict(testData).print();
}

If this works well, you can start replacing the test data with custom user inputs.

Once we call our main function, the output of the prediction will look like one of these three options:

[1, 0, 0] // Setosa
[0, 1, 0] // Virginica
[0, 0, 1] // Versicolor

The prediction returns an array of three numbers representing the probability of the data belonging to each of the three classes. The number closest to 1 is the highest prediction.

For example, if the output of the classification is [0.0002, 0.9494, 0.0503], the second element of the array is the highest, so the model predicted that the new input is likely to be a Virginica.
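To turn that output into a label in code, one hedged approach is to read the scores for a single sample back into a plain array and take the index of the largest value (sampleTensor here is a hypothetical [1, 4] input tensor):

const classNames = ["setosa", "virginica", "versicolor"];
// Predict one sample and read the scores back synchronously.
const scores = model.predict(sampleTensor).dataSync(); // e.g. [0.0002, 0.9494, 0.0503]
const winner = classNames[scores.indexOf(Math.max(...scores))];
console.log(winner); // "virginica"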

And that’s it for a simple neural network in Tensorflow.js!

We only worked with a small dataset of Irises, but if you want to move on to larger datasets or to working with images, the steps will be the same:

  • Gathering the data;
  • Splitting between training and testing set;
  • Reformatting the data so Tensorflow.js can understand it;
  • Picking your algorithm;
  • Fitting the data;
  • Predicting.

If you want to save the model created to be able to load it in another application and predict new data, you can do so with the following line:

await model.save('file:///path/to/my-model'); // in Node.js
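In the browser, the model can also be saved to local storage or downloaded as files, and restored later with tf.loadLayersModel; a sketch with illustrative names:

// Save to the browser's local storage, or trigger a file download.
await model.save('localstorage://iris-model');
await model.save('downloads://iris-model');

// Restore it in another session or application.
const restored = await tf.loadLayersModel('localstorage://iris-model');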

Note: For more options on how to save a model, have a look at this resource.

Limits

That’s it! We’ve just covered the three main features currently available using Tensorflow.js!

Before we finish, I think it is important to briefly mention some of the limits of using machine learning in the frontend.

1. Performance

Importing a pre-trained model from an external source can have a performance impact on your application. Some object detection models, for example, are more than 10MB, which will significantly slow down your website. Make sure to think about your user experience and optimize the loading of your assets to improve your perceived performance.

2. Quality Of The Input Data

If you build a model from scratch, you’re going to have to gather your own data or find some open-source dataset.

Before doing any kind of data processing or trying different algorithms, make sure to check the quality of your input data. For example, if you are trying to build a sentiment analysis model to recognize emotions in pieces of text, make sure that the data you are using to train your model is accurate and diverse. If the quality of the data used is low, the output of your training will be useless.

3. Liability

Using an open-source pre-trained model can be very fast and effortless. However, it also means that you don’t always know how it was generated, what the dataset was made of, or even which algorithm was used. Some models are called “black boxes”, meaning that you don’t really know how they predicted a certain output.

Depending on what you are trying to build, this can be a problem. For example, if you are using a machine-learning model to help detect the probability of someone having cancer based on scan images, in case of false negative (the model predicted that a person didn’t have cancer when they actually did), there could be some real legal liability and you would have to be able to explain why the model made a certain prediction.

Summary

In conclusion, using JavaScript and frameworks like Tensorflow.js is a great way to get started and learn more about machine learning. Even though a production-ready application should probably be built in a language like Python, JavaScript makes it really accessible for developers to play around with the different features and get a better understanding of the fundamental concepts before eventually moving on and investing time into learning another language.

In this tutorial, we’ve only covered what was possible using Tensorflow.js, however, the ecosystem of other libraries and tools is growing. More specified frameworks are also available, allowing you to explore using machine learning with other domains such as music with Magenta.js, or predicting user navigation on a website using guess.js!

As tools get more performant, the possibilities of building machine learning-enabled applications in JavaScript are likely to be more and more exciting, and now is a good time to learn more about it, as the community is putting effort into making it accessible.

Further Resources

If you are interested in learning more, here are a few resources:

  • Other Frameworks And Tools;
  • Examples, Models And Datasets;
  • Inspiration.

Thanks for reading!

(rb, dm, yk, il)
Categories: Design

Smashing Magazine Is Thirteen!

Fri, 09/06/2019 - 03:00
Smashing Magazine Is Thirteen! Smashing Magazine Is Thirteen! Rachel Andrew 2019-09-06T12:00:59+02:00 2019-09-06T10:12:48+00:00

This week, Smashing Magazine turned thirteen years old. The web has changed a lot since Vitaly posted his first article back in 2006. The team at Smashing has changed too, as have the things that we bring to our community — with conferences, books, and our membership added to the online magazine.

One thing that hasn’t changed is that we’re a small team — with most of us not working full-time for Smashing. Many in the team, however, have been involved with the magazine since the early days. You may not know them by name, but you will have enjoyed and benefited from their work. I enjoyed finding out what everyone gets up to outside of Smashing, and also how they came to be involved — I hope that you will, too.

Vitaly Friedman is the person you probably think of when you think of Smashing Magazine, and rightfully so. He posted his first article to the site on September 2nd, 2006. When asked what he likes to do outside of Smashing, he says,

“I’m a big fan of hiking, running, ironing and fruits! For me, ironing shirts is really like meditation — and I also loooooove music festivals (which is something most people don’t know about me as I tend to be quite obscure about that).”

Vitaly has done pretty much everything at Smashing at one time or another — web developer, writer, editor, designer, creative lead and curator. These days, he helps to keep us all on track with his vision for all of the Smashing things, and always has some new ideas! Vitaly (originally from Belarus) travels as much as I do, our company standups usually involve us reporting our current and next location and timezone! As you’ll discover, however, while Smashing Magazine is a German company, the team live — or have roots — all over the world.

Iris Lješnjanin is our senior editor on the magazine, and does an amazing job maintaining communication between our many authors, editors, and reviewers. She also does the majority of the subediting work on the magazine, trying to maintain the individual voices of our authors while ensuring the articles are easy to read for our worldwide audience. She has been part of Smashing since 2010, helping to develop the brand, mentoring in-house interns, and developing the process for working with authors and editors that keeps our daily publishing schedule rolling!

Iris grew up in Abu Dhabi, UAE, after the Bosnian War broke out, and moved to Germany to pursue her degree in linguistics. As I was gathering information for this article, she explained:

“I grew up multilingual, so it’s difficult for me not to love languages. Everything from the differences in tones, melodies, rhythms and cultural undertones of various languages is what will never cease to amaze me. Since I currently live in Freiburg, German is obviously the predominant language in my daily life alongside my mother tongue (Bosnian), but I try my best to learn new ones by listening to music, reading books and newspapers, watching TV series, and so on. One thing I find funny and interesting about languages is that, at the end of the day, they’re out of our control. Just like you can’t control who you meet in life, you can’t control which languages you learn. You meet them, get to know them, and fall in love with them.”

Unless you write for Smashing, you may never encounter Iris, however, her work is a key part of everything we do — a true behind-the-scenes superstar!

Another person who does a lot of work behind-the-scenes is Cosima Mielke, who joined Smashing in 2012 for a six-month long internship and is still working with us. Cosima is our e-book producer and editor, but gets involved in far more than that. She is behind the posts in the newsletter, and the ever-popular monthly wallpapers post, and many other editorial tasks that crop up.

Cosima loves being outside in nature, riding her bike, and creating things. Her background is not web development, and she told me,

“At Smashing, I’ve gained an entirely new look at the web which I only knew from a user’s perspective before I started working here. What fascinates me most is the strong community sense in the web community, that people are so open to sharing their knowledge and the tools they build to make life easier for everyone else — without asking for anything in return.”

As we cover such a wide range of topics here at Smashing, no one person can be an expert at all of them. Therefore, Iris and I are assisted by our subject-matter editors, some of who have been with us for a very long time.

One such editor is Alma Hoffmann. Originally from Puerto Rico, she moved to the USA to study for her MFA in Graphic Design and now teaches at the University of Alabama. Like so many of our Smashing crew, Alma is bilingual, though I believe she is the only one of the team who can claim to have been a ballroom dancer!

Alma first became involved with Smashing Magazine when she wrote an article in 2010. We perhaps didn’t have the editorial process then that we do now as she got a surprise when her first draft showed up live! She remembers,

“I emailed Vitaly thanking him and since then we have been in touch. He tested the waters by sending me articles to review and in 2013, he and Iris asked me to be the design editor. I wear that title like a badge of honor. Later on, in 2017, I was invited to be a speaker at the conference in Freiburg. I had a blast and met so many interesting people!”

Another of our editors is Michel Bozgounov. Like Alma, he originally became involved with SmashingMag by writing an article. After writing a second article in 2010, Vitaly asked him if he would like to edit a section of the magazine dedicated to Adobe Fireworks. Michel wrote an article when Adobe ultimately ended work on the product; however, he now edits articles about the newer tools that have filled the gap — such as Sketch and Figma.

In his spare time, Michel loves to draw:

“It all started a few years ago, with a notebook, a fineliner, and a few watercolor pencils that I stole from my wife. Turned out I couldn’t stop drawing and for the last three years or so I imagine and then draw on paper small bits of a strange, but kind of fascinating world that I see in my mind — the world of Monsters & Carrots. For now, this world exists nowhere else but in my notebooks, and I showed only some small parts of it on Twitter.”

Michel said that through working for Smashing,

“I learned how to be a better editor, and how to be more careful with words. I consider my experience at Smashing Magazine to be invaluable. I got in touch with so many people from all over the world and developed good online and offline friendships with many of the authors, experts, and editors that I worked with. Definitely, I can say that my job at Smashing Magazine opened many new doors and changed my life in a good way.”

When it comes to UX-related content, Chui Chui is one of our wonderful editors who works with authors to cover the most up-to-date topics on the magazine. Drew McLellan has recently taken on editing the coding section of the magazine, which includes everything from PHP to HTML, to JavaScript and more! If you write for Smashing Magazine it is likely that your main editorial contact will be with one of these editors, who will work with you to make sure your article is the best it can be.

Inge Emmler keeps us all on track with our expense receipts, and requests to spend money! In addition, she helps out our community when they get in touch. If your book order didn’t show up, then Inge is probably the person who will help you. She loves to be able to make our customers happy and remembers an anecdote from her time at Smashing where she sent a free e-book to one person, brightening their day despite the fact they had just lost their job.

When not helping out the Smashing community and chasing us for our expenses, Inge loves to do things with her hands, be that refurbishing her house, gardening, cooking, and more recently taking photographs of flowers.

Jan Constantin has been part of the team since 2012, and between then and now has fulfilled a number of roles — office manager, event manager, junior editor, and fulfillment manager! The nature of a small team is that we all sometimes end up doing something quite different than we originally imagined. Jan enjoys rock climbing, tabletop games and Sci-fi/Fantasy. He confesses that despite working for Smashing all these years he still doesn’t know more than basic HTML.

Ricardo Gimenes is the creator of the Smashing mascot, and therefore is the person we all go to with requests for illustrations of cats involved in a variety of non-catlike activities. Ricardo told me he is:

“A half-Brazilian half-Spanish designer who loves graphic and motion. I’ve been a kind of "gypsy" for the past 20 years since I’ve lived in 6 different countries working as a designer (Brazil, Italy, Germany, England, Japan, and now Sweden). I love board games — I have more than 80 games (and counting) in my collection. Every week, we have a board game/beer night with friends here at my home. I’m a big fan of football (and weekend player). I love to play guitar, blues, and rock and roll.”

Ricardo has been with Smashing since 2009, however, he didn’t meet Vitaly or the rest of the team in person for five years as he was based in Brazil. You can see his work all over the magazine and also at our conferences, as he designs each of the conferences to match the location and theme of the event.

Among many other things, Ricardo illustrated these posters for our Toronto Conference. (Photo credit Marc Thiele)

I was lucky enough to speak at the very first SmashingConf in Freiburg in 2012. Marc Thiele brought his expertise and knowledge of conference organization to that event. It was a great success and the SmashingConf series has gone from strength to strength, with events happening in Europe, America, and Canada. Marc is still involved with Smashing, offering advice and experience as a friend of the team and also serves on the Smashing Board, helping to shape the direction of the company. He also takes photos at many of our conferences — such as the one above. Marc told me that,

“Working on the Events team, it’s exciting to bring Smashing Conference to all those different places and many people. Creating the Smashing Conference in old town halls, in beautiful theatre and music venues, this is exciting and wonderful to see the outcome and the effect it has on many people attending the event.”

The conference team has grown since those early days. Amanda Annandale joined the team three years ago, and now produces our New York event and has also produced events in London and Toronto. Originally from a theater background, Amanda was a professional stage manager in the USA for ten years.

Producing SmashingConf NY has created a strange turn of events in Amanda’s life,

“For 10 years I was a professional stage manager in New York City, working on musicals, new performance pieces, dance, you name it. One place I worked in was the New World Stages. It was working an event at this venue that I met my husband! Now — nearly 8 years later, I’m back working at the same venue, but this time on the other side when we hold our SmashingConf NY event every year!”

Amanda has the help of Charis Rooda, also an experienced conference organizer outside of Smashing, who runs WebConf.asia and was involved running conferences in The Netherlands before moving to Hong Kong. Charis makes sure that our speakers know where they are supposed to be and when, and also takes care of much of the social media and website content around the conferences. When not working, Charis loves doing art puzzles, and tells me that,

“With 1000 pieces you think you’re never going to finish it, but when you start and keep on going, you will make it. Pretty much like running a conference!”

When asked what surprising thing she had learned while working at Smashing Charis told me,

“I learned how to use em dashes — the punctuation mark which can be used instead of a comma, not to be mistaken for the en dash or hyphen — and my life will never be the same.”

Mariona Cíller was part of the conference team but this year her role is transitioning to more broadly lead the Partnerships and Data side of the business. She has been part of the team since 2015 at SmashingConf Barcelona.

Mariona is a former web designer and developer, and describes herself as, “in love with science and technology, the open web, open-source hardware, and software”. She lives in a laboratory at her grandfather’s 1920s embroidery factory which she remodeled over the past 5 years. Today, it is a digital fabrication laboratory (FabLab) connected to 1700+ labs from all over the world via the Fab Lab Network and the Center for Bits & Atoms at the Massachusetts Institute of Technology (MIT), where she graduated from the Fab Academy in 2015.

Mariona is currently studying for a Ph.D. in computer science and human-computer interaction at the Open University of Catalonia (UOC). Her research focuses on digital social inclusion programs for the neighborhood youth and community in Barcelona. She manages to find time to be a Mozillian and volunteer her time as a wrangler for MozFest2019!

I’ve learned a lot about many of the Smashing team while researching this piece, however, someone very familiar to me is Bethany Andrew — as she’s my daughter! Bethany has been doing some work for Smashing for a little over a year, first brought in to do some video editing work on the conference video. She still edits many of our videos and has also run a Smashing TV session. A trained dancer and singer, Bethany is part of a gospel choir in London, a true crime nerd, and a lover of Indian food. She said about her time at Smashing,

“It’s so lovely to now be working with everyone at Smashing. So many people have known me since I was a kid through my mum, or she’s always spoken about them. It’s nice now I’m all grown up (or trying to be) that I get to work with this lovely lot and develop my own friendships with them.”

The newest member of our team is Esther Fernández, who has joined Mariona to work on Partnerships and Data, and will be meeting the team for the first time in Freiburg at SmashingConf. I asked Esther to tell me something about her life outside of Smashing, and she said, “I’m a very curious person. I love the sensation of having learned something new by the end of the day. I get part of that knowledge through books — I’m an eager reader — but also through films and any kind of artistic expression. I have a self-taught knowledge in psychology and I really enjoy hiking, riding my bike, and having conversations with other inquisitive people.”

Then, there is me. I have been Editor in Chief since October 2017, though I felt part of Smashing long before that. My first Smashing article was published in June 2010 and I was part of the review panel for several years. In addition, I have had chapters in a number of Smashing books, and have spoken and run workshops at SmashingConf since the beginning. Smashing has been part of my own journey as a web developer, speaker, writer, and editor. I love to work with people who are committed to doing the best they can do, dedicated to the web platform and the community who work on it, which is why I’m very proud to be part of this team.

I hope that you now feel you know us a little better. I certainly found out a lot about my colleagues while writing this. I love how much everyone feels a part of Smashing, whether they work a few hours a month or full time. And the reason we do this at all? That should be left to Vitaly, who describes best how all of us feel about working on the magazine, conferences and all the other things we do.

“One incredible thing that keeps happening to me all the time is that people come to me and tell stories of how Smashing changed their lives many years ago. Maybe it’s just one article that really nailed it and helped a developer meet the deadline, and sometimes it’s those certain encounters at a conference that simply change your life. I vividly remember stories of people whom I’ve met at conferences when they were students, and who now have companies with dozens of employees. These stories change everything — we don’t hear them most of the time, sitting everywhere in the world and just writing, publishing and curating events, but there is always impact of our work at people around us. And that’s something we shouldn’t take lightly.” (ra, vf, il)
Categories: Design

Inspired Design Decisions: Alexey Brodovitch

Thu, 09/05/2019 - 04:30
Inspired Design Decisions: Alexey Brodovitch Inspired Design Decisions: Alexey Brodovitch Andrew Clarke 2019-09-05T13:30:59+02:00 2019-09-05T12:44:31+00:00

Before writing Art Direction for the Web, I began to study Alexey Brodovitch when I became fascinated by editorial and magazine design. I was drawn to his precision, in particular, the way Brodovitch brought photographs and text together.

Then, I became intrigued by his design process, how he sketched layouts, arranged them into spreads, and created a rhythm which flowed through his magazines. Later, I appreciated his rebellious rejection of anything he considered mediocre. I still love the advice he gave his students to look beyond his medium to inspiration from other forms of art, architecture, and design.

I like to think I share a little of Brodovitch’s rebelliousness, because I want to develop the web into a creative medium, rather than merely a platform for products. An innovative approach to website design matters because it creates more engaging experiences and attractive visual designs. Those things can help our digital products and websites appeal to audiences and communicate better. In practical terms, better communication teaches people how to use a product. It also helps them to understand why they should choose that product. This approach helps editorial, marketing, and news content to communicate, and is crucial to developing an affinity between a brand and its audience.

There are few books currently available on Alexey Brodovitch’s work, the most definitive being Kerry William Purcell’s 2011 “Alexey Brodovitch.”

This monograph contains collections from throughout Brodovitch’s life, most interesting to me are his designs for Harper’s Bazaar magazine, some of which I’ll explore in this issue. This book will make a fabulous addition to your design collection.

  1. Inspired Design Decisions: Avaunt Magazine
  2. Inspired Design Decisions: Pressing Matters
  3. Inspired Design Decisions: Ernest Journal
  4. Inspired Design Decisions: Alexey Brodovitch
Inspired by Alexey Brodovitch © 2019 Henri Cartier-Bresson/Magnum Photos, 
courtesy Fondation Henri Cartier-Bresson, Paris. (Large preview)

Alexey Brodovitch was born in Russia in 1898. As a young man, Brodovitch fought in the White Army against the Bolsheviks during the Russian Civil War, which wasn’t the start in life I’d expected for someone who went on to become one of the most influential art directors of the twentieth century.

Forced into exile, Brodovitch moved to Paris where — while working as a scene painter — he met artists including Marc Chagall and was exposed to movements including Bauhaus, Constructivism, Dadaism, and Futurism.

A selection of Harper’s Bazaar magazine covers 1937–1939. Art direction by Alexey Brodovitch. Various artists. (Large preview)

Constructivism

Whereas autonomous art was driven by the subconscious, and painters like Wassily Kandinsky and Joan Miro suppressed conscious control of the painting process, the constructivist movement sought to consciously and deliberately construct art.

This meant deliberately placing lines and shapes using geometry. The results were often striking, which led to constructivism’s use in political art and propaganda. Constructivism influenced artists and designers throughout the twentieth century, including Neville Brody, whose work on The Face Magazine I’ll cover later in this series.

Alexander Rodchenko

Seven years his senior, Alexander Rodchenko was a significant influence on Brodovitch and generations of artists, graphic designers, and photographers. Rodchenko was one of the founders of the constructivist movement.

A Yankee in Petrograd series by Aleksandr Rodchenko 1924. (Large preview)

In order to educate man to a new longing, everyday familiar objects must be shown to him with totally unexpected perspectives and in unexpected situations.

— Alexander Rodchenko

Photomontage — the technique of cutting and rearranging multiple images into a new work — was popular in Constructivism. This approach to composition inspired Brodovitch throughout his life, where he became renowned for the way he deliberately and precisely brought photography and typography together.

When Brodovitch began working part-time designing layouts for Cahiers d’Art — a contemporary art journal — and the influential design magazine Arts et Métiers Graphiques, he refined his techniques, creating striking magazine and poster designs. After beating Pablo Picasso into second place in a poster competition, Brodovitch established his own design studio and while still in his thirties became one of the most respected commercial artists in Paris.

Disillusioned by the art scene in Paris, in 1930 Brodovitch moved to the United States where he taught advertising design at what is today the Philadelphia University of the Arts. His goal was to develop American advertising to the level of design in Europe. He exposed his students to French and German magazines and taught them to reexamine their techniques. In the spirit of the Bauhaus School, Brodovitch took his students outside the classroom to find inspiration elsewhere, encouraging them to find their own creative style.

Brodovitch was “intolerant of mediocrity” and frequently asked his students to “astonish me.” Brodovitch’s courses focussed on design and photography in equal measure, so as well as influencing a generation of art directors and graphic designers, he taught photographers including Diane Arbus and Richard Avedon. He’d work with them later at Harper’s Bazaar magazine.

Brodovitch was asked by The Art Directors Club of New York to design their 1934 Annual Art Directors Exhibition. There, Carmel Snow — who at the time was editor-in-chief at the US edition of Harper’s Bazaar magazine — saw Brodovitch’s work for the first time. She said:

I saw a fresh, new conception of layout technique that struck me like a revelation: pages that “bled” beautifully cropped photographs, typography and design that were bold and arresting.

— Carmel Snow (The World of Carmel Snow. McGraw-Hill, 1962)

Astonish me

Harper’s Bazaar became Brodovitch’s most well-known project. Determined to keep his designs fresh, he frequently commissioned work from European artists including Jean Cocteau, Marc Chagall, and Man Ray. To keep Harper’s Bazaar readers surprised, Brodovitch explained:

Surprise quality can be achieved in many ways. It may be produced by a certain stimulating geometrical relationship between elements in the picture, or through the human interest of the situation photographed, or by calling our attention to some commonplace but fascinating thing we have never noticed before, or it can be achieved by looking at an everyday thing in a new interesting way.

— Alexey Brodovitch

'Fine Figger of a Woman.' Spread from Harper's Bazaar, November 1934. (Large preview)

It’s Brodovitch’s art direction for Harper’s Bazaar which has influenced designers since the 1940s. His designs were elegant, something Carmel Snow once described as “good taste, plus a dash of daring.” His knowledge of photography gave his work its classic feel. Brodovitch often cropped photographs in unexpected ways, and he placed them off-centre — sometimes bleeding them outside the margins of a page — to create compositions which were full of energy and movement.

One of Brodovitch’s sketches for a Harper's Bazaar layout, 1940–50. (Large preview)

Throughout his career at Harper’s Bazaar and beyond, Brodovitch frequently used content in photographs or illustrations to inform his placement and shape of the text. His colour choices were often bold, and he used large blocks of colour for emphasis.

I find Brodovitch’s design process as fascinating as his finished work because I learn more about how someone thinks by looking at their work-in-progress. Brodovitch began by designing his layouts as sketches on paper. Then, he arranged his spreads on the floor of his studio to create a well-paced magazine.

Scattering images

Early in his career, Brodovitch was inspired by Mehemed Fehmy Agha — the influential art director who designed the first double-page spread — and frequently scattered pictures across his pages like playing cards.

Left: For smaller sizes, I turn my group of images into a horizontally scrolling panel. Center: On medium-size screens, I scatter the images vertically to maintain visual hierarchy in portrait orientation. Right: For larger screens, I rearrange the images horizontally. (Large preview)

These might seem at first like a random arrangement of pictures, but they were deliberately placed and filled Brodovitch’s designs with movement. We can use the same technique today, even when designing flexible layouts.

For my first Brodovitch-inspired design, I scatter four differently sized images across the viewport. I can arrange these pictures either horizontally, vertically or even diagonally according to the screen dimensions.

To help me design a consistent experience across screen sizes, I form a storyboard from a short series of sketches.

Left: On medium-size screens, I scatter the images vertically to maintain visual hierarchy in portrait orientation. Right: For larger screens, I rearrange the images horizontally. (Large preview)

To develop this design, I use a combination of CSS Grid, Flexbox, and transforms. My markup is minimal and meaningful, using only three elements for layout: a top-level heading, a main element for my running text, and a figure element which contains my four images:

<h1 aria-label="Vespa"><svg>…</svg></h1>
<main>
  <h2>It looks like a wasp!</h2>
  <div>…</div>
</main>
<figure>…</figure>

For smaller screens, I need only foundation styles for colour and typography, as normal flow handles my single-column layout. In normal flow, elements display one after another, depending on a document’s writing mode. If you’re new to these concepts, Rachel Andrew wrote an excellent primer.

To develop a small screen design consistent with larger screen sizes, I rotate images alternately within a horizontally scrolling figure element. As the browser default sets a flex container to no-wrap and arranges its flex items along a horizontal axis, I need only apply display: flex; to my figure element:

figure { display: flex; }

By ensuring my figure never exceeds the width of my viewport and setting the horizontal overflow to scroll, I develop a simple scrolling panel which may contain as many images as I need:

figure {
  max-width: 100vw;
  overflow-x: scroll;
}

My figure element contains four SVG images. To maintain their aspect ratio when using Flexbox, I enclose each one in its own division:

<figure>
  <div><img src="img-1.svg" alt=""></div>
  <div><img src="img-2.svg" alt=""></div>
  <div><img src="img-3.svg" alt=""></div>
  <div><img src="img-4.svg" alt=""></div>
</figure>

To fill all the horizontal space with images, I use the flex-grow property. flex-grow controls the ratio by which elements expand to occupy available space in a container; flex-shrink does the opposite, specifying the ratio by which elements reduce in size when a container is narrower than their combined widths. I want my flex items to grow to fill any available space, but not shrink:

figure div {
  flex-grow: 1;
  flex-shrink: 0;
}

Both flex-grow and flex-shrink allow the widths of elements to be completely fluid, with no restriction on how wide or narrow they can be. There are occasions when a flex item should start at a certain size before it grows or shrinks. In Flexbox, flex-basis provides a starting width for an element before it flexes:

figure div {
  flex-basis: 40%;
  /* flex: 1 0 40%; */
}

To shave a few bytes from my style sheet, I can combine those three properties into one by using the flex shorthand.
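That single declaration (grow, shrink, and basis, in that order) would look like this:

figure div {
  /* flex-grow: 1; flex-shrink: 0; flex-basis: 40%; combined */
  flex: 1 0 40%;
}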

Choosing between Flexbox and Grid

People often ask me when to use Flexbox or Grid. Of course my answer depends on the style of element they’re implementing. For my set of four figure images, I can use Flexbox or Grid to develop a 2x2 arrangement where every image appears the same size. But, when I add or remove an image, the result using Flexbox isn’t what I’m looking for, as the final image expands to fill the entire width of my figure. By changing Flexbox to Grid, every item aligns to the grid which maintains their size.

Left: Using Flexbox, every item expands to fill the space available, so the final item is twice the size of the others. Right: With Grid, the elements stay aligned to my grid, which maintains their equal size. (Large preview)

It’s common in print to see design elements which overlap to create the illusion of depth. On the web, achieving this effect has been tricky, so design elements mostly stay separate. Grid enables designs which in the past were either difficult or even impossible to implement.
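As a minimal, hypothetical sketch of that overlap technique: two items placed across intersecting grid lines will stack, and z-index decides which sits on top:

.stack {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
}

.stack img {
  grid-column: 1 / 4;
  grid-row: 1;
}

.stack figcaption {
  grid-column: 3 / 5; /* shares column three with the image */
  grid-row: 1;
  z-index: 2; /* lift the caption above the image */
  align-self: end;
}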

When a screen’s large enough to render my main and figure elements side by side, I apply display: grid; to the body element with column widths which match the 6+4 compound grid I showed you in issue 2. I also set the gap between columns and rows:

@media screen and (min-width: 48em) {
  body {
    display: grid;
    grid-template-columns: 2fr 1fr 1fr 2fr 2fr 1fr 1fr 2fr;
    grid-column-gap: 2vw;
    grid-row-gap: 4vh;
  }
}

Then, I place the heading, main, and figure elements onto that grid using line numbers:

h1 {
  grid-column: 7;
  grid-row: 1;
}

main {
  grid-column: 6 / -1;
  grid-row: 2;
}

figure {
  grid-column: 1 / 6;
  grid-row: 1 / 3;
}

Finally, when a screen’s large enough and I can get the biggest impact from my scattered pictures, I reposition the heading, main, and figure elements into new positions on my grid:

@media screen and (min-width: 64em) {
  h1 {
    grid-column: 8;
    grid-row: 1;
  }

  main {
    grid-column: 2 / 6;
    grid-row: 1;
  }

  figure {
    grid-column: 1 / -1;
    grid-row: 2;
  }
}

Then, I apply grid properties to my figure element to create four columns, each one larger than the last:

figure {
  display: grid; /* switch the figure from Flexbox to Grid at this size */
  grid-template-columns: 1fr 2fr 3fr 4fr;
  grid-gap: 0;
  align-items: end;
}

I align my items to the end of their grid container, then nudge and rotate my images to give them the scattered look which Brodovitch inspired:

figure div:nth-of-type(1) {
  transform: translate(20px, -20px) rotate(-20deg);
}

figure div:nth-of-type(2) {
  transform: rotate(-5deg);
}

figure div:nth-of-type(3) {
  transform: rotate(10deg);
}

figure div:nth-of-type(4) {
  transform: translate(-60px, -80px) rotate(-20deg);
}

People have told me that designs like these are appropriate for editorial design but not for products or websites which promote or sell them. This isn’t the case at all. Graphic design principles matter as much to a digital medium as they do for print. Diverse and interesting designs stand out whichever medium you choose to deliver them.

I used scattered pictures to bring this design for a new electric Vespa scooter to life. (Author’s recreation.) (Large preview)

Alexey Brodovitch instinctively knew how to combine photographs with written content, often turning text into shapes that contrasted with or mirrored the forms in his photography. Over the next few pages, I’ll show you how to adapt his inspired design techniques to the web.

Lesson 2: Mirroring pictures and text

Alexey Brodovitch made the double-page spread his playground, using its large canvas to carefully construct his compositions. Facing pages allowed Brodovitch the possibility to contrast or connect both sides of a spread.

'This Spring' spread from Harper’s Bazaar, March 1954. Photographer: Lillian Bassman. (Large preview)

I need three structural elements to implement this next Brodovitch inspired design: a heading, main, and aside:

<h1 aria-label="Vespa"><img></h1>
<main>…</main>
<aside>…</aside>

This larger screen design splits the viewport into two columns and uses colour and shape to reflect one half with the other. (Large preview)

Normal flow again handles my single column layout for small screens, and I need only foundation styles for colour and typography. These include background and foreground colours for the inverted aside:

aside {
  background-color: #ba0e37;
  color: #fff;
}

Medium-size screens share those same foundation styles. I also reuse the same 6+4 compound grid from my previous design. To implement the asymmetric layout of elements inside my main and aside elements, I use Grid with four columns and a set of rows with a mixture of pixel and rem units:

@media screen and (min-width: 48em) {
  main,
  aside {
    display: grid;
    grid-template-columns: 2fr 1fr 1fr 2fr;
    grid-template-rows: 10rem 200px 10rem auto;
  }
}

I add another unit, a viewport height unit, to those elements to ensure they always fill the full height of a screen:

main, aside { min-height: 100vh; }

My main and aside elements each contain a figure element and an SVG image and caption. To place those child elements on my grid, I apply display: contents; to my figures, effectively removing them from the DOM for styling purposes, so their descendants take their place:

main figure,
aside figure {
  display: contents;
}

Then, I place the images and captions into columns and rows using line numbers:

main img {
  grid-column: 1 / 4;
  grid-row: 1 / 5;
}

main figcaption {
  grid-column: 4 / -1;
  grid-row: 3;
}

aside img {
  grid-column: 2 / -1;
  grid-row: 1 / 5;
}

aside figcaption {
  grid-column: 1;
  grid-row: 3;
  align-self: center;
}

Next, I apply styles for larger screens, first by creating a grid with two symmetrical columns, then placing my main and aside elements onto that grid using named lines:

@media screen and (min-width: 64em) {
  body {
    display: grid;
    grid-template-columns: [main] 1fr [aside] 1fr;
  }

  main {
    grid-column: main;
  }

  aside {
    grid-column: aside;
  }
}

All that remains is for me to position the heading containing the two-tone Vespa logo. As I only need this version of the logo on larger screens, I use the picture element to swap a standard version for my two-tone variation:

<h1>
<picture>
<source srcset="logo-2-tone.svg" media="(min-width: 64em)">
<img src="logo.svg" alt="ihatetimvandamme">
</picture>
</h1>

Although Flexbox and Grid have mostly taken over as my main layout tools, there are still good uses for CSS positioning. As I want my logo in the centre of my layout, I position it half-way across the viewport, then use a transform to move it left, so its centre matches the background behind it:

h1 {
  position: absolute;
  top: 14rem;
  left: 50vw;
  z-index: 2;
  transform: translateX(-95px);
}

Carving text into shapes

Shapes add movement to a design which draws people in. They help to connect an audience with your story and make tighter connections between your visual and written content.

I’ve seen very few examples of using CSS Shapes which go beyond the Basic Shapes (circle(), ellipse(), and inset()). Even instances of a polygon() are few and far between. Considering the creative opportunities CSS Shapes offer, this is disappointing. But, I’m sure that with a little inspiration and imagination, we can make more distinctive and engaging designs.
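For reference, the basic shape functions need nothing more than a floated element. A hedged example using circle(), with an illustrative class name and sizes, might be:

/* Text flows around the circular outline instead of the square box. */
.avatar {
  float: left;
  width: 200px;
  height: 200px;
  border-radius: 50%; /* make the visible element match the shape */
  shape-outside: circle(50%);
  shape-margin: 1rem; /* breathing room between shape and text */
}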

'If you don’t like full skirts.' Spread from Harper's Bazaar, March 1938. Photographer: George Hoyningen-Huene. (Large preview)

Inspired by Alexey Brodovitch, in my next design, the shape of the running text reflects shapes in the header image opposite. It often needs surprisingly little markup to develop dynamic and original layouts.

My Brodovitch inspired design uses CSS Shapes to echo the curves in the image within the column of running text. (Large preview)

To implement this design, I need just two structural elements: a header which also contains a picture element, and the main element for my running text:

<header>
  <picture>…</picture>
</header>
<main>…</main>

I’ve made a simple small screen design, so I only need colour and typography foundation styles. By using a picture element in my header, browsers swap the portrait orientation image — which best suits small screens — with a landscape variation when I introduce layout styles.

For larger screens, I apply an asymmetrical two-column grid to the body and place the header and main elements using named lines. I also ensure the grid extends to the full viewport height, regardless of the amount of content:

@media screen and (min-width: 64em) {
  body {
    display: grid;
    grid-template-columns: [header] 1fr [main] 1fr;
    max-width: 100vw;
  }

  header {
    grid-column: header;
  }

  main {
    grid-column: main;
  }
}

I needn’t make a polygon path to flow my text around. Instead, I use a mask image which I add to the page using Generated Content applied to a pseudo-element:

main:before {
  content: "";
  display: block;
  float: left;
  width: 170px;
  height: 800px;
  shape-outside: url('mask.svg');
}

Note: Watch out for CORS (cross-origin resource sharing) when using images to develop your shapes. You must host images on the same domain as your product or website. If you use a CDN, make sure it sends the correct headers to enable shapes. It’s also worth noting that the only way to test shapes locally is to use a web server. The file:// protocol simply won’t work.

When you need content to flow around a shape, use the shape-outside property. You must float an element left or right for shape-outside to have any effect.
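A hypothetical polygon() version of the same idea (the coordinates are illustrative) shows how a slanted edge can be described without an image mask:

.slant {
  float: right;
  width: 200px;
  height: 100vh;
  /* Text follows the slanted left edge of this trapezoid. */
  shape-outside: polygon(60% 0%, 100% 0%, 100% 100%, 0% 100%);
}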

'The Consensus of Opinion.' Spread from Harper's Bazaar, March 1936. Photographer: Man Ray. (Large preview)

You can also use Shapes to sculpt structural shapes from solid blocks of running text, in the style of Alexey Brodovitch.

The markup I need to implement this design is similar to my previous example, a header which again contains a picture element, and the main element for running text:

<header>
  <picture>…</picture>
</header>
<main>…</main>

This design, inspired by Brodovitch’s 1936 'The Consensus of Opinion,' carves my running text into a slanted column. (Large preview)

Inside my main element are two SVG images which I use to carve my running text into its shape. To prevent these presentational images being announced by screen readers, I give them both an aria-hidden attribute and a value of true:

<img src="shape-1.svg" alt="" aria-hidden="true"> <img src="shape-2.svg" alt="" aria-hidden="true"> <p>…</p>

Using what should be by now a familiar 6+4 compound grid, I apply Grid to the body element and place the header and main elements using line numbers:

@media screen and (min-width: 64em) {
  body {
    display: grid;
    grid-template-columns: 2fr 1fr 1fr 2fr 2fr 1fr 1fr 2fr;
  }

  header {
    grid-column: 1 / 5;
  }

  main {
    grid-column: 5 / 9;
  }
}

Next, I rotate the header image clockwise by twenty degrees and place its transform-origin in the centre horizontally and vertically to ensure it stays in the middle of my header:

header picture {
  transform: rotate(20deg);
  transform-origin: 50% 50%;
}

Finally, I float the first image left, and the second image right, so that my running text flows between them and mirrors the shape of the rotated image opposite. I don’t need a class attribute on these images. Instead, I use an attribute selector which targets the image source:

[src*="shape-1"], [src*="shape-2"] { width: 200px; height: 100vh; } [src*="shape-1"] { float: left; shape-outside: url('mask-1.svg'); } [src*="shape-2"] { float: right; shape-outside: url('mask-2.svg'); }

Tools like CSS Shapes now give us countless opportunities to capture readers’ attention and keep them engaged. I hope by now, you’re as excited about them as I am.

Developing pictures and text

When Brodovitch became frustrated with the commercial constraints of working on Harper’s Bazaar, in 1949, he began collaborating on Portfolio, the short-lived, but significant graphic design magazine. Running to just three issues, Portfolio allowed Brodovitch to focus on not just a magazine, but a “graphic experience.”

Left: 'Jackson Pollock.' Spread from Portfolio #3. Right: My Brodovitch inspired design uses three overlapping SVG images which are clipped using CSS clip-path and placed over background stripes implemented with a CSS gradient. (Large preview)

In issue 2 of Portfolio, Brodovitch designed a spread on the work of abstract-expressionist artist Jackson Pollock. In his design, shapes flow between columns which extend the full height of the page. I want to achieve the same style in my next Brodovitch inspired design, but whereas Brodovitch’s spread includes no copy, mine has two columns of running text. I also want the image behind my text to grow and shrink as the viewport changes size.

My markup for implementing this design contains only the bare minimum of main and aside elements:

<main>…</main>
<aside>…</aside>

The aside also contains a division which I use to place the running text onto my grid, and a figure element containing the images I need to accomplish the design:

<aside>
  <div>…</div>
  <figure>
    <img src="clip-left.svg" alt="">
    <img src="clip-center.svg" alt="">
    <img src="clip-right.svg" alt="">
  </figure>
</aside>

This design feature should be visible at any screen size, so before implementing layout styles, I set up the figure’s three separate images. The left and right images are white on a red background, and the centre image has red outlines on a white background. My first task is to establish the figure element as a positioning context for its descendants and position them absolutely in the top-left of my figure:

figure {
  position: relative;
}

figure img {
  position: absolute;
  top: 0;
  left: 0;
}

Next, I use the clip-path property to clip each of the images so that only one-third of them is visible:

figure img:nth-of-type(1) {
  z-index: 2;
  clip-path: polygon(0% 0%, 0% 100%, 33.33% 100%, 33.33% 0%);
}

figure img:nth-of-type(2) {
  z-index: 1;
  clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%);
}

figure img:nth-of-type(3) {
  z-index: 2;
  clip-path: polygon(100% 0%, 66.66% 0%, 66.66% 100%, 100% 100%);
}

With these images positioned, I add layout styles for larger screens to place the main and aside elements onto my grid:

@media screen and (min-width: 64em) {
  body {
    display: grid; /* required for the column placement below */
    grid-template-columns: 2fr 1fr 1fr 2fr 2fr 1fr 1fr 2fr;
    grid-column-gap: 2vw;
  }

  main {
    grid-column: 1 / 3;
  }

  aside {
    grid-column: 3 / -1;
  }
}

To create the effect of three columns which extend to the full height of my page, I specify the full viewport height on my body element, then use a CSS gradient background image to give my aside element its column style. The percentage stop values in this gradient match the 33.33% widths of my clipped images:

body {
  height: 100vh;
}

aside {
  background-image: linear-gradient(
    to right,
    #ba0e37 33.33%,
    #fff 33.33%,
    #fff 66.66%,
    #ba0e37 66.66%
  );
}

All that remains is to place my column of running text and figure in the aside, so I apply grid values to the aside and place the division and figure using line numbers. As these elements overlap on my grid, I use z-index to push the figure backwards and my running text forward:

aside {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  grid-template-rows: 1fr 1fr;
}

aside div {
  z-index: 2;
  grid-column: 2;
  grid-row: 1 / 3;
}

figure {
  z-index: 1;
  grid-column: 1 / -1;
  grid-row: 2;
}

This design may look complicated and at first glance may not seem achievable across a range of screen sizes, but it’s surprisingly simple to develop. It retains its personality from small screens to larger ones and is flexible at every size in-between.

Flexible designs which include precisely positioned elements were previously tricky to implement using floats and CSS positioning. Modern CSS techniques have made these designs much more straightforward to accomplish, which I hope will result in a new generation of inspired designs.

Left: 'William Steig.' Spread from Portfolio #2, summer 1950. Right: This design, inspired by Brodovitch’s 1950 'William Steig,' uses CSS background gradients and SVG shapes. (Large preview)

My final design is again inspired by a spread from Portfolio’s second issue. Here, Brodovitch used a striking combination of black and white columns and a bold splash of colour. Implementing this design needs just two structural elements: a main and a footer element:

<main>…</main>
<footer>…</footer>

My main element contains a headline logo, an image, and a block of running text. The footer includes two divisions, each containing a set of SVG images to accomplish the effect:

<main>
  <h1 aria-label="Vespa"><svg>…</svg></h1>
  <img src="vespa.svg" alt="Vespa">
  <h2>…</h2>
  <p>…</p>
</main>
<footer>
  <div>…</div>
  <div>…</div>
</footer>

For medium-size screens, I place these elements on my grid using named lines:

@media screen and (min-width: 48em) {
  body {
    display: grid;
    grid-template-columns: [main] 1fr [footer] 1fr;
    min-height: 100vh;
  }

  main {
    grid-column: main;
  }

  footer {
    grid-column: footer;
  }
}

To adjust my layout proportions for larger screens, I need only change values for grid-template-columns on the body element:

@media screen and (min-width: 64em) {
  body {
    grid-template-columns: [main] 2fr [footer] 3fr;
  }
}

Before implementing any design, I make a simple storyboard to demonstrate how my elements will flow across a selection of screen sizes. (Large preview)

Content in the main element needs only basic styling, but the footer which occupies the right half of larger screens requires more. To accomplish this footer’s striped column effect, I again use a linear gradient and four percentage-based colour stops:

footer {
  background-image: linear-gradient(
    to right,
    #000 33.33%,
    #fff 33.33%,
    #fff 66.66%,
    #000 66.66%
  );
}

Each footer division contains a set of three SVG images:

<footer>
  <div>
    <img src="top-left.svg" alt="">
    <img src="top-center.svg" alt="">
    <img src="top-right.svg" alt="">
  </div>
  <div>
    <img src="bottom-left.svg" alt="">
    <img src="bottom-center.svg" alt="">
    <img src="bottom-right.svg" alt="">
  </div>
</footer>

Styling the footer’s second division involves turning it into a flex container. As the browser’s default Flexbox styles include a horizontal flex-direction and no wrapping, I need just a single line of CSS:

footer div:nth-of-type(2) { display: flex; }

To ensure my images grow to fill the columns, but never exceed their width, I apply the flex-grow property and a value of 1, plus a max-width value of 33% to match my columns:

footer div:nth-of-type(2) img {
  flex: 1;
  max-width: 33%;
}

The CSS clip-path property enables you to clip an element so that only part of it remains visible. These clipping paths can be any shape, from basic circles and ellipses to complex polygon shapes with any number of coordinates.
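To illustrate, here is a minimal sketch of those shape functions (the class names are hypothetical and not part of this design):

.clip-circle {
  /* A circle with a radius of 50% of the element, centered */
  clip-path: circle(50% at center);
}

.clip-ellipse {
  /* An ellipse, 40% wide and 50% tall, centered */
  clip-path: ellipse(40% 50% at center);
}

.clip-triangle {
  /* A triangle built from three x/y coordinate pairs */
  clip-path: polygon(50% 0%, 0% 100%, 100% 100%);
}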

(Large preview)
  1. For the first image, I clip the right side, leaving only the portion on the left visible (the black area).
  2. I position the second image absolutely and use a lower z-index value to send it to the bottom of my stacking order.
  3. For the third and final image, I clip the left side, leaving only the portion on the right visible.
  4. My final result includes an SVG circle positioned over my images. A mix-blend-mode tints the colour of elements below it in my stacking order.

For the images in the first division, I again position them absolutely within a positioning context, then use the clip-path property to clip each one so that only one-third of them is visible:

footer div:nth-of-type(1) {
  position: relative;
  min-height: 300px;
}

footer div:nth-of-type(1) img {
  position: absolute;
  top: 0;
  left: 0;
}

footer div:nth-of-type(1) img:nth-of-type(1) {
  clip-path: polygon(0% 0%, 0% 100%, 33.33% 100%, 33.33% 0%);
}

footer div:nth-of-type(1) img:nth-of-type(2) {
  clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%);
}

footer div:nth-of-type(1) img:nth-of-type(3) {
  clip-path: polygon(100% 0%, 66.66% 0%, 66.66% 100%, 100% 100%);
}

My final task is to add an SVG circle which uses a mix-blend-mode to blend its colour with elements behind it:

footer div:nth-of-type(1) svg {
  position: absolute;
  mix-blend-mode: darken;
}

Since I discovered his work, Alexey Brodovitch has been the most substantial influence on the work I make for the web and how I encourage my students to think about theirs. His inability to accept anything mediocre has pushed me to think beyond what’s expected from product and website design. I hope my designs include his dash of daring.

NB: Smashing members have access to a beautifully designed PDF of Andy’s Inspired Design Decisions magazine and full code examples from this article.

  1. Inspired Design Decisions: Avaunt Magazine
  2. Inspired Design Decisions: Pressing Matters
  3. Inspired Design Decisions: Ernest Journal
  4. Inspired Design Decisions: Alexey Brodovitch
(ra, yk, il)
Categories: Design

Overflow And Data Loss In CSS

Wed, 09/04/2019 - 03:30
Overflow And Data Loss In CSS Overflow And Data Loss In CSS Rachel Andrew 2019-09-04T12:30:59+02:00 2019-09-04T12:35:19+00:00

CSS is designed to keep your content readable. If you consider an HTML document that is marked up with headings and paragraphs (with no CSS applied), it displays in the browser in a readable way. The headings are large and bold, and the paragraphs have space between them which is controlled by the browser default stylesheet. As soon as you want to change the layout of your page, however, you start to take some of the control into your own hands. In some cases, this will mean that you give yourself the job of dealing with overflow.

In this article, I’m going to take a look at the different ways we encounter overflow on the web. We’ll see how new layout methods and new values in CSS can help us to deal with overflow and create less fragile designs. I’ll also explain one of the fundamental concepts behind the design of CSS — that of avoiding data loss.


What Do We Mean By Overflow?

If we look back a few years (before the advent of layout methods such as Flexbox and Grid Layout), consider being handed a design like the one below: a very simple layout of three boxes, with different amounts of content in each, but the bottoms of those boxes need to line up:

A neat set of boxes (Large preview)

With a floated layout, such a seemingly straightforward task was impossible. When floated, each box has no relationship to its neighbor; this means a box has no way of knowing that the box next door is taller, and no way to grow to match its height.

The box bottoms do not align (Large preview)

Sometimes, in an attempt to make things line up, designers would restrict the height of the boxes by second-guessing the amount of content in order to make the boxes tall enough to match. Of course, things are never that simple on the web, and when the amount of content differed or the text size was larger, the text would start to poke out of the bottom of the box. It would overflow.

Overflow caused by fixing the box heights (Large preview)

This would sometimes lead to people asking how they could prevent too much content getting into the site. I’ve had people in support for my own CMS product asking how to restrict content for this very reason. They tell me that this extra content is “breaking the design”. For those of us who understood that not knowing how tall things would be was fundamental to the nature of web design, we would create designs that hid the lack of equal-height boxes. A common pattern would have a gradient fading away to mask the unevenness. We would avoid using background colors and borders on boxes. Or, we would use techniques such as faux columns to create the look of full-height columns.

This inability to control the height of things in relation to other things influenced web design — a technical limitation changing the way we designed. I enjoy the fact that, with Flexbox and Grid, not only did this problem disappear, but the default behavior of these new layout methods is to stretch boxes to the same height. The initial value of align-items is stretch, which causes the boxes to stretch to the height of the grid area or flex container.
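As a minimal sketch of that default behavior (the class names and markup are assumptions, not taken from the demo below):

.boxes {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  gap: 20px;
  /* align-items defaults to stretch, so every box in a row
     grows to match the tallest one */
}

/* The Flexbox equivalent: items stretch to equal height by default */
.boxes-flex {
  display: flex;
}

.boxes-flex > div {
  flex: 1;
}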

See the Pen [Equal Height Boxes](https://codepen.io/rachelandrew/pen/VwZzxjV) by Rachel Andrew.


In addition, CSS Grid gives us a nice way to ask for things to be at least a certain size, but grow larger if they need to. If you set a track size using the minmax() function, you can set a minimum and a maximum size for the track. Setting rows to minmax(200px, auto) means that the track will always be at least 200 pixels in the block dimension — even if the grid items are empty. If, however, the content of a grid item means that it will be larger than 200 pixels, with the max set to auto the track can grow. You can see this in the example below: the first row is 200 pixels, as there are no items making it larger; the second row has a grid item with more content than will fit, so auto is being used and the track has grown larger than 200 pixels.
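Here is a hedged sketch of that pattern (the grid class name is an assumption):

.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  /* Each row is at least 200px tall in the block dimension,
     but can grow beyond that if an item's content needs it */
  grid-auto-rows: minmax(200px, auto);
}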

See the Pen [Minmax()](https://codepen.io/rachelandrew/pen/zYOdjKP) by Rachel Andrew.


The minmax() function gives you the ability to create designs that look as if they have that perfect fixed size. In an ideal world (when the amount of content is pretty much as you expected), you will get those nice uniform rows. However, if additional content is added, there will be no overflow as there would be if you had fixed the height of the rows to 200 pixels. The row will expand; it might not be exactly what you wanted as a designer, but it will not be unreadable.

Inline Overflow

The potential for overflow happens whenever we restrict the size of things. In the above example, I am describing restriction in the block dimension, which horizontal language users will think of as height. However, we can also end up with overflow in the inline direction if we restrict the inline size or width of a box. This is something that we see in the “CSS is Awesome” meme:

The ‘CSS Is Awesome’ meme (Large preview)

The author of this meme commented on a CSS-Tricks post about it saying,

“I do have a slightly better grasp on the concept of overflow now, but at the time it just blew my mind that someone thought the default behavior should be to just have the text honk right out of the box, instead of just making the box bigger like my nice, sensible tables had always done.”

So why does CSS make the text “honk right out of the box” rather than grow the box?

In the meme, you have overflow in the inline direction. The word “awesome” is larger than the width applied to the box, and so it overflows. CSS fairly reasonably assumes that if you have made the box a certain width, you want the box that width. Perhaps it needs to fit into a layout which would break if boxes suddenly became larger than set.

That particular issue (i.e. the need to set sizes on everything in a layout and making sure they did not total more than the available inline size of the container) is something that our modern layout methods have addressed for us. If we imagine that our box had that absolute inline size so that it could fit in a line with other boxes in a float-based grid, today you might choose to approach that layout using Flexbox.

With the floated layout, you have to set the sizing on each item — potentially before you know what the content will be. You will then have big things forced into small containers and small things left with a lot of space around them.

Items in a floated layout need to have a width set (Large preview)

However, if we use Flexbox, we can allow the browser to work out how much space to assign each item. Flexbox will then ensure that bigger things get assigned more space while smaller things get less. This squishy sizing means that the box containing the word “awesome” will grow to fit the word, and the text won’t honk right out or overlap anything else. Overflow issue solved; this behavior is really what Flexbox was designed for. Flexbox excels at taking a bunch of unevenly sized stuff and returning the most useful layout for those things.

Flexbox distributes space between the items (Large preview)

Outside of Flexbox, it is possible to tell our box to be as big as is needed for the content and no more. The min-content keyword can be used as a value for width or inline-size if working with flow-relative logical properties. Set width: min-content and the box grows just as big as is needed to contain the word “awesome”:
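A minimal sketch of that rule, assuming a hypothetical box class on the element:

.box {
  /* or inline-size: min-content when using flow-relative logical properties */
  width: min-content;
  border: 4px solid #000;
  padding: 1em;
}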

See the Pen [Awesome with min-content](https://codepen.io/rachelandrew/pen/LYPjmbJ) by Rachel Andrew.

Avoiding Data Loss

The reason that the box overflows as it does (with the word escaping from the border of the box) is that the default value for the overflow property is visible. You could (if you wanted) manage that overflow in a different way. For example, using overflow: auto or overflow: scroll would give your box scrollbars. This is probably not something you want in this scenario, but there are design patterns where a scrolling box is appropriate.
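For example (the class names below are placeholders):

.box-auto {
  height: 200px;
  overflow: auto;   /* scrollbars appear only when the content overflows */
}

.box-scroll {
  height: 200px;
  overflow: scroll; /* scrollbars are always rendered, needed or not */
}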

Another possibility would be to decide that you are happy to crop the overflow by using overflow: hidden. Perhaps you might think that hiding the overflow would have been a better default. However, the fact that CSS chooses to make the overflow visible by default (and not hidden) is a clue to a core value in the design of CSS. In CSS (as in most places), we try to avoid data loss. When we talk about data loss in CSS, we are typically describing some of your content going missing. In the case of overflow: hidden, the overflowing content disappears, and we have no way to get to it to see what we have missed.

In some cases, this could be a real problem. If you have managed to create a design so fragile that the button of your form is in the cropped-off area, your user has no way to complete the form. If the final paragraph is trimmed off, we never know how the story ends! Also, the problem with things vanishing is that it isn’t always obvious that they have gone. As the designer, you may not spot the problem, especially if it only happens at certain viewport sizes in a responsive design. Your users may not spot the problem either — they just don’t see the call to action, or think it is somehow their fault that they can’t place their order, and so they go away. However, if things overflow messily, you will tend to spot it. Or, at worst, someone using the site will spot it and let you know.

So this is why CSS overflows in a messy, visible way. By showing you the overflow, you are more likely to get a chance to fix it than if it hides the overflow. With the overflow property, however, you get a chance to make the decision yourself about what should happen. If you would prefer the overflow be cropped (which may be the right decision in some cases), use overflow: hidden.

Data Loss And Alignment

The better alignment abilities we have gained in recent years also have the potential for data loss. Consider a column of flex items of different sizes that are up against the edge of the viewport. Aligned to flex-start, the items all stick out more to the right. Aligned to center, however, the longer item would actually end up off the side of the viewport. Alignment could therefore cause data loss.

To prevent accidental data loss caused by alignment, CSS now has some new keywords which can be used along with the alignment properties. These are specified in the Box Alignment specification — the specification which deals with alignment across all layout methods including Grid and Flexbox. They are currently only supported in Firefox. In our example above, if we set align-items: safe center, then the final item would become aligned to start rather than forcing the content to be centered. This would prevent the data loss caused by the item being centered and therefore pushed off the side of the viewport.

If you do want the alignment (even if it would cause overflow), then you can specify unsafe center. You’ve then requested that the browser performs your chosen alignment no matter what happens to the content. If you have Firefox, you can see the two examples: one with safe and the second with unsafe alignment.
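A minimal sketch of both keywords, assuming a column of flex items like the one described above (class names are placeholders):

.safe-column {
  display: flex;
  flex-direction: column;
  /* Falls back to start alignment for any item that would
     otherwise be pushed out of the visible area */
  align-items: safe center;
}

.unsafe-column {
  display: flex;
  flex-direction: column;
  /* Always centers, even if that pushes an item off the viewport */
  align-items: unsafe center;
}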

See the Pen [Safe and unsafe alignment](https://codepen.io/rachelandrew/pen/QWLMrpE) by Rachel Andrew.


In the talk on which I based this article, I described web design as being a constant battle against overflow. One of the truths of designing for the web is that it’s very hard to know how tall or how large (in the block dimension) any element that contains text will be. However, as I have shown above, we have never had so many ways to manage overflow or the potential for overflow. This means that our designs can be far more resilient, and we can create patterns that will work with varying amounts of content. These might seem like small shifts in our capabilities, but I think the possibilities they open up to us are huge.

(il)
Categories: Design

Automating Website Deployments Through Buddy

Tue, 09/03/2019 - 03:30
Automating Website Deployments Through Buddy Automating Website Deployments Through Buddy Leonardo Losoviz 2019-09-03T12:30:00+02:00 2019-09-03T12:06:10+00:00

(This is a sponsored article.) Managing the deployment of a website used to be easy: It simply involved uploading files to the server through FTP and you were pretty much done. But those days are gone: Websites have gotten very complex, involving many tools and technologies in their stacks.

Nowadays, a typical web project may require us to execute build tools to compress assets and generate the deliverable files for production, upload the assets to a CDN and invalidate stale ones, execute a test suite to make sure the code has no errors (for both client- and server-side code), do database migrations (and, to be on the safe side, first execute a backup of the database), instantiate the desired number of servers behind a load balancer and deploy the application to them (through an atomic deployment, so that the website is always available), download and install the dependencies, deploy serverless functions, and finally notify the team that everything is ready through Slack or by email.

All of this sounds like a bit too much, right? Well, it actually is too much. How can we avoid getting overwhelmed by the complexity of the task at hand? The solution boils down to a single word: automation. By automating all the tasks to execute, we will not dread doing the deployment (or have a trembling, sweaty finger when pressing the Enter key); indeed, we may not even be aware of it.

Automation improves the quality of our work, since we avoid having to manually execute mind-numbing tasks again and again, which frees up all our time for coding, and it reassures us that the deployment will not fail due to human error (such as overwriting the wrong folder, as in the old FTP days).

Introduction To Continuous Integration, Delivery, And Deployment

Managing and automating software deployment involves both tools and processes. In particular, Git as the version control system in which to store our source code, and the availability of Git-hosting services (such as GitHub, GitLab and Bitbucket) which trigger events when new code is pushed into the repository, enable us to benefit from the following processes:

  • Continuous Integration
    The strategy of merging changes in the code into the main branch as often as possible, upon which automated tests are run against a build of the codebase to validate that the new code doesn’t introduce errors;
  • Continuous Delivery
    An extension of Continuous Integration which also automates the release process, enabling us to deploy the project into production at any moment;
  • Continuous Deployment
    An extension of Continuous Delivery which automatically deploys the new code whenever it passes all required tests (however small the change it contains), making it easy to identify the source of any problem that might arise, and taking pressure off the team as it doesn’t need to deal with a "release day" anymore.

Adhering to these strategies has several benefits. The most immediate one is that our product can ship new features faster; indeed, they can go live as soon as the team has finished coding them. The team can also receive feedback immediately (whether from team members on a development environment, from the client on a staging environment, or from users after it goes live) and react straight away, creating a positive feedback loop. And because the whole process is fully automated, the team can save time and focus on the code, thus improving the quality of the product.

Continuous delivery enables getting feedback as early as possible. (Large preview)

Introducing Buddy, A Tool For Automating Software Deployment

The popularity of Git has given rise to a new generation of tools to manage the complexity of software deployments. Buddy is one of these new tools, born with the goal of making it easy to implement Continuous Integration/Delivery/Deployment, while broadening the number of features our application can provide, improving its quality, and reducing its costs by allowing us to incorporate the offerings of the best or cheapest cloud-based service providers (among them AWS, DigitalOcean, Google Cloud Platform, Cloudflare, Rackspace, and Azure) into our stacks. This way, for instance, our application can be hosted on GitHub, be protected from DDoS attacks through Cloudflare, have its static files hosted through DigitalOcean, use serverless functions from AWS Lambda, and authenticate users through Firebase, with everything handled seamlessly.

Buddy operates through the use of pipelines: sets of actions defined by the developer in a specific order, executed either manually or automatically on a Git push, that deliver the application from the Git repository to wherever it’s needed, transforming it as required along the way. Pipelines are extremely flexible, enabling developers to add only the required actions and customize them for their specific needs.

For instance, the following pipeline performs all the tasks required to deploy a Node.js application: execute the build step, upload files to the server through SFTP, upload assets to AWS S3 and purge them from the CDN, restart the server, and finally inform the team through Slack (as you can see in the image below, the pipeline is largely self-explanatory):

An example of a pipeline to deploy a Node.js application. (Large preview)

We can create different pipelines for different environments, and execute special actions when the process fails (such as when a test is not successful or the server to deploy to is down). For instance, the following pipeline (to deploy a Node.js & PHP app that uses DigitalOcean, Fortrabbit & AWS CloudFront for hosting) makes a backup of assets and purges the CDN only when deploying to production, and sends a notification to the team through Slack in case of failure:

Pipeline configured for different environments. (Large preview)

A noteworthy effect of configuring our pipelines with actions from different cloud-service providers is that we can conveniently switch among them whenever the need arises, making it easy to avoid vendor lock-in (this also includes changing the repository provider). Buddy offers slightly over 100 actions out of the box, and also allows developers to create and use their own. This image shows all the readily available actions:

Out of the box actions in Buddy. (Large preview)

Creating A Pipeline

Let’s see how to create a simple pipeline to test and deploy a Node.js application, and send a notification to the team. The first step is to create a new project, during which you will be asked to select the project’s hosting provider (from among GitHub, GitLab, Bitbucket, Buddy Git Hosting, and your private Git server), and then to select the repository:

Selecting the hosting provider (Large preview)

Then we can create the pipeline, specifying when it must run (manually, automatically after new code is pushed to the repository, or automatically on a schedule) and from which branch:

Creating a new pipeline (Large preview)

Then we can add actions to the pipeline. For that, we simply click on the “+” button to add a new action, and then configure it as needed. To build and test a Node.js application, we add and configure a “Node.js” action:

Adding a Node.js action (Large preview)

After testing the application, we can deploy it by uploading it to our production server through SFTP. For this, we add an “SFTP” action, and configure it through custom-defined environment variables ${SFTP} and ${SFTP_USER}:

Adding an SFTP action (Large preview)

Finally, we send an email to the team with the results of the execution. For this, we add and configure the “Email” action:

Adding an Email action (Large preview)

That’s it. Our pipeline will look like this:

Pipeline finished (Large preview)

From this moment on, if the pipeline was configured to run when new code is pushed to the repository, doing a git push will trigger the execution of the pipeline.
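For example, assuming the pipeline watches the master branch, an everyday push is all it takes (the commit message and branch name below are placeholders):

git add .
git commit -m "Update homepage copy"
# Pushing to the watched branch triggers the pipeline automatically
git push origin master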

Staying Constantly Up To Date

Web development is in a never-ending state of flux, with new tools and services launching without a break, some of them becoming a hot trend overnight and the new normal barely a few months later. Technologies seldom heard of a few years ago progressively gain importance and eventually become a must (voice search, machine learning, WebAssembly), new frameworks and libraries offer new ways of building sites (GraphQL, Gatsby, Next.js, Nuxt.js), and our applications need to be accessed from newly-invented devices (Amazon Echo, in-car systems). To keep our applications relevant, we must continuously evaluate the latest offerings and decide whether to add them to our technology stack. Hence, it is extremely important that the platforms we use to develop our applications do not restrict what technologies we can use.

Buddy deals with this issue by continuously collecting feedback from its users about what they need (through user polls, their forum, communication channels, and tweets), and its team strives to deliver the required features. The Buddy blog provides a glimpse of the intense pace of development: For instance, in the last few months they implemented features for building static apps and websites with Gatsby, deploying to UpCloud and to Google Cloud Functions, triggering pipelines with webhooks, integrating with Firebase, building and running Docker containers on AWS ECS, and many others.

Conclusion

Automation has become a must to avoid being overwhelmed by the complexity of modern website deployment. We can make use of Continuous Integration/Delivery/Deployment (readily feasible by hosting our source code through Git) to shorten the time needed for delivering new features into our applications and getting feedback from the users.

Buddy helps in this task, enabling developers to create pipelines to execute actions concerning a wide array of technologies and cloud-service providers, and combining these actions in any possible way to satisfy the most particular needs.

You can check out Buddy for free for 2 weeks and, if you need to host your data, you can also install it on your own premises.

(ms, ra, yk, il)
Categories: Design

Moving From Sketch To Figma: A Case Study Of Migrating Design Systems

Mon, 09/02/2019 - 03:30
Moving From Sketch To Figma: A Case Study Of Migrating Design Systems Moving From Sketch To Figma: A Case Study Of Migrating Design Systems Buzz Usborne 2019-09-02T12:30:59+02:00 2019-09-02T11:46:54+00:00

For the past year, every time I got frustrated with Sketch, a colleague of mine suggested I try Figma. Then, when I wrote an article about building our design system in Sketch, I got a bunch of feedback from people telling me to try Figma. And recently, Linda, our Head of Design at Help Scout, asked me, “Hey Buzz, shouldn’t we be using Figma?”

I couldn’t fight it anymore… I just had to try Figma!

This isn’t a love letter to Figma or a harsh review of Sketch. Instead, it’s a cautionary tale for anyone who is thinking of moving tools. This is the story of how everything panned out, and the specifics of migrating a design system from one platform to another.

Understanding The Cost

The first thing to consider is that there’s a cost involved in switching tools — a consideration not usually factored into the conversation whenever there’s a #designtwitter pile-on. Only one-person teams can afford to change design tools at will; for busy teams, it’s not so easy.

The difficulty for us at Help Scout was the fact that our design system is built as multiple, interdependent Sketch Libraries managed with GitHub. We also have multiple in-flight projects, processes and vast documentation that all depend on Sketch files. And don’t forget the monumental effort involved in training and moving an entire team onto a new tool whilst simultaneously doing actual work!

Contributing to Help Scout’s design system happened through GitHub. (Large preview)

There’s also a financial cost involved in someone (in this case, that’d be me) taking the time away from business-as-usual work to research and document all this good stuff. Point is, if you work in an established design team, you’ll know that changing tools is about as easy as moving offices.

But that’s how this works. Tools are “sticky” just by virtue of being hard to leave. Suffice to say, this wasn’t going to be a decision we made lightly.

Kicking The Tires

With the understanding that my decision would have an impact on the whole team and organization, I started by spending two full days exploring Figma. I watched videos, I spoke to other designers who use it often, and I played with the tool… a lot! Essentially, I explored how easy it would be to move our Sketch components over. One question that came to mind: would it be as easy as opening a .sketch file in Figma?

Unsurprisingly, no.

It turns out that Figma and Sketch — while similar in layout and functionality — have some key differences in how they allow components to be overridden. This was the kicker. Figma allows for color, type and effects (shadows, etc.) to be customized by the user, whereas Sketch will only allow pre-determined overrides. Because of the limitations Sketch imposes on overriding components, we’d built our original design system around that — allowing full color, border and style control using a complex system of masks and building-block components.

Over-complicated? Yes. But it worked great for us.

Here’s a simple card symbol in Sketch which was made from five nested symbols that were necessary in order for us to achieve the level of flexibility we required. This is the exact kind of thing that doesn’t import well into Figma.

A preview of how we brought Figma-level overrides to Sketch (Large preview)

While this complexity in Sketch gave us the level of flexibility Figma offers out of the box, it meant that almost any component imported from Sketch brought an unnecessary level of complexity along with it. In order for us to use Figma, we’d need to rebuild everything from scratch to strip each component down to the essentials.

Decision Time!

Given the above, my initial decision was that although I thought Figma was the better tool, the stronger company, and the safer long-term bet, it was going to be too difficult and costly to switch. Re-building entire libraries is a big job! It was like breaking up before we’d even given it a chance.

“It’s not you, it’s us.”

But as it happens, Figma is one of Help Scout’s customers. On hearing our decision to stick with Sketch, our Head of Sales set up a call with the Figma product team — not necessarily to change anyone’s minds, but to share our experiences, like friends do. They were understandably cool about the whole thing, but asked whether they could talk to me about my decisions. And that was an opportunity not to be missed!

In the days leading up to my conversation with the Figma team, I decided to jump back into the tool — at the very least to give myself enough understanding to be able to talk with confidence and not look like a total amateur in front of people who knew a lot more in this area than me. By the time I spoke with the team, I was a convert — in just those couple of extra days, I realized how much more productive and collaborative we’d be as a team with these kinds of features at our disposal. The cost of switching hadn’t changed, but my opinion of whether the cost was worth it had. Help Scout’s Head of Design made a compelling point to that effect too: If we feel like we’ll make the switch someday, then why not today?

So my conversation with Figma ended up being more along the lines of, “Give me some advice on how to make this work,” which they graciously did.

How To Switch

So it’s possible that you might be in the same spot I was; you want to move tools but are faced with the monumental task of rebuilding hundreds of components, styles, and a load of documentation. Well, friend, you’re going to need to take a deliberate and systematic approach to this. Your mileage may vary, but this is how I moved Help Scout’s entire design system to Figma in just a week.

  1. Split Your Libraries
  2. Lean Heavily On Styles (+ Documentation)
  3. Show How Components Extend
  4. Organize Properly
  5. Importing vs. Re-Building
  6. Get Your Team On Board
  7. Go All In
1. Split Your Libraries

This applies to creating Sketch libraries too, but I strongly suggest splitting design systems into separate sub-libraries that cover different parts of your ecosystem. In our case, we have Core which contains components applicable to any designer (brand assets, illustrations, icons, etc.), then domain-specific documents. This approach makes migration a bit easier to handle when you’re moving things over in organized chunks.

Our design libraries, separated by team. (Large preview)

In our case, migrating to Figma involved beginning with the Core elements — which were then used to build out subsequent libraries.

2. Lean Heavily On Styles (+ Documentation)

Figma has “Styles” that work in the same way you’re used to seeing Type Styles working in Sketch, but with the added benefit that these also apply to color and effects. With this in mind, it’s really useful to define all your colors and shared elements in one single library, then document them.

An example of how styles are documented within each library (Large preview)

3. Show How Components Extend

Since Figma allows much greater control over how components can be extended, you’ll probably end up with fewer components than you had in Sketch — instead of “button solid color” and “button outlined,” in Figma you’ll just need “button”. Because of this, I found that it was important to document the different ways a component can be extended directly within the library itself.

For example, only one component is required to re-create an entire two-sided chat conversation in Figma. But a new designer would never know what overrides to apply, so it’s important to visually demonstrate whenever it’s possible. Here’s the same component being used in six different ways:

An example of how a single Figma component can construct an entire conversation (Large preview)

4. Organize Properly

I quickly abandoned trying to replicate the naming structure I had in the original Sketch files because of subtle differences in how Figma’s file system works. Ultimately, the aim is to make sure components are in a logical place and easy to find, and the best way I found to achieve that was to carefully organize my Pages by category (e.g., Forms), Frames by group classification (e.g., Inputs) and Components by individual element (e.g., Error). Being specific about naming makes components super easy to find — especially by people who didn’t originally create them.

Naming is important! (Large preview)

5. Importing vs. Re-Building

Phew, I wish I had better news here about the physical act of importing Sketch components. For a lot of things, namely individual elements like icons, you can import from Sketch and it’ll all work out great. However, for more complex components (especially ones that involve masks and nested symbols), you’re better off re-creating the components from scratch. Yes, it’s extremely painful, but on the upside, you’ll get really good at using Figma in a very short time!

My workflow in Figma for re-creating the more complex Sketch components was literally to screenshot them, then “trace” them in Figma. As ridiculous as this sounds, it turned out to be much faster than importing from Sketch and removing the unnecessary elements. I’m a little bit ashamed to say that I love this kind of work, but it also turns out this workflow was simply more effective.

(But of course, if you’re migrating simpler components like icons, then Figma’s importing capabilities will serve you just fine.)

An insight into my day (Large preview)

6. Get Your Team On Board

As a 100% remote team, most things we do at Help Scout are well communicated — this was no different. So while the team was aware of the impending tool switch, it wasn’t until I had finished the design system that they got the nudge.

At this point, I gave a 20-minute demo video explaining Figma, some basics on how to use it, and some of the cool improvements they’ll find to their workflow when using components. This turned out to be a hit and definitely softened the blow for people who were perhaps a little hesitant about the move at first.

The original video that I shared with my team

7. Go All In

Part of my initial research involved seeing whether we could maintain our design system in Sketch and Figma simultaneously. I’m certain it can be done, but it’s a bit of a stretch for us given our fairly small team size and the fact we have no single person or team dedicated to the upkeep of our libraries. But instead of keeping what we had in place, I decided to go all-in on Figma.

This meant creating and updating all documentation and employee onboarding to reference the new stuff which forced me to address the migration of anything that referenced the old stuff — including existing development processes and designer hand-off. Ultimately, drawing a line in the sand meant that we were all committed to making this a success.

Of course, the Sketch libraries still exist; they’re just no longer documented nor updated. And in terms of migration, in-flight projects continue to use Sketch files (although some designers have chosen to migrate their work to Figma), whereas new projects use Figma. It’s a clean break.

Conclusion: Make A Plan!

It’s hard to conclude an article like this without sounding like I have all the answers — which I most certainly do not. But my advice to anyone switching tools is to take it slow. Put in the research, make a plan of attack, figure out the cost then weigh up whether you’re prepared to pay it — this applies whether you’re moving to Figma, Sketch, InVision Studio, Adobe XD, Framer X or some other trendy new tool I haven’t heard of yet.

For us, time will tell, but I’m still pretty confident we made the right call!

(mb, yk, il)
Categories: Design

Blissful Thoughts And Embracing Change (September 2019 Wallpapers Edition)

Sat, 08/31/2019 - 00:00
Blissful Thoughts And Embracing Change (September 2019 Wallpapers Edition) Blissful Thoughts And Embracing Change (September 2019 Wallpapers Edition) Cosima Mielke 2019-08-31T09:00:00+02:00 2019-08-31T08:06:24+00:00

Lush green slowly turning into yellows, reds, and browns in the Northern hemisphere; nature awakening from its slumber in the Southern part of the world: September is a time of change. A chance to leave old habits behind and embrace the beginning of something new. And, well, sometimes a small change of routines is already enough to spark fresh inspiration and, who knows, maybe even great ideas.

With that in mind, we embarked on our monthly wallpapers challenge more than nine years ago, and since then, artists and designers from all across the globe have accepted the challenge and submitted their designs to it to cater for a bit of variety on the screens we look at so often. Of course, it wasn’t any different this time around.

This post features their wallpapers for September 2019. All of them come in versions with and without a calendar, so it’s up to you to decide if you want to have the month at a glance or keep things simple. As a bonus goodie, we also collected some timeless favorites from past years’ editions at the end of this post. A big thank-you to all the artists who have submitted their wallpapers and are still diligently continuing to do so. Happy September!

Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
Submit your wallpaper

We are always looking for creative designers and artists to be featured in our wallpapers posts. So if you have an idea for an October wallpaper, please don’t hesitate to submit your design. We’d love to see what you’ll come up with. Join in! →

Stay Or Leave?

Designed by Ricardo Gimenes from Sweden.

Bear Time

Designed by Bojana Stojanovic from Serbia.

National Video Games Day Delight

“September 12th brings us National Video Games Day. US-based video game players love this day and celebrate with huge gaming tournaments. What was once a 2D experience in the home is now a global phenomenon with players playing against each other across state lines and national borders via the Internet. National Video Games Day gives gamers the perfect chance to celebrate and socialize! So grab your controller, join online and let the games begin!” — Designed by Ever Increasing Circles from the United Kingdom.

Finding Jaguar

“Nature and our planet have given us life, enabled us to enjoy the most wonderful place known to us in the universe. People have given themselves the right to master something they do not fully understand. We dedicate this September calendar to a true nature lover, Vedran Badjun from Dalmatia, Croatia, who inspires us to love our planet, live in harmony with it and appreciate all that it has to offer. Amazon, Siberia and every tree or animal on the planet are treasures we lose every day. Let’s change that!” — Designed by PopArt Studio from Serbia.

Celebrate Like A Hispanic

“September marks the start of the Hispanic Heritage Month, a multicultural tradition we should all be proud of.” — Designed by Yaiza Narganez Gomez from Belgium.

Cheerful September

“Wanted to create something colorful and happening for this month.” — Designed by Ciara from India.

Cozy Times

“As the days are getting shorter and colder, fall is here again. Enjoy these cozy times!” — Designed by Melissa Bogemans from Belgium.

Blissful Thoughts

Designed by Thamil G from Chennai, India.

Give Life A Chance

“Life is all about taking chances. God is going to give you all the opportunities in life, it’s on you to take the chance and make a successful life out of it.” — Designed by Pragya from India.

Even The Cactus Needs A Little Moist

“Even the toughest of hearts need a tiny bit of gentleness and kindness just like the cactus that needs to be nourished with a little bit of water and sunlight to stay bright and blooming in the bumpy journey of life.” — Designed by Sweans Technologies from London.

Do Better

“Character is what you do when no one else is watching. A friend recently posted the 2nd half of this quote on their Instagram and it’s been my mantra lately.” — Designed by Marie Newell from Missouri, USA.

For Poor Children

“We created this wallpaper, wanted to show what we wanted and done!” — Designed by Vạn Đăc Phúc from Vietnam.

The Mythical Land Of School

“Going back to school is always a thrill no matter how big or small you are, facing new knowledge and challenges is one of the most satisfying feelings a human can encounter in life.” — Designed by Maria Keller from Mexico.

Online Learning

“Online learning is the most popular way learning nowadays and, thus, I created a view which represents that.” — Designed by Ritu from India.

Put Some Green Everywhere

“I took this photo in Chaumont France, at the garden festival. For example, plants and concrete are in good association in that corner. That’s why I think we should put more plants in the cities and everywhere.” — Designed by Philippe Brouard from France.

Oldies But Goodies

Some things are too good to be forgotten. That’s why we dug out some September favorites from our wallpapers archives. Please note that these designs don’t come with a calendar. Enjoy!

Cacti Everywhere

“Seasons come and go, but our brave cactuses still stand. Summer is almost over, and autumn is coming, but the beloved plants don’t care.” — Designed by Lívia Lénárt from Hungary.

No More Inflatable Flamingos!

“Summer is officially over and we will no longer need our inflatable flamingos. Now, we’ll need umbrellas. And some flamingos will need an umbrella too!” — Designed by Marina Bošnjak from Croatia.

Funny Cats

“Cats are beautiful animals. They’re quiet, clean and warm. They’re funny and can become an endless source of love and entertainment. Here for the cats!” — Designed by UrbanUI from India.

Talk Like A Pirate Day

“This calendar was inspired by International Talk Like a Pirate Day (September 19) — one of the many obscure and quirky days we celebrate in New Orleans. Our fair, colorfully corrupt city has entertained its share of outlaws over the years, but none as infamous as the pirate Jean Lafitte, a Frenchman who terrorized sailors and ships in the Gulf of Mexico and distributed his booty from a warehouse in New Orleans in the early 1800s. This calendar is a playful tribute to all of the misfits, outcasts and swashbucklers who call New Orleans home.” — Designed by Sonnie Sulak from New Orleans, LA.

Geometric Autumn

“I designed this wallpaper to remind everyone that autumn is here and they are still reading the best design website, Smashing Magazine” — Designed by Advanced Web Ranking from Romania.

Summer Ending

“As summer comes to an end, all the creatures pull back to their hiding places, searching for warmth within themselves and dreaming of neverending adventures under the tinted sky of closing dog days.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Flower Soul

“The earth has music for those who listen. Take a break and relax and while you drive out the stress, catch a glimpse of the beautiful nature around you. Can you hear the rhythm of the breeze blowing, the flowers singing, and the butterflies fluttering to cheer you up? We dedicate flowers which symbolize happiness and love to one and all.” — Designed by Krishnankutty from India.

Penguin Family

“Penguins are sociable, independent and able to survive harsh winters. They work as a team to care for their offspring and I love that!” — Designed by Glynnis Owen from Australia.

Shades Of Summer

“You can never have too many sunglasses” — Designed by Marina Eyl from Pennsylvania, USA.

Be The Wind Of Change

“Be the wind of change. Nature inspired us in creating this wallpaper as well as the Scorpion’s song “Wind of change” we dedicate to all creatives worldwide.” — Designed by Design19 from Romania.

Laughing In Flowers

“A colorful wallpaper to brighten up your day.” — Designed by Shavaughn Haack from South Africa.

Colors Of September

“I love September. Its colors and smells” — Designed by Juliagav from Ukraine.

Tsukimi

“The moon will become the roundest in mid-autumn and Japanese will eat Dango (sweet rice dumpling) while admiring the moon.” — Designed by Evangeline Neo from Japan.

Red Beetle

Designed by Oxana Kostromina from Russia/Germany.

It’s September But I Can Still Ride The Waves

“Summer seems to be over… but the weather is still warm and we definitely can enjoy the sea for a little longer. So… let’s go and ride the waves! Are you coming?” — Designed by WebOlution from Greece.

Join In Next Month!

Thank you to all designers for their participation. Join in next month!

Categories: Design

VuePress: Documentation Made Easy

Fri, 08/30/2019 - 04:00
VuePress: Documentation Made Easy VuePress: Documentation Made Easy Ben Hong 2019-08-30T13:00:59+02:00 2019-08-30T13:35:49+00:00

When it comes to any project that requires any user interaction (e.g., end users, maintainers, etc.), there is one critical factor that makes the difference between success and failure: good documentation. This holds true regardless of how small or large your project is. After all, short of providing one-on-one support for your project, documentation is the first line of defense for users who are trying to solve a problem. And whether you like it or not, you will never hear from users who give up after being unable to solve their problem due to inadequate documentation.

The Challenges Of Creating Good Documentation

When it comes to writing good documentation, there are four recurring issues that teams often encounter:

  1. Documentation is often out of date.
    While having no documentation for a project can be a frustrating experience, it is arguably worse to have outdated documentation. After all, the purpose of having documentation is to provide users with an official body of knowledge that they can rely on. When it is out of date, users are wasting their time and ultimately losing trust in your product.

    The primary reason that documentation becomes outdated is that documentation maintenance is separate from code changes. Without investing significant time and energy, this can be difficult to solve because:
    1. Documentation usually lives in a third-party service like Confluence or a Wiki,
    2. Developers are typically more interested in writing code than documentation.
  2. Users are unable to easily provide feedback.
    No matter how good you think your documentation is, it is ultimately meaningless without testing it with real users who can provide feedback. As mentioned earlier, a common bias when evaluating the effectiveness of things like documentation is failing to account for users who were unable to solve their problems and gave up. Since no team would ever be able to account for every scenario of how users might use your product, users must have an easy and reliable way to give feedback.
  3. Documentation is often written by power users for power users.
    The flaw with using standard tools like wikis or README files is that they often only cater to a specific set of users who have pre-existing knowledge of the library and/or technology stack. As a result, it is fairly simple for them to navigate the space and find what they need. However, new users lack this prior knowledge and thus often require a much more immersive experience to engage them.

    Examples of this include:
    • A well-designed website,
    • Search capability,
    • Guided side navigation,
    • Easily identifiable meta information (i.e., last updated date),
    • Immersive content that extends beyond a wall of text that is difficult to easily comprehend.
  4. Poor infrastructure that makes documentation difficult to maintain.
    As if writing good documentation that users can understand were not difficult enough, the ease with which a developer can write and/or maintain documentation is critical to its long-term viability. With every additional barrier that developers must deal with to write and/or maintain documentation, it becomes more likely that the documentation will ultimately fail. As a result, it is critical that the authoring and maintenance experience of any documentation be as seamless and engaging as possible.

If only there was a tool that could do all of these things for us…

Enter VuePress

When first hearing of VuePress, one might be tempted to guess that it is an amalgamation of Vue.js and WordPress. Instead, you should think of VuePress as:

Vue.js + Printing Press

Because when all is said and done, VuePress is a static site generator!

Some of you might be thinking, “Wait. Another static site generator? What’s the big deal?”

While there are a number of tools that are static site generators, VuePress stands out from amongst the pack for a single reason: its primary directive is to make it easier for developers to create and maintain great documentation for their projects.

Why Is VuePress So Powerful For Creating Documentation?

The answer is simple. It was designed with one goal in mind: to help developers create great documentation sites while maintaining a fun authoring experience. This means that it provides a framework for developers to:

  • Create beautiful documentation sites,
  • Use pre-built features essential to all documentation sites,
  • Optimize the authoring experience to make it as simple as updating a Markdown file.
VuePress Can Co-Exist With Your Existing Codebase

This is one of the main reasons why I strongly recommend it. When it comes to maintaining documentation, one way to guarantee it will go out of date is to make it difficult for developers to update docs when writing code. If you make the authoring experience difficult by forcing developers to update things in two different places, this will cause a lot of friction and often results in documentation falling by the wayside. This is commonly seen when developers have to maintain a third-party tool like a wiki, in addition to the codebase itself.

Because it is a static site generator, this means that it can live in the same exact repo as your code.

A sample application with VuePress docs existing next to a Vue CLI scaffolded app (Large preview)

As you can see in this sample web app structure, your code would live in the src directory as normal, but you would simply have a docs directory to contain all of your documentation. This means you get the benefits of:

  • All documentation is now version controlled;
  • Pull requests can now contain both documentation and code changes;
  • Creating separate scripts for running local instances of your code and docs at the same time (see the sketch after this list);
  • Utilize build pipelines to determine whether new documentation site deployments go in sync with code deployments or not.
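For instance, the separate scripts mentioned above might look like this in the app’s package.json (the script names are illustrative; vuepress dev and vuepress build are the standard CLI commands):

{
  "scripts": {
    "serve": "vue-cli-service serve",
    "docs:dev": "vuepress dev docs",
    "docs:build": "vuepress build docs"
  }
}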
The Default Theme Comes With Standard Configuration

Writing documentation is hard enough as it is, so VuePress offloads a lot of the decisions that one normally has to make and has a bunch of built-in defaults to make your authoring experience easy:

  • Writing content is done primarily with Markdown.
    This means that you can leverage your existing knowledge of Markdown syntax to style and format your text.
An example of how the Markdown is rendered in VuePress (Large preview)
  • Code syntax highlighting.
    If you were building a site all on your own, you would need to wrestle with color syntax highlighting libraries. But you’re in luck, because adding code blocks in VuePress is easy since everything is ready to go with zero configuration.
An example of how code block samples render in VuePress (Large preview)
  • Front matter for defining page-level meta data.
    Even though you’re authoring in a Markdown file, you can use front matter (like YAML, JSON, or TOML) to define metadata for your page to make it even easier to manage your content!
---
title: Document Like a Pro
lang: en-US
description: Advice for best practices
tags:
  - documentation
  - best practices
---
  • Custom Markdown containers.
    In case you didn’t know, Markdown has extensions to add even more useful shortcuts to create beautiful UI components like custom containers. And since they are so useful in documentation, VuePress has already configured it so you can use it right out of the box:
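For example, the default theme ships with tip, warning, and danger containers, and a custom title can follow the keyword:

::: tip
This renders as a styled callout box.
:::

::: warning Heads up!
Containers can also take a custom title.
:::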
A demonstration of custom containers rendered in VuePress (Large preview)

Built-In Search Functionality

Let’s face it. No matter how much time we spend writing great documentation, it will ultimately amount to being useless if the users can’t find it. There are generally two approaches to this:

  1. Wait for search engine robots to slowly crawl your site in hopes that one day your users will be able to find the right page on your site. Not a great solution.
  2. Build your own search functionality, but this can be difficult to implement for static sites as there’s no server-side code running to create search indexes and perform the lookups. Not to mention this is taking development time away from the product itself. So this isn’t great either.

Luckily for us, VuePress is here to save the day once again!

VuePress comes with built-in search functionality that generates its own “search engine” — you read that right. Without any additional database setup or configuration, VuePress is set up to scrape your entire docs and generate a simple search engine that surfaces all of your h1s and h2s to your user.

An example of basic searching in VuePress’ default theme (Large preview)

Now, some of you may be thinking,

“What if I want something that will provide lower-level indexing for searching?”

Well, VuePress has you covered there too, because it is designed to integrate easily with Algolia DocSearch, which can provide that functionality to you for free if you meet their requirements:
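The integration lives in the default theme’s configuration; in the sketch below, the apiKey and indexName values are placeholders you receive from Algolia:

// docs/.vuepress/config.js
module.exports = {
  themeConfig: {
    algolia: {
      apiKey: '<YOUR_API_KEY>',
      indexName: '<YOUR_INDEX_NAME>'
    }
  }
}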

An example of Algolia DocSearch in action (Large preview)

Sidebar Navigation Is As Simple As Toggling The Feature On Or Off

For anyone who has ever been responsible for managing content, you know how complicated it can be to build a sidebar that has nested items and then tracks the reader’s position while scrolling. So, why spend time on that when you could be writing better docs? With VuePress, the sidebar is as simple as toggling it on in the front matter for a page:
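For example, this front matter generates a sidebar from the page’s own headings:

---
sidebar: auto
---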

A demo of how the sidebar can be automatically generated through headings (Large preview)

Automatic Generation Of Important Meta-Data That’s Commonly Overlooked

One of the most frustrating things that a user can encounter is out-of-date documentation. When a user follows the steps and has trouble understanding why something doesn’t work, being able to easily find out the last updated date can be incredibly useful to both the user and maintainers of the project.

With a simple configuration, VuePress can automatically output the last updated date on the page, so your users always know the last time it was updated.

A screenshot to show the ‘Last Updated’ metadata field (Large preview)

On top of that, with a little bit of configuration, VuePress also makes it incredibly easy for users to contribute to your docs by automatically generating a link at the bottom of every single page that lets readers open a pull request against your docs.
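Both the timestamp and the edit link live in the default theme’s configuration; here is a minimal sketch (the repo value is a placeholder):

// docs/.vuepress/config.js
module.exports = {
  themeConfig: {
    // Shows a timestamp based on each file's last git commit
    lastUpdated: 'Last Updated',
    // Generates an edit link on every page pointing at your repository
    repo: 'your-org/your-repo',
    editLinks: true,
    editLinkText: 'Help us improve this page!'
  }
}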

A demo of an automatically generated ‘Edit’ link which allows users to easily submit PRs to the docs (Large preview)

It doesn’t get much easier than that for your users.

Deployment On Any Static Hosting Site

Since VuePress is a static site generator at its core, this means that you can deploy it on any popular hosting platform such as:

  • Netlify
  • GitHub Pages
  • GitLab Pages
  • Heroku
  • Now

All you need to do to build the site is run vuepress build {{ docsDir }} (replacing {{ docsDir }} with the directory where your docs live), and you’ll have everything you need to deploy it live on the web!
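For convenience, many projects wrap those commands in npm scripts. A minimal sketch, assuming the docs live in a docs folder:

// package.json (scripts section only)
{
  "scripts": {
    "docs:dev": "vuepress dev docs",
    "docs:build": "vuepress build docs"
  }
}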

Note: For more in-depth guides on how to do this, check out the official deployment guides for VuePress.

Leveraging Vue Inside Your Markdown File

I know, I know. We can use Vue.js in our Markdown?! Yes, you read that right! While technically optional, this is probably one of the most exciting aspects of VuePress because it allows you to enhance your Markdown content like you’ve never been able to do before.

Define Repetitive Data In One Place And Have It Update Everywhere With Interpolation

In the example below, you’ll see a brief example of how you can leverage local variables (like the ones defined in the front matter) as well as globally defined ones (like the site title):

---
title: My Page Title
author: Ben Hong
---

# {{ $page.title }}

Welcome to {{ $site.title }}!

My name is {{ $page.author }} and I'll be your guide for today!

Use Vue Components Within Markdown

I’ll give you a moment to collect yourself after reading this, but yes, live Vue components with a full Vue instance can be yours for the taking if you want! It will take a little bit more work to configure, but this is to be expected since you are creating custom content that will be running in your documentation.

Here is a quick example of what a Counter component would look like in a Markdown file:

A demo of using Vue Components within Markdown (Large preview)
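While the full implementation isn’t shown here, a sketch of such a component might look like this, assuming a hypothetical Counter.vue dropped into .vuepress/components (which VuePress registers globally, so it can be used as <Counter /> in any Markdown file):

<!-- .vuepress/components/Counter.vue (hypothetical) -->
<template>
  <button @click="count++">You clicked me {{ count }} times</button>
</template>

<script>
export default {
  data() {
    return { count: 0 };
  }
};
</script>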

This is perhaps the most powerful part of customization available to documentation, since it means you now have the freedom to create custom content that extends far beyond the abilities of standard Markdown. So whether it’s a demo or some interactive code, the ideas are endless!

Next Steps

If you want to set up a beautiful documentation site for your users to learn how to use your product, it doesn’t get much easier than VuePress. And even though it might be easy to assume that VuePress should only be used by projects that use Vue.js, this could not be further from the truth. Here are just some examples of the different types of projects leveraging VuePress for their documentation sites:

At the end of the day, regardless of whether you use VuePress or not, I hope this has helped to inspire you to create better documentation for your users.

Further Resources

There are many cool things that I did not cover here in this article (e.g. theming, blogging, and so on), but if you would like to learn more, check out these resources:

(dm, yk, il)

Bottom Navigation Pattern On Mobile Web Pages: A Better Alternative?

Thu, 08/29/2019 - 04:30
Bottom Navigation Pattern On Mobile Web Pages: A Better Alternative? Arthur Leonov 2019-08-29T13:30:59+02:00

Whenever you hear of “mobile navigation”, what’s the first thing that comes to mind? My guess would be the hamburger slide-out menu. This design pattern had been in use since the first responsive design days, and even though a lot has changed since then, this particular pattern has not. Why is that?

How did we start using the top navigation with the hamburger menu in the first place? Is there a better alternative? In this article, I will try to explore these questions.

The History Of The Top Navigation And The Hamburger

The hamburger menu icon first started appearing in the ’80s. It was designed by Norm Cox for the Xerox Star — the world’s first graphical user interface. He also designed the document icon for the same interface. This piece of history was uncovered by Geof Allday (who actually emailed Norm Cox). You can read the whole email response by clicking here. Later, it was seen on Windows 1 and DOS.

The current mobile navigation — as we know it — was popularized by Ethan Marcotte’s “Responsive Web Design” book back in 2011. Since then, the top navigation and the hamburger became the industry’s standard.

The Mobile Phone Screen Size Doubled In 10 Years

Since the original iPhone, mobile sales have been increasing year after year. 2019 is the first year that the market reached its saturation point and sales have started to decrease. But that doesn’t mean people are not using phones. By 2020, we will spend 80% of our time on the Internet on mobile phones, report Quartz and Ciodive. Compare that to 2010, when only a fourth of Internet users were phone-based.

As phone sales increased, screen sizes have more than doubled, too. The average screen size of smartphones has increased from 3.2 inches all the way to 5.5 inches. In 2017, device makers started to adopt the taller 18:9 aspect ratio with 5.7-inch and 6-inch 18:9 displays. Now, we are starting to see 6-inch 18:9 displays become the new standard in flagships as well as in the mid-range price segments, as they have more screen area than 5.5-inch 16:9 displays, XDA-Developers reports.

An overview of how the mobile screen sizes have changed (Image source: Scientamobile) (Large preview)

Basically, the mobile phone screen size is getting bigger and bigger. That’s fine, but how do we adapt our design patterns to reflect these changes?

Thumb-Driven Design

I first heard of the term “thumb-driven design” from Vitaly Friedman. It’s based on Steven Hoober’s and Josh Clark’s research on how people hold their devices.

The gist of it is that in nearly every case, three basic grips were most common. 49% held their phones with a one-handed grip, 36% cradled the phone in one hand and jabbed with the finger or thumb of the other, and the remaining 15% adopted the two-handed BlackBerry-prayer posture, tapping away with both thumbs, states Josh Clark. Steven Hoober had found that 75% of users touch the screen with only one thumb. Hence, the term thumb-driven design.

There are three main ways in which we hold our phones. (Large preview)

In 2016, Samantha Ingram wrote an article named “The Thumb Zone: Designing For Mobile Users” which further explores these ideas. She defined easy-to-reach, hard-to-reach and in-between areas.

Thumb-zone mapping explained by Samantha Ingram (Large preview)

However, I would argue, that with increasing phone sizes, the mapping has shifted a bit:

New thumb-zone mapping adjusted to larger screen sizes (Large preview)

When the phones were small, most areas were easy to reach. As our screens got bigger, the top part became virtually impossible to touch without adjusting your phone. From the example above, we can see where the most expensive screen real estate is. Yet, it’s often neglected on web pages. How can we fix this?

Bottom Navigation Pattern

Every now and then, the bottom navigation pattern pops up on the web. The idea itself is quite simple: move the navigation bar further down.

Slack web page navigation reimagined with new thumb-zone mapping (Large preview)

Positioning the navigation bar at the bottom makes it easier for users to click on the menu icon, while secondary items can be moved to the top. Basically, you simply switch the order. Mobile apps have been using this logic with the tap bar pattern. It’s not a new idea in itself, but it’s still not as popular in web design as it is in app design.

This is not a foolproof solution since it raises a few critical questions, but it’s a worthy alternative. Let’s explore some of the questions that may come up.

Primary And Secondary Items

As the top of the screen is becoming hard to reach, placing the primary menu items closer to the bottom is a better alternative. But what about the other things that are just as important?

I propose two ideas to tackle this problem:

  1. Placing the search bar or any non-primary items to the top;
  2. CTA buttons should remain at the bottom, next to the menu items, as they are a vital part of the navigation.
A wireframe of reimagined primary and secondary navigation items (Large preview)

How Will This Affect Scrolling With Large Menus?

Some websites have extensive menus, submenus and everything in between. Naturally, there will be scrolling involved. How does flipping the primary/secondary items work in this scenario?

A wireframe of a reimagined large menu (Large preview)

Make the primary and secondary items (menu link, logo, search input) fixed while leaving the menu list scrollable. That way, your users will be able to reach the critical things they need.

Where Do You Place The Logo?

You might have concerns about the logo placement. There are two ways to go about it:

  • Placing the logo at the bottom might be a bit awkward, however, the thumb will most likely not obstruct it. It can be missed, though, as we tend to scan top to bottom.
  • A more reasonable option is to keep the logo at the top of the page, but not to have it fixed. Make it a part of the content so it goes away as you scroll. That way, people will still be able to see it perfectly.

As you can see, I used the menu label in the wireframe. Kevin Robinson had found that putting a label next to the icon increased engagement by 75%:

A wireframe of the logo placed at the top while the menu can be found at the bottom. (Large preview)

How Does This Work With Handlebars?

Some operating systems and browsers tend to use the bottom area of the screen for their own purposes. iOS handlebars can get in the way of bottom navigation. Make sure the navigation is spacious enough to accommodate the iOS safe area.

iOS Handlebars and safe areas (Large preview)

If you place the logo dead in the center, the link might clash with the handlebar functionality. A bit of padding will do the trick.
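As a minimal CSS sketch (the class name is hypothetical, and env() only takes effect when your viewport meta tag includes viewport-fit=cover):

.bottom-nav {
  position: fixed;
  bottom: 0;
  left: 0;
  right: 0;
  /* Reserve space for the iOS home indicator so taps don't clash with it */
  padding-bottom: env(safe-area-inset-bottom);
}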

Will The Users Adjust To This Pattern Or Find It Disorienting?

As I was writing this article, I kept thinking of whether this would turn out into a big redesign or a simple usability improvement for users navigating through your website. After all, according to Jakob’s Law, users spend most of their time on other sites. This means that users prefer your site to work the same way as all the other sites they’re already familiar with.

As a counter-argument to Jakob’s Law, I would like to propose Fitts’s Law. It argues that the time to acquire a target is a function of the distance to and the size of the target. Basically, the smaller and further away the target is, the higher the interaction cost. NN/g has a wonderful video explaining this in more detail:

“A bottom hamburger menu icon will have a much lower interaction cost compared to the top menu icon because it’s closer. By placing the menu CTA near the thumb, we are allowing the user to reach their end goal faster. Would the users find the feature disorientating if it lowers their interaction cost? Probably not.”

How Will This Integrate With The Tap Bar Pattern?

The tap bar pattern lists the three to five most common first-level actions on a single row. You may have seen it in popular apps and some websites:

Tap bar design by Mengyuan Sun (Large preview)

Hamburger menus have sparked a lot of controversy over the years. Just take a few moments to read this article, and this one, and this one, and most importantly, this one. You’ll then understand why the tap bar became the preferred navigation pattern in mobile app design.

Nielsen argues that hidden navigation (hamburger menu) significantly decreases user experience both on mobile and desktop. On mobile, people used the hidden navigation in 57% of the cases, and the combo navigation in 86% of the cases, i.e. 1.5 times more! The combo navigation that Nielsen refers to is a tab bar pattern combined with a hamburger menu — here’s an example:

The Samsung app example from Rizki Rahmat Ridha for Muzli (Large preview)

It might seem like the tap bar is the perfect solution, but it has its problems too. Fabian Sebastian raised a good point that it only works on top-level views. It does not work with secondary navigation items. To solve this problem, a hamburger/tap bar hybrid was born. If you pay attention to the Samsung app, you’ll see that the last item on the menu is the “More” button which calls up the hamburger menu.

In essence, the bottom navigation pattern integrates quite well into the tap bar pattern if you want to combine both of them. The best place to look for good examples is in the mobile app world.

Some Popular Websites Reimagined

I opened up Photoshop and did a quick mockup of a few popular websites in order to explain that changing the navbar to go bottom-up is not that difficult.

Let’s first take a look at Bloomberg:

The Bloomberg website with a reimagined bottom navigation (Large preview)

Next, let’s take a look at Invision:

The Invision website with a reimagined bottom navigation (Large preview)

Last but not least, the good ol’ Reddit:

The Reddit website with a reimagined bottom navigation (Large preview)

Yes, this idea does raise questions, but it’s simple enough to be adapted to the web. It does make a usability difference as the interaction cost is much lower.

That Sounds Great, But How Do I Convince My Clients?

You, as the designer, might see the potential of this pattern, but what if your client or your boss doesn’t? I would answer this problem with a couple of arguments:

  • Mobile apps have been placing valuable menu items to the bottom for years already. Just send them these two articles for starters:
  • I have noticed cases in which popular mobile apps started to shift important bits to the bottom. A good example is Uber. For them, the search bar is one of the most important items on the screen. In the old design, it was positioned at the top. Now, they’ve shifted it to the bottom. Could we be on to something here?
The old and new Uber search bar design (Large preview)

Shifting important navigation items to the bottom is not a new thing in mobile app design. It’s just that — for some reason — the web industry has not caught up with this just yet.

Summary

The facts are quite clear: Phones are getting bigger, and some parts of the screen are easier to interact with than others. Having the hamburger menu at the top provides too big of an interaction cost, and we have a large number of amazing mobile app designs that utilize the bottom part of the screen. Maybe it’s time for the web design world to start using these ideas on websites as well?

I understand that all of this is not a foolproof solution for all use cases, but it’s worth a shot. It helps make the experience just a tad bit better. I’m interested in hearing your thoughts below!

Useful Reading Resources

Further Related Resources

(cc, il)

Beyond The Browser: Getting Started With Serverless WebAssembly

Wed, 08/28/2019 - 04:30
Beyond The Browser: Getting Started With Serverless WebAssembly Robert Aboukhalil 2019-08-28T13:30:59+02:00

Now that WebAssembly is supported by all major browsers and available to more than 85% of users worldwide, JavaScript is no longer the only browser language in town. If you haven’t heard, WebAssembly is a new low-level language that runs in the browser. It’s also a compilation target, which means you can compile existing programs written in languages such as C, C++, and Rust into WebAssembly, and run those programs in the browser. So far, WebAssembly has been used to port all sorts of applications to the web, including desktop applications, command-line tools, games and data science tools.

Note: For an in-depth case study of how WebAssembly can be used inside the browser to speed up web applications, check out my previous article.

WebAssembly Outside The Web?

Although most WebAssembly applications today are browser-centric, WebAssembly itself wasn’t originally designed just for the web, but really for any sandboxed environment. In fact, there’s recently been a lot of interest in exploring how WebAssembly could be useful outside the browser, as a general approach for running binaries on any OS or computer architecture, so long as there is a WebAssembly runtime that supports that system. In this article, we’ll look at how WebAssembly can be run outside the browser, in a serverless/Function-as-a-Service (FaaS) fashion.

WebAssembly For Serverless Applications

In a nutshell, serverless functions are a computing model where you hand your code to a cloud provider, and let them execute and manage scaling that code for you. For example, you can ask for your serverless function to be executed anytime you call an API endpoint, or to be driven by events, such as when a file is uploaded to your cloud bucket. While the term “serverless” may seem like a misnomer since servers are clearly involved somewhere along the way, it is serverless from our point of view since we don’t need to worry about how to manage, deploy or scale those servers.
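To make that concrete, here is a minimal sketch of a serverless function using Cloudflare Workers’ JavaScript API (a provider we’ll come back to below; the response body is just a placeholder):

// A Worker responds to HTTP requests via the fetch event.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const payload = await request.text(); // read the uploaded data
  return new Response(JSON.stringify({ ok: true, bytes: payload.length }), {
    headers: { 'Content-Type': 'application/json' }
  });
}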

Although these functions are usually written in languages like Python and JavaScript (Node.js), there are a number of reasons you might choose to use WebAssembly instead:

  1. Faster Initialization Times
    Serverless providers that support WebAssembly (including Cloudflare and Fastly) report that they can launch functions at least an order of magnitude faster than most cloud providers can with other languages. They achieve this by running tens of thousands of WebAssembly modules in the same process, which is possible because the sandboxed nature of WebAssembly makes for a more efficient way of obtaining the isolation that containers are traditionally used for.
  2. No Rewrites Needed
    One of the main appeals of WebAssembly in the browser is the ability to port existing code to the web without having to rewrite everything to JavaScript. This benefit still holds true in the serverless use case because cloud providers limit which languages you can write your serverless functions in. Typically, they will support Python, Node.js, and maybe a few others, but certainly not C, C++, or Rust. By supporting WebAssembly, serverless providers can indirectly support a lot more languages.
  3. More Lightweight
    When running WebAssembly in the browser, we’re relying on the end user’s computer to perform our computations. If those computations are too intensive, our users won’t be happy when their computer fan starts whirring. Running WebAssembly outside the browser gives us the speed and portability benefits of WebAssembly, while also keeping our application lightweight. On top of that, since we’re running our WebAssembly code in a more predictable environment, we can potentially perform more intensive computations.
A Concrete Example

In my previous article here on Smashing Magazine, we discussed how we sped up a web application by replacing slow JavaScript calculations with C code compiled to WebAssembly. The web app in question was fastq.bio, a tool for previewing the quality of DNA sequencing data.

As a concrete example, let’s rewrite fastq.bio as an application that makes use of serverless WebAssembly instead of running the WebAssembly inside the browser. For this article, we’ll use Cloudflare Workers, a serverless provider that supports WebAssembly and is built on top of the V8 browser engine. Another cloud provider, Fastly, is working on a similar offering, but based on their Lucet runtime.

First, let’s write some Rust code to analyze the data quality of DNA sequencing data. For convenience, we can leverage the Rust-Bio bioinformatics library to handle parsing the input data, and the wasm-bindgen library to help us compile our Rust code to WebAssembly.

Here’s a snippet of the code that reads in DNA sequencing data and outputs a JSON with a summary of quality metrics:

// Import packages
extern crate wasm_bindgen;
use bio::seq_analysis::gc;
use bio::io::fastq;
...

// This "wasm_bindgen" tag lets us denote the functions
// we want to expose in our WebAssembly module
#[wasm_bindgen]
pub fn fastq_metrics(seq: String) -> String {
    ...
    // Loop through lines in the file
    let reader = fastq::Reader::new(seq.as_bytes());
    for result in reader.records() {
        let record = result.unwrap();
        let sequence = record.seq();

        // Calculate simple statistics on each record
        n_reads += 1.0;
        let read_length = sequence.len();
        let read_gc = gc::gc_content(sequence);

        // We want to draw histograms of these values
        // so we store their values for later plotting
        hist_gc.push(read_gc * 100.0);
        hist_len.push(read_length);
        ...
    }

    // Return statistics as a JSON blob
    json!({
        "n": n_reads,
        "hist": {
            "gc": hist_gc,
            "len": hist_len
        },
        ...
    }).to_string()
}

We then used Cloudflare’s wrangler command-line tool to do the heavy lifting of compiling to WebAssembly and deploying to the cloud. Once done, we are given an API endpoint that takes sequencing data as input and returns a JSON with data quality metrics. We can now integrate that API into our application.
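On the front end, integrating the endpoint is then a plain fetch call. Here is a sketch, where the URL and the plotting helper are hypothetical:

// POST a chunk of sequencing data to the serverless function
// and update the plots with the returned metrics.
async function analyzeChunk(chunk) {
  const response = await fetch('https://fastq-metrics.example.workers.dev', {
    method: 'POST',
    body: chunk
  });
  const metrics = await response.json();
  updatePlots(metrics); // hypothetical plotting helper
}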

Here’s a GIF of the application in action:

Instead of running the analysis directly in the browser, the serverless version of our application makes several POST requests in parallel to our serverless function (see right sidebar), and updates the plots each time it returns more data. (Large preview)

The full code is available on GitHub (open-source).

Putting It All In Context

To put the serverless WebAssembly approach in context, let’s consider four main ways in which we can build data processing web applications (i.e. web apps where we perform analysis on data provided by the user):

Four different architectural choices that we can take for apps that process data. (Large preview)

As shown above, the data processing can be done in several places:

  1. Server-Side
    This is the approach taken by most web applications, where API calls made in the front-end launch data processing on the back-end.
  2. Client-Side JavaScript
    In this approach, the data processing code is written in JavaScript and runs in the browser. The downside is that your performance will take a hit, and if your original code wasn’t in JavaScript, you’ll need to rewrite it from scratch!
  3. Client-Side WebAssembly
    This involves compiling data analysis code to WebAssembly and running it in the browser. If the analysis code was written in languages like C, C++ or Rust (as is often the case in my field of genomics), this obviates the need to rewrite complex algorithms in JavaScript. It also provides the potential for speeding up our application (e.g. as discussed in a previous article).
  4. Serverless WebAssembly
    This involves running the compiled WebAssembly on the cloud, using a FaaS kind of model (e.g. this article).

So why would you choose the serverless approach over the others? For one thing, compared to the first approach, it has the benefits that come with using WebAssembly, especially the ability to port existing code without having to rewrite it to JavaScript. Compared to the third approach, serverless WebAssembly also means our app is more lightweight since we don’t use the user’s resources for number crunching. In particular, if the computations are fairly involved or if the data is already in the cloud, this approach makes more sense.

On the other hand, however, the app now needs to make network connections, so the application will likely be slower. Furthermore, depending on the scale of the computation and whether it is amenable to be broken down into smaller analysis pieces, this approach might not be suitable due to limitations imposed by serverless cloud providers on runtime, CPU, and RAM utilization.

Conclusion

As we saw, it is now possible to run WebAssembly code in a serverless fashion and reap the benefits of both WebAssembly (portability and speed) and function-as-a-service architectures (auto-scaling and pay-per-use pricing). Certain types of applications — such as data analysis and image processing, to name a few — can greatly benefit from such an approach. Though the runtime suffers because of the additional round-trips to the network, this approach does allow us to process more data at a time and not put a drain on users’ resources.

(rb, dm, yk, il)

Figma Tips To Kick-Start Your Design Workflow

Tue, 08/27/2019 - 04:30
Figma Tips To Kick-Start Your Design Workflow Philippe Hong 2019-08-27T13:30:59+02:00

I made the switch to Figma almost two years ago, and I have no regrets so far. In one of my previous blog posts on the topic, I made an in-depth review of Figma, and I’m glad I could help other designers make the switch. After two years of working with this tool, I’ve become really familiar with it, and now I’d like to share with you twenty tips that I use every day and which help me work a bit faster and be more effective.

Note About Shortcuts

Most shortcuts are written for both Windows and Mac, where the Ctrl key on Windows corresponds to the Cmd key on the Mac, and Alt is used for both Alt (Windows) and Option/Alt (Mac).

For example, Ctrl/Cmd + Alt + C is Ctrl + Alt + C on Windows and Cmd + Alt/Option + C on the Mac.

Note: This article is for designers who want to try Figma or already are exploring some of its features. To get the most from the article, some experience with Figma Design would be nice to have, but not required.

1. How To Import Multiple Images At The Same Time

We use pictures and images in our designs all the time, and it would be very useful if we could make the process of changing single and multiple images easier and more straightforward.

In Figma, you have the ability to import multiple images (using the shortcut Ctrl/Cmd + Shift + K) and then place them one by one in the layers (objects) in which you want them to appear. This is quite handy because you can see the images being imported and then placed in real time.

A quick look at how to import multiple images in Figma (Large preview)

2. Better Renaming Options By Using The Layers Batch Rename Feature

Sometimes (and I really mean many times!), we need to rename a group of layers when we need to prepare our design for export (export as icons, or as a set of images), or just when we need to perform a “deep clean” process inside a design file.

In Figma, you have the ability to batch rename layers (and frames) which is a really handy feature. You can rename the entire layer name or just a portion of it. You can also find and rename a specific character in a layer name, and you can add a different number to each layer that will be later exported as a separate file. You can also do a search and replace by just typing in the “Match” field.

I find this feature extremely useful.

A quick look at how to batch rename layers in Figma (Large preview)

Note about layers: If you’re relatively new to Figma, the following Figma help page will shed some light on layers, frames, objects, groups of objects, and more.

3. Using An Emoji In The Frame Name To Display Its Current Work State

Since we started using Figma in our design team, our workflow has been more collaborative, as we usually work on the same design files, and sometimes we even work on them simultaneously.

To know which Frame or screen is still work in progress, and which one is ready (final variant completed), we add an emoji (Windows shortcut: Win key + . or Win key + ; / Mac shortcut: Cmd + Ctrl + space) before the frame name so everyone can see at a glance the frame’s current state.

An example of current state emoji I use in my projects (Large preview)

4. Re-Organizing Items

One of Figma’s great features is the ability to re-organize items inside a Frame. It’s very handy when used on icons, lists or tabs as shown below:

A quick look at how to re-organize your items in Figma (Large preview)

Use Proper Naming To Organize Your Styles (Texts, Colors, Effects)

Local Styles is one of the best features in Figma. It allows you to create a design system or guideline for all components so you can reuse them easily. And if you change the Master Style, it changes all the components linked to it. Super powerful! However, you can get lost with all your styles if you don’t name and categorize them the right way. I’ll share with you how I structured my styles in Figma — read on!

5. Text Styles Naming

You can organize your text styles into subcategories by adding a “/”. For example, I would add “Heading” and a “/” so that all my headings live inside the “Heading” category. Sounds fancy, but it’s easier to navigate when you have a lot of different font sizes. This works for text styles and also for colors.

My List of Text styles naming convention (Large preview)

6. Adding A Description For Each Style As A Guide

It can be handy to know where to use different components, so add a quick description of how and where to use each style, especially when you have a team of designers. You can add a description when editing the text style, color style or any component.

How to add a description for each style (Large preview)

7. How To Switch Instance From The Sidebar

A lot of times, we end up with a lot of components, icons, etc., so the dropdown menu for switching instances is probably not the best way to do this. The little trick is that you can drag the component from the sidebar onto the component you want to switch by holding Alt + Ctrl/Cmd. Easier and faster!

How to switch instance from the sidebar (Large preview)

8. How To Copy/Paste All Properties

When duplicating an element or when I just want to copy the style of an element, I can quickly copy the element’s properties (Ctrl/Cmd + Alt + C) and paste them (Ctrl/Cmd + V) on a new element. It’s quite handy for images and styling elements with multiple properties, e.g., fill and stroke, etc.

A quick view of the copy/paste property feature (Large preview)

9. How To Copy/Paste A Single Property

Another shortcut that I found very useful is the ability to copy a single property — and you can select which property to copy! Select the property from the right panel (like shown in the video) and with a simple Ctrl/Cmd + C and then Ctrl/Cmd + V paste it on another object. I found this to be very useful for images.

You can select a single property to copy, as shown in this video. (Large preview)

10. Search For Elements With The Same Properties, Instance, Style, And So On

When you have a complex design file, or you just want to tidy up your design system, it’s quite handy to be able to search for elements with the same property (a specific color, for example), and then change the color to a Color Style. Super-useful after you’re settled with the design system and need to better organize all the components!

The ‘Same Properties’ menu in Figma helps you to select all. (Large preview)

11. Use The Scale Tool To Resize Objects And Their Properties

I found it useful to be able to scale an element and its properties (stroke, effects applied to the object, etc.) all at the same time with the Scale tool (K). I found Figma a bit easier than Sketch in this regard, as you don’t have to select the size of the object. When you scale the object, both the object’s dimensions and its properties will resize proportionally. And by holding Shift, you’ll also keep the ratio while expanding or downsizing the object.

Note: If you need to change the size of an object without changing its properties (stroke, effects, etc.), use the Select tool to select the object, then resize it by using the Properties panel. If you use the Scale tool and resize the object, then both the object’s size and its properties will resize.

The difference between the normal resize and the scale tool (Large preview)

12. Resize A Frame Without Resizing The Layers Inside It

When designing for different screen resolutions, you want to be able to resize the screen frame without having to resize all the elements inside the frame. In order to do that, hold Ctrl/Cmd while you perform the resize operation. Magic!

A quick view of resizing a frame without resizing the layers inside it (Large preview)

13. Create Graphs/Arcs In Seconds

With Figma, you can create graphs/arcs in literally seconds! No more cutting paths on a circle to create a custom graph. Here’s how you create a loading arc — and all those values can be precisely controlled from the Properties panel on the right.

A quick look at how to create graphs in seconds (Large preview)

14. Change Spacing On The Go

I love Figma’s feature that allows you to change the spacing for a group of elements. It makes it super easy to lay out a group of elements around your screen. I use this feature mostly for multiple elements, but for single elements as well.

A quick look at how to change the spacing between objects (Large preview)

15. Component Keywords For Easy Search

When you start to have lots of components, it sometimes becomes difficult to find a specific component in your library. That’s when component keywords come in handy. You can add keywords to any component, so even though the component’s name is different, the keywords will allow you to find it more easily. You’ll find an example below:

Add keywords in components for easy searching (Large preview)

16. Restore An Earlier Version Of A Design File Or Share The Link To An Earlier Version

I love being able to go back to a previous version of the file I am currently working on.

No matter the reason (you made a mistake, or a client asks you to switch to an earlier version, etc.), it is really handy to be able to go back in time to a previous version. And not only that, but Figma also lets you copy the link to the previous version so you don’t have to delete the most recent version of the file. Smart!

Going back in time in your version history (Large preview)

17. UI Kit Libraries To Kick-Start Your Projects

I often use the UI kit libraries to kick-start my projects. For example, I use the Wireframy Kit whenever I need to design some wireframes. I just need to activate the library, and I’m ready to go! I also often use Bootstrap Grid and Figma Redlines. (There’s a ton of free assets available — check them out and pick the ones you need.)

The “Wireframy” UI kit, one of the kits I use. (Large preview)

18. Use GIFs In Prototypes

Figma just added the ability to include GIF files in your prototypes, which makes it possible to show user interaction animations within them. Here’s a preview of it from Aris Acoba:



Creating A Shopping Cart With HTML5 Web Storage

Mon, 08/26/2019 - 05:30
Creating A Shopping Cart With HTML5 Web Storage Matt Zand 2019-08-26T14:30:59+02:00

With the advent of HTML5, many sites were able to replace JavaScript plugins and code with simpler, more efficient HTML features such as audio, video, and geolocation. HTML5 tags made the job of developers much easier while enhancing page load time and site performance. In particular, HTML5 web storage was a game changer, as it allows users’ browsers to store user data without using a server. The creation of web storage allowed front-end developers to accomplish more on their websites without knowing or using server-side coding or databases.

Online e-commerce websites predominantly use server-side languages such as PHP to store users’ data and pass them from one page to another. Using JavaScript back-end frameworks such as Node.js, we can achieve the same goal. However, in this tutorial, we’ll show you step by step how to build a shopping cart with HTML5 and some minor JavaScript code. Other uses of the techniques in this tutorial would be to store user preferences, the user’s favorite content, wish lists, and user settings like name and password on websites and native mobile apps without using a database.

Many high-traffic websites rely on complex techniques such as server clustering, DNS load balancers, client-side and server-side caching, distributed databases, and microservices to optimize performance and availability. Indeed, the major challenge for dynamic websites is to fetch data from a database and use a server-side language such as PHP to process them. However, remote database storage should be used only for essential website content, such as articles and user credentials. Features such as user preferences can be stored in the user’s browser, similar to cookies. Likewise, when you build a native mobile app, you can use HTML5 web storage in conjunction with a local database to increase the speed of your app. Thus, as front-end developers, we need to explore ways in which we can exploit the power of HTML5 web storage in our applications in the early stages of development.

I have been a part of a team developing a large-scale social website, and we used HTML5 web storage heavily. For instance, when a user logs in, we store the hashed user ID in an HTML5 session and use it to authenticate the user on protected pages. We also use this feature to store all new push notifications — such as new chat messages, website messages, and new feeds — and pass them from one page to another. When a social website gets high traffic, total reliance on the server for load balancing might not work, so you have to identify tasks and data that can be handled by the user’s browser instead of your servers.

Project Background

A shopping cart allows a website’s visitor to view product pages and add items to their basket. The visitor can review all of their items and update their basket (such as to add or remove items). To achieve this, the website needs to store the visitor’s data and pass them from one page to another, until the visitor goes to the checkout page and makes a purchase. Storing data can be done via a server-side language or a client-side one. With a server-side language, the server bears the weight of the data storage, whereas with a client-side language, the visitor’s computer (desktop, tablet or smartphone) stores and processes the data. Each approach has its pros and cons. In this tutorial, we’ll focus on a simple client-side approach, based on HTML5 and JavaScript.

Note: In order to be able to follow this tutorial, basic knowledge of HTML5, CSS and JavaScript is required.

Project Files

Click here to download the project’s source files. You can see a live demo, too.

Overview Of HTML5 Web Storage

HTML5 web storage allows web applications to store values locally in the browser that can survive the browser session, just like cookies. Unlike cookies, which need to be sent with every HTTP request, web storage data is never transferred to the server; thus, web storage outperforms cookies in web performance. Furthermore, cookies allow you to store only 4 KB of data per domain, whereas web storage allows at least 5 MB per domain. Web storage works like a simple array, mapping keys to values, and it comes in two types:

  • Session storage
    This stores data in one browser session, where it becomes available until the browser or browser tab is closed. Popup windows opened from the same window can see session storage, and so can iframes inside the same window. However, multiple windows from the same origin (URL) cannot see each other’s session storage.
  • Local storage
    This stores data in the web browser with no expiration date. The data is available to all windows with the same origin (domain), even when the browser or browser tabs are reopened or closed.

Both storage types are currently supported in all major web browsers. Keep in mind that you cannot pass storage data from one browser to another, even if both browsers are visiting the same domain.

Build A Basic Shopping Cart

To build our shopping cart, we first create an HTML page with a simple cart to show items, and a simple form to add or edit the basket. Then, we add HTML web storage to it, followed by the JavaScript coding. Although we are using HTML5 local storage here, all steps are identical to those for HTML5 session storage and can be applied to it as well. Lastly, we’ll go over some jQuery code, as an alternative to JavaScript code, for those interested in using jQuery.

Add HTML5 Local Storage To Shopping Cart

Our HTML page is a basic page, with tags for external JavaScript and CSS referenced in the head.

<!DOCTYPE HTML>
<html lang="en-US">
<head>
  <title>HTML5 Local Storage Project</title>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <meta name="rating" content="General" />
  <meta name="expires" content="never" />
  <meta name="language" content="English, EN" />
  <meta name="description" content="shopping cart project with HTML5 and JavaScript">
  <meta name="keywords" content="HTML5,CSS,JavaScript, html5 session storage, html5 local storage">
  <meta name="author" content="dcwebmakers.com">
  <script src="Storage.js"></script>
  <link rel="stylesheet" href="StorageStyle.css">
</head>

Below is the HTML content for the page’s body:

<form name="ShoppingList">
  <div id="main">
    <table>
      <tr>
        <td><b>Item:</b> <input type="text" name="name"></td>
        <td><b>Quantity:</b> <input type="text" name="data"></td>
      </tr>
      <tr>
        <td>
          <input type="button" value="Save" onclick="SaveItem()">
          <input type="button" value="Update" onclick="ModifyItem()">
          <input type="button" value="Delete" onclick="RemoveItem()">
        </td>
      </tr>
    </table>
  </div>
  <div id="items_table">
    <h3>Shopping List</h3>
    <table id="list"></table>
    <p>
      <label><input type="button" value="Clear" onclick="ClearAll()"> <i>* Delete all items</i></label>
    </p>
  </div>
</form>

Adding JavaScript To The Shopping Cart

We’ll create and call the JavaScript function doShowAll() in the onload() event to check for browser support and to dynamically create the table that shows the storage name-value pair.

<body onload="doShowAll()">

Alternatively, you can use the JavaScript onload event by adding this to the JavaScript code:

window.onload = doShowAll;

Or use this for jQuery:

$(window).load(function() {
  doShowAll();
});

In the CheckBrowser() function, we would like to check whether the browser supports HTML5 storage. Note that this step might not be required because most modern web browsers support it.

/* =====> Checking browser support.
   This step might not be required because most modern browsers do support HTML5. */
// The function below might be redundant.
function CheckBrowser() {
  if ('localStorage' in window && window['localStorage'] !== null) {
    // We can use the localStorage object to store data.
    return true;
  } else {
    return false;
  }
}

Inside doShowAll(), the CheckBrowser() function first evaluates browser support, and the table for the shopping list is then dynamically created during page load. You can iterate over the keys (property names) of the key-value pairs stored in local storage inside a JavaScript loop, as shown below. Based on the stored values, this method populates the table dynamically to show the key-value pairs stored in local storage.

// Dynamically populate the table with shopping list items.
// The step below can be done via PHP and AJAX, too.
function doShowAll() {
  if (CheckBrowser()) {
    var key = "";
    var list = "<tr><th>Item</th><th>Value</th></tr>\n";
    var i = 0;
    // For a more advanced feature, you can set a cap on the max items in the cart.
    for (i = 0; i <= localStorage.length - 1; i++) {
      key = localStorage.key(i);
      list += "<tr><td>" + key + "</td>\n<td>"
            + localStorage.getItem(key) + "</td></tr>\n";
    }
    // If no item exists in the cart.
    if (list == "<tr><th>Item</th><th>Value</th></tr>\n") {
      list += "<tr><td><i>empty</i></td>\n<td><i>empty</i></td></tr>\n";
    }
    // Bind the data to the HTML table.
    // You can use jQuery, too.
    document.getElementById('list').innerHTML = list;
  } else {
    alert('Cannot save shopping list as your browser does not support HTML 5');
  }
}

Note: Either you or your framework will have a preferred method of creating new DOM nodes. To keep things clear and focused, our example uses .innerHTML even though we’d normally avoid that in production code.

Tip: If you’d like to use jQuery to bind data, you can just replace document.getElementById('list').innerHTML = list; with $('#list').html(list);.

Run And Test The Shopping Cart

In the previous two sections, we added code to the HTML head, and we added HTML to the shopping cart form and basket. We also created a JavaScript function to check for browser support and to populate the basket with the items in the cart. In populating the basket items, the JavaScript fetches values from HTML web storage, instead of a database. In this part, we’ll show you how the data are inserted into the HTML5 storage engine. That is, we’ll use HTML5 local storage in conjunction with JavaScript to insert new items to the shopping cart, as well as edit an existing item in the cart.

Note: I’ve added tips sections below to show jQuery code, as an alternative to the JavaScript ones.

We’ll create a separate HTML div element to capture user input and submission. We’ll attach the corresponding JavaScript function in the onClick event for the buttons.

<input type="button" value="Save" onclick="SaveItem()">
<input type="button" value="Update" onclick="ModifyItem()">
<input type="button" value="Delete" onclick="RemoveItem()">

You can set properties on the localStorage object similar to a normal JavaScript object. Here is an example of how we can set the local storage property myProperty to the value myValue:

localStorage.myProperty="myValue";

You can delete a local storage property like this:

delete localStorage.myProperty;

Alternately, you can use the following methods to access local storage:

localStorage.setItem('propertyName', 'value');
localStorage.getItem('propertyName');
localStorage.removeItem('propertyName');

To save the key-value pair, get the value of the corresponding JavaScript object and call the setItem method:

function SaveItem() {
  var name = document.forms.ShoppingList.name.value;
  var data = document.forms.ShoppingList.data.value;
  localStorage.setItem(name, data);
  doShowAll();
}

Below is the jQuery alternative for the SaveItem function. First, add an ID to the form inputs:

<td><b>Item:</b> <input type="text" name="name" id="name"></td>
<td><b>Quantity:</b> <input type="text" name="data" id="data"></td>

Then, select the form inputs by ID, and get their values. As you can see below, it is much simpler than JavaScript:

function SaveItem() {
  var name = $("#name").val();
  var data = $("#data").val();
  localStorage.setItem(name, data);
  doShowAll();
}

To update an item in the shopping cart, you have to first check whether that item’s key already exists in local storage, and then update its value, as shown below:

// Change an existing key-value pair in HTML5 storage.
function ModifyItem() {
  var name1 = document.forms.ShoppingList.name.value;
  var data1 = document.forms.ShoppingList.data.value;
  // Check whether the key already exists.
  if (localStorage.getItem(name1) != null) {
    // Update the value.
    localStorage.setItem(name1, data1);
    document.forms.ShoppingList.data.value = localStorage.getItem(name1);
  }
  doShowAll();
}

Below is the jQuery alternative.

function ModifyItem() {
  var name1 = $("#name").val();
  var data1 = $("#data").val();
  // Check whether the key already exists.
  if (localStorage.getItem(name1) != null) {
    // Update the value.
    localStorage.setItem(name1, data1);
    var new_info = localStorage.getItem(name1);
    $("#data").val(new_info);
  }
  doShowAll();
}

We will use the removeItem method to delete an item from storage.

function RemoveItem() {
  var name = document.forms.ShoppingList.name.value;
  localStorage.removeItem(name);
  // removeItem() returns undefined, so clear the quantity field explicitly.
  document.forms.ShoppingList.data.value = "";
  doShowAll();
}

Tip: Similar to the previous two functions, you can use jQuery selectors in the RemoveItem function.

There is another method for local storage that allows you to clear the entire local storage. We call the ClearAll() function in the onClick event of the “Clear” button:

<input type="button" value="Clear" onclick="ClearAll()">

We use the clear method to clear the local storage, as shown below:

function ClearAll() {
  localStorage.clear();
  doShowAll();
}

Session Storage

The sessionStorage object works in the same way as localStorage. You can replace the above example with the sessionStorage object to expire the data after one session: once the user closes the browser window, the storage will be cleared. In short, the APIs for localStorage and sessionStorage are identical, allowing you to use the following methods and properties (see the short sketch after this list):

  • setItem(key, value)
  • getItem(key)
  • removeItem(key)
  • clear()
  • key(index)
  • length
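Here is the short sketch mentioned above: the same cart calls with sessionStorage swapped in (the item name and quantity are just placeholders):

// Survives page reloads, but is cleared when the tab or browser is closed.
sessionStorage.setItem('milk', '2');
console.log(sessionStorage.getItem('milk')); // "2"
sessionStorage.removeItem('milk');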
Shopping Carts With Arrays And Objects

Because HTML5 web storage only supports single name-value storage, you have to use JSON or another method to convert your arrays or objects into a single string. You might need an array or object if you have a category and subcategories of items, or if you have a shopping cart with multiple data, like customer info, item info, etc. You just need to implode your array or object items into a string to store them in web storage, and then explode (or split) them back to an array to show them on another page. Let’s go through a basic example of a shopping cart that has three sets of info: customer info, item info and custom mailing address. First, we use JSON.stringify to convert the object into a string. Then, we use JSON.parse to reverse it back.

Hint: Keep in mind that the key-name should be unique for each domain.

// Customer info
// You can use an array in addition to an object.
var obj1 = { firstname: "John", lastname: "thomson" };
var cust = JSON.stringify(obj1);

// Mailing info
var obj2 = { state: "VA", city: "Arlington" };
var mail = JSON.stringify(obj2);

// Item info
var obj3 = { item: "milk", quantity: 2 };
var basket = JSON.stringify(obj3);

// Next, push the three strings into key-value pairs of HTML5 storage.
// Use the JSON parse function below to convert the string back into an object or array.
var New_cust = JSON.parse(cust);

Summary

In this tutorial, we have learned how to build a shopping cart step by step using HTML5 web storage and JavaScript. We’ve seen how to use jQuery as an alternative to JavaScript. We’ve also discussed JSON functions like stringify and parse to handle arrays and objects in a shopping cart. You can build on this tutorial by adding more features, like adding product categories and subcategories while storing data in a JavaScript multi-dimensional array. Moreover, you can replace the whole JavaScript code with jQuery.

We’ve seen what other things developers can accomplish with HTML5 web storage and what other features they can add to their websites. For example, you can use this tutorial to store user preferences, favorited content, wish lists, and user settings like names and passwords on websites and native mobile apps, without using a database.

To conclude, here are a few issues to consider when implementing HTML5 web storage:

  • Some users might have privacy settings that prevent the browser from storing data.
  • Some users might use their browser in incognito mode.
  • Be aware of a few security issues, like DNS spoofing attacks, cross-directory attacks, and sensitive data compromise.
Related Reading

(dm, il)