Search Engine Optimization Checklist (PDF)

Original Source:

Search engine optimization (SEO) is an essential part of a website’s design, and one all too often overlooked. The most beautiful, spectacular site in the world won’t do anyone much good if people can’t find it on Google (or Bing, or DuckDuckGo).

Implementing SEO best practice doesn’t just give you the best chance possible of ranking well in search engines; it makes your websites better by scrutinizing quality, design, accessibility, and speed, among other things. It’s a daunting world for those who aren’t familiar with it (and even those who are at times), so this checklist breaks down key factors to consider when undertaking an audit.

For an overview of the SEO community — publications, thought leaders, podcasts, documentation, forums, things like that — I humbly point you towards the Smashing Guide To The World of Search Engine Optimization.

If you’re ready to get stuck in, read on.

Table Of Contents

Get Ready: A Healthy Mindset
Setting Realistic Goals
Defining The Environment
Testing And Monitoring
Quick Wins

Note: You can also just download the checklist (PDF, 158 KB). Happy optimizing, everyone!

Get Ready: A Healthy Mindset

Establishing A Shared SEO Culture
Done properly, SEO is not something you implement once and then walk away from, never to think about again. It’s something that ought to be carefully maintained over years. One of the reasons audits can feel so overwhelming is that long-neglected SEO piles up into a big problem. Well-maintained SEO runs like a dream, and is better placed to adapt to the turbulence of algorithm updates. Communicate the value of SEO, and don’t do it by lecturing. Following best practice usually means a better website, more organic traffic, and happier visitors. Win, win, win.
Quality, Not ‘Quality’
There is sometimes more talk about quality than there is commitment to it. Behind all the stats, tools, and quick wins there sits one simple SEO truth: it is your job to make the site as good as it possibly can be. Only then can you hope to be better than all the other sites you’re competing with for search queries. From UX design to copywriting, quality content takes commitment, passion, and time. Be ready to face your site’s limitations and work to improve them — for your sake as much as anyone else’s. Great content is so, so much easier to optimize than bad content is.
A Holistic Approach
Strong SEO is the sum total of a website; it’s not something to saddle one department (or person) with. It can be bolted on to an extent, but that’s never as good as when it’s woven into the site’s DNA. Implementing SEO well means open communication between different members of the team — from SEO execs to writers to developers. Before you even start, understand everyone likely has a role to play.

“Where Does SEO Belong In Your Web Design Process?” by Suzanne Scacca

Join The Community
Guides like this cover as much as they can but there’s no getting away from the fact that SEO is constantly evolving. It is a huge industry, with its own publications, thought leaders, podcasts, video series, and more. Take advantage of those resources, plug into the SEO world. Just following a handful of reputable Twitter accounts and listening to a podcast or two a month will go a long way.

A Smashing Guide To The World Of Search Engine Optimization

Setting Realistic Goals

Prioritizing Metrics
Online metrics are almost limitless. Like, literally. They just won’t stop. Numbers are useful, but if you’re not careful they’ll be the ones calling the shots rather than you. Don’t let KPIs be the tail that wags the dog. Work out what your priorities are, how you can measure progress, and the limitations of the available data. The answers to these questions vary from site to site.

Lighthouse for automated web page audits
Go Auditor

Timeframes
Goals border on meaningless if you don’t have a time frame for them. It doesn’t have to be the be-all and end-all (SEO never stops, after all), but giving yourself a date to work towards gives you a target, and a ready-made opportunity to reassess what you’re doing. Have a schedule and stick to it as best you can. This also means having a plan for tracking and analysing search data. Putting a few minutes aside each week adds up nicely over six months.
Keyword Research
This is absolutely essential to targeted SEO. If you don’t know what keywords you’re hoping to rank for how can you possibly target them? What are popular search terms in your field? What are your competitors ranking for? What is your website already ranking well for and why? With Google Search Console you can see exactly where your pages are (or aren’t) ranking for different keywords.

Google Keyword Planner
Google Trends
Moz Keyword Explorer
Ahrefs Keyword Generator
Keyword Overview by SEMrush

Size Up The Competition
The internet is a big place. Odds are you’re not the only one trying to rank for certain searches. Competition is fierce, and that’s good. It forces websites to improve themselves. Analyse rival websites and note what they’re doing well, as well as what you do or can do better. Remember, search engines just want to connect searchers with the best results for their queries. Being the best means being better than everyone else.

Ahrefs Site Explorer
Topics by SEOmonitor

Involve Colleagues In Setting Goals
SEO is a deceptively big topic that affects all aspects of a site, so it’s only reasonable to involve your colleagues when setting ambitious yet achievable goals. Everyone knows something you don’t, and you might be surprised by how much smoother SEO implementation can be when everyone’s on board with it.

Defining The Environment

Mobile-First Indexing
A lot of SEO revolves around how you organize content, and more than anything else you need to organize content well for mobile devices. More people browse on mobile devices than on desktops. In acknowledgement of this trend, Google went fully mobile-first in early 2020. This means the mobile version of your website is what crawlers look at and index. Fabulous desktop layouts are great, but SEO, like the web, is now a mobile-first world.

Google Mobile-Friendly Test
Bulk Mobile Friendly Test by Experte
Resizer by Material Design (view websites on different devices side by side)

Google’s Monopoly
For better or worse, search is currently monopolised by one company — Google. It continues to dominate the space, handling more than 90% of global mobile searches, and 70% of desktop. There are others of course — Bing, Baidu, DuckDuckGo, and more — but for the time being SEO gravitates around Google. Tick their boxes while keeping an eye on the wider terrain, which isn’t as static as you might think.


Quality Content
That’s right, folks. All the SEO in the world will only get you so far if a website’s content is rubbish. There’s no question that there exist bad websites that perform well, but more and more are weeded out with each update. What does quality content look like? There are countless articles on the topic, but here are a few things to be aiming for — clear, original, properly sourced, well written, accessible, and honest. Search engines (generally) want to connect searchers with high-quality results.

What Is Great Content? by Search Engine Journal
Google’s Quality Rater Guidelines

Meta Titles And Descriptions
Eat your sprouts, dot your i’s and cross your t’s, and use descriptive meta titles. Every web page should have a meta title and meta description. The title should tell people and web crawlers alike what the page is about. Meta descriptions are for readers rather than rankings — search engines may show them as the snippet beneath your result, but they aren’t a ranking factor. Think of them as little blurbs for when that page pops up in search results. Entice the reader.
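As a sketch, a page’s head markup might look like this (the title and description here are purely illustrative):

```html
<head>
  <!-- Meta title: shown in search results and browser tabs -->
  <title>SEO Audit Checklist — Example Site</title>

  <!-- Meta description: the blurb shown beneath the title in results -->
  <meta name="description" content="A practical checklist for auditing your website's SEO, from keyword research to site speed.">
</head>
```

Keep titles unique per page; duplicated titles make it harder for both people and crawlers to tell pages apart.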

Image Alt Text
A depressing number of websites don’t do this properly. It’s so easy, and so helpful. Every image on your website should have alt text describing what the image shows. This helps crawlers understand your visual content, and allows screen readers to describe images to visually impaired visitors. Alt text also improves your chances of appearing in image search results.
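A quick illustrative example (the file names are placeholders):

```html
<!-- Good: describes what the image actually shows -->
<img src="/images/harbor-sunset.jpg"
     alt="Fishing boats moored in a harbor at sunset">

<!-- Bad: generic alt text helps neither crawlers nor screen readers -->
<img src="/images/harbor-sunset.jpg" alt="image1">
```

One nuance: purely decorative images should get an empty alt="" so screen readers skip them rather than announcing a file name.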
Internal Links
A few years back scientists discovered that ancient Roman concrete gets stronger over time. Internal links are a prime example of a similar phenomenon in SEO. When you create a new website, linking to other, relevant parts of your site makes for a solid foundation. Continuing to do it over time makes it even stronger. Not only do internal links make websites easier to browse, but they also provide crucial context for search engine crawlers. Each one makes a site’s SEO that little bit stronger.
External Links
Some SEO types get a bit precious about ‘link juice’, loath to direct people away from their own site. While hoarding visitors might be great for shoving people down funnels, it’s pretty slimy behavior. It’s bad for readers and it’s bad for SEO. If you cite something, link to it. If you quote someone, link to the source. Citing one’s sources is writing 101, and again, it provides context to your own content. It helps search engines to understand the type of website you are, and what sort of company you keep. Scour through your copy and make sure the appropriate external links are there.

Linkbuilding: The Citizen’s Field Guide
How To Help Your Clients Get More Backlinks Through Design
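When you do link out, HTML’s rel attribute lets you tell crawlers how a link should be treated. A small sketch (the URLs are placeholders):

```html
<!-- A straightforward editorial citation: no rel needed -->
<a href="https://example.com/study">the original study</a>

<!-- Paid or untrusted links: tell crawlers not to treat them as endorsements -->
<a href="https://example.com/offer" rel="sponsored">partner offer</a>
<a href="https://example.com/submitted" rel="ugc nofollow">user-submitted link</a>
```

Mislabelling paid links as editorial ones is exactly the kind of thing search engines penalize, so it’s worth getting these attributes right.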

Clear Structure Markup
This is so simple and so, so important. Just like meta titles and descriptions show what a page is about, following best practice for HTML makes page structure clear and easy to understand. Use the right tags in the right places, make sure headings are arranged logically. A great way to do this is to strip away CSS and look at pages in pure HTML. If the structure isn’t obvious there then there’s work still to do. Google’s free Lighthouse assessment is good at spotting problems of this kind.
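As a rough illustration of the kind of logical structure crawlers (and Lighthouse) look for:

```html
<article>
  <h1>SEO Audit Checklist</h1>

  <h2>Setting Realistic Goals</h2>
  <h3>Keyword Research</h3>
  <!-- Headings descend one level at a time; never jump from h1 to h3
       just because an h2 "looks too big" — use CSS for sizing instead -->

  <h2>Testing And Monitoring</h2>
</article>
```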
Structured Data
Semantic markup is becoming increasingly important to SEO, and web design in general. It makes your website’s content machine readable, which in turn makes it easier to crawl, understand, index, and return as sophisticated search results. There are plenty of plugins to help with this, or if you’re feeling daring, the markup is simpler than you might think to add yourself. Schema has emerged as the language of choice for search engines, with Google, Microsoft, Yahoo, and others all collaborating on its development. Our guide on structured data is a good place to start.

Google Rich Result Tester
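As a minimal sketch, marking up an article with JSON-LD (the most common format for Schema markup) might look like the following; the headline, author, and date are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Search Engine Optimization Checklist",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2020-11-01"
}
</script>
```

Run the result through the Rich Result Tester above to confirm search engines can parse it.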

Site Maps
Every website should have a site map. It’s the ultimate reference point for web crawlers on how pages are organized and where to find all the content you want to be found. What would a metro system be without a map? Or a library without clearly marked sections? Take the time to do this properly, as doing so will save you a lot of time in the long run. A badly organized, unmapped website is typically unpleasant for both people and crawlers to browse.
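A minimal XML sitemap, following the standard sitemap protocol (the URLs and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2020-11-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/blog/seo-checklist</loc>
  </url>
</urlset>
```

Most CMSs can generate this automatically; either way, submit it via Google Search Console so crawlers know where to find it.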
Descriptive, Logical URL Structure
This is a little one, but well worth standardising early. Use clear, succinct URL structures that denote both site structure and page content.
For example, a URL like example.com/blog/seo-checklist is infinitely clearer than example.com/?p=83792. One is clear to people and algorithms alike; the other is a random jumble of characters. Take the time to establish formats for different post types, then stick to them.
Multimedia Content
Search engines like to see variety on pages — provided it loads quickly. A blog post with relevant images, audio clips, and an embedded video is likely to be more engaging than a plain text blog post. Never add these things just for the sake of adding them, but don’t be shy about getting creative. This is the internet; you can do just about anything.
Assets Optimization
Whatever media assets you have on-site, for goodness’ sake optimize them. Compressing image files is the most obvious example here, and often overlooked. That 2GB photograph from your family vacation might look sharp as the banner image on your photography portfolio — too bad nobody will stick around long enough for it to load. In a mobile-first world, super-high-resolution images are seldom necessary. Compress your images. Strip unused CSS. Your website has to be quick.

Responsive Image Breakpoint Generator by Cloudinary
Unused CSS Finder by JitBit
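One common optimization is serving appropriately sized images with srcset, so a phone never downloads a desktop-sized file. A sketch, with illustrative paths and widths:

```html
<img src="/images/banner-800.jpg"
     srcset="/images/banner-400.jpg 400w,
             /images/banner-800.jpg 800w,
             /images/banner-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="Snow-capped mountains at dawn">
```

The browser picks the smallest candidate that still looks sharp for the current viewport, which the Cloudinary breakpoint generator above can help you calculate.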


HTTPS
HTTPS (Hypertext Transfer Protocol Secure) improves the connection security between users and a website. Google and other search engines punish websites that don’t have it. Have HTTPS. Most web hosting providers throw it in for free. If yours doesn’t, set it up yourself or change providers.

HTTP/2 Test Tool by Geekflare
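If your host doesn’t redirect HTTP traffic to HTTPS automatically, one common approach on Apache servers is a rewrite rule like the following (a sketch only; the exact mechanism varies by server and host):

```apache
# Permanently redirect all HTTP requests to their HTTPS equivalents
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

The 301 status matters for SEO: it tells crawlers the move is permanent, so ranking signals consolidate on the HTTPS URLs.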

Backlinks
Credibility plays a huge part in SEO, and backlinks are a major indicator of trust. If reputable, relevant sites are linking to your site, that makes you more credible in your field. Doing this properly takes time and dedication. Nobody owes you backlinks — you have to earn them. Earn. Not buy. Black hat approaches to backlinks (spamming comment sections, paying for them, etc.) will get you nowhere. If anything search engines will catch on and punish the offending site.

Backlink Gap by SEMrush

Testing And Monitoring

Site Speed
You can’t really be sure of site speed until it’s live. Run your site through speed testing tools like PageSpeed Insights and GTmetrix. Search engines like fast websites and dislike slow ones. So do people. Keep an eye on this over time. Just because a site was fast six months ago doesn’t mean it’s fast now, since you’ve been uploading uncompressed images again. Tut tut.

PageSpeed Insights
Bulk Mobile Friendly Test by Experte
TTFB (Time to First Byte) Test by Geekflare

Search Analytics
In the long term, SEO is as much about monitoring as it is about on-site changes. There are numerous free tools available for tracking search analytics. Google has Search Console (GDPR friendly) and Analytics (not always so GDPR friendly). Microsoft has Bing Webmaster Tools. Then there are third-party outfits like Moz, SEMrush, and Screaming Frog. As mentioned at the start of this checklist, don’t drown in numbers. Ease yourself in with essential tools and explore from there as your priorities become clearer.
Regular Reporting
Boiling down your SEO performance into regular reports makes progress more manageable. Be it weekly, monthly, or quarterly, these are vital for staying focused on your goals… and achieving them! Keeping tabs on your performance over time means you can nip problems in the bud and make proactive adjustments to your approach.

How to Create Relevant and Engaging SEO Reports by Moz

Quick Wins

Not everyone has the time or resources to go through a full audit of their website’s SEO. That’s ok, and it doesn’t mean you have to fall behind. The following tips are particularly easy to implement, can return quick results, and allow you to keep an eye on your search performance long term.

Basic Analytics
If you’re completely new to SEO there are few better ways to get started than getting basic analytics up and running. By this, we mean Google Search Console and Google Analytics. Both are free and easy to add to a site. Having these up and running will immediately give you a better sense of your SEO situation.

Google Search Console
Google Analytics
Bing Webmaster Tools

Purge Low Quality Content
Producing great content takes time and a lot of work. Deleting rubbish content takes seconds. Your website is the sum total of its pages. If a site has a lot of ‘thin’ content, that’s going to weigh down the good stuff. Go through your existing content and honestly assess whether it’s worthy of the standard you want to live up to. If the answer’s no, maybe you should delete it. Doing this can give your SEO an immediate bounce. Depending on the site, purging low quality content can be like removing a ball and chain.
Optimize Images
A great way to speed up your website is to properly compress your images. If this isn’t something you’ve thought about before, you may be slightly mortified by how big some of the files are. It can be tedious, but it has to be done and is an immediate way to improve site speed. And make sure they’ve all got alt text while you’re at it.

Google’s Measure tool

Please note that this cheat sheet will be updated occasionally, so if you think anything is missing and should be added, feel free to let us know! We’ll consider it for inclusion the next time we update the list.

Are Websites Adding To Consumers’ Health Issues?

Original Source:

Have any of you watched The Social Dilemma yet? For those of you who haven’t seen it, here’s a summary of what it’s about:

People who were instrumental in building the world’s leading social media platforms explain what’s really going on behind the scenes.
Essentially, social media companies are in the business of selling their users to advertisers and partners.
So, the social algorithms are programmed to do whatever’s necessary to gather as much user data as possible.
This often leads to unethical means of grabbing users’ attention and keeping them addicted to scrolling, reading, clicking and so on.

All this has led to an increase in depression, anxiety, lower life satisfaction, distorted realities, compromised relationships and poor health on the part of the consumer.

But let’s be honest. It’s not just social media that sacrifices its users’ wellbeing for its own profitability.

Certain kinds of mobile apps capitalize on users’ addictive tendencies, FOMO and other negative behaviors. But what about websites? Are they responsible, in part, for the deterioration of consumers’ mental and physical wellbeing?

Today, I’m going to show you five ways in which websites are making visitors and customers feel worse and what you can do to help reverse this trend.

Is Your Website Making Its Visitors Feel Sick?

There’s so much toxicity, hate and divisiveness in the world already. The last thing we need is to give people more reason to feel negatively about themselves or towards others.

We are well aware of how dark patterns as well as misuse of visitor data can impact the way people respond to our websites (and later feel about the experience). It’s the whole reason why ethical design is such a critical matter these days.

But what else could your websites be doing that leads users to feel poorly? Let’s have a look:

1. Playing Into Alert Panic with Fake Notifications

Have you ever been watching something on TV or been in a crowded space and heard the all-too-familiar text message chime and reached for your phone?

Of course, you quickly realize the message isn’t for you as the person on the screen or in the crowd does the same thing as you, except they have someone they need to respond to. And you don’t.

We’ve been conditioned to feel disappointed when that notification isn’t for us. Or when it’s not from the person we wanted it to be.

Worse, because we’ve grown so accustomed to that dopamine hit, we’re often overwhelmed with notification alerts — sounds and visual signals — that we’ve activated on nearly every app we use. Facebook. Text. Email. Food delivery apps. Mobile games. Heck, even my meditation app wants to ping me once a day.

Larry Rosen, a psychology professor emeritus at California State University, explains why this is so bad for us:

We’ve trained ourselves, almost like Pavlov’s dogs, to figuratively salivate over what that vibration might mean. If you don’t address the vibrating phone or the beeping text, the signals in your brain that cause anxiety are going to continue to dominate and you’re going to continue feeling uncomfortable until you take care of them.

As a consumer, you’re well aware of the effect that notifications have on people. As web designers, though, what should you do with this information?

Unfortunately, some designers have chosen to add these anxiety-inducing triggers into their websites. Here’s an example from Mobile Monkey:

Mobile Monkey’s chat widget looks like someone is typing. (Source: Mobile Monkey)

There are actually two panic triggers in the chat widget:

The first is the three bouncing dots that look like someone is typing a message. The second is the red “1” that appears on the corner of the widget afterwards, resembling the marker you’d see if you had an unread text or email.

Considering I’ve never had a conversation with the chatbot on this site before, this alert does nothing but confuse and annoy me. I came to the site to read about CRO tools, not get interrupted by a chatbot I don’t need.

Another example of this can actually be found on The Social Dilemma’s website:

The Social Dilemma website uses a notification trigger in the header. (Source: The Social Dilemma)

At first, my thought was, “Hypocrites!” But then I read the entire pop-up and realized it’s actually a brilliant move, as it makes their audience hyper-aware of how hooked they are on notifications.

Here’s what the grey section beneath the email signup form says:

“Notifications like these offer an enticing loop of pleasure that can create an unconscious attachment to our devices.”

This is no different than an actor breaking the fourth wall and looking at the camera to address the audience. While it works for the film’s website — since its whole message is for consumers to break free from this kind of digital dependence — it’s just going to cause harm when used on other sites.

2. Deceiving Customers With Dishonest Photos

Have you ever noticed that social media has become a sort of “second life” for some people?

The most obvious example of this is influencers. They take pictures of their fancy homes, luxurious vacations and expensive clothes. But we’re learning more and more that this isn’t the reality of their day-to-day lives and that the highly staged photos are designed to manipulate fans into buying the products they promote.

But it’s not just influencers who lie on social media. Many of the people we know fall prey to this — only putting out the idealized photos of themselves, their families and their lives.

An article written by Dr. Cortney S. Warren for Psychology Today recaps the results of a number of studies on the correlation between social media and lying:

67% of daters have lied about their weight.
43% of men have made up facts about themselves and/or their lives.
32% of people only shared non-boring aspects of their lives on social media.
14% said they make themselves appear more physically active on social.
Only 18% of men and 19% of women said their Facebook pages were completely accurate.

Warren explains how these lies — while they make the liar feel better about themselves — actually do a lot of harm for everyone exposed to them:

To make matters more complicated, when we internally believe that what we see in social media is true and relevant to us, we are more likely to compare ourselves to it in an internal effort to evaluate ourselves against those around us (e.g., regarding our looks, wealth, significant other, family, etc.). As we do this against the idealized images and unreasonably positive life accounts that tend to permeate social media, we are likely to feel more poorly about ourselves and our lives.

Unfortunately, this is something that brands do, too, when they use inauthentic, idealized and doctored photos on their websites. Take, for instance, the example of McDonald’s. This is how its famous McRib is portrayed on its website:

Have any of you ever gotten a sandwich from McDonald’s or any fast food joint that looked that impeccable? Don’t get me wrong. I eat fast food more often than I’d like to admit. But I don’t lie to myself about what I’m about to find in my takeout bag. And that photo right there is definitely not what I’m expecting.

It’s irresponsible of any business to set such unrealistic expectations from the start. This can happen with all kinds of brands, too. For instance, travel companies that make their properties look fancier than they really are or medical facilities that look well-organized and clean when they’re not.

And what about retail and fashion companies that use super-skinny girls to show off their clothing? Not only do those photos lead to frustration when a customer can’t fit into something they bought, but customers are also likely to blame themselves for being too “fat” or “ugly”, or whatever kind of self-hate they decide to inflict on themselves.

If you can’t be honest in your photos, then what your website sells is a lie. And you have to expect the deception to come at a price.

3. Bombarding Visitors with Addictive Content

Social media platforms and their algorithms are designed to keep users logged in and engaged.

If a user were to slow down while scrolling through their feed, for example, the algorithm would run a calculation to determine what might suck them back in. It could be:

A “Suggested for You” post featuring puppies playing in snow,
A notification that a close friend just posted something for the first time in a while,
An ad for a product the user was looking at on Amazon a few days back.

We’re living in a time of information overload and social media platforms are very good at taking advantage of it. By constantly throwing something new into our field of vision, it becomes harder and harder to pull ourselves away. What’s more, when we’re feeling unmotivated or unproductive, we know exactly where to go to drown ourselves in distractions.

It’s gotten worse during the pandemic, too. As research scientist Mesfin Bekalu explains:

As humans we have a ‘natural’ tendency to pay more attention to negative news.

Addictions specialist Dr. Paul L. Hokemeyer elaborates:

A person who doomscrolls found at some point in the trajectory of their disorder that searching online for information on disturbing events gave them comfort. It gave them a sense of control over their lives and re-engaged their intellect. But while they thought they were being soothed by facts, what they were really doing was hyperactivating their emotional reactivity.

It’s not just scientists and health professionals who are aware of this. Social media algorithms are, too. And because they’re programmed to manipulate users with content that’ll make them want to keep reading and engaging, guess what people’s feeds are full of?

One of the benefits of building a website for brands is to get consumers away from the chatter, distractions and negativity that thrive on social media platforms. That doesn’t mean you’re free to bombard your visitors with content that exploits their addictive tendencies though.

And, yet, it happens. This, for instance, is what I saw when I clicked on a link to an article on the Small Business Trends website:

In just my first second on the site, I saw:

A pop-up reminding me about the pandemic and recession,
An ad for Similar Web sitting on top of the area of the pop-up where I could say “No Thank You”,
A newsletter subscription form on the right,
Ads for Capital One in the header and sidebar.

I see zero content (the title isn’t even fully visible) and I’m overwhelmed with ads — one of which hooks into the anxiety I’m already feeling about the pandemic. I’m sure I’m not the only person who’d feel the same way looking at this site.

It’s not just an overwhelming amount of ads that make visitors feel uneasy or, worse, compel them to explore each of the distractions before actually getting to the content.

For example, there are websites that display promotional videos, but then don’t allow visitors to escape them, as Fast Company does in its sidebar:

Fast Company’s video ad follows readers as they move down the page. (Source: Fast Company)

There’s no sound unless the visitor triggers it, but it doesn’t matter. The fact that the video is glued to the sidebar, auto-plays and shows the captions makes it an inescapable distraction.

Sites that use an endless scroll are another example of brands exploiting consumers’ addictive tendencies. Entrepreneur has an endless scroll that ensures that visitors will find more content to read… if only they keep scrolling and scrolling and scrolling:

Entrepreneur’s internal pages include a never-ending scroll. (Source: Entrepreneur)

Endless scrolling pages are a lot like going to an all-you-can-eat buffet or somewhere that offers “bottomless bowls” or “never-ending refills”. You know your customers are going to gorge themselves. And while they might enjoy it at the time, they’re going to walk away from the experience feeling mighty ill and probably a little ashamed of themselves for throwing away all that time, too.

Another thing this site does that’s worrisome is that it displays tracking banner ads.

You can barely see it in the video above, but the top of the page has a big ad for Flatfile, which is something I’ve been writing about for the last few weeks. So, before I could even focus on the content, I started stressing out about the state of my current projects.

While that exact response isn’t what the ad was meant to elicit, it’s supposed to stir up some type of anxiety or FOMO for a purchase not completed. For consumers that are struggling with a shopping addiction or outlandish debt, your website could realistically become a vehicle that feeds into it.


I know it’s your job to build websites that attract visitors, encourage those visitors to engage with the sites and eventually turn the engagement into conversions.

But if you want to do your part in designing more humane digital experiences, then it’s time to stop exploiting your audience’s vulnerabilities.

You can still take what you know about human psychology and use it to design attractive, friction-free and user-first experiences without manipulation and deceit.

Trust me. With the backlash social media platforms face (like after the Cambridge Analytica scandal), the number of people who quit them every year and now a high profile movie like The Social Dilemma, consumers are waking up. And it’s not just going to be Facebook they abandon when they realize how their thoughts and actions were controlled by a piece of technology and the people who built it.

Smashing Podcast Episode 31 With Eve Porcello: What Is GraphQL?

Original Source:

In this episode, we’re talking about GraphQL. What is it, and how does it solve some common API problems? I spoke with expert Eve Porcello to find out.

Show Notes

Eve on Twitter
Eve’s company Moon Highway
Learning GraphQL from O’Reilly
Discover Your Path Through The GraphQL Wilderness – Eve’s GraphQL workshop launching early 2021

Weekly Update

How To Use MDX Stored In Sanity In A Next.js Website
written by Jason Lengstorf
Building A Conversational N.L.P Enabled Chatbot Using Google’s Dialogflow
written by Nwani Victory
Ethical Considerations In UX Research: The Need For Training And Review
written by Victor Yocco
Making Websites Easier To Talk To
written by Frederick O’Brien
How To Design A Simple UI When You Have A Complex Solution
written by Suzanne Scacca


Drew McLellan: She’s a software engineer, instructor, author, and co-founder of training and curriculum development company, Moon Highway. Her career started writing technical specifications and creating UX designs for web projects. Since starting Moon Highway in 2012, she’s created video content for and LinkedIn Learning, and has co-authored the books Learning React and Learning GraphQL for O’Reilly Media.

Drew: She’s also a frequent conference speaker, and has presented at conferences including React Rally, GraphQL Summit, and OSCON. So we know she’s an expert in GraphQL, but did you know she once taught a polar bear to play chess? My smashing friends, please welcome Eve Porcello.

Drew: Hi Eve, how are you?

Eve Porcello: I’m smashing.

Drew: As I mentioned there, you’re very much an educator in things like JavaScript and React, but I wanted to talk to you today about one of your other specialist areas, GraphQL. Many of us will have heard of GraphQL in some capacity, but might not be completely sure what it is, or what it does, and in particular, what sort of problem it solves in the web stack.

Drew: So set the stage for us, if you will, if I’m a front end developer, where does GraphQL slot into the ecosystem and what function does it perform for me?

Eve: Yeah. GraphQL kind of fits between the front end and the backend. It’s kind of living in the middle between the two and gives a lot of benefits to front end developers and back end developers.

Eve: If you’re a front end developer, you can define all of your front end’s data requirements. So if you have a big list of React components, for example, you could write a query. And that’s going to define all of the fields that you would need to populate the data for that page.

Eve: Now with the backend piece, it’s really powerful, because we can collect a lot of data from a lot of different sources. So we have data in REST APIs, and databases, and all these different places. And GraphQL provides us this nice little orchestration layer to really make sense of the chaos of where all of our data is. So it’s really useful for kind of everybody all over the stack.

Drew: So it’s basically an API based technology, isn’t it? It sits between your front end and your back end and provides some sort of API, is that correct?

Eve: Yeah, that’s exactly right. Exactly.

Drew: I think, over the last decade, the gold standard for APIs has been REST. So if you have a client side app and you need to populate it with data from the backend, you would build a REST API endpoint and you’d query that. So where does that model fall down? And when might we need GraphQL to come in and solve that for us?

Eve: Well, the problem that GraphQL really helps us with, kind of the golden problem, the golden solution, I guess, that GraphQL provides is that with REST we’re over fetching a lot of data. So if I have slash users or slash products, that’s going to give me back all of the data every time I hit that route.

Eve: With GraphQL, we can be a little bit pickier about what data we want. So if I only need four fields from an object that has a hundred, I’m going to be able to really pinpoint those fields and not have to load all of that data into your device, because that’s a lot of extra legwork, for your phone especially.
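Eve’s point about pinpointing fields can be sketched with a hypothetical query (the field names here are illustrative, not from any real API):

```graphql
# Ask for just the four fields the UI needs, even if the
# underlying User object has a hundred available.
query {
  user(id: "123") {
    id
    name
    avatarUrl
    email
  }
}
```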

Drew: I’ve seen and worked with REST APIs in the past that have an optional field where you can pass in a list of the data that you want back, or you can augment what comes back with extra things. And so I guess that’s identifying this problem, isn’t it? That’s saying, you don’t always want the same data back every time. So is it that GraphQL formalizes that approach of allowing the front end to specify what the backend is going to return, in terms of data?

Eve: Yeah, exactly. So your query then becomes how you ask, how you filter, how you grasp for any sort of information from anywhere.

Eve: I also think it’s important to note that we don’t have to tear down all of our REST APIs in order to work with GraphQL really successfully. A lot of the most successful implementations of GraphQL I’ve seen out there, it’s wrappers around REST APIs. And the GraphQL query really gives you a way to think about what data you need. And then, to use our users and products examples again, maybe some of the data comes from REST, some of it comes from a database.

Drew: I guess the familiar scenario is, you might have an endpoint on your website that returns information about a user to display the header. It might give you their username and their avatar. And you call that on every page and populate the data, but then you find somewhere else in your app you need to display their full name.

Drew: So you add that to the endpoint and it starts returning that. And then you do your account management section, and you need like their mailing address. So that gets returned by that endpoint as well.

Drew: And before you know it, that endpoint is returning a whole heavy payload that costs quite a lot on the backend to put together, and obviously a lot to download.

Drew: And that’s been called on every single page just to show an avatar. So I guess that’s the sort of problem that grows over time, that’s so easy to fall into, particularly in big teams, and that GraphQL is on top of. It knows how to solve that, and it’s designed around solving that.

Eve: Exactly. And yeah, I think that whole idea of a GraphQL Schema is kind of less talked about than the query language part of GraphQL. But I really feel like the Schema in particular gives us this nice type system for the API.

Eve: So anybody on the team, managers, front end developers, back end developers, anybody who is really dealing with data can come together, coalesce around what data we actually want to serve up on this API, and then everyone knows what that source of truth is, they can go build their own parts of the app based on that.

Eve: So there’s some tricky Schema management things that come up with that too. But as far as moving from microservices back to monoliths, we’re sort of doing that, but getting all of the benefits we like out of microservices still.
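As a rough illustration of the Schema as that shared source of truth, here is a minimal SDL sketch built around the users-and-products example from the conversation (all type and field names are assumptions made for illustration):

```graphql
type User {
  id: ID!
  name: String!
  products: [Product!]!
}

type Product {
  id: ID!
  title: String!
  price: Float!
}

type Query {
  user(id: ID!): User
  products: [Product!]!
}
```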

Drew: And do I understand correctly that the typical way of setting up a GraphQL system is that you’d have basically one route, which is the endpoint that you send all your queries to so you’re not having to… Often one of the most difficult things is working out what to name, and what the path should be that this particular query should be at. It’s returning users and products, should it be slash users something, or slash product something?

Drew: With GraphQL you just have one endpoint that you just fire your queries to and you get back an appropriate response.

Eve: Exactly. Yeah. It’s a single endpoint. I guess, you still are dealing with problems of naming because you’re naming everything in the Schema. But as far as, I feel like a lot of companies who have made big bets on microservices, everyone’s like, what endpoints do we have? Where are they? How are they documented? And with GraphQL, we have one place, one kind of dictionary to look up anything that we want to find out about how the API works.

Drew: So, I’m very familiar with other query languages, like SQL is an obvious example of a query language that a lot of web developers will know. And the queries in that take the form of almost like a command. It’s a text string, isn’t it, Select this from that, where, whatever. What format do the queries take with GraphQL?

Eve: It’s still a text string, but it doesn’t define where that logic comes from. And a lot of the logic is moved back to the server. So the GraphQL server, the GraphQL API is really responsible for saying, “Go get this data from where it is, filter it based on these parameters.”

Eve: But in the query language, it’s very field oriented. So we just add fields for anything that we want to retrieve. We can put filters on those queries, of course, too. But I think it’s a little less direct about where that information comes from. A lot of the functionality is built into the server.

Drew: So you can mix and match in a query. You can make a request that brings back lots of different types of data in one request. Is that right?

Eve: Yeah, that’s absolutely right. So you could send a query for as many fields as your server would allow, and bring back all sorts of nested data. But that’s really how it works, we connect different types on fields. So I guess we’ll recycle my users and products idea, but the user might have a products field that returns a list of products. All of those are associated with other types as well. So as deeply nested as we want the query to go, we can.
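Recycling the users-and-products example, a nested query might look like this, drilling from a user into the products field, and as deeply as the schema allows (field names assumed for illustration):

```graphql
query {
  user(id: "123") {
    name
    products {
      title
      price
    }
  }
}
```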

Drew: So does that mean to retrieve the data for a typical view in your web application that might have all sorts of things going on, that you can just make one request to the backend and get that all in one go without needing to make different queries to different endpoints, because it’s all just one thing?

Eve: Yeah. That’s exactly the whole goal, is just a single query, define every field that you want, and then return it in one response.


Drew: And the queries are JSON based, is that right?

Eve: The query itself is a text string, but it typically returns JSON data. So if I have the fields, then my JSON response matches exactly, and so it’s really clear what you’re getting when you send that query, because the data response looks exactly the same.
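The shape-mirroring Eve describes can be sketched with a hypothetical query and a matching response (the values are invented for illustration):

```graphql
query {
  user(id: "123") {
    name
    avatarUrl
  }
}

# The JSON response mirrors the query's shape exactly:
# {
#   "data": {
#     "user": {
#       "name": "Eve",
#       "avatarUrl": "/images/eve.png"
#     }
#   }
# }
```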

Drew: A lot of the queries it seems like are for almost like bare objects, like a customer or a product. Is there a way to specify more complex queries where business logic is controlled at the backend? Say I want to get a list of teams for a user, but only where that user is an admin of a team and where the team plan hasn’t expired, and all those sorts of real constraints that we face in everyday web application development. Can that be achieved with GraphQL?

Eve: Absolutely. So that’s the real exciting, powerful thing about GraphQL is, you can move a lot of that logic to the server. If you had a complex query, some really specific type of user that you wanted to get, all you’d need to do in the Schema is say, “Get complicated user”, and then on the server, there would be a function where you could write all of the logic in whatever language you wanted to. JavaScript is kind of the most popular GraphQL implementation language, but you don’t have to use that at all. So Python, Go, C++, whatever you want to use, you can build a GraphQL server with that. But yeah, you can define as complex a query as you’d like to.
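A minimal sketch of that resolver idea in plain JavaScript. The `complicatedUser` field, the in-memory `users` array, and the filtering rules are all hypothetical; a real server would wire resolver functions like these into a GraphQL library such as Apollo Server rather than calling them directly.

```javascript
// Hypothetical in-memory data standing in for a database or REST API.
const users = [
  { id: "1", name: "Ada", isAdmin: true, active: true },
  { id: "2", name: "Grace", isAdmin: false, active: true },
];

// Resolver functions: the server-side logic behind each schema field.
const resolvers = {
  Query: {
    // "Get complicated user": all the business logic lives here on the
    // server, so the client just asks for the field by name.
    complicatedUser(_parent, args) {
      return (
        users.find((u) => u.id === args.id && u.isAdmin && u.active) || null
      );
    },
  },
};

console.log(resolvers.Query.complicatedUser(null, { id: "1" }).name); // Ada
console.log(resolvers.Query.complicatedUser(null, { id: "2" })); // null
```

The client never sees any of this filtering; it only sees that `complicatedUser` either returns a user or nothing.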

Drew: And I guess that enables you to encapsulate a lot of business logic then in new types of objects. Is that fair? You know, you set up a complicated user and then you don’t need to think what a complicated user is, but you can just keep using that complicated user and know that the business logic is implemented on that. Is that right?

Eve: That’s exactly right. So I think this is really nice for front end folks because they can start to prototype based on that. And then the backend team could go implement those functions to make that work. And then there’s kind of this shared understanding for what that type actually is and who they are, and, “What are the fields on that type?” And everything can be handled by wherever in the stack GraphQL is working. And that’s why it’s not really a front end or a back end technology. It’s really kind of both, and neither.

Drew: It sounds like it’s sort of formalizing the API and the relationship between front end and backend, so everybody’s getting a predictable interface that is standardized.

Eve: Exactly.

Drew: Which I guess in organizations where the front end and the backend are owned by different teams, which isn’t at all uncommon, I guess this approach also enables changes to be made, say, on the front end, it might require different data, without needing somebody who works on the backend to make the changes that correspond to that. You’ve still got this almost infinitely customizable API without requiring any work to be done to change it if you need new data.

Eve: Yeah, exactly right.

Drew: So is the GraphQL server responsible for formatting the response, or do you need to do that in your server side logic?

Eve: So the GraphQL server defines two things. It defines the Schema itself that lives on the server, and then it defines the resolver functions. Those are functions that go get the data from wherever it is. So if I have a REST API that I’m wrapping with GraphQL, the resolver would fetch from that API, transform the data however it needed to be, and then return it to the client in that function. You can use any sort of database functions you’d like to on that server as well. So if you have data in a bunch of different places, this is a really nice cohesive spot to put all of that data in and to kind of design all the logic around, “Where’s that data coming? How do we want to transform it?”

Drew: The client says, “I want a complex user”, the server receives that in a query and could say, “Right, I’m going to look up the complex user resolver.” Is that right?

Eve: Mm-hmm (affirmative).

Drew: Which is the function, and then you write your logic that your backend team, or whoever writes the logic inside that function, to do whatever is necessary to return a complex user.

Eve: Yeah, exactly.

Drew: So that could be calling other APIs, it could be querying a database, it could be looking stuff up in cache, or pretty much anything.

Eve: Pretty much anything. And then, as long as that return from the function matches the requirements of the Schema, matches what fields, what types, we’re returning there, then everything will work nice and harmoniously.

Drew: I guess it gives you a consistent response format across your entire API just by default. You don’t have to design what that looks like. You just get a consistent result.

Eve: Yeah, exactly.

Drew: I think that could be quite a win really, because it can be really difficult to maintain consistency across a big range of API end points, especially in larger teams. Different people are working on different things. Unless you have quite strict governance in place, it can get really complex really quickly, can’t it?

Eve: Yeah, absolutely. And I think that Schema is just such a nice little document to describe everything. You get the automatic benefit of being able to see all of the fields in that Schema whenever you’re trying to send queries to it, because you can send introspection queries and there’s all sorts of nice tools for that, like GraphiQL and GraphQL Playground, little tools that you can use to interact with the API’s data.

Eve: But also, if you’ve ever played around with Postman, like to ping a REST API, a lot of those, the documentation doesn’t really exist or it’s tough to find, or things like that. GraphQL really gives you that nice cohesive layer to describe everything that might be part of that API.

Drew: Practically, how do things work on the server side? I mean, I guess you need to run a GraphQL service as part of your infrastructure, but what form does that take? Is it an entire server running on its own port? Or is it just like a library you integrate into your existing Express or Apache or whatever with a route that resolves to that service? How do you implement it?

Eve: Yeah, it’s an actual server. So kind of the most popular GraphQL implementations are Node.js servers. When GraphQL as a spec was released, the team released this reference implementation in JavaScript, kind of a Node server that served as the guidelines for all these other ones who have popped up. But yeah, you can run these servers on their own instances. You can put them on Lambda. So there’s Apollo Server Express, there’s Apollo Server Lambda; all sorts of different types of implementations that you can use to actually run this thing.

Drew: So you mentioned briefly before the concept of a Schema that the server has.

Eve: Yeah.

Drew: That gives you the ability to describe your types more strictly than just, you know, mapping a name to a resolver. There’s more involved there, is there?

Eve: Yeah. There’s a full language. So I’ve referenced the spec and I didn’t describe what it is. GraphQL itself is a spec that describes the query language and the Schema definition language. So it has its own syntax. It has its own rules for defining these types.

Eve: When you’re using the Schema definition language, you basically use all of the features of that language to think about, what are the types that are part of the API? It’s also where you define the queries, and the mutations, which are the verbs, like the actions: create account, login, things like that. And even GraphQL subscriptions, which are another cool thing, real time GraphQL that you can define right there in the Schema.

Eve: So yeah, the Schema really is super important. And I think that it gives us this nice type enforcement across our full Stack application, because as soon as you start to deviate from those fields and from those types, you start to see errors, which is, in that case, good, because you’re following the rules of the Schema.
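In the Schema definition language, the three operation types Eve mentions sit alongside the object types. A hypothetical sketch (the field names are assumptions):

```graphql
type Query {
  user(id: ID!): User
}

type Mutation {
  createAccount(email: String!, password: String!): User!
  login(email: String!, password: String!): User!
}

type Subscription {
  # Real-time GraphQL: clients subscribe and receive pushed events.
  userLoggedIn: User!
}
```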

Drew: Is there any crossover between that and TypeScript? Is there a sort of synergy between the two there?

Eve: Absolutely. So if you’re a person who talks about GraphQL a lot, sometimes people will tell you that it’s bad, and they’ll come up to you publicly, back when you could do that, and talk about how GraphQL is no good. But a lot of times they skip out on the cool stuff you get from types. So as far as synergy with TypeScript goes, absolutely, you can auto-generate types for your front end application using the types from the Schema. So that’s a huge win because you can not only generate it the first time, which gives you great interoperability with your front end application, but also, as things change, you can regenerate types and then build to reflect those changes. So yeah, as types start to be kind of the de facto rule in JavaScript, those things fit really nicely together.

Drew: It seems to be a sort of ongoing theme with the way that TypeScript has been designed … that’s not TypeScript, sorry. GraphQL has been designed that there’s a lot about formalizing the interaction between the front end and the back end. And it’s coming as a solution in between that just creates consistency and a formalization of what has so far otherwise been a fairly scrappy experience with REST for a lot of people. One thing that we always have to keep in mind when writing client-side apps is that the code is subject to inspection and potentially modification. And having an API where the client can just request data could be dangerous. Now, if you can specify what fields you want, maybe that could be dangerous. Where in the sort of the whole stack, would you deal with the authorization of users and making sure that the business rules around your data are enforced?

Eve: You would deal with that all on the server. So, that could happen in many different ways. You don’t have to use one auth strategy, but your resolvers will handle your authorization. So that could mean wrapping an existing auth REST API, like a service like Auth0 or something you’ve built on your own. That could mean interacting with an OAuth provider, like GitHub or Facebook or Google login; those types of things involve kind of passing tokens back and forth with resolvers. But oftentimes that will be built directly into the Schema. So the Schema will say, I don’t know, we’ll create a login mutation. And then I send that mutation with my credentials and then on the server, all of those credentials are verified. So the client doesn’t have to worry so much, maybe a little bit of passing tokens and things like that. But most of that is just built into the server.

Drew: So essentially, that doesn’t really change compared to how we’re building REST endpoints at the moment. REST as a technology, well, it doesn’t really deal with authorization either and we have middleware and things on the server that deal with it. And it’s just the same with GraphQL. You just deal with it. Are there any conventions in the GraphQL community for doing that? Are there common approaches or is it all over the place for how people choose to implement it?

Eve: It’s honestly all over the place. I think most times you’ll see folks building auth into the Schema, and by that I mean representing those types, and authorized users versus regular users, building those types into the Schema itself. But you’ll also see a lot of folks using third-party solutions. I mentioned Auth0. A lot of folks will kind of offload their authorization onto companies who are more focused on it, particularly smaller companies, startups, things like that. But you’ll also see bigger companies starting to create solutions for this. So AWS, Amazon has AppSync, which is their flavor of GraphQL, and they have auth rules built directly into AppSync. And that’s kind of cool just to be able to, I don’t know, not have to worry about all of that stuff or at least have an interface for working with it. Authorization is such a big topic in GraphQL that a lot of these ecosystem tools have seen kind of the need, the demand for auth solutions and standard approaches to handling auth on the graph.

Drew: I guess there’s hardly an implementation out there that doesn’t need some sort of authorization. So yeah, it’s going to be a fairly common requirement. We’re all sort of increasingly building componentized applications, particularly when we’re using things like React and Vue and what have you. And the principle of loose coupling leaves us with lots of components that don’t necessarily know what else is running on the page around them. Is there a danger as a result of that, you could end up with lots of components querying for the same data and making multiple requests? Or is it just an architectural problem in your app that you need to solve for that? Are there sort of well-trodden solutions for dealing with that?

Eve: Well, I think because GraphQL for the most part, not 100% of the solutions, but almost every GraphQL query is sent over HTTP. So if you want to track down where those multiple requests are happening, it’s probably a fairly familiar problem to folks who are using REST data for their applications. So there are some tools like Apollo Client Devtools and urql Devtools for front end developers who are like, “What’s going on? Which queries are on this page?” That gives you really clear insights into what’s happening. There’s kind of several schools of thought with that. Do we create one big, huge query for all of the data for the page? Or do we create smaller queries to load data for different parts of the app? Both, as you might imagine, have their own drawbacks, just because if you have a big query, you’re waiting for more fields.

Eve: If you have smaller queries, there may be collisions between what data you’re requiring. But I think, and not to go off on too much of a tangent, but I’m there already. So there’s something called the Deferred Directive that’s coming to the GraphQL spec, and the Deferred Directive is going to help with kind of secondarily loading content. So let’s say you have some content at the top of the page, the super important content that you want to load first. If you add that to your query, then any subsequent fields get the deferred directive on them. It’s just a little decorator that you would add to a field, that will then say, “All right, load the important data first, then hold up and load the second data second.” And it kind of gives you this, it’s the appearance of kind of streaming data to your front end, so that there’s perceived performance, there’s interactivity. People are seeing data right away versus waiting for every single field to load for the page, which yeah, it could be a problem.
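A sketch of what a deferred query might look like, using hypothetical fields and the @defer syntax as it was proposed for the spec around the time of this episode:

```graphql
query {
  article(id: "42") {
    title   # important content, delivered first
    body
    ... @defer {
      comments {   # streamed to the client afterwards
        text
      }
    }
  }
}
```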

Drew: Yeah. I guess that enables you to architect pages where everything that’s … we don’t like to talk too much about the viewport, but it is everything above the fold, you could prioritize, have that load in and then secondarily load in everything sort of further down. So, we’ve talked a lot about querying data. One of the main jobs of an API is sending new and modified data back to the server for persistence. You mentioned mutations briefly earlier. That’s the terminology that GraphQL uses for writing data back to the server?

Eve: Exactly. So any sort of changes we want to make to the data, anything we want to write back to the server, those are mutations, and those are all just like queries, they’re named operations that live on the server. So you can think about what are all the things we want our users to be able to do? Represent those with mutations. And then again on the server, write all the functions that make that stuff work.

Drew: And is that just as simple as querying for data? Calling a mutation is just as easy?

Eve: Yeah. It’s part of the query language. It looks pretty much identical. The only difference is, well, I guess queries take in filters. So mutations take in what look like filters in the query itself. But those are responsible for actually changing data. An email and a password might get sent with a mutation, and then the server collects that and then uses that to authorize the user.

Drew: So, just as before, you’re creating a resolver on the backend to deal with that and to do whatever needs to be done. One common occurrence when writing data is that you want to commit your changes and then re-query to get the sort of current state of it. Does GraphQL have a good workflow for that?

Eve: It sort of lives in the mutation itself. So, a lot of times when creating your Schema you’ll create the mutation operation. I’ll stick with login; it takes in the email and the password. And the mutation itself returns something. So it could return something as simple as a Boolean, this went well, or this went badly, or it could return an actual type. So oftentimes you’ll see the mutation, like the login mutation, maybe it returns a user. So you get all the information about the user once they’re logged in. Or you can create a custom object type that gives you that user plus what time the user logged in, and maybe a little more metadata about that transaction in the return object. So again, it’s kind of up to you to design that, but that pattern is really baked into GraphQL.
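That custom-return-object pattern might be sketched like this (the type and field names are assumptions, not from a real API):

```graphql
type AuthPayload {
  user: User!
  loggedInAt: String!   # extra metadata about the transaction
}

type Mutation {
  login(email: String!, password: String!): AuthPayload!
}

# Sending the mutation looks just like sending a query:
mutation {
  login(email: "eve@example.com", password: "hunter2") {
    user {
      name
    }
    loggedInAt
  }
}
```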

Drew: This all sounds pretty great, but every technical choice involves trade-offs. What are the downsides of using GraphQL? Are there any scenarios where it’d be a really poor choice?

Eve: I think that the place where GraphQL might struggle is creating a one-to-one map of tabular data. So let’s say you have, I don’t know, think a database table with all sorts of different fields and, I don’t know, thousands of fields on a specific type, things like that. That type of data can be represented nicely with GraphQL, but sometimes when you run a process to generate a Schema on that data, you’re left with, in a Schema, the same problems that you had in the database, which is maybe too much data that goes beyond what the client actually requires. So I think those places are potentially problems. I’ve talked to folks who have auto-generated Schemas based on their data and it’s become a million line long Schema or something like that, just thousands and thousands of lines of Schema code. And that’s where it becomes a little tricky, like how useful is this as a human readable document?

Eve: Yeah. So any sort of situation where you’re dealing with a client, it’s a really nice fit as far as modeling every different type of data; it becomes a little tricky if your data source is too large.

Drew: So it sounds like anywhere where you’re going to carefully curate the responses in the fields and do it more by hand, you can get really powerful results. But if you’re auto-generating stuff because you’ve just got a massive Schema, then maybe it becomes a little unwieldy.

Eve: Yeah. And I think people are listening and disagreeing with me on that because there are good tools for that as well. But I think kind of the place where GraphQL really shines is that step of abstracting logic to the server, giving front end developers the freedom to define their components or their front ends data requirements, and really managing the Schema as a team.

Drew: Is there anything sort of built into the query language to deal with pagination of results, or is that down to a custom implementation as needed?

Eve: Yeah. Pagination, you would build first into the Schema, so you could define pagination for that. There’s a lot of guidelines that have sort of emerged in the community. A good example to look at if you’re newer to GraphQL or not, I look at this all the time, is the GitHub GraphQL API. They’ve basically recreated their API for v4 of their public facing API using GraphQL. So that’s a good spot to kind of look at how is an actual big company using this at scale. A lot of folks have big APIs running, but they don’t make them public to everybody. So pagination is built into that API really nicely and you can return, I don’t know, the first 50 repositories that you’ve ever created, or you can also use cursor based pagination for returning records based on IDs in your data. So cursor based pagination and kind of positional pagination like first, last records, that’s usually how people approach that, but there’s many techniques.
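Cursor-based pagination in the style of the GitHub GraphQL API looks roughly like this (the cursor string is a placeholder, not a real value):

```graphql
query {
  viewer {
    repositories(first: 50, after: "Y3Vyc29yOjUw") {
      pageInfo {
        hasNextPage
        endCursor
      }
      nodes {
        name
      }
    }
  }
}
```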

Drew: Are there any big gotchas we should know about going into using GraphQL? Say I’m about to deploy a new GraphQL installation for my organization, we’re going to build all our new API endpoints using GraphQL going forward. What should I know? Is there anything lurking in the bushes?

Eve: Lurking in the bushes, always with technology, right? I think one of the things that isn’t built into GraphQL, but can be implemented without too much hassle is API security. So for example, you mentioned if I have a huge query, we talked about this with authorization, but it’s also scary to open up an API where someone could just send a huge nested query, friends of friends, friends of friends, friends of friends, down and down the chain. And then you’re basically allowing people to DDoS you with these huge queries. So there’s things that you can set up on the server to limit query depth and query complexity. You can put queries on a safe list. So maybe your front ends, you know what they all are and it’s not a public API. So you only want to let certain queries come over the wire to you. So I would say before rolling that out, that is definitely a possible gotcha with GraphQL.
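A naive sketch of the depth-limiting idea in plain JavaScript: estimate how deeply a query string nests by tracking curly braces. Real servers use tools that walk the parsed query AST (the graphql-depth-limit package is one example), but the principle is the same: measure, then reject queries that nest too deeply. The `MAX_DEPTH` value and the queries are assumptions for illustration.

```javascript
// Estimate nesting depth of a GraphQL query string by counting braces.
function queryDepth(query) {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === "{") max = Math.max(max, ++depth);
    if (ch === "}") depth--;
  }
  return max;
}

const evil = "{ user { friends { friends { friends { name } } } } }";
const sane = "{ user { name } }";

console.log(queryDepth(evil)); // 5
console.log(queryDepth(sane)); // 2

// A hypothetical server-side guard:
const MAX_DEPTH = 4;
if (queryDepth(evil) > MAX_DEPTH) {
  console.log("rejected: query too deep");
}
```

Brace-counting is only a rough proxy; an AST-based check also accounts for fragments and avoids miscounting braces inside string arguments.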

Drew: You do a lot of instruction and training around GraphQL, and you’ve co-written the O’Reilly ’animal’ book with Alex Banks called Learning GraphQL. But you’ve got something new that you’re launching early in 2021, is that right?

Eve: That’s right. So I have been collaborating with to create a full stack GraphQL video course. We’re going to build an API and front end for a summer camp, so everything is summer camp themed. And yeah, we’re just going to get into how to work with Apollo Server, Apollo Client. We will talk about scaling GraphQL APIs with Apollo Federation. We’ll talk about authorization strategies and all sorts of different things. So it’s just kind of collecting the things that I’ve learned from teaching GraphQL over the past, I don’t know, three or four years, and putting it into one spot.

Drew: So it’s a video course that… Is it all just self-directed, you can just work your way through at your own pace?

Eve: Yeah, exactly. So it’s a big hefty course so you can work through it at your own pace. Absolutely.

Drew: Oh, that sounds really good. And it’s, is that right?

Eve:, exactly.

Drew: And I’m looking forward to seeing that released because I think that’s something that I might need. So I’ve been learning all about GraphQL. What have you been learning about lately?

Eve: I’ve also been looking into Rust lately. So I’ve been building a lot of Rust Hello Worlds, and figuring out what that is. I don’t know if I know what that is yet, but I have been having fun tinkering around with it.

Drew: If you dear listener, would like to hear more from Eve, you can find her on Twitter where she’s @eveporcello, and you can find out about her work at Her GraphQL workshop, discover your path through the GraphQL wilderness, is coming out early in 2021 and can be found at Thanks for joining us today, Eve. Do you have any parting words?

Eve: Parting words, have fun with GraphQL, take the plunge, you’ll enjoy it, and thanks so much for having me. I appreciate it.

2 Smartest Ways to Structure Sass

Original Source:

Sass – the extended arm of CSS; the power factor that brings elegance to your code.

With Sass, it is all about variables, nesting, mixins, functions, partials, imports, inheritance, and control directives. Sass makes your code more maintainable and reusable.

And now, I will show you how to make your code more structured and organized.

The organization of files and folders is crucial when projects expand. As the number of files grows significantly, modularizing the directory becomes necessary. Here is a way to do it:

Divide the stylesheets into separate files by using partials.
Import the partials into the master stylesheet, typically the main.sass file.
Create a layout folder for the layout-specific files.

Types of Sass Structures

There are a few different structures you can use. I prefer using two structures — a simple one and a more complex one. Let’s have a look.

Simple Structure

The simple structure is convenient for a small project like a single web page. For that purpose, you need to create a very minimal structure. Here is an example:

_base.sass — contains all the resets, variables, mixins, and utility classes
_layout.sass — all the Sass code handling the layout, which is the container and grid systems
_components.sass — everything that is reusable – buttons, navbars, cards, and so on
_main.sass — the main partial should contain only the imports of the already mentioned files
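To tie the simple structure together, the main partial only needs to pull in the other three files. A minimal sketch, following the file names in the example above:

```scss
// _main.sass: imports only, in dependency order.
// Resets and variables first, then layout, then components.
@import "base";
@import "layout";
@import "components";
```

Sass resolves the leading underscore and the extension automatically, so "base" finds _base.sass.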

Another example of the same simple structure is the following:

_core.sass — contains variables, resets, mixins, and other similar styles
_layout.sass — there are the styles for the header, footer, the grid system, etc
_components.sass — styles for every component necessary for that project, including buttons, modals, etc.
_app.sass — imports

This is the one I usually use for smaller projects. When it comes to deciding which structure to use, the size of the project is often the deciding factor.

Why Use This Structure?

There are several advantages to using this organisational structure. First of all, the compiled CSS file can be cached, so the need to download a new file on every page visit decreases. In this way, HTTP requests decrease as well.

Secondly, this structure is much easier to maintain, since there is only one compiled file.

Thirdly, the CSS files can be compressed to decrease their size. For a better outcome, it is recommended to write in Sass or Less and then concatenate and minify the resulting files.

If files become disorganized, you will need to expand the structure. In such a case, you can add a folder for the components and break it further into individual files. If the project broadens and the whole Sass structure needs restructuring, consider the next, more complex pattern.

The 7-1 Patterned Structure

The name of this structure comes from 7 folders, 1 file. This structure is used by many, as it is considered to be a good basis for projects of larger sizes. All you need to do is organize the partials in 7 different folders, and one single file (app.sass) should sit at the root level handling the imports. Here is an example:

|- abstracts/
| |- _mixins // Sass Mixins Folder
| |- _variables.scss // Sass Variables
|- core/
| |- _reset.scss // Reset
| |- _typography.scss // Typography Rules
|- components/
| |- _buttons.scss // Buttons
| |- _carousel.scss // Carousel
| |- _slider.scss // Slider
|- layout/
| |- _navigation.scss // Navigation
| |- _header.scss // Header
| |- _footer.scss // Footer
| |- _sidebar.scss // Sidebar
| |- _grid.scss // Grid
|- pages/
| |- _home.scss // Home styles
| |- _about.scss // About styles
|- sections/ (or blocks/)
| |- _hero.scss // Hero section
| |- _cta.scss // CTA section
|- vendors/ (if needed)
| |- _bootstrap.scss // Bootstrap
|- app.scss // Main Sass file

In the Abstract partial, there is a file with all the variables, mixins, and similar components.

The Core partial contains files like typography, resets, and boilerplate code used across the whole website. Once you write this code, you will rarely need to overwrite it.

The Components partial contains styles for all components that are to be created for one website, including buttons, carousels, tabs, modals, and the like.

The Layout partial has all styles necessary for the layout of the site, e.g., the header and footer.

The Pages partial contains the styles for every individual page. Almost every page needs to have specific styles that are to be used only for that particular page.

For every section to be reusable and the Sass code to be easily accessible, there is the Sections/Blocks partial. It is also important to have this partial so that you don’t need to search whether particular code is in the home.sass or about.sass files in the Pages partial.

It is a good idea to put each section in a separate .sass file. Thus, if you have two different hero sections, put the code for both in the same file, so you know where to find it. If you follow this pattern, you will have the majority of files in this folder.

The Vendors partial is intended for third-party code such as Bootstrap, so if you use a framework in your project, create this partial.

I recommend you use app.sass as the main file. Here is how it should look:

// Abstract files
@import "abstracts/all";

// Vendor Files
@import "vendors/bootstrap";

// Core files
@import "core/all";

// Components
@import "components/all";

// Layout
@import "layout/all";

// Sections
@import "sections/all";

// Pages
@import "pages/all";

Instead of having a lot of imports in the main file, create an all.sass file in every folder. Each all.sass file should contain all the imports for that folder, which keeps the main file short and easy to scan.
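For instance, a components/all file (following the folder names in the tree above; a sketch, not canonical) would simply gather that folder’s partials:

```scss
// components/_all.scss: one import per component partial in this folder
@import "buttons";
@import "carousel";
@import "slider";
```

The main file then only needs a single @import "components/all" per folder.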


The biggest benefit of this structure is organisation. You always know where to check if you need to change something specific. For example, if you want to change the spacing on a section or block, you go directly to the Sections/Blocks folder. That way, you don’t need to search every folder to find the class in a file.


When the code is structured, processes are streamlined, and every segment of the code has its own place.

Final Words

Organizing code is essential for developers and, together with all other skills, one of the most effective ways to improve the functioning of a site. And even though there are multiple ways of organising and different strategies, opting for simplicity helps you avoid dangerous pitfalls. And finally, there is no right or wrong choice, since everything depends on the developer’s working strategies.


Featured image via Reshot.



Collective #640

Original Source:

Inspirational Website of the Week: Vrrb

An elegant and clean design with beautiful details and some creative animations. Our pick this week.

Get inspired

This content is sponsored via Syndicate Ads
The #1 Leader in Heatmaps, Recordings, Surveys & More

Hotjar shows you how visitors are really experiencing your website—without drowning in numbers. Sign up for a 15-day free trial.

Find out more

50 Projects in 50 Days

A fantastic repo by Brad Traversy with 50+ mini web projects using HTML, CSS and JavaScript.

Check it out


A simple and fun way to create your own WebGL experiment.

Check it out

The 2020 Web Almanac

The Web Almanac is an annual state of the web report combining the expertise of the web community with the data and trends of the HTTP Archive.

Check it out

Christmas Cannon

A super fun Christmas demo made by Steve Gardner.

Check it out

Kinetic Type Tutorial (Github Universe 2020)

A tutorial by Mario Carrillo on how to create the kinetic type for Github Universe.

Check it out

The Import On Interaction Pattern

An article by Addy Osmani on lazy-loading non-critical resources when it’s required by user interaction.

Read it

The Rules of Margin Collapse

Josh W Comeau puts an end to the mystery of collapsing margins and explains everything you need to know to not get caught by surprise anymore.

Read it

Introduction to Spline

If you got excited about Spline, you’ll find this tutorial very useful.

Watch it

Why I Love Tailwind

Max Stoiber shares why he loves Tailwind and explains how to avoid the downsides of atomic CSS.

Read it

Alt vs Figcaption

Elaina Natario explains alt and figcaption, their differences and how to use them.

Read it

Animated SVG Links

Some super stylish line animations for links by Adam Kuhn.

Check it out

CSS Scroll Snap

Learn why and how to use CSS scroll snap in this article by Ahmad Shadeed.

Check it out

Valtio Game

A demo that shows how to create a simple game with Valtio, a proxy-state library for React.

Check it out

Style your readme using CSS with this simple trick

A gem from some months ago: Some wicked CSS trickery by Sindre Sorhus.

Check it out

Accessible icon links

Hugo Giraudel shows how to make icon links accessible in this article.

Read it

Human performance metrics

A very insightful article by Gilles Dubuc on how web performance perception was measured at Wikipedia.

Read it

What Can You Put in a CSS Variable?

An article by Will Boyd on what you can use as CSS variable value.

Read it

GitHub: Where the world builds software

The new landing page of GitHub shines with an interactive globe.

Check it out

Typography Principles

A really nice scroll experience with a wonderful design explaining typographic principles.

Check it out

From Our Blog
Horizontal Smooth Scroll Layouts

Some ideas for horizontal smooth scrolling layouts powered by Locomotive Scroll.

Check it out

The post Collective #640 appeared first on Codrops.

Making Websites Easier To Talk To

Original Source:

A website without a screen doesn’t sound right, does it? Like a book without pages, or a car without a steering wheel. Yet there are audiobooks and hands-free vehicles. And increasingly, websites are being used without even being looked at — at least by humans.

Phone assistants and home speakers are a growing part of the Internet ecosystem. In this article, I will try to break down what that means for websites going forward, what designers can do about it, and why this might finally be a leap forward for accessibility. More than two-thirds of the web is inaccessible to those with visual impairments, after all. It’s time to make websites easy to talk to.

Invasion Of The Home Speakers

Global smart speaker sales topped 147 million in 2019 and, pandemic or no pandemic, the trend is going up. Talking is faster than typing, after all. From Google Home to Alexa to smartphone assistants, cars, and even fridges, more and more people are using programmes to search the web on their behalf.

Putting aside the rather ominous Big Brother Inc undertones of this trend, it’s safe to say hundreds of millions of people are already exploring the web each day without actually looking at it. Screens are no longer essential to browsing the web and sites ought to adapt to this new reality. Those that don’t are cutting themselves off from hundreds of millions of people.

Developers, designers and writers alike should be prepared for the possibility that their work will not be seen or clicked at all — it will be heard and spoken to.

Designing Invisibility

There are two main prongs to the topic of website talkiness — tech and language. Let’s start with tech, which runs the gamut all the way from basic content structure to semantic markup and beyond. I’m as keen on good writing as anyone, but it’s not the place to start. You could have website copy worthy of a Daniel Day-Lewis performance, but if it isn’t arranged and marked up properly it won’t be worth much to anyone.

Age Old Foundations

The idea of websites being understood without being seen is not a new one. Screen readers have been around for decades, with two-thirds of users choosing speech as their output and the final third choosing braille.

The focus of this article goes further than this, but making websites screen reader friendly provides a rock solid foundation for the fancier stuff below. I won’t linger on this too long as others have written extensively on the topic (links below), but here are things you should always be thinking about:

Clear navigation in-page and across the site.
Align DOM structure with visual design.
Alt text, no longer than 16 words or so; if an image does not need alt text (if it’s a background, for example), give it empty alt text, not no alt text.
Descriptive hyperlinks.
‘Skip to content’ links.
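As a quick illustration of the alt text guideline above (file names are made up for the example):

```html
<!-- Meaningful image: short, descriptive alt text -->
<img src="origin-of-symmetry-cover.jpg"
     alt="Album cover for Origin of Symmetry by Muse">

<!-- Decorative background image: empty alt text tells screen readers to skip it -->
<img src="background-texture.png" alt="">

<!-- No alt attribute at all: avoid this, as many screen readers
     will fall back to reading out the file name -->
```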

Visual thinking actually blinds us to many design failings. Users can and often do put the pieces together themselves, but that doesn’t do much for machine-readable websites. Making websites easy to talk to starts with making them text-to-speech (TTS) friendly. It’s good practice and it massively improves accessibility. Win win.

Further Reading On TTS Design And Accessibility

Text to Speech by W3C
Front End North Pt 2: Léonie Watson blew my mind
Text-To-Speech With AWS (Part 1)
Text-To-Speech And Back Again With AWS (Part 2)
Notes On Client-Rendered Accessibility
Labelling Controls by the W3C
Using the aria-label attribute by Mozilla
I Used The Web For A Day Using A Screen Reader
From The Experts: Global Digital Accessibility Developments During COVID-19

Fancier Stuff

As well as laying a strong foundation, designing for screen readers and accessibility is good for its own sake. That’s reason enough to mention it first. However, it doesn’t quite provide for the uptick in ‘hands-free’ browsing through voice user interfaces, or VUIs, that I spoke about at the start of this piece. For that we have to dig into semantic markup.

Making websites easy to talk to means labelling content at a much more granular level. When people ask their home assistant for the latest news, or a recipe, or whether that restaurant is open on Tuesday night, they don’t want to navigate a website using their voice. They want the information. Now. For that to happen information on websites needs to be clearly labelled.

I’ve rather tumbled down the Semantic Web rabbit hole this year, and I don’t intend to repeat myself here. The web can and should aspire to be machine readable, and that includes talkiness.

Semantic markup already exists for this. One such property is ‘speakable’, currently in beta, which highlights the parts of a web page that are ‘especially appropriate for text-to-speech conversion.’

For example, I and two friends review an album a week as a hobby. We recently redesigned the website with semantic markup integrated. Below is a portion of a page’s structured data showing speakable in action:

"@context": "",
"@type": "Review",
"reviewBody": "It’s breathless, explosive music, the kind of stuff that compels listeners to pick up an instrument or start a band. Origin of Symmetry listens like a spectacular jam — with all the unpolished, patchy, brazen energy that entails — and all in all it’s pretty rad, man.",
"datePublished": "2015-05-23",
"author": [
  {
    "@type": "Person",
    "name": "André Dack"
  },
  {
    "@type": "Person",
    "name": "Frederick O’Brien"
  },
  {
    "@type": "Person",
    "name": "Andrew Bridge"
  }
],
"itemReviewed": {
  "@type": "MusicAlbum",
  "name": "Origin of Symmetry",
  "@id": "",
  "image": "",
  "albumReleaseType": "",
  "byArtist": {
    "@type": "MusicGroup",
    "name": "Muse",
    "@id": ""
  }
},
"reviewRating": {
  "@type": "Rating",
  "ratingValue": 26,
  "worstRating": 0,
  "bestRating": 30
},
"speakable": {
  "@type": "SpeakableSpecification",
  "cssSelector": [

So, if someone asks their home speaker assistant what Audioxide thought of Origin of Symmetry by Muse, speakable should direct it to the album name, the artist, and the bite-sized summary of the review. Convenient and to the point. (And spares people the ordeal of listening to our full summaries.) Nothing’s there that wasn’t there before; it’s just labelled properly. You’ll notice as well that choosing a CSS class is enough. Easy.

This kind of functionality lends itself better to certain types of sites than others, but the possibilities are vast. Recipes, news stories, ticket availability, contact information, grocery shopping… all these things and more can be made better if only we get into the habit of making websites easier to talk to, every page packed with clearly structured and labelled information ready and waiting to answer queries when they come their way.

Beyond that the big brains at places like Google and Mozilla are hard at work on dedicated web speech APIs, allowing for more sophisticated user interactions with things like forms and controls. It’s early days for tech like this but absolutely something to keep an eye on.
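To give a taste of what is already possible, browsers that ship the Speech Synthesis API can read text aloud today. A minimal sketch (the feature detection and the speak function name are my own, not from any particular library):

```javascript
// Queue the given text for speech via the browser's Speech Synthesis API.
// Returns true when speech was queued, false when the API is unavailable
// (for example in Node.js or older browsers).
function speak(text) {
  if (typeof window === "undefined" || !("speechSynthesis" in window)) {
    return false;
  }
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 1.0; // normal speaking rate
  window.speechSynthesis.speak(utterance);
  return true;
}
```

In a supporting browser, speak("Hello, world") reads the phrase aloud; elsewhere it degrades gracefully instead of throwing.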

The rise of home speakers means old and new worlds are colliding. Providing for one provides for the other. Let’s not forget, websites were supposed to have been designed with screen readers in mind for decades.

Further Reading

Web apps that talk — Introduction to the Speech Synthesis API
Web Speech Concepts and Usage by Mozilla
What are Voice User Interfaces? By the Interaction Design Foundation

Writing For Speaking

You’ve taken steps to make your website better understood by screen readers, search engines, and all that good stuff. Congratulations. Now we get to the fuzzier topics of tone and personality.

Designing a website to speak is different to designing it to be read. The nature of user interactions is different. A major point to keep in mind is that where voice queries are concerned websites are almost always responsive — answering questions, giving recipes, confirming orders.

An Open NYT study found that for household users ‘interacting with their smart speakers sometimes results in frustrating, or even funny, exchanges, but that feels like a better experience than being tethered to a phone that pushes out notifications.’

In other words, you can’t and shouldn’t force the issue. The look-at-me ethos of pop ups and ads and endless engagement has no place here. Your task is having a good site that gives information on command as clearly and succinctly as possible. A virtual butler, if you will.

What this means in linguistic terms is:

Succinct sentences,
Plain, simple language,
Front-loaded information (think inverted pyramid),
Phrasing answers as complete sentences.

Say what you write out loud, and have free text-to-speech systems like TTSReader say it back to you. Words can sound very different out loud than they do written down, and vice versa. I have my reservations about readability algorithms, but they’re useful tools for gauging clarity.

Further Reading

‘Readability Testing for Voice Content’ on A List Apart
The Elements of Style by William Strunk Jr.

HAL, Without The Bad Bits

Talking with websites is part of a broader shift towards channel-agnostic web experiences. The nature of websites is changing. From desktop to mobile, and from mobile to smart home systems, they are becoming more fluid. We all know about ‘mobile-first’ indexing. How long until it’s ‘voice-first’?

Moving away from rigid constraints is daunting, but it is liberating too. We look at websites, we listen to them, we talk to them. Each one is like a little HAL, with as much or little personality and/or murderous intent as we see fit to design into it.

Here are steps we can take to make websites easier to talk to, whether building from scratch or updating old projects:

Navigate your site using screen readers.
Try vocal queries via phone/home assistants.
Use semantic markup.
Implement speakable markup.

Designing websites for screenless situations improves their accessibility, but it also sharpens their personality, their purpose, and their usefulness. As Preston So writes for A List Apart, ‘it’s an effective way to analyze and stress-test just how channel-agnostic your content truly is.’

Making your websites easy to talk to prepares them for the channel-agnostic web, which is fast becoming a reality. Rather than text and visuals on a screen, sites must be abstract and flexible, ready to interact with an ever growing range of devices.

Collective #639

Original Source:


A peer-to-peer stack for building software together. Without central servers and censorship.

Check it out

This content is sponsored via Thought Leaders
JavaScript Speed Coding Challenge

Are you ready to put your JavaScript skills to the test? Enter the challenge and see how you stack up among the world’s top developers!

Get started


Mannequin.js is a simple library for an articulated mannequin figure. The shape of the figure and its movements are done purely in JavaScript. The graphics are implemented in Three.js.

Check it out

A Utility Class for Covering Elements

Michelle Barker shares a useful utility class that you can use to easily cover elements.

Read it


A place to showcase websites, receive web design awards, find inspiration for a web design project and interact with people that have similar interests.

Check it out

Advent of Code 2020

A new edition of the great coding Advent calendar: Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like.

Check it out

Automatic Social Share Images

A fantastic tutorial on how to generate social share images with a serverless function and headless browser by Ryan Filler.

Read it

CSS Sticky Parallax Sections

Ryan Mulligan’s CSS sticky positioning demo with some neat parallax effect on scroll using scale transforms.

Check it out

Introduction to Bash Scripting

An open-source introduction to Bash scripting that will help you learn the basics of Bash scripting for automating your daily SysOps, DevOps, and Dev tasks.

Check it out

Structure Synth JS

An amazing demo collection by Gerard Ferrandez that will take you to another world.

Check it out

Clemens Wenger: Physics of Beauty

An interactive, minimal audio-visual dive into the music album by Clemens Wenger.

Check it out


Explore the physics of other universes with this particle simulator that was built in ~500 lines of self-contained HTML/JS/WebGL2.

Check it out

2020: Projects of the Year

Every December, Readymag’s team takes a look back at the previous 12 months and selects the best projects made with their platform.

Check it out


Create slides using Markdown with this cool tool.

Check it out

Real Debugging Beyond Console Log

Steve Griffith shows how to debug code beyond console.log statements.

Watch it

I created my own YouTube algorithm (to stop me wasting time)

Great article on how Chris Lovejoy improved the YouTube recommendation algorithm using the YouTube API and Amazon’s AWS Lambda.

Read it

An ex-Googler’s guide to dev tools

After leaving Google, many engineers miss the developer tools. Here’s one ex-Googler’s guide to navigating the dev tools landscape outside of Google, finding the ones that fill the gaps you’re feeling, and introducing these to your new team.

Read it


Preview your business cards in a 3D scene you can customize, save and export.

Check it out

Time to Say Goodbye to Google Fonts

Simon Wicki explains why you should self-host your fonts for better performance.

Read it

K!sbag: Free minimal portfolio template

K!sbag is a free minimal site template with 6 ready-made HTML pages for building a personal portfolio website.

Check it out

Under-Engineered Responsive Tables

Adrian Roselli shows how to make a WCAG-compliant responsive HTML table.

Read it

From Our Blog
Coding a Simple Raymarching Scene with Three.js

A coding session where you’ll learn how to set up a simple Raymarching scene with Three.js.

Check it out

The post Collective #639 appeared first on Codrops.

20 Awesome Christmas Projects Hidden in CodePen

Original Source:

CodePen is an online playground for talented front-end developers, a place where you can always find cool projects to widen your horizons, and see what other developers are up to. Year-end holidays…

Visit for full content.