UI Interactions & Animations Roundup #12

Original Source: http://feedproxy.google.com/~r/tympanus/~3/SZzEfMeoD1U/

It’s time to share a new roundup of UI interaction and animation concepts with you! We’ve collected some fresh and super-hot shots that will give you an idea of current trends and some good motion inspiration. Check out those awesome icon animations, playful text warps and fun 3D rotations—they are super trendy these days!

Glasses

by Ruslan Siiz

TAKASHO debut

by Boro | Egor Gajduk

Ocean agency – home page load and scroll

by Juraj Molnár

Icons 3D

by Outcrowd

Marqeta Homepage

by Clay: UI/UX Design Agency

Berlin Music Awards

by Viacheslav Olianishyn

Header slider idea

by Hrvoje Grubisic

ORE Contact Page Animation

by Zhenya Rynzhuk

Inside intro animation

by Gil

Animated Icons for Maslo Mobile App

by Igor Pavlinski

Wine + Peace™ · Slider Winemaker Homepage

by Pierre-Jean Doumenjou

Uniform Wares. Watch website WIP design

by Daniel Montgomery

makemepulse website – Landing page focus

by Louis Ansa

Photography Contest Website Interactions

by tubik

ORE Image Grid Animation

by Zhenya Rynzhuk

Avalanche

by Advanced Team

W/2 – Interaction

by Anton Pecheritsa

Brand Experience

by Francesco Zagami

Endplan – Logo Animation

by Michaela Fiasova

Beginnings – Collections Slider

by Tom Anderson

The post UI Interactions & Animations Roundup #12 appeared first on Codrops.

What’s New In Vue 3?

Original Source: https://smashingmagazine.com/2020/11/new-vue3-update/

With the release of Vue 3, developers should consider upgrading from Vue 2, as Vue 3 comes with a handful of new features that are super helpful in building easy-to-read and maintainable components, along with improved ways to structure applications in Vue. We’re going to take a look at some of these features in this article.

By the end of this tutorial, readers will:

Know about provide / inject and how to use it.
Have a basic understanding of Teleport and how to use it.
Know about Fragments and how to go about using them.
Know about the changes made to the Global Vue API.
Know about the changes made to the Events API.

This article is aimed at those who have a proper understanding of Vue 2.x. You can find all the code used in this example on GitHub.

provide / inject

In Vue 2.x, we had props that made it easy to pass data (strings, arrays, objects, etc.) from a parent component directly to its child components. But during development, we often found instances where we needed to pass data from the parent component to a deeply nested component, which was more difficult to do with props. This resulted in the use of the Vuex Store, an Event Hub, or sometimes passing data through each of the deeply nested components. Let’s look at a simple app:

It is important to note that Vue 2.2.0 also came with provide / inject, but it was not recommended for use in generic application code.

# parentComponent.vue

<template>
  <div class="home">
    <img alt="Vue logo" src="../assets/logo.png" />
    <HelloWorld msg="Vue 3 is liveeeee!" :color="color" />
    <select name="color" id="color" v-model="color">
      <option value="" disabled selected>Select a color</option>
      <option :value="color" v-for="(color, index) in colors" :key="index">
        {{ color }}
      </option>
    </select>
  </div>
</template>
<script>
import HelloWorld from "@/components/HelloWorld.vue";
export default {
  name: "Home",
  components: {
    HelloWorld,
  },
  data() {
    return {
      color: "",
      colors: ["red", "blue", "green"],
    };
  },
};
</script>

# childComponent.vue

<template>
  <div class="hello">
    <h1>{{ msg }}</h1>
    <color-selector :color="color"></color-selector>
  </div>
</template>
<script>
import colorSelector from "@/components/colorComponent.vue";
export default {
  name: "HelloWorld",
  components: {
    colorSelector,
  },
  props: {
    msg: String,
    color: String,
  },
};
</script>
<!-- Add "scoped" attribute to limit CSS to this component only -->
<style scoped>
h3 {
  margin: 40px 0 0;
}
ul {
  list-style-type: none;
  padding: 0;
}
li {
  display: inline-block;
  margin: 0 10px;
}
a {
  color: #42b983;
}
</style>

# colorComponent.vue

<template>
  <p :class="[color]">This is an example of deeply nested props!</p>
</template>
<script>
export default {
  props: {
    color: String,
  },
};
</script>
<style>
.blue {
  color: blue;
}
.red {
  color: red;
}
.green {
  color: green;
}
</style>

Here, we have a landing page with a dropdown containing a list of colors and we’re passing the selected color to childComponent.vue as a prop. This child component also has a msg prop that accepts a text to display in the template section. Finally, this component has a child component (colorComponent.vue) that accepts a color prop from the parent component which is used in determining the class for the text in this component. This is an example of passing data through all the components.

But with Vue 3, we can do this in a cleaner and shorter way using the new provide / inject pair. As the name implies, we use provide, as either a function or an object, to make data available from a parent component to any of its nested components, regardless of how deeply nested such a component is. We make use of the object form when passing hard-coded values to provide, like this:

# parentComponent.vue

<template>
  <div class="home">
    <img alt="Vue logo" src="../assets/logo.png" />
    <HelloWorld msg="Vue 3 is liveeeee!" />
    <select name="color" id="color" v-model="color">
      <option value="" disabled selected>Select a color</option>
      <option :value="color" v-for="(color, index) in colors" :key="index">
        {{ color }}
      </option>
    </select>
  </div>
</template>
<script>
import HelloWorld from "@/components/HelloWorld.vue";
export default {
  name: "Home",
  components: {
    HelloWorld,
  },
  data() {
    return {
      color: "",
      colors: ["red", "blue", "green"],
    };
  },
  provide: {
    color: "blue",
  },
};
</script>

But for instances where you need to pass a component instance property to provide, we use the function form, which makes this possible:

# parentComponent.vue

<template>
  <div class="home">
    <img alt="Vue logo" src="../assets/logo.png" />
    <HelloWorld msg="Vue 3 is liveeeee!" />
    <select name="color" id="color" v-model="selectedColor">
      <option value="" disabled selected>Select a color</option>
      <option :value="color" v-for="(color, index) in colors" :key="index">
        {{ color }}
      </option>
    </select>
  </div>
</template>
<script>
import HelloWorld from "@/components/HelloWorld.vue";
export default {
  name: "Home",
  components: {
    HelloWorld,
  },
  data() {
    return {
      selectedColor: "blue",
      colors: ["red", "blue", "green"],
    };
  },
  provide() {
    return {
      color: this.selectedColor,
    };
  },
};
</script>

Since we no longer need the color prop in either childComponent.vue or colorComponent.vue, we’re getting rid of it. The good thing about using provide is that the parent component does not need to know which component needs the property it is providing.

To make use of this in the component that needs it (in this case, colorComponent.vue), we do this:

# colorComponent.vue

<template>
  <p :class="[color]">This is an example of deeply nested props!</p>
</template>
<script>
export default {
  inject: ["color"],
};
</script>
<style>
.blue {
  color: blue;
}
.red {
  color: red;
}
.green {
  color: green;
}
</style>

Here, we use inject which takes in an array of the required variables the component needs. In this case, we only need the color property so we only pass that. After that, we can use the color the same way we use it when using props.

We might notice that if we try to select a new color using the dropdown, the color does not update in colorComponent.vue, and this is because, by default, the properties in provide are not reactive. To fix that, we make use of the computed method.

# parentComponent.vue

<template>
  <div class="home">
    <img alt="Vue logo" src="../assets/logo.png" />
    <HelloWorld msg="Vue 3 is liveeeee!" />
    <select name="color" id="color" v-model="selectedColor">
      <option value="" disabled selected>Select a color</option>
      <option :value="color" v-for="(color, index) in colors" :key="index">
        {{ color }}
      </option>
    </select>
  </div>
</template>
<script>
import HelloWorld from "@/components/HelloWorld.vue";
import { computed } from "vue";
export default {
  name: "Home",
  components: {
    HelloWorld,
  },
  data() {
    return {
      selectedColor: "",
      todos: ["Feed a cat", "Buy tickets"],
      colors: ["red", "blue", "green"],
    };
  },
  provide() {
    return {
      color: computed(() => this.selectedColor),
    };
  },
};
</script>

Here, we import computed and pass our selectedColor so that it can be reactive and update as the user selects a different color. When you pass a variable to the computed method it returns an object which has a value. This property holds the value of your variable so for this example, we would have to update colorComponent.vue to look like this;

# colorComponent.vue

<template>
  <p :class="[color.value]">This is an example of deeply nested props!</p>
</template>
<script>
export default {
  inject: ["color"],
};
</script>
<style>
.blue {
  color: blue;
}
.red {
  color: red;
}
.green {
  color: green;
}
</style>

Here, we change color to color.value to represent the change after making color reactive using the computed method. At this point, the class of the text in this component would always change whenever selectedColor changes in the parent component.

Teleport

There are instances where we create components and place them in one part of our application because of the logic the app uses, but they are intended to be displayed in another part of our application. A common example of this would be a modal or a popup that is meant to display on top of and cover the whole screen. While we can create a workaround for this using CSS’s position property on such elements, with Vue 3 we can also do this using Teleport.

Teleport allows us to take a component out of its original position in the document (out of the default #app container Vue apps are wrapped in) and move it to any existing element on the page. A good example would be using Teleport to move a header component from inside the #app div into a header element. It is important to note that you can only Teleport to elements that exist outside of the Vue DOM.

The Teleport component accepts two props that determine its behavior:

to
This prop accepts a class name, an id, an element, or a data-* attribute. We can also make this value dynamic by passing a :to prop instead of to, changing the Teleport target at runtime.
:disabled
This prop accepts a Boolean and can be used to toggle the Teleport feature on an element or component. This can be useful for dynamically changing the position of an element.
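Put together, a hypothetical sketch of both props might look like the following; modalTarget and displayInline are made-up names used purely for illustration, not from the original article:

```html
<!-- The target selector is dynamic via :to, and :disabled can keep the
     content rendered in place instead of teleporting it -->
<teleport :to="modalTarget" :disabled="displayInline">
  <div class="modal">...</div>
</teleport>
```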

An ideal example of using Teleport looks like this;

# index.html

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width,initial-scale=1.0" />
    <link rel="icon" href="<%= BASE_URL %>favicon.ico" />
    <title>
      <%= htmlWebpackPlugin.options.title %>
    </title>
  </head>
  <body>
    <!-- add container to teleport to -->
    <header class="header"></header>
    <noscript>
      <strong>
        We're sorry but <%= htmlWebpackPlugin.options.title %> doesn't work
        properly without JavaScript enabled. Please enable it to continue.
      </strong>
    </noscript>
    <div id="app"></div>
    <!-- built files will be auto injected -->
  </body>
</html>

In the default index.html file of your Vue app, we add a header element because we want to Teleport our header component to that point in our app. We also add a class to this element for styling and for easy referencing in our Teleport component.

# Header.vue

<template>
  <teleport to="header">
    <h1 class="logo">Vue 3</h1>
    <nav>
      <router-link to="/">Home</router-link>
    </nav>
  </teleport>
</template>
<script>
export default {
  name: "app-header",
};
</script>
<style>
.header {
  display: flex;
  align-items: center;
  justify-content: center;
}
.logo {
  margin-right: 20px;
}
</style>

Here, we create the header component and add a logo with a link to the homepage of our app. We also add the Teleport component and give the to prop a value of header because we want this component to render inside the header element. Finally, we import this component into our app:

# App.vue

<template>
  <router-view />
  <app-header></app-header>
</template>
<script>
import appHeader from "@/components/Header.vue";
export default {
  components: {
    appHeader,
  },
};
</script>

In this file, we import the header component and place it in the template so it can be visible in our app.

Now if we inspect our app’s elements, we would notice that our header component is inside the header element:

Fragments

With Vue 2.x, it was impossible to have multiple root elements in the template of your file, and as a workaround, developers started wrapping all elements in a parent element. While this doesn’t look like a serious issue, there are instances where developers want to render a component without a container wrapping around its elements but have to make do with one.

With Vue 3, a new feature called Fragments was introduced, and this feature allows developers to have multiple elements in their root template file. With Vue 2.x, this is how an input field container component would look:

# inputComponent.vue

<template>
  <div>
    <label :for="label">{{ label }}</label>
    <input :type="type" :id="label" :name="label" />
  </div>
</template>
<script>
export default {
  name: "inputField",
  props: {
    label: {
      type: String,
      required: true,
    },
    type: {
      type: String,
      required: true,
    },
  },
};
</script>
<style></style>

Here, we have a simple form element component that accepts two props, label and type, and the template section of this component is wrapped in a div. This is not necessarily an issue, but it becomes one if you want the label and input field to sit directly inside your form element. With Vue 3, developers can easily rewrite this component to look like this:

# inputComponent.vue

<template>
  <label :for="label">{{ label }}</label>
  <input :type="type" :id="label" :name="label" />
</template>

With a single root node, attributes are always applied to the root node; these are known as non-prop attributes. They are events or attributes passed to a component that do not have corresponding properties defined in props or emits. Examples of such attributes are class and id. With a multi-root-node component, however, you are required to explicitly define which of the elements the attributes should be applied to.

Here’s what this means using the inputComponent.vue from above;

When adding a class to this component in the parent component, you must specify which element the class should be applied to; otherwise, the attribute has no effect.

<template>
  <div class="home">
    <div>
      <input-component
        class="awesome__class"
        label="name"
        type="text"
      ></input-component>
    </div>
  </div>
</template>
<style>
.awesome__class {
  border: 1px solid red;
}
</style>

When you do something like this without defining where the attributes should be attributed to, you get this warning in your console;

And the border has no effect on the component;

To fix this, add v-bind="$attrs" on the element you want such attributes to be distributed to:

<template>
  <label :for="label" v-bind="$attrs">{{ label }}</label>
  <input :type="type" :id="label" :name="label" />
</template>

Here, we’re telling Vue that we want the attributes to be distributed to the label element which means we want the awesome__class to be applied to it. Now, if we inspect our element in the browser we would see that the class has now been added to label and hence a border is now around the label.

Global API

It was not uncommon to see Vue.component or Vue.use in the main.js file of a Vue application. These types of methods are known as global APIs, and there are quite a number of them in Vue 2.x. One of the challenges of this approach is that it makes it impossible to isolate certain functionality to one instance of your app (if you have more than one instance) without it affecting the other apps, because they are all mounted on Vue. This is what I mean:

Vue.directive('focus', {
  inserted: el => el.focus()
})

Vue.mixin({
  /* ... */
})

const app1 = new Vue({ el: '#app-1' })
const app2 = new Vue({ el: '#app-2' })

With the above code, it is impossible to associate the directive with only app1 and the mixin with only app2; instead, they are both available in the two apps.

Vue 3 comes with a new Global API in an attempt to fix this type of problem with the introduction of createApp. This method returns a new instance of a Vue app. An app instance exposes a subset of the current global APIs. With this, all APIs (component, mixin, directive, use, etc) that mutate Vue from Vue 2.x are now going to be moved to individual app instances and now, each instance of your Vue app can have functionalities that are unique to them without affecting other existing apps.

Now, the above code can be rewritten as;

const app1 = createApp({})
const app2 = createApp({})
app1.directive('focus', {
  mounted: el => el.focus() // in Vue 3, the `inserted` hook is renamed `mounted`
})
app2.mixin({
  /* ... */
})

It is, however, possible to create functionality that you want to share among all of your apps, and this can be done by using a factory function.
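As a plain-JavaScript sketch of that factory-function idea: the stand-in below mimics the shape of an app instance purely to illustrate the isolation; createAppInstance and createConfiguredApp are hypothetical names, not part of Vue’s API.

```javascript
// Hypothetical stand-in for Vue's createApp: each instance keeps its own
// registry, so functionality registered on one never leaks into another.
function createAppInstance() {
  const directives = {};
  return {
    directive(name, definition) {
      directives[name] = definition;
      return this;
    },
    hasDirective(name) {
      return name in directives;
    },
  };
}

// Factory function: every app it creates gets the shared `focus` directive.
function createConfiguredApp() {
  const app = createAppInstance();
  app.directive("focus", { mounted: (el) => el.focus() });
  return app;
}

const app1 = createConfiguredApp();
const app2 = createAppInstance(); // created without the factory

console.log(app1.hasDirective("focus")); // true
console.log(app2.hasDirective("focus")); // false
```

With real Vue 3 you would call createApp inside the factory instead of the stand-in; the pattern is the same.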

Events API

One of the most common ways developers pass data between components that don’t have a parent-child relationship, other than using the Vuex Store, is the Event Bus. One of the reasons this method is so common is how easy it is to get started with:

# eventBus.js

const eventBus = new Vue()

export default eventBus;

After this, the next thing would be to import this file into main.js to make it globally available in our app, or to import it in the files where you need it:

# main.js

import eventBus from './eventBus'
Vue.prototype.$eventBus = eventBus

Now, you can emit events and listen for emitted events like this;

this.$eventBus.$on('say-hello', alertMe)
this.$eventBus.$emit('pass-message', 'Event Bus says Hi')

A lot of Vue codebases are filled with code like this. However, with Vue 3 this is no longer possible, because $on, $off, and $once have all been removed. $emit is still available, because it is required for child components to emit events to their parent components. An alternative would be to use provide / inject or one of the recommended third-party libraries.
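As an illustration of what those third-party libraries provide, here is a hedged sketch of a minimal stand-alone emitter with the same on / off / emit shape as mitt (the library the Vue docs point to); createEmitter is a made-up name for illustration:

```javascript
// Minimal event emitter: keeps an array of handlers per event type.
function createEmitter() {
  const handlers = {};
  return {
    on(type, fn) {
      (handlers[type] = handlers[type] || []).push(fn);
    },
    off(type, fn) {
      handlers[type] = (handlers[type] || []).filter((h) => h !== fn);
    },
    emit(type, payload) {
      (handlers[type] || []).forEach((fn) => fn(payload));
    },
  };
}

const eventBus = createEmitter();
let received = "";
eventBus.on("pass-message", (msg) => { received = msg; });
eventBus.emit("pass-message", "Event Bus says Hi");
// received is now "Event Bus says Hi"
```

In practice you would install mitt rather than hand-roll this, but the usage is the same: export one shared emitter and import it where needed.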

Conclusion

In this article, we have covered how you can pass data around from a parent component down to a deeply nested child component using the provide / inject pair. We have also looked at how we can reposition and transfer components from one point in our app to another. Another thing we looked at is the multi-root node component and how to ensure we distribute attributes so they work properly. Finally, we also covered the changes to the Events API and Global API.

Further Resources

“JavaScript Factory Functions with ES6+,” Eric Elliott, Medium
“Using Event Bus to Share Props Between Vue Components,” Kingsley Silas, CSS-Tricks
Using Multiple Teleports On The Same Target, Vue.js Docs
Non-Prop Attributes, Vue.js Docs
Working With Reactivity, Vue.js Docs
teleport, Vue.js Docs
Fragments, Vue.js Docs
2.x Syntax, Vue.js Docs

Creating A Continuous Integration Test Workflow Using GitHub Actions

Original Source: https://smashingmagazine.com/2020/11/continuous-integration-test-workflow-gitHub-actions/

When contributing to projects on version control platforms like GitHub and Bitbucket, the convention is that there is a main branch containing the functional codebase. Then, there are other branches in which several developers can work on copies of the main branch to add a new feature, fix a bug, and so on. This makes a lot of sense because it becomes easier to monitor the effect the incoming changes will have on the existing code. If there is an error, it can easily be traced and fixed before the changes are integrated into the main branch. However, it can be time-consuming to go through every single line of code manually looking for errors or bugs, even for a small project. That is where continuous integration comes in.

What Is Continuous Integration (CI)?
“Continuous integration (CI) is the practice of automating the integration of code changes from multiple contributors into a single software project.”

— Atlassian.com

The general idea behind continuous integration (CI) is to ensure changes made to the project do not “break the build,” that is, ruin the existing code base. Implementing continuous integration in your project, depending on how you set up your workflow, would create a build whenever anyone makes changes to the repository.

So, What Is A Build?

A build — in this context — is the compilation of source code into an executable format. If it is successful, it means the incoming changes will not negatively impact the codebase, and they are good to go. However, if the build fails, the changes will have to be reevaluated. That is why it is advisable to make changes to a project by working on a copy of the project on a different branch before incorporating it into the main codebase. This way, if the build breaks, it would be easier to figure out where the error is coming from, and it also does not affect your main source code.

“The earlier you catch defects, the cheaper they are to fix.”

— David Farley, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation

There are several tools available to help with creating continuous integration for your project. These include Jenkins, TravisCI, CircleCI, GitLab CI, GitHub Actions, etc. For this tutorial, I will be making use of GitHub Actions.

GitHub Actions For Continuous Integration

CI Actions is a fairly new feature on GitHub and enables the creation of workflows that automatically run your project’s build and tests. A workflow contains one or more jobs that can be activated when an event occurs. This event could be a push to any of the branches on the repo or the creation of a pull request. I will explain these terms in detail as we proceed.

Let’s Get Started!
Prerequisites

This is a tutorial for beginners, so I will mostly talk about GitHub Actions CI at a surface level. Readers should already be familiar with creating a Node.js REST API using a PostgreSQL database and the Sequelize ORM, and with writing tests using Mocha and Chai.

You should also have the following installed on your machine:

NodeJS,
PostgreSQL,
NPM,
VSCode (or any editor and terminal of your choice).

I will make use of a REST API I already created called countries-info-api. It’s a simple API with no role-based authorization (as of the time of writing this tutorial). This means anyone can add, delete, and/or update a country’s details. Each country will have an id (an auto-generated UUID), name, capital, and population. To achieve this, I made use of Node.js, the Express framework, and PostgreSQL for the database.
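For illustration, a single country record from this API might look like the following; only the field names come from the description above, and the values (including the UUID) are made up:

```json
{
  "id": "a3c1f2e4-0000-4000-8000-000000000000",
  "name": "Ghana",
  "capital": "Accra",
  "population": 31000000
}
```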

I will briefly explain how I set up the server and database before writing the tests for test coverage and the workflow file for continuous integration.

You can clone the countries-info-api repo to follow through or create your own API.

Technology used: Node.js, NPM (a package manager for JavaScript), the PostgreSQL database, the Sequelize ORM, and Babel.

Setting Up The Server

Before setting up the server, I installed some dependencies from npm.

npm install express dotenv cors body-parser

npm install --save-dev @babel/core @babel/cli @babel/preset-env nodemon

I am using the express framework and writing in the ES6 format, so I’ll need Babeljs to compile my code. You can read the official documentation to know more about how it works and how to configure it for your project. Nodemon will detect any changes made to the code and automatically restart the server.

Note: npm packages installed using the --save-dev flag are only required during the development stage and appear under devDependencies in the package.json file.

I added the following to my index.js file:

import express from "express";
import bodyParser from "body-parser";
import cors from "cors";
import "dotenv/config";

const app = express();
const port = process.env.PORT;

app.use(bodyParser.json());

app.use(bodyParser.urlencoded({ extended: true }));

app.use(cors());

app.get("/", (req, res) => {
  res.send({ message: "Welcome to the homepage!" })
})

app.listen(port, () => {
  console.log(`Server is running on ${port}...`)
})

This sets up our API to run on whatever is assigned to the PORT variable in the .env file. The .env file is also where we declare variables that we don’t want others to easily have access to. The dotenv npm package loads our environment variables from .env.
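A hypothetical .env for this stage might contain just the port; 3000 is an arbitrary choice for illustration, not a value from the article:

```
# .env — keep this file out of version control
PORT=3000
```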

Now when I run npm run start in my terminal, I get this:

As you can see, our server is up and running. Yay!

Visiting http://127.0.0.1:your_port_number/ in your web browser should return the welcome message, as long as the server is running.

Next up, Database and Models.

I created the country model using Sequelize and connected it to my PostgreSQL database. Sequelize is an ORM for Node.js. A major advantage is that it saves us the time of writing raw SQL queries.

Since we are using PostgreSQL, the database can be created via the psql command line using the CREATE DATABASE database_name command. This can also be done from your terminal, but I prefer the psql shell.

In the .env file, we will set up the connection string for our database, following the format below.

TEST_DATABASE_URL=postgres://<db_username>:<db_password>@127.0.0.1:5432/<database_name>

For my model, I followed this sequelize tutorial. It is easy to follow and explains everything about setting up Sequelize.

Next, I will write tests for the model I just created and set up the coverage on Coverall.

Writing Tests And Reporting Coverage

Why write tests? Personally, I believe that writing tests helps you as a developer to better understand how your software is expected to perform in the hands of your users, because it is a brainstorming process. It also helps you discover bugs early.

Tests:

There are different software testing methods; however, for this tutorial, I made use of unit and end-to-end testing.

I wrote my tests using the Mocha test framework and the Chai assertion library. I also installed sequelize-test-helpers to help test the model I created using sequelize.define.

Test coverage:

It is advisable to check your test coverage because the result shows whether our test cases are actually covering the code, and also how much of the code is exercised when we run our test cases.

I used Istanbul (a test coverage tool), nyc (Istanbul’s CLI client), and Coveralls.

According to the docs, Istanbul instruments your ES5 and ES2015+ JavaScript code with line counters, so that you can track how well your unit-tests exercise your codebase.

In my package.json file, the test script runs the tests and generates a report.

{
  "scripts": {
    "test": "nyc --reporter=lcov --reporter=text mocha -r @babel/register ./src/test/index.js"
  }
}

In the process, it will create a .nyc_output folder containing the raw coverage information and a coverage folder containing the coverage report files. Neither folder is needed in my repo, so I added both to the .gitignore file.

Now that we have generated a report, we have to send it to Coveralls. One cool thing about Coveralls (and other coverage tools, I assume) is how it reports your test coverage. The coverage is broken down on a file by file basis and you can see the relevant coverage, covered and missed lines, and what changed in the build coverage.

To get started, install the coveralls npm package. You also need to sign in to coveralls and add the repo to it.

Then set up Coveralls for your JavaScript project by creating a .coveralls.yml file in your root directory. This file will hold the repo_token obtained from the settings section for your repo on Coveralls.
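The file itself is tiny; here is a sketch with a placeholder where your own token goes:

```yaml
# .coveralls.yml — never commit a real token to a public repo
repo_token: your-coveralls-repo-token-here
```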

Another script needed in the package.json file is the coverage script. This script will come in handy when we are creating a build via Actions.

{
  "scripts": {
    "coverage": "nyc npm run test && nyc report --reporter=text-lcov --reporter=lcov | node ./node_modules/coveralls/bin/coveralls.js --verbose"
  }
}

Basically, it will run the tests, get the report, and send it to coveralls for analysis.

Now to the main point of this tutorial.

Create Node JS Workflow File

At this point, we have set up the scripts that our GitHub Actions jobs will run. (Wondering what “jobs” means? Keep reading.)

GitHub has made it easy to create the workflow file by providing a starter template. As seen on the Actions page, there are several workflow templates serving different purposes. For this tutorial, we will use the Node.js workflow (which GitHub already kindly suggested).

You can edit the file directly on GitHub but I will manually create the file on my local repo. The folder .github/workflows containing the node.js.yml file will be in the root directory.

This file already contains some basic commands and the first comment explains what they do.

# This workflow will do a clean install of node dependencies, build the source code and run tests across different versions of node

I will make some changes to it so that in addition to the above comment, it also runs coverage.

My node.js.yml file:

name: NodeJS CI
on: ["push"]
jobs:
  build:
    name: Build
    runs-on: windows-latest
    strategy:
      matrix:
        node-version: [12.x, 14.x]

    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm install
      - run: npm run build --if-present
      - run: npm run coverage

      - name: Coveralls
        uses: coverallsapp/github-action@master
        env:
          COVERALLS_REPO_TOKEN: ${{ secrets.COVERALLS_REPO_TOKEN }}
          COVERALLS_GIT_BRANCH: ${{ github.ref }}
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}

What does this mean?

Let’s break it down.

name
This would be the name of your workflow (NodeJS CI) or job (build) and GitHub will display it on your repository’s actions page.
on
This is the event that triggers the workflow. That line in my file is basically telling GitHub to trigger the workflow whenever a push is made to my repo.
jobs
A workflow can contain one or more jobs, and each job runs in an environment specified by runs-on. In the sample file above, there is just one job, which runs the build and also runs coverage, and it runs in a Windows environment. I can also separate it into two different jobs, like this:

Updated node.js.yml file

name: NodeJS CI
on: [push]
jobs:
  build:
    name: Build
    runs-on: windows-latest
    strategy:
      matrix:
        node-version: [12.x, 14.x]

    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm install
      - run: npm run build --if-present
      - run: npm run test

  coverage:
    name: Coveralls
    runs-on: windows-latest
    strategy:
      matrix:
        node-version: [12.x, 14.x]

    steps:
      - uses: coverallsapp/github-action@master
        env:
          COVERALLS_REPO_TOKEN: ${{ secrets.COVERALLS_REPO_TOKEN }}
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}

env
This contains the environment variables that are available to all or specific jobs and steps in the workflow. In the coverage job, you can see that the environment variables have been “hidden”. They can be found in your repo’s secrets page under settings.
steps
This basically is a list of the steps to be taken when running that job.
The build job does a number of things:
It uses a checkout action (v2 signifies the version) that literally checks out your repository so that it is accessible by your workflow;
It uses a setup-node action that sets up the Node environment to be used;
It runs the install, build, and test scripts found in our package.json file.

coverage
This uses a coverallsapp action that posts your test suite’s LCOV coverage data to coveralls.io for analysis.

I initially made a push to my feat-add-controllers-and-route branch and forgot to add the repo_token from Coveralls to my .coveralls.yml file, so I got the error you can see on line 132.

Bad response: 422 {"message":"Couldn't find a repository matching this job.","error":true}

Once I added the repo_token, my build was able to run successfully. Without this token, coveralls would not be able to properly report my test coverage analysis. Good thing our GitHub Actions CI pointed out the error before it got pushed to the main branch.

N.B.: These were taken before I separated the job into two jobs. Also, I was able to see the coverage summary (and error message) on my terminal because I added the --verbose flag at the end of my coverage script.

Conclusion

We can see how to set up continuous integration for our projects and also integrate test coverage using the Actions made available by GitHub. There are so many other ways this can be adjusted to fit the needs of your project. Although the sample repo used in this tutorial is a really minor project, you can see how essential continuous integration is even in a bigger project. Now that my jobs have run successfully, I am confident merging the branch with my main branch. I would still advise that you also read through the results of the steps after every run to see that it is completely successful.

Exciting New Tools for Designers, November 2020

Original Source: https://www.webdesignerdepot.com/2020/11/exciting-new-tools-for-designers-november-2020/

In the spirit of fall feasts, this month’s collection of tools and resources is a smorgasbord of sorts. You’ll find everything from web tools to icon libraries to animation tools to great free fonts. Let’s dig in.

Here’s what new for designers this month.

The Good Line-Height

The Good Line-Height is the tool you won’t be able to live without after using it a few times. The tool calculates the ideal line-height for every text size in a typographic scale so that everything always fits the baseline grid. Set the font size, multiplier, and grid row height to get started.
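The core of the calculation can be sketched in a few lines. This is an assumed version of what such a tool computes, not its actual algorithm: round the desired line-height up to the nearest multiple of the baseline grid row height.

```javascript
// Assumed formula: the smallest multiple of the grid row height that is
// at least fontSize * multiplier, so text always sits on the baseline grid.
function goodLineHeight(fontSize, multiplier, gridRowHeight) {
  return Math.ceil((fontSize * multiplier) / gridRowHeight) * gridRowHeight;
}

console.log(goodLineHeight(16, 1.5, 4)); // 24 (already on the 4px grid)
console.log(goodLineHeight(21, 1.5, 4)); // 32 (31.5 rounded up to the next 4px step)
```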

Link-to-QR

Link-to-QR makes creating quick codes a breeze. Paste in your link and the tool creates an immediate QR code that you can download or share. Pick a color and transparency, plus size, and you are done.

Quarkly

Quarkly allows you to create websites and web apps both using a mouse and typing code – you get all the pros of responsive editing, but can also open the code editor at any time and manually edit anything and it all synchronizes. The tool is built for design control and is in beta.

UnSpam.email

Unspam.email is an online spam tester tool for emails. Improve deliverability with the free email tester. The service analyzes the main aspects of an email and returns a spam score and predicts results with a heat map of your email newsletter.

Filmstrip

Filmstrip allows you to create or import keyframe animations, make adjustments, and export them for web playback. It’s a quick and easy tool for modern web animation.

CSS Background Patterns

CSS Background Patterns is packed with groovy designs that you can adjust and turn into just the right background for your web project. Set the colors, opacity, and spacing; then pick a pattern; preview it right on the screen; and then snag the CSS. You can also submit your own patterns.

Neonpad

Neonpad is a simple – but fun – plain text editor in neon colors. Switch hues for a different writing experience. Use it small or expand to full browser size.

Link Hover Animation

Link Hover Animation is a nifty twist on a hover state. The animation draws a circle around the link!

Tint and Shade Generator

Tint and Shade Generator helps you make the most of any hex color. Start with a base color palette and use it to generate complementary colors for gradients, borders, backgrounds, or shadows.
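Conceptually, tints and shades come from mixing a color toward white or black. The sketch below illustrates that idea; it is an assumption about the technique, not the tool's actual code.

```javascript
// A tint mixes each RGB channel toward white (255);
// a shade mixes it toward black (0). amount is 0..1.
function mixChannel(channel, target, amount) {
  return Math.round(channel + (target - channel) * amount);
}

const tint = (rgb, amount) => rgb.map((c) => mixChannel(c, 255, amount));
const shade = (rgb, amount) => rgb.map((c) => mixChannel(c, 0, amount));

console.log(tint([100, 150, 200], 0.5));  // halfway to white
console.log(shade([100, 150, 200], 0.5)); // halfway to black
```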

Pure CSS Product Card

Pure CSS Product Card by Adam Kuhn is a lovely example of an e-commerce design that you can learn from. The card is appealing and functional.

Free Favicon Maker

Free Favicon Maker allows you to create a simple SVG or PNG favicon in a few clicks. You can set a style that includes a letter or emoji, font and size, color, and edge type and you are ready to snag the HTML or download the SVG or PNG file.

Ultimate Free iOS Icon Pack

The Ultimate Free iOS Icon Pack is a collection of 100 minimal icons in an Apple style. With black and white versions of each icon and original PSD files, you can create sleek icons for your iPhone screen in minutes. And it’s completely free! No email address or registration required.

Phosphor Icon Family

Phosphor is a flexible icon family for all the things you need icons for including diagrams and presentations. There are plenty of arrows, chats, circles, clocks, office elements, lists, business logos, and more. Everything is in a line style, filled, or with duotone color. Everything is free but donations are accepted.

3,000 Hands

3,000 Hands is a kit of hands that includes plenty of gestures and style in six skin tones and with 10 angles of every gesture. They have a 3D-ish shape and are in an easy to use PNG format. This kit has everything you need from a set of hand icons.

Radix Icons

Radix Icons is a set of 15px by 15px icons for tiny spaces. They are in a line style and are available in a variety of formats including Figma, Sketch, iconJar, SVG, npm installation, or GitHub.

Deepnote

Deepnote is a new kind of data science notebook. It is Jupyter-compatible with real-time collaboration and running in the cloud and designed for data science teams.

ZzFXM Tiny JavaScript Music Generator

ZzFXM is a tiny JavaScript function that generates stereo music tracks from patterns of note and instrument data. Instrument samples are created using a modified version of the super-tiny ZzFX sound generator by Frank Force. It is designed for size-limited productions.

Image Tiles Scroll Animation

Image Tiles Scroll Animation is a different type of scrolling pattern using Locomotive Scroll. The grid creates a smooth animation in a fun and modern style.

Bubbles

Bubbles is a Chrome extension that allows you to collaborate by clicking anywhere on your screen and then dropping a comment to start a conversation with anyone. This is a nice option for work from home teams.

Tyrus

Tyrus is a toolkit from the design team at Airbnb to help illustrators make the most out of their design businesses. It is broken into sections to help you with design briefs, originality, deadlines, and feedback.

PatchGirl

PatchGirl is an automated QA tool for developers. You can combine SQL and HTTP queries to build any possible state of your database.

Apparel

Apparel is a beautiful premium typeface family with plenty of versatility in a modern serif style. It is a contemporary, classy, and fresh serif typeface with a laid-back feel. Its medium-large x-height makes it ideal for headlines and brand identity design.

Christmas Story

Christmas Story is a nice solution if you are already starting to think ahead to holiday projects or cards. The long swashes and tails are elaborate and fun.

Nafta

Nafta is a fun handwriting style font that has a marker-style stroke. It’s a modern take on the popular Sharpie font. It includes all uppercase letters.

Safira

Safira is a wide and modern sans with ligatures and a stylish feel. The rounded ball terminals are especially elegant.

Shine Brighter Sans

Shine Brighter Sans is a super-thin sans-serif with a light attitude. The limited character set combined with its light weight is best for display use.


Guide To Freelance Web Design Business For College Students

Original Source: http://feedproxy.google.com/~r/Designrfix/~3/t2_NTFk3j2Y/guide-to-freelance-web-design-business-for-college-students

Description: A web designer is now one of the most popular freelance jobs that lets you organize your schedule in the most convenient way.  How to start web design business in college Many college students (if not all of them) work part-time to cover their expenses, pay off the loans, and pay for writing services […]

The post Guide To Freelance Web Design Business For College Students appeared first on designrfix.com.

40+ Creative Progress Bar Designs, Vol. 2

Original Source: https://www.hongkiat.com/blog/progress-bar-designs/

Progress bars play a great role in offering an honest and efficient user interface. I’d rather prefer to monitor the progress of a task through a progress bar than to wait and look at the blank…

Visit hongkiat.com for full content.

20 Progress Bar UI (Freebies) to Download

Original Source: https://www.hongkiat.com/blog/progress-bar-freebies/

For today’s internet user, even an eye blink is too long to wait. However, there are some websites (especially the media-heavy ones) that take some time to load. So to cope with an impatient…

Visit hongkiat.com for full content.

Data Visualization With ApexCharts

Original Source: https://smashingmagazine.com/2020/11/data-visualization-apexcharts/

ApexCharts is a modern charting library that helps developers to create beautiful and interactive visualizations for web pages with a simple API, while React-ApexCharts is ApexChart’s React integration that allows us to use ApexCharts in our applications. This article will be beneficial to those who need to show complex graphical data to their customers.

Getting Started

First, install the React-ApexCharts component in your React application and import react-apexcharts.

npm i react-apexcharts apexcharts

import ReactApexChart from 'react-apexcharts'

The core component of an ApexChart is its configuration object. In the configuration object, we define the series and options properties for a chart. series is the data we want to visualize on the chart. In the series, we define the data and name of the data. The values in the data array will be plotted on the y-axis of the chart. The name of the data will appear when you hover over the chart. You can have a single or multiple data series. In options, we define how we want a chart to look, the features and tools we want to add to a chart and the labels of the x and y axes of a chart. The data we define in the configuration object’s series and options properties is what we then pass to the ReactApexChart component’s series and options props respectively.

Here is a sample of how the components of an ApexChart work together. (We will take a closer look at them later in the article.)

const config = {
  series: [1, 2, 3, 4, 5],
  options: {
    chart: {
      toolbar: {
        show: true
      },
    }
  }
}

return (
  <ReactApexChart options={config.options} series={config.series} type="polarArea" />
)

When going through the docs, you will notice that the width, height, and type of chart are defined in the options object, like in the code snippet below.

const config = {
  series: [44, 55, 13, 43, 22],
  chart: {
    width: 380,
    type: 'pie'
  }
}

This is because the docs were written with vanilla JavaScript applications in mind. We are working with React, so we define the width, height, and type by passing them in as props to the ReactApexChart component. We will see how this works in the next section.

Line Charts

This is a type of chart used to show information that changes over time. We plot a line using several points connected by straight lines. We use Line charts to visualize how a piece of data changes over time. For example, in a financial application, you could use it to show a user how their purchases have increased over some time.

This chart consists of the following components:

Title
This sits on top of the chart and informs the user about what data the chart represents.
Toolbar
The toolbar is at the right-hand corner in the image above. It controls the level of zoom of the chart. You can also export the chart through the toolbar.
Axis labels
On the left and right axes, we have the labels for each axis.
Data labels
The data labels are visible at each plot point on the line. They make it easier to view the data on the chart.

We have seen how a line chart looks and its different components. Now let us go through the steps of building one.

We start with series. Here we define the data of the series and its name. Then, we pass the options and series to the ReactApexChart component’s props. We also define the type of chart in the type prop and set it to line.

const config = {
  series: [{
    name: "Performance",
    data: [10, 21, 35, 41, 59, 62, 79, 81, 98]
  }],
  options: {}
}

return (
  <ReactApexChart options={config.options} series={config.series} type="line" />
)

The critical part of an ApexChart is its series data. The configurations defined in the options property are optional. Without setting any definitions in options, the data will still be displayed. However, it may not be the most readable chart. If you decide not to set any custom definitions in options, it must still be present as an empty object.

Let’s configure the options of the chart by adding some values to the options object we have in the config object.

In the chart property of the options object, we define the configurations of the chart. Here, we add the toolbar to the chart by setting its show property to true. The toolbar provides us with tools to control the zoom level of the chart and to export the chart in different file formats. The toolbar is visible by default.

options: {
  chart: {
    toolbar: {
      show: true
    },
  },
}

We can make our chart easier to read by enabling data labels for the chart. To do that, we add the dataLabels property to the options object and set its enabled property to true. This makes it easier to interpret the data in the chart.

dataLabels: {
  enabled: true
},

By default, the stroke of a line chart is straight. However, we can make it curved. We add the stroke property to options and set its curve to smooth.

stroke: {
  curve: "smooth"
}

An important part of any chart is its title. We add a title property to options to give the chart a title.

title: {
  text: 'A Line Chart',
  align: 'left'
},

We can add labels to the x and y axes of the chart. To do this we add xaxis and yaxis properties to options and there, we define the title for each axis.

xaxis: {
  categories: ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep'],
  title: {
    text: 'Month'
  }
},
yaxis: {
  title: {
    text: 'Performance'
  }
}

In the end, your code should look like this. With these steps, we’ve not only built a line chart but seen a breakdown of how the options we define can enhance a chart.

import ReactApexChart from 'react-apexcharts'

const config = {
  series: [{
    name: "Performance",
    data: [10, 21, 35, 41, 59, 62, 79, 81, 98]
  }],
  options: {
    chart: {
      toolbar: {
        show: true
      },
    },
    dataLabels: {
      enabled: true
    },
    stroke: {
      curve: "smooth"
    },
    title: {
      text: 'A Line Chart',
      align: 'left'
    },
    xaxis: {
      categories: ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep'],
      title: {
        text: 'Month'
      }
    },
    yaxis: {
      title: {
        text: 'Performance'
      }
    }
  }
}

return (
  <ReactApexChart options={config.options} series={config.series} type="line" />
)

Area Charts

An area chart is like a line chart in terms of how data values are plotted on the chart and connected using line segments. The only difference is that in an area chart, the area plotted by the data points is filled with shades or colors. Like line charts, area charts depict how a piece of data changes over time. However, unlike line charts, they can also visually represent volume. We can use it to show how groups in a series of data intersect. For example, a chart that shows you the volume of users that access your application through different browsers.

In the image above, we have an example of an area chart. Like the line chart, it has a title, data labels, and axis labels. The shaded portion of the plotted area chart shows the volume in the data. It also shows how the data in series1 intersects with that of series2. Another use case of area charts is in showing the relationship between two or more pieces of data and how they intersect.

Let’s see how to build a stacked area chart and how to add data labels to it.

To make an area chart, we set the chart type to area and the stroke to smooth. This is the default stroke for an area chart.

const config = {
  options: {
    stroke: {
      curve: 'smooth'
    }
  }
}

return (
  <ReactApexChart options={config.options} series={config.series} type="area" />
)

To make it a stacked chart, in the chart property of the options object, we set stacked to true.

const config = {
  options: {
    stroke: {
      curve: 'smooth'
    },
    chart: {
      stacked: true
    }
  }
}

return (
  <ReactApexChart options={config.options} series={config.series} type="area" />
)

Bar Charts

We use bar charts to present data with rectangular bars at heights or lengths proportional to the values they represent. It is best used to compare different categories, like what type of car people have or how many customers a shop has on different days.

The horizontal bars are the major components of a bar chart. They allow us to compare values across different categories at a glance.

In building a bar chart, we start by defining the series data for the chart and setting the ReactApexChart component’s type to bar.

const config = {
  series: [{
    data: [400, 430, 448, 470, 540, 580, 690, 1100, 1200, 1380]
  }],
  options: {}
}

return (
  <ReactApexChart options={config.options} series={config.series} type="bar" />
)

Let’s add more life and distinction to the bars. By default, bar charts are vertical. To make them horizontal, we define how we want the bars to look in the plotOptions property. We set the horizontal prop to true to make the bars horizontal. We set the position of the dataLabels to bottom. We can also set it to top or center. The distributed prop adds distinction to our bars. Without it, no distinct colors will be applied to the bars, and the legend will not show at the bottom of the chart. We also define the shape of the bars using the startingShape and endingShape properties.

options: {
  plotOptions: {
    bar: {
      distributed: true,
      horizontal: true,
      startingShape: "flat",
      endingShape: "rounded",
      dataLabels: {
        position: 'bottom',
      },
    }
  }
},

Next, we add the categories, labels, and titles to the chart.

xaxis: {
  categories: ['South Korea', 'Canada', 'United Kingdom', 'Netherlands', 'Italy', 'France', 'Japan', 'United States', 'China', 'India']
},

title: {
  text: 'A bar Chart',
  align: 'center',
},

Column Charts

A column chart is a data visualization where each category is represented by a rectangle, with the height of the rectangle being proportional to the plotted values. Like bar charts, column charts are used to compare different categories of data. Column charts are also known as vertical bar charts. To convert the bar chart above to a column chart, all we have to do is set horizontal to false in the plotOptions.

The vertical columns make it easy to interpret the data we visualize. Also, the data labels added to the top of each column increase the readability of the chart.

Let’s look into building a basic column chart and see how we can convert it to a stacked column chart.

As always, we start with the series data and setting the chart type to “bar”.

const config = {
  series: [{
    name: 'Net Profit',
    data: [44, 55, 57, 56, 61, 58, 63, 60, 66]
  }, {
    name: 'Revenue',
    data: [76, 85, 101, 98, 87, 105, 91, 114, 94]
  }, {
    name: 'Free Cash Flow',
    data: [35, 41, 36, 26, 45, 48, 52, 53, 41]
  }],
  options: {}
}

return (
  <ReactApexChart options={config.options} series={config.series} type="bar" />
)

This is what we get out of the box. However, we can customize it. We define the width and shape of the bars in the plotOptions property. We also set the position of the dataLabel to top.

options: {
  plotOptions: {
    bar: {
      columnWidth: '75%',
      endingShape: 'flat',
      dataLabels: {
        position: "top"
      },
    },
  },
}

Next, we define the style and font-size of the data labels and their distance from the graphs. Finally, we add the labels for the x and y axes.

options: {
  dataLabels: {
    offsetY: -25,
    style: {
      fontSize: '12px',
      colors: ["#304758"]
    }
  },

  xaxis: {
    categories: ['Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct'],
  },

  yaxis: {
    title: {
      text: '$ (thousands)'
    }
  },
}

To convert this to a stacked chart, all we have to do is add a stacked property to the chart and set it to true. Also, since we switched to a stacked chart, we’ll change the endingShape of the bars to flat to remove the curves.

options: {
  chart: {
    stacked: true,
  },

  plotOptions: {
    bar: {
      endingShape: 'flat',
    }
  }
}

Pie And Donut Charts

A pie chart is a circular graph that shows individual categories as slices – or percentages – of the whole. The donut chart is a variant of the pie chart, with a hole in its center, and it displays categories as arcs rather than slices. Both make part-to-whole relationships easy to grasp at a glance. Pie charts and donut charts are commonly used to visualize election and census results, revenue by product or division, recycling data, survey responses, budget breakdowns, educational statistics, spending plans, or population segmentation.

In pie and donut charts, series is calculated in percentages. This means the sum of the values in the series should be 100.
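If your raw data isn’t already in percentages, a small helper (hypothetical, not part of ApexCharts) can normalize it before passing it to series:

```javascript
// Hypothetical helper: convert raw counts into percentage values
// (rounded to one decimal place) for a pie or donut series.
function toPercentSeries(values) {
  const total = values.reduce((sum, v) => sum + v, 0);
  return values.map((v) => Math.round((v / total) * 1000) / 10);
}

console.log(toPercentSeries([40, 20, 70, 24, 46])); // [ 20, 10, 35, 12, 23 ]
```

Note that rounding can leave the total a fraction off 100 for some inputs; handling that exactly is left out of this sketch.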

Let’s start by building a pie chart. We set the chart type to pie. We also define the series for the chart and define the labels in the options. The order of the labels corresponds with the values in the series array.

const config = {
  series: [20, 10, 35, 12, 23],
  options: {
    labels: ['Team A', 'Team B', 'Team C', 'Team D', 'Team E'],
  }
}

return (
  <ReactApexChart options={config.options} series={config.series} type="pie" />
)

We can control the responsive nature of our charts. To do this, we add a responsive property to the chart’s options. Here we set the max-width breakpoint to 480px. Then, we set the width of the chart to 450px and the position of the legend to bottom. Now, at screen sizes of 480px and below, the legend will appear at the bottom of the chart.

options: {
  labels: ['Team A', 'Team B', 'Team C', 'Team D', 'Team E'],
  responsive: [{
    breakpoint: 480,
    options: {
      chart: {
        width: 450
      },
      legend: {
        position: 'bottom'
      }
    }
  }]
},

To convert the pie chart to a donut chart, all you have to do is change the component’s type to donut.

<ReactApexChart options={config.options} series={config.series} type="donut" />

Mixed Charts

Mixed charts allow you to combine two or more chart types into a single chart. You can use mixed charts when the numbers in your data vary widely from data series to data series or when you have mixed types of data (for example, price and volume). Mixed charts make it easy to visualize different data types in the same format simultaneously.

Let’s make a combination of a line, area, and column chart.

We define the series data and the type for each of the charts. For mixed charts, the type of each chart is defined in its series, and not in the ReactApexChart component’s type prop.

const config = {
  series: [{
    name: 'TEAM A',
    type: 'column',
    data: [23, 11, 22, 27, 13, 22, 37, 21, 44, 22, 30]
  }, {
    name: 'TEAM B',
    type: 'area',
    data: [44, 55, 41, 67, 22, 43, 21, 41, 56, 27, 43]
  }, {
    name: 'TEAM C',
    type: 'line',
    data: [30, 25, 36, 30, 45, 35, 64, 52, 59, 36, 39]
  }],
  options: {}
}

Next, we set the stroke type to smooth and define its width. We pass in an array of values to define the width of each chart. The values in the array correspond to the order of the charts defined in series. We also define the opacity of each chart’s fill. For this, we also pass in an array. This way, we can control the opacity of each chart separately.

Lastly, we add the labels for the x and y axes.

options: {
  stroke: {
    width: [2, 2, 4],
    curve: 'smooth'
  },
  fill: {
    opacity: [0.7, 0.3, 1],
  },
  labels: ['Jan', 'Feb', 'March', 'April', 'May', 'June', 'July', 'Aug', 'Sept', 'Oct', 'Nov'],
  yaxis: {
    title: {
      text: 'Points',
    },
  },
}

Customizing our charts

Apart from changing the color of our charts, we can add some level of customization to them.

We can add grids to our charts and style them. In the grid property, we define the colors for the rows and columns of the chart. Adding grids to your chart can make it easier to understand.

options: {
  grid: {
    row: {
      colors: ['#f3f3f3', 'transparent'],
      opacity: 0.5
    },
    column: {
      colors: ['#dddddd', 'transparent'],
      opacity: 0.5
    },
  },
}

We can adjust the stroke of the charts and define their colors. Let’s do that with the column chart. Each color in the colors array corresponds with the data in the series array.

options: {
  stroke: {
    show: true,
    width: 4,
    colors: ['red', 'blue', 'green']
  },
}

Conclusion

We have gone through some of the chart types ApexCharts provides and learned how to switch from one chart type to another. We have also seen some ways of customizing the appearance of our charts. There are still many things to discover, so dive into the ApexCharts docs right away.

Internationalization And Localization For Static Sites

Original Source: https://smashingmagazine.com/2020/11/internationalization-localization-static-sites/

Internationalization and localization is more than just writing your content in multiple languages. You need a strategy to determine what localization to send, and code to do it. You need to be able to support not just different languages, but different regions with the same language. Your UI needs to be responsive, not just to screen size, but to different languages and writing modes. Your content needs to be structured, down to the microcopy in your UI and the format of your dates, to be adaptable to any language you throw at it. Doing all of this with a static site generator, like Eleventy, can make it even harder, because you may not have a database, let alone a server. It can all be done, though, but it takes planning.

When building out chromeOS.dev, we knew that we needed to make it available to a global audience. Making sure that our codebase could support multiple locales (language, region, or combination of the two) without needing to custom-code each one, while allowing translation to be done with as little of that system’s knowledge as possible, would be critical to making this happen. Our content creators needed to be able to focus on creating content, and our translators on translating content, with as little work as possible to get their work into the site and deployed. Getting these sometimes conflicting set of needs right is the heart of what it takes to internationalize codebases and localize sites.

Internationalization (i18n) and localization (l10n) are two sides of the same coin. Internationalization is all about how, in our case, software, gets designed so that it can be adapted for multiple languages and regions without needing engineering changes. Localization, on the other hand, is about actually adapting the software for those languages and regions. Internationalization can happen across the whole website stack; from HTML, CSS, and JS to design considerations and build systems. Localization happens mostly in content creation (both long-form copy and microcopy) and management.

Note: For those curious, i18n and l10n are types of abbreviations known as numeronyms. A11y, for accessibility, is another common numeronym in web development.

Internationalization (i18n)

When figuring out internationalization, there are generally three items you need to consider: how to figure out what language and/or region the user wants, how to make sure they get content in their preferred localization, and how to adapt your site to adjust to those differences. While implementation specifics may change for dynamic sites (that render a page when a user requests it) and static sites (where pages are rendered before getting deployed), the core concepts should stay the same.

Determining User’s Language And Region

The first thing to consider when figuring out internationalization is to determine how you want users to access localized content. This decision will become foundational to how you set up other systems, so it’s important to decide this early and ensure that the tradeoffs work well for your users.

Generally, there are three high-level ways of determining what localization to serve to users:

Location from IP address;
Accept-Language header or navigator.languages;
Identifier in URL.

Many systems wind up combining one, two, or all three, when deciding what localization to serve. As we were investigating, though, we found issues with using IP addresses and Accept-Language headers that we thought were significant enough to remove from consideration for us:

A user’s preferred language often doesn’t correlate to their physical location, which IP address provides. Just because someone is physically located in America, for instance, does not mean that they would prefer English content.
Location analysis from IP addresses is difficult, generally unreliable, and may prevent the site from being crawled by search engines.
Accept-Language headers are often not explicitly set, and only provide information about language, not region. Because of these limitations, the header may be helpful to establish an initial guess about language, but isn’t necessarily reliable.

For these reasons, we decided that it would be better for us to not try and infer language or region before a user lands on our site, but rather have strong indicators in our URLs. Having strong indicators also allows us to assume that they’re getting the site in the language they want from their access URL alone, provides for an easy way to share localized content directly without concern of redirection, and provides a clean way for us to let users switch their preferred language.

There are three common patterns for building identifiers into URLs:

Provide different domains (usually TLDs or subdomains) for different regions and languages (e.g. example.com and example.de, en.example.org and de.example.org);
Have localized sub-directories for content (e.g. example.com/en and example.com/de);
Serve localized content based on URL parameters (e.g. example.com?loc=en and example.com?loc=de).

While commonly used, URL parameters are generally not recommended because it’s difficult for users to recognize the localization (along with a number of analytics and management issues). We also decided that different domains weren’t a good solution for us; our site is a Progressive Web App and every domain, including TLDs and subdomains, are considered a different origin, effectively requiring a separate PWA for each localization.
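The origin problem can be seen directly with the URL API: a locale subdomain is a distinct origin (and would need its own PWA), while a locale subdirectory shares its parent’s origin. The domain names below are illustrative.

```javascript
// A subdomain changes the origin; a subdirectory does not.
const subdomain = new URL('https://de.example.org/docs');
const subdirectory = new URL('https://example.org/de/docs');

console.log(subdomain.origin);    // https://de.example.org
console.log(subdirectory.origin); // https://example.org
```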

We decided to use subdirectories, which provided a bonus of us being able to localize on language only (example.com/en) or language and region (example.com/en-US and example.com/en-GB) as needed while maintaining a single PWA. We also decided that every localization of our site would live in a subdirectory so one language isn’t elevated above another, and that all URLs, except for the subdirectory, would be identical across localizations based on the authoring language, allowing users to easily change localizations without needing to translate URLs.
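Reading the localization out of a subdirectory-based URL can be sketched as follows. The supported locale list here is illustrative, not chromeOS.dev’s actual set.

```javascript
// Illustrative locale list; real sites would source this from config.
const SUPPORTED_LOCALES = ['en', 'en-US', 'en-GB', 'de', 'es'];

// Return the locale subdirectory from a pathname, or null if there isn't one.
function getLocaleFromPath(pathname) {
  const [first] = pathname.split('/').filter(Boolean);
  return SUPPORTED_LOCALES.includes(first) ? first : null;
}

console.log(getLocaleFromPath('/de/docs/start')); // de
console.log(getLocaleFromPath('/docs/start'));    // null
```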

Serving Localized Content

Once a strategy for determining a user’s language and region has been determined, you need a way to reliably serve them the right content. At a minimum, this will require some form of stored information, be it in a cookie, some local storage, or part of your app’s custom logic. Being able to keep a user’s localization preferences is an important part of i18n user experience; if a user has identified they want content in German, and they land on English content, you should be able to identify their preferred language and redirect them appropriately. This can be done on the server, but the solution we went with for chromeOS.dev is hosting and server setup agnostic: we used service workers. The user’s journey is as follows:

A user comes to our site for the first time. Our service worker isn’t installed.
Whatever localization they land on we set as their preferred language in IndexedDB. For this, we presume they’re landing there through some means, either social, referral, or search, that has directed them based on other localization contexts we don’t have. If a user lands without a localization set, we set it to English, as that’s our site’s primary language. We also have a language switcher in our footer to allow a user to change their language. At this point, our service worker should be installed.
After the service worker is installed, we intercept all URL requests for site navigation. Because our localizations are subdirectory based, we can readily identify what localization is being requested. Once identified, we check if the requested page is in a localized subdirectory, check if the localized subdirectory is in a list of supported localizations, and check if the localized subdirectory matches their preferences stored in IndexedDB. If it’s not in a localized subdirectory or the localized subdirectory matches their preferences, we serve the page; otherwise we do a 302 redirect from our service worker for the right localization.

We bundled our solution into a Workbox plugin, Service Worker Internationalization Redirect. The plugin, along with its preferences sub-module, can be used to set and get a user’s language preference and manage redirection when combined with Workbox’s registerRoute method, filtering requests on request.mode === 'navigate'.

A full, minimal example looks like this:

Client Code

import { preferences } from 'service-worker-i18n-redirect/preferences';

window.addEventListener('DOMContentLoaded', async () => {
  const language = await preferences.get('lang');
  if (language === undefined) {
    // Language determined from the localization the user landed on,
    // e.g. the lang attribute on the html element
    preferences.set('lang', document.documentElement.lang);
  }
});

Service Worker Code

import { StaleWhileRevalidate } from 'workbox-strategies';
import { CacheableResponsePlugin } from 'workbox-cacheable-response';
import { i18nHandler } from 'service-worker-i18n-redirect';
import { preferences } from 'service-worker-i18n-redirect/preferences';
import { registerRoute } from 'workbox-routing';

// Create a caching strategy
const htmlCachingStrategy = new StaleWhileRevalidate({
  cacheName: 'pages-cache',
  plugins: [
    new CacheableResponsePlugin({
      statuses: [200],
    }),
  ],
});

// Array of supported localizations
const languages = ['en', 'es', 'fr', 'de', 'ko'];

// Use it for navigations
registerRoute(
  ({ request }) => request.mode === 'navigate',
  i18nHandler(languages, preferences, htmlCachingStrategy),
);

With the combination of the client-side and service worker code, a user’s preferred localization is set automatically the first time they hit the site and, if they navigate to a URL that isn’t in their preferred localization, they’ll be redirected.

Adapting Site User Interface

There is a lot that goes into properly adapting user interfaces, so while not everything will be covered here, there are a handful of more subtle things that can and should be managed programmatically.

Blockquote Quotes

A common design pattern is wrapping blockquotes in quotation marks, but the characters used for those quotation marks vary with localization. Instead of hard-coding them, use open-quote and close-quote to ensure the correct quotes are used for the correct language.
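As a minimal sketch, the browser can pick the right marks based on the lang attribute of the blockquote or its ancestors:

```css
/* The quotes property defaults per language; quotes: auto makes that
   explicit in modern browsers (e.g. “…” for en, „…“ for de). */
blockquote {
  quotes: auto;
}

blockquote::before {
  content: open-quote;
}

blockquote::after {
  content: close-quote;
}
```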

Date And Number Format

Both dates and numbers have a method, .toLocaleString, to allow formatting based on a localization (language and/or region). Browsers that support these ship with all localizations available, making them readily usable there, but Node.js doesn’t. Fortunately, the full-icu module for Node allows you to use all of the localization data available. To do so, after installing the module, run your code with the NODE_ICU_DATA environment variable set to the path to the module, e.g. NODE_ICU_DATA=node_modules/full-icu.
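For example (output shown for a runtime with full ICU data; exact strings can vary slightly between ICU versions):

```javascript
// Number formatting follows the locale's digit grouping and decimal separator.
const price = 1234.56;
console.log(price.toLocaleString('en-US')); // 1,234.56
console.log(price.toLocaleString('de-DE')); // 1.234,56

// Date formatting follows the locale's conventions, too.
const date = new Date(Date.UTC(2020, 10, 12)); // 12 November 2020
console.log(date.toLocaleDateString('en-US', { timeZone: 'UTC' })); // 11/12/2020
console.log(date.toLocaleDateString('de-DE', { timeZone: 'UTC' })); // 12.11.2020
```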

HTML Meta Information

There are three areas in your html element and document head that should be updated with each localization:

The page’s language,
Writing direction,
Alternative languages the page is available in.

The first two go on the html element as the lang and dir attributes respectively, e.g. <html lang="en" dir="ltr"> for US English. Properly setting these will ensure content flows in the right direction and allows browsers to understand what language the page is in, enabling additional features like translating the content. You should also include rel="alternate" links to let search engines know that a page has been fully translated, so including <link href="/es" rel="alternate" hreflang="es"> on our English landing page will let search engines know that it has a translation they should be on the lookout for.
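Put together, the relevant markup for a localized page might look like this (the URLs and locale set here are illustrative):

```html
<html lang="es" dir="ltr">
  <head>
    <!-- This page's translations, one alternate link per localization -->
    <link href="https://example.com/en/web/pwas" rel="alternate" hreflang="en">
    <link href="https://example.com/fr/web/pwas" rel="alternate" hreflang="fr">
  </head>
  …
</html>
```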

Intrinsic Design

Localizing content can present design challenges as different translations will take up a varying amount of room on the page. Some languages, like German, have longer words requiring more horizontal space or more forgiving text wrapping. Other languages, like Arabic, have taller typefaces requiring more vertical space. Fortunately, there are a number of CSS tools for making spacing and layout responsive to not just the viewport size, but to the content as well, meaning they better adapt to multiple languages.

There are a number of CSS units specifically designed for working with content. The em and rem units represent the calculated font size and root font size, respectively; swapping fixed-size px values for these units can go a long way in making a site more responsive to its content. Then there’s the ch unit, representing the inline size of the 0 (zero) glyph in a font. This allows you to tie things like width directly to the content it contains.

These units can then be combined with existing, powerful CSS tools for layout, specifically flexbox and grid, to create components that adapt to their content and layouts that adapt to theirs. Enhancing those with logical properties for borders, margins, and padding instead of physical properties makes those layouts and components automatically adapt to writing mode, too. The power of intrinsic web design (a term coined by Jen Simmons), content-aware units, and logical properties allows interfaces to be designed and built so they can adapt to any language, not just any screen size.

Localization (l10n)

The most obvious form localization takes is translating content from one language to another. In its more subtle forms, translation happens not only by language, but also by the region where that language is spoken, for instance, English as spoken in the United States versus the United Kingdom, South Africa, or Australia. To be successful here, understanding what to translate and how to structure your content for translation is critical.

Content Strategy

There are some parts of a software project that are important to localize, and some that aren’t. CSS class names, JavaScript variables, and other places in your codebase that are structural, but not user-facing, probably don’t need to be localized. Figuring out what needs to be localized, and how to structure it, comes down to content strategy.

Content strategy has a lot of definitions, but here it means the structure of content, microcopy (the words and phrases used throughout a project not tied to a specific piece of content), and the connections thereof. For more detailed information on content strategy, I’d recommend Content Strategy for Mobile by Karen McGrane and Designing Connected Content by Carrie Hane and Mike Atherton.

For chromeOS.dev, we wound up codifying content models that describe the structure of our content. Content models aren’t just for long-form article-like content; a content model should exist for any entity that a user may specifically want from you, like an author, document, or even reusable media assets. Good content models include individually-addressable pieces, or chunks, of a larger conceptual piece, while excluding chunks that are tangentially related or can be referenced from another content model. For instance, a content model for a blog post may include a title, an array of tags, a reference to an author, the date published, and the body of the post, but it shouldn’t include the string for breadcrumbs, or the author’s name and picture, which should be its own content model. Content models don’t change from localization to localization; they are site structure. An instance of a content model is tied to a localization, and those instances can be localized.
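A minimal sketch of such a blog post model as YAML front matter; the field names are our own illustration, not chromeOS.dev’s actual schema:

```yaml
# Front matter for one instance of the blog post content model.
title: Building an offline-first PWA
tags:
  - pwa
  - service-worker
author: first-last   # a reference to an author content model, not a name string
date: 2020-11-12
# The post body follows as Markdown; breadcrumbs and the author's name
# and picture are resolved from their own models at render time.
```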

Content models only cover part of what needs to be localized, though. The rest—your “Read More” buttons, your “Menu” title, your disclaimer text—that’s all microcopy. Microcopy needs structure, too. While content models may feel natural to create, especially for template-driven sites, microcopy models tend to be less obvious and are often overlooked accidentally by writing what’s needed directly in a template.

By building content and microcopy models and enforcing them—through a content management system, linting, or review—you’re able to ensure that localization can focus on localizing.

Localize Values, Not Keys

Content and microcopy models usually generate structures akin to objects in a codebase, be it database entries, JSON objects, YAML, or front matter. Don’t localize object keys! If you have your Search text microcopy located in a microcopy object at microcopy.search.text, don’t put it in a microcopie object at microcopie.chercher.texte. Keys in models should be treated as localization-agnostic identifiers so they can be reliably used in reusable templates and relied upon throughout a codebase. This also means that object keys shouldn’t be displayed to end-users as content or microcopy.
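In practice, that means a Spanish microcopy file keeps the same keys as the English one, with only the values translated (the file contents here are illustrative):

```javascript
// en microcopy (illustrative)
const en = {
  search: {
    text: 'Search',
  },
};

// es microcopy (illustrative): same keys, localized values.
const es = {
  search: {
    text: 'Buscar',
  },
};

// Templates can now rely on one stable path for every locale.
function searchLabel(microcopy) {
  return microcopy.search.text;
}

console.log(searchLabel(en)); // Search
console.log(searchLabel(es)); // Buscar
```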

Static Site Setup

For chromeOS.dev, we used Eleventy (11ty) with Nunjucks as our static site generator, but these recommendations for setting up a static site generator should be applicable to most static site generators. Where something is 11ty specific, it will be called out.

Folder Structure

Static site generators that compile based on folder structure are particularly good at supporting the subdirectory i18n method. 11ty also supports a data cascade with global data and a means of generating pages from data through pagination, so combining these three concepts yields a basic folder structure that looks like the following:

.
└── pages
    ├── _data
    ├── _generated
    └── {{locale-code}}
        ├── {{locale-code}}.11tydata.js
        ├── _data
        └── […content]

At the top level, there’s a directory to hold the pages for a site, here called pages. Nested inside, there’s a _data folder containing global data files. This folder is important when talking about helpers next. Then, there’s a _generated folder. We have a number of pages that, instead of having their own content, are generated from existing content, small amounts of microcopy, or a combination of both. Think of a home page, a search page, or a blog section’s landing page. Because these pages are highly templated, we store the templates in the _generated folder and build pages from them instead of having individual HTML or Markdown files for each. These folders are prefixed with an underscore to indicate that they don’t output pages directly underneath them, but rather are used to create pages elsewhere.

Next, l10n subdirectories! Each directory should be named for the BCP47 language tag (more commonly, locale code) for the localization it contains: for instance, en for English, or en-US for American English. In the chromeOS.dev codebase, we often refer to these as locales, too. These folders will become the localization subdirectories, segmenting content to a localization. 11ty’s data cascade allows for data to be available to every file in a directory and its children if the file is at the root of a directory and named the same as the directory (called directory data files). 11ty uses an object returned from this file, or a function that returns an object, and injects it into the variables made available for templating, so we have access to data here for all content of that localization.

To aid in the maintainability of these files, we wrote a helper called l10n-data, part of our static site scaffolding, that takes advantage of this folder structure to build a cascade of localized data, allowing data to be localized piecemeal. It does this by loading the data files stored in each locale-specific _data directory into that locale’s directory data file. If you look in our English locale data directory, for instance, you’ll see microcopy models like locale.json, which defines the language code and writing direction that will then be rendered into our HTML; newsletter.yml, which defines the microcopy needed for our newsletter signup; and a microcopy.yml file, which includes general microcopy used in multiple places throughout the site that doesn’t fit into a more specific file. Everywhere any of this microcopy gets used, we pull it from this data, which 11ty injects as variables into our templates.

Microcopy tends to be the hardest to manage, while the rest of the content is mostly straightforward. Put your content, often Markdown files or HTML, into the localized subfolder. For static site generators that work on folder structure, the file name and folder structure of the content will typically map 1:1 to the final URL for that content, so a Markdown file at en/web/pwas.md would output to the URL en/web/pwas. Following our “values, not keys” principle of localization, we decided that we wouldn’t localize content file names (and therefore paths), making it easier for us to keep track of the same file’s localization status across locales and for users to know they’re on the right page when switching between locales.

I18n Helpers

In addition to content and microcopy, we found we needed to write a number of helper modules to make working with localized content easier. 11ty has a concept called a filter that allows content to be modified before being rendered. We wound up building four of them to help with i18n templating.

The first is a date filter. We standardized on having all dates across our content written as a YAML date value because we mostly write them in YAML and they become available in our templates as a full UTC timestamp. When using the full-icu module and config, the date string (content being changed), along with the locale code for the content being rendered, can be passed directly to Date.toLocaleString (with optional formatting options) to render a localized date. Date.toLocaleDateString can optionally be used instead if you just want the date portion when no formatting options are passed in, instead of the full localized date and time.
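A sketch of what such a filter could look like; the function name and registration are our own illustration, not the actual chromeOS.dev code:

```javascript
// Render a UTC timestamp (as produced by a YAML date value) in the
// page's locale, optionally passing Intl formatting options through.
function localizedDate(dateString, locale, options = {}) {
  return new Date(dateString).toLocaleDateString(locale, {
    timeZone: 'UTC',
    ...options,
  });
}

// Registered with 11ty, this might look like (sketch):
// eleventyConfig.addFilter('date', localizedDate);

console.log(localizedDate('2020-11-12', 'en-US')); // 11/12/2020
console.log(localizedDate('2020-11-12', 'de-DE')); // 12.11.2020
```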

The second filter is something we called localURL. This takes a local URL (the content being changed) and the locale the URL should be in, and swaps the locale code. It changes, for example, /en/linux to /es/linux.
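A minimal version of that filter could look like this, assuming the locale is always the first URL segment (the real implementation may differ):

```javascript
// Swap the locale subdirectory at the start of a local URL.
function localURL(url, locale) {
  const [, , ...rest] = url.split('/'); // drop leading '' and the old locale
  return ['', locale, ...rest].join('/');
}

console.log(localURL('/en/linux', 'es')); // /es/linux
console.log(localURL('/en-US/web/pwas', 'en-GB')); // /en-GB/web/pwas
```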

The final two filters are about retrieving localized information from a locale code alone. The third leverages the iso-639-10 module to transform a locale code into the language’s name in that language. We use this primarily for our language selector. The fourth uses the iso-i18n-countries module to retrieve a list of countries in that language. We use this primarily for building forms with country lists.

In addition to filters, 11ty has a concept called collections, which are groupings of content. 11ty makes a number of collections available by default and can even build collections off of tags. In a multilingual site, we found that we wanted to build custom collections, so we wound up building a number of helper functions to build collections based on localization. This allows us to do things like have locale-specific tag collections or site section collections without needing to filter against all content on our site in our templates.
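One such helper, sketched under the assumption of the subdirectory structure above (function names are illustrative): filter the full set of collection items down to a locale by looking at each item’s URL.

```javascript
// Build a locale-specific collection by checking the locale
// subdirectory at the start of each item's URL, e.g. /en/… or /es/….
function byLocale(items, locale) {
  return items.filter((item) => item.url.startsWith(`/${locale}/`));
}

// Registered with 11ty, this might look like (sketch):
// eleventyConfig.addCollection('posts_en', (api) =>
//   byLocale(api.getFilteredByTag('posts'), 'en'));

const items = [
  { url: '/en/news/hello' },
  { url: '/es/news/hola' },
  { url: '/en/docs/setup' },
];
console.log(byLocale(items, 'en').length); // 2
```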

Our final, and most critical, helper was our site global data. Relying on the locale-code based subdirectory structure, this function dynamically determines what localizations the site supports. It builds a global variable, site, which includes the l10n property, containing all of the microcopy and localization-specific content from {{locale-code}}.11tydata.js. It also contains a languages property that lists all of the available locales as an array. Finally, the function outputs a JavaScript file detailing what languages are supported by the site and individual files for each entry in {{locale-code}}.11tydata.js, keyed per localization, all designed to be imported by our browser scripts. The heavy lifting of this file ties our static site to our front-end JavaScript with the single source of truth being the localization information we already need. It also allows us to programmatically generate pages based on our localizations by looping over site.l10n. This, combined with our localization-specific collections, let us use 11ty’s pagination to create localized home and news landing pages without maintaining separate HTML pages for each.

Conclusion

Getting internationalization and localization right can be difficult; understanding how different strategies affect complexity is critical to making it easier. Pick an i18n strategy that is a natural fit for static sites, subdirectories, then build tools off of it to automate parts of i18n and l10n from the content being produced. Build robust content and microcopy models. Leverage service workers for server-agnostic localization. Tie it all together with a design that’s responsive not just to screen size, but to content. In the end, you’ll have a site that users of all locales will love and that can be maintained by authors and translators as if it were a simple single-locale site.