People drew car logos from memory and the results are hilarious

We like to think we're pretty knowledgeable when it comes to logos and branding, but as a new study has shown, drawing even the most recognisable logos from memory can be a tad harder than it looks. 

Vanmonster recently asked 100 members of the British public to draw several famous car logos from memory, and the results range from impressively accurate to hilariously off. They also serve as an interesting insight into which logo elements are ingrained into the public consciousness, and which are more forgettable than the manufacturers might like to think. For some of the most memorable logos ever, check out our best logos of all time.

We'll start with perhaps the simplest logo on Vanmonster's list. Surely Audi's four intersecting rings (below) are impossible to forget? To be fair, the vast majority of entrants got this one right, and most slip-ups are at least ring-related (Audi's and the Olympics' rings are similar, we'll give them that). As for those towards the bottom-right (Vanmonster lists the drawings from 'most accurate' to 'least accurate'), a few seem to have mixed up Audi and the Avengers. They both begin with 'A', we guess.


Most of these look about right

On the other end of the spectrum, the Alfa Romeo logo is probably the most complex on the list. Unsurprising, then, that most got this one wrong. 74% of entrants forgot to include the red cross, 63% forgot the green snake and a whopping 75% didn't include the shield. Our favourites are probably the three 'least accurate': two question marks, and one which simply says, "animals of some sort".

100 very different Alfa Romeo logos

As well as showing all 100 entries for each logo, Vanmonster includes a handy gallery showing us the original logo alongside the most and least accurate attempt. Below are the best and worst attempts at BMW's logo (which recently underwent its biggest design change in over 100 years).

BMW logos


You can find the rest of the entries on Vanmonster's website, with logos including Renault, Toyota, Ferrari and many more. One thing's for sure: just like when 150 Americans tried to draw various (non-vehicular) logos from memory, this car logo test proves that the simplest logos are the most memorable. It's easier to recall rings than flag-snake-shield combinations.

Related articles:

Where to find logo design inspiration
Quiz: Can you identify these original car logos?
TrueCar rebrand fails to reinvent the wheel

4 Top SEO Plugins For WordPress (+ Bonus Tools)

When building out a WordPress website, it’s vital to have all the right tools on hand. And to get eyeballs on your content, that means building out a solid SEO strategy. A relatively hands-off way to accomplish this is through the use of SEO plugins. Luckily, there are quite a few plugins available for WordPress that make optimizing your site for SEO super easy.

Let’s take a look at some of these SEO plugins for WordPress then dive into discussing a few other tools that can take the guesswork out of selecting keywords and tracking results.



Yoast SEO

Yoast SEO - WordPress SEO plugins

First on our list is Yoast SEO. This WordPress plugin is one of the most popular, for good reason. It acts as a one-stop shop for on-page SEO. Once installed, it automatically adds widgets to each post and page you can use to add SEO titles, descriptions, assign keywords, as well as other items. You can also use it to add Open Graph metadata. Additionally, you can use it to add social media images to each post along with titles and descriptions optimized for social media platforms. Lastly, Yoast creates an XML sitemap for you and can be used for managing SEO redirects. Both a free and premium version of the plugin are available.


SEOPress

SEOPress - WordPress SEO plugins

Another great option is SEOPress. This plugin covers much of the same ground as Yoast by adding fields for customizing a post or page’s meta title, description, social media content, redirects, and XML sitemaps. Its interface is a bit easier to navigate, however, while still offering a wide range of options for experienced developers. This plugin is available in a free and premium version as well.

All in One SEO Pack

All In One SEO - WordPress SEO plugins

Another popular option is the All in One SEO Pack. This SEO plugin for WordPress includes a full set of tools you can implement immediately on your website: customize meta titles and descriptions, set up an XML sitemap, create image sitemaps, and more. It’s also compatible with WooCommerce. As you might expect, the premium version of this plugin comes with additional features and allows for a greater level of control over your site’s optimization efforts.

Rank Math

RankMath - WordPress SEO plugins

The last of the plugins we’ll be discussing here is Rank Math. This WordPress SEO plugin is simple to set up and makes it easy to optimize your posts and pages for search engines and social media. Use the provided setup wizard to import information from other SEO plugins or manually customize meta titles, descriptions, and images. Use it to create Open Graph metadata and an XML sitemap, integrate with Google Search Console, and more.

Bonus Tools & Resources

Though the primary focus here is SEO plugins for WordPress, we’d be remiss if we didn’t at least mention a few other tools that make building a comprehensive SEO strategy easier. This simple, straightforward tool delivers keyword suggestions by using Google Autocomplete. It’s as simple as it is genius.
SEOQuake: This browser extension can be used to assess a wide number of on-page SEO variables for any website you visit.
Ahrefs: The ultimate competitor research tool. Ahrefs allows you to see why your competitors are ranking for the keywords they are so you can plan a comparable strategy.
SEMRush: This tool allows you to keep track of how your site is performing as well as monitor competitors, backlinks, and more.
Google Search Console: Last on our list, this tool allows you to research keywords and monitor their ranking on any website you manage.

Pick a WordPress SEO Plugin and Start Ranking

In case you didn’t know, having an SEO strategy is imperative for any site’s success. Sure, some happen upon it accidentally, but most keep a mindful eye on keywords and rankings. And you can take a lot of the legwork out of this effort by using a WordPress SEO plugin and by utilizing some of the research and monitoring tools listed here. The results will be well worth the price of admission, so to speak.

Local Authentication Using Passport in Node.js

A common requirement when building a web app is to implement a login system, so that users can authenticate themselves before gaining access to protected views or resources. Luckily for those building Node apps, there’s a middleware called Passport that can be dropped into any Express-based web application to provide authentication mechanisms in only a few commands.

In this tutorial, I’ll demonstrate how to use Passport to implement local authentication (that is, logging in with a username and password) with a MongoDB back end. If you’re looking to implement authentication via the likes of Facebook or GitHub, please refer to this tutorial.

As ever, all of the code for this article is available for download on GitHub.


To follow along with this tutorial, you’ll need to have Node and MongoDB installed on your machine.

You can install Node by heading to the official Node download page and grabbing the correct binaries for your system. Alternatively, you can use a version manager — a program that allows you to install multiple versions of Node and switch between them at will. If you fancy going this route, please consult our quick tip, “Install Multiple Versions of Node.js Using nvm”.

MongoDB comes in various editions. The one we’re interested in is the MongoDB Community Edition.

The project’s home page has excellent documentation and I won’t try to replicate that here. Rather, I’ll offer you links to instructions for each of the main operating systems:

Install MongoDB Community Edition on Windows
Install MongoDB Community Edition on macOS
Install MongoDB Community Edition on Ubuntu

If you use a non-Ubuntu–based version of Linux, you can check out this page for installation instructions for other distros. MongoDB is also normally available through the official Linux software channels, but sometimes this will pull in an outdated version.

Note: You don’t need to enter your name and address to download MongoDB. If prompted, you can normally dismiss the dialog.

If you’d like a quick refresher on using MongoDB, check out our beginner’s guide, “An Introduction to MongoDB”.

Authentication Strategies: Session vs JWT

Before we begin, let’s talk briefly about authentication choices.

Many of the tutorials online today opt for token-based authentication using JSON Web Tokens (JWTs). This approach is probably the simplest and most popular one nowadays. It relegates part of the authentication responsibility to the client: the server hands the client a signed token, which the client sends back with every request to keep the user authenticated.

Session-based authentication has been around longer. This method relegates the weight of the authentication to the server. It uses cookies and sees the Node application and database work together to keep track of a user’s authentication state.

In this tutorial, we’ll be using session-based authentication, which is at the heart of the passport-local strategy.

Both methods have their advantages and drawbacks. If you’d like to read more into the difference between the two, this Stack Overflow thread might be a good place to start.

Creating the Project

Once all of the prerequisite software is set up, we can get started.

We’ll begin by creating the folder for our app and then accessing that folder on the terminal:

mkdir AuthApp
cd AuthApp

To create the node app, we’ll use the following command:

npm init

You’ll be prompted to provide some information for Node’s package.json. Just keep hitting Return to accept the default configuration (or use the -y flag).

Setting up Express

Now we need to install Express. Go to the terminal and enter this command:

npm install express

We’ll also need to install the body-parser middleware which is used to parse the request body that Passport uses to authenticate the user. And we’ll need to install the express-session middleware.

Let’s do that. Run the following command:

npm install body-parser express-session

When that’s done, create an index.js file in the root folder of your app and add the following content to it:


const express = require('express');
const app = express();
app.use(express.static(__dirname));

const bodyParser = require('body-parser');
const expressSession = require('express-session')({
  secret: 'secret',
  resave: false,
  saveUninitialized: false
});

app.use(bodyParser.urlencoded({ extended: true }));
app.use(expressSession);

const port = process.env.PORT || 3000;
app.listen(port, () => console.log('App listening on port ' + port));

First, we require Express and create our Express app by calling express(). Then we define the directory from which to serve our static files.

The next line sees us require the body-parser middleware, which will help us parse the body of our requests. We’re also adding the express-session middleware to help us save the session cookie.

As you can see, we’re configuring express-session with a secret to sign the session ID cookie (you should choose a unique value here), and two other fields, resave and saveUninitialized. The resave field forces the session to be saved back to the session store, and the saveUninitialized field forces a session that is “uninitialized” to be saved to the store. To learn more about them, check out their documentation, but for now it’s enough to know that for our case we want to keep them false.

Then, we use process.env.PORT to set the port to the environment port variable if it exists. Otherwise, we’ll default to 3000, which is the port we’ll be using locally. This gives you enough flexibility to switch from development directly to a production environment, where the port might be set by a service provider like, for instance, Heroku. Right below that, we call app.listen() with the port variable we set up and a simple log to let us know that it’s all working fine and which port the app is listening on.

That’s all for the Express setup. Now it’s on to setting up Passport.

Setting up Passport

First, we install Passport with the following command:

npm install passport

Then we need to add the following lines to the bottom of the index.js file:


const passport = require('passport');

app.use(passport.initialize());
app.use(passport.session());


Here, we require passport and initialize it along with its session authentication middleware, directly inside our Express app.

Creating a MongoDB Data Store

Since we’re assuming you’ve already installed Mongo, you should be able to start the Mongo shell using the following command:


Within the shell, issue the following command:

use MyDatabase;

This simply creates a datastore named MyDatabase.

Leave the terminal there; we’ll come back to it later.

Connecting Mongo to Node with Mongoose

Now that we have a database with records in it, we need a way to communicate with it from our application. We’ll be using Mongoose to achieve this. Why don’t we just use plain Mongo? Well, as the Mongoose devs like to say on their website:

writing MongoDB validation, casting and business logic boilerplate is a drag.

Mongoose will simply make our lives easier and our code more elegant.

Let’s go ahead and install it with the following command:

npm install mongoose

We’ll also be using passport-local-mongoose, which will simplify the integration between Mongoose and Passport for local authentication. It will add a hash and salt field to our Schema in order to store the hashed password and the salt value. This is great, as passwords should never be stored as plain text in a database.

Let’s install the package:

npm install passport-local-mongoose

Now we have to configure Mongoose. Hopefully you know the drill by now: add the following code to the bottom of your index.js file:


const mongoose = require('mongoose');
const passportLocalMongoose = require('passport-local-mongoose');

mongoose.connect('mongodb://localhost/MyDatabase',
  { useNewUrlParser: true, useUnifiedTopology: true });

const Schema = mongoose.Schema;
const UserDetail = new Schema({
  username: String,
  password: String
});
UserDetail.plugin(passportLocalMongoose);

const UserDetails = mongoose.model('userInfo', UserDetail, 'userInfo');

Here we require the previously installed packages. Then we connect to our database using mongoose.connect and give it the path to our database. Next, we’re making use of a Schema to define our data structure. In this case, we’re creating a UserDetail schema with username and password fields.

Finally, we add passportLocalMongoose as a plugin to our Schema. This will work part of the magic we talked about earlier. Then, we create a model from that schema. The first parameter is the name of the collection in the database. The second one is the reference to our Schema, and the third one is the name we’re assigning to the collection inside Mongoose.

That’s all for the Mongoose setup. We can now move on to implementing our Passport strategy.

Implementing Local Authentication

And finally, this is what we came here to do! Let’s set up the local authentication. As you’ll see below, we’ll just write the code that will set it up for us:

/* PASSPORT LOCAL AUTHENTICATION */

passport.use(UserDetails.createStrategy());

passport.serializeUser(UserDetails.serializeUser());
passport.deserializeUser(UserDetails.deserializeUser());
There’s quite some magic going on here. First, we make passport use the local strategy by calling createStrategy() on our UserDetails model — courtesy of passport-local-mongoose — which takes care of everything so that we don’t have to set up the strategy. Pretty handy.

Then we’re using serializeUser and deserializeUser callbacks. The first one will be invoked on authentication, and its job is to serialize the user instance with the information we pass on to it and store it in the session via a cookie. The second one will be invoked every subsequent request to deserialize the instance, providing it the unique cookie identifier as a “credential”. You can read more about that in the Passport documentation.


Now let’s add some routes to tie everything together. First, we’ll add a final package. Go to the terminal and run the following command:

npm install connect-ensure-login

The connect-ensure-login package is middleware that ensures a user is logged in. If a request is received that is unauthenticated, the request will be redirected to a login page. We’ll use this to guard our routes.
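Conceptually, such a guard is only a few lines of middleware. This sketch shows the idea; connect-ensure-login’s actual implementation supports more options, and the code below is ours, not the package’s:

```javascript
// Returns Express middleware that redirects unauthenticated
// requests to a login page and lets authenticated ones through.
function ensureLoggedIn(loginPath = '/login') {
  return function (req, res, next) {
    if (!req.isAuthenticated || !req.isAuthenticated()) {
      return res.redirect(loginPath); // not logged in: bounce to the login page
    }
    next(); // logged in: continue to the route handler
  };
}
```

The `req.isAuthenticated()` check is the method Passport attaches to every request once its session middleware is installed.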

Now, add the following to the bottom of index.js:

/* ROUTES */

const connectEnsureLogin = require('connect-ensure-login');

app.post('/login', (req, res, next) => {
  passport.authenticate('local',
  (err, user, info) => {
    if (err) {
      return next(err);
    }
    if (!user) {
      return res.redirect('/login?info=' + info);
    }

    req.logIn(user, function(err) {
      if (err) {
        return next(err);
      }
      return res.redirect('/');
    });
  })(req, res, next);
});

app.get('/login',
  (req, res) => res.sendFile('html/login.html',
  { root: __dirname })
);

app.get('/',
  connectEnsureLogin.ensureLoggedIn(),
  (req, res) => res.sendFile('html/index.html', {root: __dirname})
);

app.get('/private',
  connectEnsureLogin.ensureLoggedIn(),
  (req, res) => res.sendFile('html/private.html', {root: __dirname})
);

app.get('/user',
  connectEnsureLogin.ensureLoggedIn(),
  (req, res) => res.send({user: req.user})
);

At the top, we’re requiring connect-ensure-login. We’ll come back to this later.

Next, we set up a route to handle a POST request to the /login path. Inside the handler, we use the passport.authenticate method, which attempts to authenticate with the strategy it receives as its first parameter — in this case local. If authentication fails, it will redirect us to /login, but it will add a query parameter — info — that will contain an error message. Otherwise, if authentication is successful, it will redirect us to the ‘/’ route.

Then we set up the /login route, which will send the login page. For this, we’re using res.sendFile() and passing in the file path and our root directory, which is the one we’re working on — hence the __dirname.

The /login route will be accessible to anyone, but our next ones won’t. In the / and /private routes we’ll send their respective HTML pages, and you’ll notice something different here. Before the callback, we’re adding the connectEnsureLogin.ensureLoggedIn() call. This is our route guard. Its job is validating the session to make sure you’re allowed to look at that route. Do you see now what I meant earlier by “letting the server do the heavy lifting”? We’re authenticating the user every single time.

Finally, we’ll need a /user route, which will return an object with our user information. This is just to show you how you can go about getting information from the server. We’ll request this route from the client and display the result.

Talking about the client, let’s do that now.

The post Local Authentication Using Passport in Node.js appeared first on SitePoint.

How To Set Up An Express API Backend Project With PostgreSQL

Chidi Orji


We will take a Test-Driven Development (TDD) approach and set up a Continuous Integration (CI) job to automatically run our tests on Travis CI and AppVeyor, complete with code quality and coverage reporting. We will learn about controllers, models (with PostgreSQL), error handling, and asynchronous Express middleware. Finally, we’ll complete the CI/CD pipeline by configuring automatic deploy on Heroku.

It sounds like a lot, but this tutorial is aimed at beginners who are ready to try their hands on a backend project with some level of complexity, and who may still be confused as to how all the pieces fit together in a real project.

It is robust without being overwhelming and is broken down into sections that you can complete in a reasonable length of time.

Getting Started

The first step is to create a new directory for the project and start a new node project. Node is required to continue with this tutorial. If you don’t have it installed, head over to the official website, download, and install it before continuing.

I will be using yarn as my package manager for this project. There are installation instructions for your specific operating system here. Feel free to use npm if you like.

Open your terminal, create a new directory, and start a Node.js project.

# create a new directory
mkdir express-api-template

# change to the newly-created directory
cd express-api-template

# initialize a new Node.js project
npm init

Answer the questions that follow to generate a package.json file. This file holds information about your project. Example of such information includes what dependencies it uses, the command to start the project, and so on.

You may now open the project folder in your editor of choice. I use Visual Studio Code. It’s a free IDE with tons of plugins to make your life easier, and it’s available for all major platforms. You can download it from the official website.

Create the following files in the project folder:

README.md
.editorconfig

Here’s a description of what .editorconfig does from the EditorConfig website. (You probably don’t need it if you’re working solo, but it does no harm, so I’ll leave it here.)

“EditorConfig helps maintain consistent coding styles for multiple developers working on the same project across various editors and IDEs.”

Open .editorconfig and paste the following code:

root = true

[*]
indent_style = space
indent_size = 2
charset = utf-8
trim_trailing_whitespace = false
insert_final_newline = true

The [*] means that we want to apply the rules that come under it to every file in the project. We want an indent size of two spaces and the UTF-8 character set. We’re also leaving trailing whitespace untouched and inserting a final empty line in our files.

Open README.md and add the project name as a first-level element.

# Express API template

Let’s add version control right away.

# initialize the project folder as a git repository
git init

Create a .gitignore file and enter the following lines:

node_modules/
yarn-error.log
.env
coverage/
build/
These are all the files and folders we don’t want to track. We don’t have them in our project yet, but we’ll see them as we proceed.

At this point, you should have the following folder structure.

├── .editorconfig
├── .gitignore
├── package.json
└── README.md

I consider this to be a good point to commit my changes and push them to GitHub.

Starting A New Express Project

Express is a Node.js framework for building web applications. According to the official website, it is a

Fast, unopinionated, minimalist web framework for Node.js.

There are other great web application frameworks for Node.js, but Express is very popular, with over 47k GitHub stars at the time of this writing.

In this article, we will not be having a lot of discussions about all the parts that make up Express. For that discussion, I recommend you check out Jamie’s series. The first part is here, and the second part is here.

Install Express and start a new Express project. It’s possible to manually set up an Express server from scratch but to make our life easier we’ll use the express-generator to set up the app skeleton.

# install the express generator globally
yarn global add express-generator

# install express
yarn add express

# generate the express project in the current folder
express -f

The -f flag forces Express to create the project in the current directory.

We’ll now perform some house-cleaning operations.

Delete the file routes/users.js.
Delete the folders public/ and views/.
Rename the file bin/www to bin/www.js.
Uninstall jade with the command yarn remove jade.
Create a new folder named src/ and move the following inside it:
1. the app.js file
2. the bin/ folder
3. the routes/ folder
Open up package.json and update the start script to look like below.

"start": "node ./src/bin/www"

At this point, your project folder structure looks like below. You can see how VS Code highlights the file changes that have taken place.

├── node_modules
├── src
| ├── bin
│ │ ├── www.js
│ ├── routes
│ | ├── index.js
│ └── app.js
├── .editorconfig
├── .gitignore
├── package.json
└── yarn.lock

Open src/app.js and replace the content with the below code.

var logger = require('morgan');
var express = require('express');
var cookieParser = require('cookie-parser');
var indexRouter = require('./routes/index');

var app = express();
app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(cookieParser());
app.use('/v1', indexRouter);

module.exports = app;

After requiring some libraries, we instruct Express to handle every request coming to /v1 with indexRouter.

Replace the content of routes/index.js with the below code:

var express = require('express');
var router = express.Router();

router.get('/', function(req, res, next) {
  return res.status(200).json({ message: 'Welcome to Express API template' });
});

module.exports = router;

We grab Express, create a router from it and serve the / route, which returns a status code of 200 and a JSON message.

Start the app with the below command:

# start the app
yarn start

If you’ve set up everything correctly you should only see $ node ./src/bin/www in your terminal.

Visit http://localhost:3000/v1 in your browser. You should see the following message:

{ "message": "Welcome to Express API template" }

This is a good point to commit our changes.

The corresponding branch in my repo is 01-install-express.

Converting Our Code To ES6

The code generated by express-generator is in ES5, but in this article, we will be writing all our code in ES6 syntax. So, let’s convert our existing code to ES6.

Replace the content of routes/index.js with the below code:

import express from 'express';

const indexRouter = express.Router();

indexRouter.get('/', (req, res) =>
  res.status(200).json({ message: 'Welcome to Express API template' })
);

export default indexRouter;

It is the same code as we saw above, but with the import statement and an arrow function in the / route handler.

Replace the content of src/app.js with the below code:

import logger from 'morgan';
import express from 'express';
import cookieParser from 'cookie-parser';
import indexRouter from './routes/index';

const app = express();
app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(cookieParser());
app.use('/v1', indexRouter);

export default app;

Let’s now take a look at the content of src/bin/www.js. We will build it incrementally. Delete the content of src/bin/www.js and paste in the below code block.

#!/usr/bin/env node

/**
 * Module dependencies.
 */
import debug from 'debug';
import http from 'http';
import app from '../app';

/**
 * Normalize a port into a number, string, or false.
 */
const normalizePort = val => {
  const port = parseInt(val, 10);
  if (Number.isNaN(port)) {
    // named pipe
    return val;
  }
  if (port >= 0) {
    // port number
    return port;
  }
  return false;
};

/**
 * Get port from environment and store in Express.
 */
const port = normalizePort(process.env.PORT || '3000');
app.set('port', port);

/**
 * Create HTTP server.
 */
const server = http.createServer(app);

// next code block goes here

This code checks if a custom port is specified in the environment variables. If none is set the default port value of 3000 is set on the app instance, after being normalized to either a string or a number by normalizePort. The server is then created from the http module, with app as the callback function.

The #!/usr/bin/env node line is optional since we would specify node when we want to execute this file. But make sure it is on line 1 of src/bin/www.js file or remove it completely.
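To see what normalizePort actually returns for each kind of input, here is the same function exercised on its own:

```javascript
// normalizePort from the block above, exercised standalone.
const normalizePort = val => {
  const port = parseInt(val, 10);
  if (Number.isNaN(port)) {
    return val; // not numeric: treat it as a named pipe
  }
  if (port >= 0) {
    return port; // valid port number
  }
  return false; // negative: invalid
};

normalizePort('3000');      // → 3000 (a number)
normalizePort('some-pipe'); // → 'some-pipe' (a named pipe, returned as a string)
normalizePort('-1');        // → false (invalid)
```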

Let’s take a look at the error handling function. Copy and paste this code block after the line where the server is created.

/**
 * Event listener for HTTP server "error" event.
 */
const onError = error => {
  if (error.syscall !== 'listen') {
    throw error;
  }
  const bind = typeof port === 'string' ? `Pipe ${port}` : `Port ${port}`;
  // handle specific listen errors with friendly messages
  switch (error.code) {
    case 'EACCES':
      console.error(`${bind} requires elevated privileges`);
      process.exit(1);
      break;
    case 'EADDRINUSE':
      console.error(`${bind} is already in use`);
      process.exit(1);
      break;
    default:
      throw error;
  }
};

/**
 * Event listener for HTTP server "listening" event.
 */
const onListening = () => {
  const addr = server.address();
  const bind = typeof addr === 'string' ? `pipe ${addr}` : `port ${addr.port}`;
  debug(`Listening on ${bind}`);
};

/**
 * Listen on provided port, on all network interfaces.
 */
server.listen(port);
server.on('error', onError);
server.on('listening', onListening);

The onError function listens for errors in the http server and displays appropriate error messages. The onListening function simply outputs the port the server is listening on to the console. Finally, the server listens for incoming requests at the specified address and port.

At this point, all our existing code is in ES6 syntax. Stop your server (use Ctrl + C) and run yarn start. You’ll get an error SyntaxError: Invalid or unexpected token. This happens because Node (at the time of writing) doesn’t support some of the syntax we’ve used in our code.

We’ll now fix that in the following section.

Configuring Development Dependencies: babel, nodemon, eslint, And prettier

It’s time to set up most of the scripts we’re going to need at this phase of the project.

Install the required libraries with the below commands. You can just copy everything and paste it in your terminal. The comment lines will be skipped.

# install babel scripts
yarn add @babel/cli @babel/core @babel/plugin-transform-runtime @babel/preset-env @babel/register @babel/runtime @babel/node --dev

This installs all the listed babel scripts as development dependencies. Check your package.json file and you should see a devDependencies section. All the installed scripts will be listed there.

The babel scripts we’re using are explained below:

A required install for using babel. It allows the use of Babel from the terminal and is available as ./node_modules/.bin/babel.

Core Babel functionality. This is a required installation.

This works exactly like the Node.js CLI, with the added benefit of compiling with babel presets and plugins. This is required for use with nodemon.

This helps to avoid duplication in the compiled output.

A collection of plugins that are responsible for carrying out code transformations.

This compiles files on the fly and is specified as a requirement during tests.

This works in conjunction with @babel/plugin-transform-runtime.

Create a file named .babelrc at the root of your project and add the following code:

{
  "presets": ["@babel/preset-env"],
  "plugins": ["@babel/transform-runtime"]
}

Let’s install nodemon:

# install nodemon
yarn add nodemon --dev

nodemon is a library that monitors our project source code and automatically restarts our server whenever it observes any changes.

Create a file named nodemon.json at the root of your project and add the code below:

{
  "watch": [
    "package.json",
    "nodemon.json",
    ".eslintrc.json",
    ".babelrc",
    ".prettierrc",
    "src/"
  ],
  "verbose": true,
  "ignore": ["*.test.js", "*.spec.js"]
}

The watch key tells nodemon which files and folders to watch for changes. So, whenever any of these files changes, nodemon restarts the server. The ignore key tells it the files not to watch for changes.
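If you’re wondering which filenames an ignore pattern like *.test.js actually covers, here’s a crude approximation of glob matching, just for intuition; nodemon’s real matcher is more capable:

```javascript
// Convert a simple glob (only * wildcards) into a RegExp and test a filename.
// This is an illustrative helper of ours, not nodemon's implementation.
function matchesGlob(glob, filename) {
  const pattern = '^' + glob
    .replace(/\./g, '\\.') // escape literal dots first
    .replace(/\*/g, '.*')  // * matches any run of characters
    + '$';
  return new RegExp(pattern).test(filename);
}

matchesGlob('*.test.js', 'app.test.js');    // → true: ignored by nodemon
matchesGlob('*.test.js', 'app.js');         // → false: watched as usual
```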

Now update the scripts section of your package.json file to look like the following:

# build the content of the src folder
"prestart": "babel ./src --out-dir build"

# start server from the build folder
"start": "node ./build/bin/www"

# start server in development mode
"startdev": "nodemon --exec babel-node ./src/bin/www"

The prestart script builds the content of the src/ folder and puts it in the build/ folder. When you issue the yarn start command, this script runs first, before the start script.
The start script now serves the content of the build/ folder instead of the src/ folder we were serving previously. This is the script you’ll use when serving the app in production. In fact, services like Heroku automatically run this script when you deploy.
yarn startdev is used to start the server during development. From now on we will be using this script as we develop the app. Notice that we’re now using babel-node to run the app instead of regular node. The --exec flag forces babel-node to serve the src/ folder. For the start script, we use node since the files in the build/ folder have been compiled to ES5.

Run yarn startdev and visit http://localhost:3000/v1. Your server should be up and running again.

The final step in this section is to configure ESLint and Prettier. ESLint helps with enforcing syntax rules, while Prettier helps format our code for readability.

Add both of them with the command below. You should run this in a separate terminal while observing the terminal where our server is running. You should see the server restart. This is because we're monitoring the package.json file for changes.

# install eslint and prettier

yarn add eslint eslint-config-airbnb-base eslint-plugin-import prettier --dev

Now create the .eslintrc.json file in the project root and add the below code:

{
  "env": {
    "browser": true,
    "es6": true,
    "node": true,
    "mocha": true
  },
  "extends": ["airbnb-base"],
  "globals": {
    "Atomics": "readonly",
    "SharedArrayBuffer": "readonly"
  },
  "parserOptions": {
    "ecmaVersion": 2018,
    "sourceType": "module"
  },
  "rules": {
    "indent": ["warn", 2],
    "linebreak-style": ["error", "unix"],
    "quotes": ["error", "single"],
    "semi": ["error", "always"],
    "no-console": 1,
    "comma-dangle": [0],
    "arrow-parens": [0],
    "object-curly-spacing": ["warn", "always"],
    "array-bracket-spacing": ["warn", "always"],
    "import/prefer-default-export": [0]
  }
}

This file mostly defines some rules against which eslint will check our code. You can see that we’re extending the style rules used by Airbnb.

In the "rules" section, we define whether eslint should show a warning or an error when it encounters certain violations. For instance, it shows a warning message on our terminal for any indentation that does not use 2 spaces. A value of [0] turns off a rule, which means that we won’t get a warning or an error if we violate that rule.

Create a file named .prettierrc and add the code below:

{
  "trailingComma": "es5",
  "tabWidth": 2,
  "semi": true,
  "singleQuote": true
}

We’re setting a tab width of 2 and enforcing the use of single quotes throughout our application. Do check the prettier guide for more styling options.

Now add the following scripts to your package.json:

# add these one after the other

"lint": "./node_modules/.bin/eslint ./src"

"pretty": "prettier --write '**/*.{js,json}' '!node_modules/**'"

"postpretty": "yarn lint --fix"

Run yarn lint. You should see a number of errors and warnings in the console.

The pretty command prettifies our code. The postpretty command runs immediately after it. It runs the lint command with the --fix flag appended. This flag tells ESLint to automatically fix common linting issues. This way, I mostly run the yarn pretty command without bothering with the lint command.

Run yarn pretty. You should see that we have only two warnings about the presence of alert in the bin/www.js file.

Here’s what our project structure looks like at this point.

├── build
├── node_modules
├── src
| ├── bin
│ │ ├── www.js
│ ├── routes
│ | ├── index.js
│ └── app.js
├── .babelrc
├── .editorconfig
├── .eslintrc.json
├── .gitignore
├── .prettierrc
├── nodemon.json
├── package.json
└── yarn.lock

You may find that you have an additional file, yarn-error.log, in your project root. Add it to the .gitignore file, then commit your changes.

The corresponding branch at this point in my repo is 02-dev-dependencies.

Settings And Environment Variables In Our .env File

In nearly every project, you’ll need somewhere to store settings that will be used throughout your app e.g. an AWS secret key. We store such settings as environment variables. This keeps them away from prying eyes, and we can use them within our application as needed.

I like having a settings.js file from which I read all my environment variables. Then I can refer to the settings file from anywhere within my app. You're at liberty to name this file whatever you want, but there's a loose convention of naming such files settings.js or config.js.

We'll keep our environment variables in a .env file and read them into our settings file from there.

Create the .env file at the root of your project and enter the below line:

TEST_ENV_VARIABLE="Environment variable is coming across"

To read environment variables into our project, there's a nice library, dotenv, that reads our .env file and gives us access to the environment variables defined inside it. Let's install it.

# install dotenv
yarn add dotenv

Add the .env file to the list of files being watched by nodemon.

Now, create the settings.js file inside the src/ folder and add the below code:

import dotenv from 'dotenv';
dotenv.config();
export const testEnvironmentVariable = process.env.TEST_ENV_VARIABLE;

We import the dotenv package and call its config method. We then export the testEnvironmentVariable which we set in our .env file.
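To make what dotenv does less magical, here is a hedged sketch of its core idea (the real library handles many more edge cases, such as comments and multiline values):

```javascript
// Toy version of dotenv's parsing step: turn KEY="value" lines into an object.
// The real dotenv then copies such pairs onto process.env if not already set.
const parseEnv = text =>
  text.split('\n').reduce((vars, line) => {
    const match = line.match(/^([\w.-]+)\s*=\s*"?([^"\n]*)"?\s*$/);
    if (match) vars[match[1]] = match[2];
    return vars;
  }, {});

const parsed = parseEnv('TEST_ENV_VARIABLE="Environment variable is coming across"');
console.log(parsed.TEST_ENV_VARIABLE); // → Environment variable is coming across
```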

Open src/routes/index.js and replace the code with the one below.

import express from 'express';
import { testEnvironmentVariable } from '../settings';

const indexRouter = express.Router();

indexRouter.get('/', (req, res) => res.status(200).json({ message: testEnvironmentVariable }));

export default indexRouter;

The only change we've made here is that we import testEnvironmentVariable from our settings file and use it as the return message for a request to the / route.

Visit http://localhost:3000/v1 and you should see the message, as shown below.

{
  "message": "Environment variable is coming across."
}

And that’s it. From now on we can add as many environment variables as we want and we can export them from our settings.js file.

This is a good point to commit your code. Remember to prettify and lint your code.

The corresponding branch on my repo is 03-env-variables.

Writing Our First Test

It's time to incorporate testing into our app. One of the things that gives developers confidence in their code is tests. I'm sure you've seen countless articles on the web preaching Test-Driven Development (TDD). It cannot be emphasized enough that your code needs some measure of testing. TDD is very easy to follow when you're working with Express.js.

In our tests, we will make calls to our API endpoints and check to see if what is returned is what we expect.

Install the required dependencies:

# install dependencies

yarn add mocha chai nyc sinon-chai supertest coveralls --dev

Each of these libraries has its own role to play in our tests:

- mocha: the test runner
- chai: used to make assertions
- nyc: collects the test coverage report
- sinon-chai: extends chai's assertions
- supertest: used to make HTTP calls to our API endpoints
- coveralls: for uploading test coverage results to Coveralls

Create a new test/ folder at the root of your project. Create two files inside this folder:

setup.js
index.test.js

Mocha will find the test/ folder automatically.

Open up test/setup.js and paste the below code. This is just a helper file that helps us organize all the imports we need in our test files.

import supertest from 'supertest';
import chai from 'chai';
import sinonChai from 'sinon-chai';
import app from '../src/app';

chai.use(sinonChai);
export const { expect } = chai;
export const server = supertest.agent(app);
export const BASE_URL = '/v1';

This is like a settings file, but for our tests. This way we don’t have to initialize everything inside each of our test files. So we import the necessary packages and export what we initialized — which we can then import in the files that need them.

Open up index.test.js and paste the following test code.

import { expect, server, BASE_URL } from './setup';

describe('Index page test', () => {
  it('gets base url', done => {
    server.get(`${BASE_URL}/`).end((err, res) => {
      expect(res.status).to.equal(200);
      expect(res.body.message).to.equal('Environment variable is coming across.');
      done();
    });
  });
});

Here we make a request to get the base endpoint, which is / and assert that the res.body object has a message key with a value of Environment variable is coming across.

If you’re not familiar with the describe, it pattern, I encourage you to take a quick look at Mocha’s “Getting Started” doc.

Add the test command to the scripts section of package.json.

"test": "nyc --reporter=html --reporter=text --reporter=lcov mocha -r @babel/register"

This script executes our tests with nyc and generates three kinds of coverage reports: an HTML report, output to the coverage/ folder; a text report, output to the terminal; and an lcov report, output to the .nyc_output/ folder.

Now run yarn test. You should see a text report in your terminal just like the one in the below photo.

Test coverage report (Large preview)

Notice that two additional folders are generated:

.nyc_output/
coverage/

Look inside .gitignore and you’ll see that we’re already ignoring both. I encourage you to open up coverage/index.html in a browser and view the test report for each file.

This is a good point to commit your changes.

The corresponding branch in my repo is 04-first-test.

Continuous Integration (CI) And Badges: Travis, Coveralls, Code Climate, AppVeyor

It’s now time to configure continuous integration and deployment (CI/CD) tools. We will configure common services such as travis-ci, coveralls, AppVeyor, and codeclimate and add badges to our README file.

Let’s get started.

Travis CI

Travis CI is a tool that runs our tests automatically each time we push a commit to GitHub (and recently, Bitbucket) and each time we create a pull request. This is mostly useful when making pull requests, by showing us if our new code has broken any of our tests.

Visit Travis CI and create an account if you don't have one. You have to sign up with your GitHub account.
Hover over the dropdown arrow next to your profile picture and click on settings.
Under the Repositories tab, click Manage repositories on GitHub to be redirected to GitHub.
On the GitHub page, scroll down to Repository access and click the checkbox next to Only select repositories.
Click the Select repositories dropdown and find the express-api-template repo. Click it to add it to the list of repositories you want to add to travis-ci.
Click Approve and install and wait to be redirected back to travis-ci.
At the top of the repo page, close to the repo name, click on the build unknown icon. From the Status Image modal, select markdown from the format dropdown.
Copy the resulting code and paste it in your README file.
On the project page, click on More options > Settings. Under Environment Variables section, add the TEST_ENV_VARIABLE env variable. When entering its value, be sure to have it within double quotes like this "Environment variable is coming across."
Create .travis.yml file at the root of your project and paste in the below code (We’ll set the value of CC_TEST_REPORTER_ID in the Code Climate section).

language: node_js
env:
  global:
    - CC_TEST_REPORTER_ID=get-this-from-code-climate-repo-page
matrix:
  include:
    - node_js: '12'
cache:
  directories: [node_modules]
install:
  - yarn
after_success: yarn coverage
before_script:
  - curl -L > ./cc-test-reporter
  - chmod +x ./cc-test-reporter
  - ./cc-test-reporter before-build
script:
  - yarn test
after_script:
  - ./cc-test-reporter after-build --exit-code $TRAVIS_TEST_RESULT

First, we tell Travis to run our tests with Node.js, then set the CC_TEST_REPORTER_ID global environment variable (we'll get to this in the Code Climate section). In the matrix section, we tell Travis to run our tests with Node.js v12. We also want to cache the node_modules/ directory so it doesn't have to be regenerated every time.

We install our dependencies using the yarn command, which is shorthand for yarn install. The before_script and after_script commands are used to upload coverage results to Code Climate. We'll configure Code Climate shortly. After yarn test runs successfully, we want to also run yarn coverage, which will upload our coverage report to Coveralls.


Coveralls

Coveralls uploads test coverage data for easy visualization. We can view the test coverage on our local machine from the coverage folder, but Coveralls makes it available outside our local machine.

Visit Coveralls and either sign in or sign up with your GitHub account.
Hover over the left-hand side of the screen to reveal the navigation menu. Click on ADD REPOS.
Search for the express-api-template repo and turn on coverage using the toggle button on the left-hand side. If you can’t find it, click on SYNC REPOS on the upper right-hand corner and try again. Note that your repo has to be public, unless you have a PRO account.
Click details to go to the repo details page.
Create the .coveralls.yml file at the root of your project and enter your token in the form below. To get the repo_token, visit the repo details page; you will find it easily there, or you could just do a browser search for repo_token.

repo_token: your-coveralls-repo-token
This token maps your coverage data to a repo on Coveralls. Now, add the coverage command to the scripts section of your package.json file:

"coverage": "nyc report –reporter=text-lcov | coveralls"

This command uploads the coverage report in the .nyc_output folder to Coveralls. Turn on your Internet connection and run:

yarn coverage

This should upload the existing coverage report to Coveralls. Refresh the repo page on Coveralls to see the full report.

On the details page, scroll down to find the BADGE YOUR REPO section. Click on the EMBED dropdown and copy the markdown code and paste it into your README file.

Code Climate

Code Climate is a tool that helps us measure code quality. It shows us maintainability metrics by checking our code against some defined patterns. It detects things such as unnecessary repetition and deeply nested for loops. It also collects test coverage data, just like Coveralls.

Visit Code Climate and click on ‘Sign up with GitHub’. Log in if you already have an account.
Once in your dashboard, click on Add a repository.
Find the express-api-template repo from the list and click on Add Repo.
Wait for the build to complete and redirect to the repo dashboard.
Under Codebase Summary, click on Test Coverage. Under the Test coverage menu, copy the TEST REPORTER ID and paste it in your .travis.yml as the value of CC_TEST_REPORTER_ID.
Still on the same page, on the left-hand navigation, under EXTRAS, click on Badges. Copy the maintainability and test coverage badges in markdown format and paste them into your README file.

It’s important to note that there are two ways of configuring maintainability checks. There are the default settings that are applied to every repo, but if you like, you could provide a .codeclimate.yml file at the root of your project. I’ll be using the default settings, which you can find under the Maintainability tab of the repo settings page. I encourage you to take a look at least. If you still want to configure your own settings, this guide will give you all the information you need.


AppVeyor

AppVeyor and Travis CI are both automated test runners. The main difference is that Travis CI runs tests in a Linux environment while AppVeyor runs tests in a Windows environment. This section is included to show how to get started with AppVeyor.

Visit AppVeyor and log in or sign up.
On the next page, click on NEW PROJECT.
From the repo list, find the express-api-template repo. Hover over it and click ADD.
Click on the Settings tab. Click on Environment on the left navigation. Add TEST_ENV_VARIABLE and its value. Click ‘Save’ at the bottom of the page.
Create the appveyor.yml file at the root of your project and paste in the below code.

environment:
  matrix:
    - nodejs_version: "12"
install:
  - yarn
test_script:
  - yarn test
build: off

This code instructs AppVeyor to run our tests using Node.js v12. We then install our project dependencies with the yarn command. test_script specifies the command to run our test. The last line tells AppVeyor not to create a build folder.

Click on the Settings tab. On the left-hand navigation, click on Badges. Copy the markdown code and paste it in your README file.

Commit your code and push to GitHub. If you have done everything as instructed all tests should pass and you should see your shiny new badges as shown below. Check again that you have set the environment variables on Travis and AppVeyor.

Repo CI/CD badges. (Large preview)

Now is a good time to commit our changes.

The corresponding branch in my repo is 05-ci.

Adding A Controller

Currently, we’re handling the GET request to the root URL, /v1, inside the src/routes/index.js. This works as expected and there is nothing wrong with it. However, as your application grows, you want to keep things tidy. You want concerns to be separated — you want a clear separation between the code that handles the request and the code that generates the response that will be sent back to the client. To achieve this, we write controllers. Controllers are simply functions that handle requests coming through a particular URL.

To get started, create a controllers/ folder inside the src/ folder. Inside controllers, create two files: index.js and home.js. We will export our functions from within index.js. You could name home.js anything you want, but typically you want to name controllers after what they control. For example, you might have a file usersController.js to hold every function related to users in your app.

Open src/controllers/home.js and enter the code below:

import { testEnvironmentVariable } from '../settings';

export const indexPage = (req, res) => res.status(200).json({ message: testEnvironmentVariable });

You will notice that we only moved the function that handles the request for the / route.

Open src/controllers/index.js and enter the below code.

// export everything from home.js
export * from './home';

We export everything from the home.js file. This allows us to shorten our import statements to import { indexPage } from '../controllers';.

Open src/routes/index.js and replace the code there with the one below:

import express from 'express';
import { indexPage } from '../controllers';

const indexRouter = express.Router();

indexRouter.get('/', indexPage);

export default indexRouter;

The only change here is that we’ve provided a function to handle the request to the / route.

You just successfully wrote your first controller. From here it’s a matter of adding more files and functions as needed.

Go ahead and play with the app by adding a few more routes and controllers. You could add a route and a controller for the about page. Remember to update your test, though.

Run yarn test to confirm that we’ve not broken anything. Does your test pass? That’s cool.

This is a good point to commit our changes.

The corresponding branch in my repo is 06-controllers.

Connecting The PostgreSQL Database And Writing A Model

Our controller currently returns hard-coded text messages. In a real-world app, we often need to store and retrieve information from a database. In this section, we will connect our app to a PostgreSQL database.

We're going to implement the storage and retrieval of simple text messages using a database. We have two options for setting up a database: we could provision one from a cloud server, or we could set one up locally.

I would recommend you provision a database from a cloud server. ElephantSQL has a free plan that gives 20MB of free storage, which is sufficient for this tutorial. Visit the site and click on Get a managed database today. Create an account (if you don't have one) and follow the instructions to create a free plan. Take note of the URL on the database details page; we'll be needing it soon.

ElephantSQL turtle plan details page (Large preview)

If you would rather set up a database locally, you should visit the PostgreSQL and PgAdmin sites for further instructions.

Once we have a database set up, we need a way to allow our Express app to communicate with it. Node.js by default doesn't support reading from and writing to a PostgreSQL database, so we'll be using an excellent library, appropriately named node-postgres.

node-postgres executes SQL queries in Node.js and returns the result as an object, from which we can grab the items from the rows key.
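For instance (the values below are illustrative, not from a live query), a node-postgres query result object looks roughly like this:

```javascript
// Shape of a node-postgres query result: rows holds the matched records,
// rowCount the number of rows, and command the SQL verb that was executed.
const result = {
  command: 'SELECT',
  rowCount: 2,
  rows: [
    { name: 'chidimo', message: 'first message' },
    { name: 'orji', message: 'second message' },
  ],
};

console.log(result.rows[0].message); // → first message
```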

Let’s connect node-postgres to our application.

# install node-postgres
yarn add pg

Open settings.js and add the line below:

export const connectionString = process.env.CONNECTION_STRING;

Open your .env file and add the CONNECTION_STRING variable. This is the connection string we'll be using to establish a connection to our database. The general form of the connection string is shown below.

postgresql://user:password@host:port/database

If you're using ElephantSQL, you should copy the URL from the database details page.

Inside your src/ folder, create a new folder called models/. Inside this folder, create two files:

pool.js
model.js

Open pool.js and paste the following code:

import { Pool } from 'pg';
import dotenv from 'dotenv';
import { connectionString } from '../settings';

dotenv.config();

export const pool = new Pool({ connectionString });

First, we import Pool and dotenv from the pg and dotenv packages respectively, and then import the settings we created for our Postgres database. We initialize dotenv and establish a connection to our database with the Pool object. In node-postgres, every query is executed by a client. A Pool is a collection of clients for communicating with the database.

To create the connection, the pool constructor takes a config object. You can read more about all the possible configurations here. It also accepts a single connection string, which I will use here.

Open model.js and paste the following code:

import { pool } from './pool';

class Model {
  constructor(table) {
    this.pool = pool;
    this.table = table;
    this.pool.on('error', (err, client) => `Error, ${err}, on idle client ${client}`);
  }

  async select(columns, clause) {
    let query = `SELECT ${columns} FROM ${this.table}`;
    if (clause) query += clause;
    return this.pool.query(query);
  }
}

export default Model;

We create a model class whose constructor accepts the database table we wish to operate on. We’ll be using a single pool for all our models.

We then create a select method which we will use to retrieve items from our database. This method accepts the columns we want to retrieve and a clause, such as a WHERE clause. It returns the result of the query, which is a Promise. Remember we said earlier that every query is executed by a client, but here we execute the query with pool. This is because, when we use pool.query, node-postgres executes the query using the first available idle client.

The query you write is entirely up to you, provided it is a valid SQL statement that can be executed by a Postgres engine.
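To make the clause behaviour concrete, here is a runnable sketch of the string the select method assembles (extracted from the class so it runs without a database; note the clause is appended verbatim, so include a leading space):

```javascript
// Mirror of the query-building logic inside Model.select.
const buildSelect = (table, columns, clause) => {
  let query = `SELECT ${columns} FROM ${table}`;
  if (clause) query += clause;
  return query;
};

console.log(buildSelect('messages', 'name, message', ' WHERE id = 1'));
// → SELECT name, message FROM messages WHERE id = 1
```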

The next step is to actually create an API endpoint to utilize our newly connected database. Before we do that, I’d like us to create some utility functions. The goal is for us to have a way to perform common database operations from the command line.

Create a folder, utils/, inside the src/ folder. Create three files inside this folder:

queries.js
queryFunctions.js
runQuery.js
We’re going to create functions to create a table in our database, insert seed data in the table, and to delete the table.

Open up queries.js and paste the following code:

export const createMessageTable = `
  DROP TABLE IF EXISTS messages;
  CREATE TABLE IF NOT EXISTS messages (
    id SERIAL PRIMARY KEY,
    name VARCHAR DEFAULT '',
    message VARCHAR NOT NULL
  )`;

export const insertMessages = `
  INSERT INTO messages(name, message)
  VALUES ('chidimo', 'first message'),
  ('orji', 'second message')`;

export const dropMessagesTable = 'DROP TABLE messages';

In this file, we define three SQL query strings. The first query deletes and recreates the messages table. The second query inserts two rows into the messages table. Feel free to add more items here. The last query drops/deletes the messages table.

Open queryFunctions.js and paste the following code:

import { pool } from '../models/pool';
import {
  insertMessages,
  dropMessagesTable,
  createMessageTable,
} from './queries';

export const executeQueryArray = async arr => new Promise(resolve => {
  const stop = arr.length;
  arr.forEach(async (q, index) => {
    await pool.query(q);
    if (index + 1 === stop) resolve();
  });
});

export const dropTables = () => executeQueryArray([dropMessagesTable]);
export const createTables = () => executeQueryArray([createMessageTable]);
export const insertIntoTables = () => executeQueryArray([insertMessages]);

Here, we create functions to execute the queries we defined earlier. Note that the executeQueryArray function executes an array of queries and waits for each one to complete inside the loop (don't do such a thing in production code, though). Then, we only resolve the promise once we have executed the last query in the list. The reason for using an array is that the number of such queries will grow as the number of tables in our database grows.
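A production-friendlier alternative is to await each query in a for...of loop, which guarantees order and only resolves after the last query. A sketch (with a stand-in for pool.query so it runs without a database):

```javascript
// `fakeQuery` stands in for pool.query; it records the order of execution.
// Awaiting inside for...of runs the queries strictly one after another.
const executeQueriesInOrder = async queries => {
  const executed = [];
  const fakeQuery = q => Promise.resolve().then(() => executed.push(q));
  for (const q of queries) {
    await fakeQuery(q);
  }
  return executed;
};

executeQueriesInOrder(['DROP', 'CREATE', 'INSERT'])
  .then(order => console.log(order.join(' -> ')));
// → DROP -> CREATE -> INSERT
```

In the real file, fakeQuery would simply be pool.query, and this function could replace executeQueryArray with no change to its callers.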

Open runQuery.js and paste the following code:

import { createTables, insertIntoTables } from './queryFunctions';

(async () => {
  await createTables();
  await insertIntoTables();
})();
This is where we execute the functions to create the table and insert the messages in the table. Let’s add a command in the scripts section of our package.json to execute this file.

"runQuery": "babel-node ./src/utils/runQuery"

Now run:

yarn runQuery

If you inspect your database, you will see that the messages table has been created and that the messages were inserted into the table.

If you’re using ElephantSQL, on the database details page, click on BROWSER from the left navigation menu. Select the messages table and click Execute. You should see the messages from the queries.js file.

Let’s create a controller and route to display the messages from our database.

Create a new controller file src/controllers/messages.js and paste the following code:

import Model from '../models/model';

const messagesModel = new Model('messages');
export const messagesPage = async (req, res) => {
  try {
    const data = await'name, message');
    res.status(200).json({ messages: data.rows });
  } catch (err) {
    res.status(200).json({ messages: err.stack });
  }
};
We import our Model class and create a new instance of that model. This represents the messages table in our database. We then use the select method of the model to query our database. The data (name and message) we get is sent as JSON in the response.

We define the messagesPage controller as an async function. Since node-postgres queries return a promise, we await the result of that query. If we encounter an error during the query, we catch it and send the stack to the user. You should decide how you choose to handle errors in your own app.

Add the get messages endpoint to src/routes/index.js and update the import line.

// update the import line
import { indexPage, messagesPage } from '../controllers';

// add the get messages endpoint
indexRouter.get('/messages', messagesPage);

Visit http://localhost:3000/v1/messages and you should see the messages displayed as shown below.

Messages from database. (Large preview)

Now, let’s update our test file. When doing TDD, you usually write your tests before implementing the code that makes the test pass. I’m taking the opposite approach here because we’re still working on setting up the database.

Create a new file, hooks.js in the test/ folder and enter the below code:

import {
  dropTables,
  createTables,
  insertIntoTables,
} from '../src/utils/queryFunctions';

before(async () => {
  await createTables();
  await insertIntoTables();
});

after(async () => {
  await dropTables();
});
When our tests start, Mocha finds this file and executes it before running any test file. It executes the before hook to create the tables and insert some items into them. The test files then run after that. Once the tests are finished, Mocha runs the after hook, in which we drop the tables. This ensures that each time we run our tests, we do so with clean and new records in our database.

Create a new test file test/messages.test.js and add the below code:

import { expect, server, BASE_URL } from './setup';

describe('Messages', () => {
  it('get messages page', done => {
    server.get(`${BASE_URL}/messages`).end((err, res) => {
      res.body.messages.forEach(m => {
        expect(m)'name');
        expect(m)'message');
      });
      done();
    });
  });
});
We assert that the result of the call to /messages is an array. For each message object, we assert that it has the name and message property.

The final step in this section is to update the CI files.

Add the following sections to the .travis.yml file:

services:
  - postgresql
addons:
  postgresql: "10"
  apt:
    packages:
      - postgresql-10
      - postgresql-client-10
before_install:
  - sudo cp /etc/postgresql/{9.6,10}/main/pg_hba.conf
  - sudo /etc/init.d/postgresql restart

This instructs Travis to spin up a PostgreSQL 10 database before running our tests.

Add the command to create the database as the first entry in the before_script section:

# add this as the first line in the before_script section

- psql -c 'create database testdb;' -U postgres

Create the CONNECTION_STRING environment variable on Travis. Its value should be a connection string of the same general form as before, pointing at the testdb database created in the before_script step.
Add the following sections to the appveyor.yml file:

before_test:
  - SET PGUSER=postgres
  - SET PGPASSWORD=Password12!
  - SET PATH=C:\Program Files\PostgreSQL\10\bin;%PATH%
  - createdb testdb
services:
  - postgresql101

Add the connection string environment variable to AppVeyor as well, pointing at the testdb database.
Now commit your changes and push to GitHub. Your tests should pass on both Travis CI and AppVeyor.

The corresponding branch in my repo is 07-connect-postgres.

Note: I hope everything works fine on your end, but in case you should be having trouble for some reason, you can always check my code in the repo!

Now, let’s see how we can add a message to our database. For this step, we’ll need a way to send POST requests to our URL. I’ll be using Postman to send POST requests.

Let’s go the TDD route and update our test to reflect what we expect to achieve.

Open test/messages.test.js and add the below test case:

it('posts messages', done => {
  const data = { name: 'some name', message: 'new message' };`${BASE_URL}/messages`).send(data).end((err, res) => {
    res.body.messages.forEach(m => {
      expect(m)'id');
      expect(m)'name',;
      expect(m)'message', data.message);
    });
    done();
  });
});
This test makes a POST request to the /v1/messages endpoint, and we expect an array to be returned. We also check for the id, name, and message properties on each item in the array.

Run your tests to see that this case fails. Let’s now fix it.

To send POST requests, we use the post method of the server. We also send the name and message we want to insert. We expect the response to be an array, with a property id and the other info that makes up the query. The id is proof that a record has been inserted into the database.

Open src/models/model.js and add the insert method:

async insertWithReturn(columns, values) {
  const query = `
    INSERT INTO ${this.table}(${columns})
    VALUES (${values})
    RETURNING id, ${columns}`;
  return this.pool.query(query);
}

This is the method that allows us to insert messages into the database. After inserting the item, it returns the id, name and message.

Open src/controllers/messages.js and add the below controller:

export const addMessage = async (req, res) => {
  const { name, message } = req.body;
  const columns = 'name, message';
  const values = `'${name}', '${message}'`;
  try {
    const data = await messagesModel.insertWithReturn(columns, values);
    res.status(200).json({ messages: data.rows });
  } catch (err) {
    res.status(200).json({ messages: err.stack });
  }
};
We destructure the request body to get the name and message. Then we use the values to form an SQL query string which we then execute with the insertWithReturn method of our model.

Add the below POST endpoint to src/routes/index.js and update your import line.

import { indexPage, messagesPage, addMessage } from '../controllers';'/messages', addMessage);

Run your tests to see if they pass.

Open Postman and send a POST request to the messages endpoint. If you've just run your tests, remember to run yarn runQuery to recreate the messages table.

yarn runQuery

POST request to messages endpoint. (Large preview)

GET request showing newly added message. (Large preview)

Commit your changes and push to GitHub. Your tests should pass on both Travis and AppVeyor. Your test coverage will drop by a few points, but that’s okay.

The corresponding branch on my repo is 08-post-to-db.


Middleware

Our discussion of Express won't be complete without talking about middleware. The Express documentation describes middleware as:

“[…] functions that have access to the request object (req), the response object (res), and the next middleware function in the application’s request-response cycle. The next middleware function is commonly denoted by a variable named next.”

A middleware can perform any number of functions such as authentication, modifying the request body, and so on. See the Express documentation on using middleware.

We’re going to write a simple middleware that modifies the request body. Our middleware will append the word SAYS: to the incoming message before it is saved in the database.

Before we start, let’s modify our test to reflect what we want to achieve.

Open up test/messages.test.js and modify the last expect line in the posts message test case:

it('posts messages', done => {
  // ...
  expect(m)'message', `SAYS: ${data.message}`); // update this line
  // ...
});


We’re asserting that the SAYS: string has been appended to the message. Run your tests to make sure this test case fails.

Now, let’s write the code to make the test pass.

Create a new middleware/ folder inside the src/ folder. Create two files inside this folder: middleware.js and index.js.


Enter the below code in middleware.js:

export const modifyMessage = (req, res, next) => {
  req.body.message = `SAYS: ${req.body.message}`;
};
Here, we append the string SAYS: to the message in the request body. After doing that, we call the next() function to pass execution to the next function in the request-response chain. Every middleware has to call next to pass execution to the next middleware in the request-response cycle.

Enter the below code in index.js:

// export everything from the middleware file
export * from './middleware';

This exports the middleware we have in the middleware.js file. For now, we only have the modifyMessage middleware.

Open src/routes/index.js and add the middleware to the post message request-response chain.

import { modifyMessage } from '../middleware';'/messages', modifyMessage, addMessage);

We can see that the modifyMessage function comes before the addMessage function. We invoke the addMessage function by calling next in the modifyMessage middleware. As an experiment, comment out the next() line in the modifyMessage middleware and watch the request hang.

Open Postman and create a new message. You should see the appended string.

Message modified by middleware. (Large preview)

This is a good point to commit our changes.

The corresponding branch in my repo is 09-middleware.

Error Handling And Asynchronous Middleware

Errors are inevitable in any application. The task before the developer is how to deal with errors as gracefully as possible.

In Express:

“Error Handling refers to how Express catches and processes errors that occur both synchronously and asynchronously.”

If we were only writing synchronous functions, we might not have to worry so much about error handling as Express already does an excellent job of handling those. According to the docs:

“Errors that occur in synchronous code inside route handlers and middleware require no extra work.”

But once we start writing asynchronous router handlers and middleware, then we have to do some error handling.

Our modifyMessage middleware is a synchronous function. If an error occurs in that function, Express will handle it just fine. Let’s see how we deal with errors in asynchronous middleware.

Let’s say, before creating a message, we want to get a picture from the Lorem Picsum API. This is an asynchronous operation that could either succeed or fail, and that presents a case for us to deal with.

Start by installing Axios.

# install axios
yarn add axios

Open src/middleware/middleware.js and add the below function:

import axios from 'axios';

export const performAsyncAction = async (req, res, next) => {
  try {
    await axios.get(''); // the Lorem Picsum URL
  } catch (err) {
  }
};
In this async function, we await a call to an API (we don’t actually need the returned data) and afterward call the next function in the request chain. If the request fails, we catch the error and pass it on to next. Once Express sees this error, it skips all other middleware in the chain. If we didn’t call next(err), the request would hang. If we only called next() without err, the request would proceed as if nothing happened and the error would not be caught.

Import this function and add it to the middleware chain of the post messages route:

import { modifyMessage, performAsyncAction } from '../middleware';'/messages', modifyMessage, performAsyncAction, addMessage);

Open src/app.js and add the below code just before the export default app line.

app.use((err, req, res, next) => {
  res.status(400).json({ error: err.stack });

export default app;

This is our error handler. According to the Express error handling doc:

“[…] error-handling functions have four arguments instead of three: (err, req, res, next).”

Note that this error handler must come last, after all other app.use() calls and route definitions. Once we encounter an error, we return the stack trace with a status code of 400. You could do whatever you like with the error. You might want to log it or send it somewhere.

This is a good place to commit your changes.

The corresponding branch in my repo is 10-async-middleware.

Deploy To Heroku

To get started, go to Heroku and either log in or register.
Download and install the Heroku CLI from here.
Open a terminal in the project folder and run the command below.

# login to heroku on command line
heroku login

This will open a browser window and ask you to log into your Heroku account.

Log in to grant your terminal access to your Heroku account, and create a new heroku app by running:

# app name is up to you
heroku create app-name

This will create the app on Heroku and return two URLs.

# app production url and git url

Copy the URL on the right and run the below command. Note that this step is optional as you may find that Heroku has already added the remote URL.

# add heroku remote url
git remote add heroku

Open a side terminal and run the command below. This shows you the app log in real-time as shown in the image.

# see process logs
heroku logs --tail

Heroku logs. (Large preview)

Run the following three commands to set the required environment variables:

heroku config:set TEST_ENV_VARIABLE="Environment variable is coming across."
heroku config:set CONNECTION_STRING=your-db-connection-string-here
heroku config:set NPM_CONFIG_PRODUCTION=false

Remember in our scripts, we set:

"prestart": "babel ./src --out-dir build",
"start": "node ./build/bin/www",

To start the app, the code first needs to be compiled down to ES5 using Babel in the prestart step, because Babel only exists in our development dependencies. We have to set NPM_CONFIG_PRODUCTION to false so that Heroku installs those development dependencies as well.

To confirm everything is set correctly, run the command below. You could also visit the settings tab on the app page and click on Reveal Config Vars.

# check configuration variables
heroku config

Now run git push heroku.

To open the app, run:

# open /v1 route
heroku open /v1

# open /v1/messages route
heroku open /v1/messages

If, like me, you’re using the same PostgreSQL database for both development and production, you may find that each time you run your tests, the database is deleted. To recreate it, you could run either one of the following commands:

# run script locally
yarn runQuery

# run script with heroku
heroku run yarn runQuery

Continuous Deployment (CD) With Travis

Let’s now add Continuous Deployment (CD) to complete the CI/CD flow. We will be deploying from Travis after every successful test run.

The first step is to install the Travis CLI. (You can find the installation instructions over here.) After installing it successfully, log in by running the below command. (Note that this should be done in your project repository.)

# login to travis
travis login --pro

# use this if you're using two-factor authentication
travis login --pro --github-token enter-github-token-here

If your project is hosted on, remove the --pro flag. To get a GitHub token, visit the developer settings page of your account and generate one. This only applies if your account is secured with 2FA.

Open your .travis.yml and add a deploy section:

  provider: heroku
    master: app-name

Here, we specify that we want to deploy to Heroku. The app sub-section specifies that we want to deploy the master branch of our repo to the app-name app on Heroku. It’s possible to deploy different branches to different apps. You can read more about the available options here.

Run the below command to encrypt your Heroku API key and add it to the deploy section:

# encrypt heroku API key and add to .travis.yml
travis encrypt $(heroku auth:token) --add deploy.api_key --pro

This will add the below sub-section to the deploy section.

  secure: very-long-encrypted-api-key-string

Now commit your changes and push to GitHub while monitoring your logs. You will see the build triggered as soon as the Travis test is done. In this way, if we have a failing test, the changes would never be deployed. Likewise, if the build failed, the whole test run would fail. This completes the CI/CD flow.

The corresponding branch in my repo is 11-cd.


If you’ve made it this far, I say, “Thumbs up!” In this tutorial, we successfully set up a new Express project. We went ahead to configure development dependencies as well as Continuous Integration (CI). We then wrote asynchronous functions to handle requests to our API endpoints, complete with tests. We then looked briefly at error handling. Finally, we deployed our project to Heroku and configured Continuous Deployment.

You now have a template for your next back-end project. We’ve only done enough to get you started, but there is plenty more to learn, so be sure to check out the Express docs as well. If you would rather use MongoDB instead of PostgreSQL, I have a template here that does exactly that. You can check it out for the setup. It has only a few points of difference.


“Create Express API Backend With MongoDB,” Orji Chidi Matthew, GitHub
“A Short Guide To Connect Middleware,” Stephen Sugden
“Express API template,” GitHub
“AppVeyor vs Travis CI,” StackShare
“The Heroku CLI,” Heroku Dev Center
“Heroku Deployment,” Travis CI
“Using middleware,” Express.js
“Error Handling,” Express.js
“Getting Started,” Mocha
nyc (GitHub)
Travis CI
Code Climate


Fresh Resources for Web Designers and Developers (April 2020)

Original Source:

Despite the doom and gloom due to the virus (COVID-19) spread, it’s not stopping us from sharing fresh resources and tools with our fellow web developers. In this edition, we have a number of…

Visit for full content.

Smashing Podcast Episode 13 With Laura Kalbag: What Is Online Privacy?

Original Source:


Drew McLellan


Laura Kalbag

In this episode of the Smashing Podcast, we’re talking about online privacy. What should web developers be doing to make sure the privacy of our users is maintained? I spoke to Laura Kalbag to find out.

Show Notes

Laura Kalbag’s personal website
Small Technology Foundation
Better Blocker

Weekly Update

“How To Make Life Easier When Using Git,”
by Shane Hudson
“Visual Design Language: The Building Blocks Of Design,”
by Gleb Kuznetsov
“What Should You Do When A Web Design Trend Becomes Too Popular?,”
by Suzanne Scacca
“Building A Web App With Headless CMS And React,”
by Blessing Krofegha
“Django Highlights: Templating Saves Lines (Part 2),”
by Philip Kiely


Drew McLellan: She’s a designer from the UK, but now based in Ireland, she’s co-founder of the Small Technology Foundation. You’ll often find her talking about rights-respecting design, accessibility and inclusivity, privacy, and web design and development, both on her personal website and with publications such as Smashing magazine. She’s the author of the book Accessibility for Everyone from A Book Apart. And with the Small Technology Foundation, she’s part of the team behind Better Blocker, a tracking blocker tool for Safari on iOS and Mac. So we know she’s an expert in inclusive design and online privacy, but did you know she took Paris Fashion Week by storm wearing a kilt made out of spaghetti. My Smashing friends, please welcome Laura Kalbag.

Laura Kalbag: Hello.

Drew: Hello Laura, how are you?

Laura: I am smashing.

Drew: I wanted to talk to you today about the topic of online privacy and the challenges around being an active participant online without ceding too much of your privacy and personal data to companies who may or may not be trustworthy. This is an area that you think about a lot, isn’t it?

Laura: Yeah. And I don’t just think about the role of us as consumers in that, but also as people who work on the web, our role in actually doing it and how much we’re actually making that a problem for the rest of society as well.

Drew: As a web developer growing up in the ‘90s as I did, for me maintaining an active presence online involved basically building and updating my own website. Essentially, it was distributed technology but it was under my control. And these days it seems like it’s more about posting on centralized commercially operated platforms such as Twitter and Facebook, the obvious ones. That’s a really big shift in how we publish stuff online. Is it a problem?

Laura: Yeah. And I think we have gone far away from those decentralized distributed ways of posting on our own websites. And the problem is that we are essentially posting everything on somebody else’s website. And not only does that mean that we’re subject to their rules, which in some cases is a good thing, you don’t necessarily want to be on a website that is full of spam, full of trolls, full of Nazi content, we don’t want to be experiencing that. But also we have no control over whether we get kicked off, whether they decide to censor us in any way. But also everything underlying on that platform. So whether that platform is knowing where we are at all times because it’s picking up on our location. Whether it is reading our private messages because if it’s not end-to-end encrypted, if we’re sending direct messages to each other, that could be accessed by the company.

Laura: Whether it’s actively, so whether people working there could actually just read your messages. Or passively, where they are just sucking up the stuff from inside your messages and using that to build profiles about you, which they can then use to target you with ads and stuff like that. Or even combine that information with other datasets and sell that on to other people as well.

Drew: It can be quite terrifying, can’t it? To have what you considered to be a private message with somebody on a platform like Facebook, using Facebook Messenger, and find the things you’ve mentioned in a conversation then used to target ads towards you. It’s not something you think you’ve shared but it is something you’ve shared with the platform.

Laura: And I have a classic example of this that happened to me a few years ago. So, I was on Facebook, and my mom had just died, and I was getting ads for funeral directors. And I thought it was really strange because none of my family had said anything on a social media platform at that point, none of my family had said anything on Facebook because we’d agreed that no one wants to find out that kind of thing about a friend or family member via Facebook so we’d not say anything about it. And then, so I asked my siblings, “Have any of you said anything on Facebook that might cause this strange ad?” Because I usually just get ads for make-up, and dresses, and pregnancy tests, and all those fun things they like to target women of a certain age. And my sister got back to me, she said, “Well, yeah, my friend lives in Australia so I sent her a message on Messenger, Facebook Messenger, and told her that our mom had died.”

Laura: And of course Facebook knew that we’re sisters, it has that relationship connection that you can choose to add on there, it could probably guess we were sisters anyway by the locations we’ve been together, the fact that we share a surname. And decided that’s an appropriate ad to put in her feed.

Drew: It’s sobering, isn’t it? To think that technology is making these decisions for us that actually affects people, potentially in this example, in quite a sensitive or vulnerable time.

Laura: Yeah. We say it’s creepy, but a lot of the time people say it’s almost like the microphone on my phone or my laptop was listening to me because I was just having this conversation about this particular product and suddenly it’s appearing in my feed everywhere. And I think what’s actually scary is the fact that most of them don’t have access to your microphone, but it’s the fact that your other behaviors, your search, the fact that it knows who you’re talking to because of your proximity to each other and your location on your devices. It can connect all of those things that we might not connect ourselves together in order to say, maybe they’ll be interested in this product because they’ll probably think you’re talking about it already.

Drew: And of course, it’s not as simple as just rolling back the clock and going back to a time where if you wanted to be online, you had to create your own website because there’s technical barriers to that, there’s cost barriers. And you only need to look at the explosion of things like sharing video online, there’s not an easy way to share a video online in the same way you can just by putting it on YouTube, or uploading it to Facebook, or onto Twitter, there are technical challenges there.

Laura: It’s not fair to blame anyone for it because using the web today and using these platforms today is part of participating in society. You can’t help it if your school has a Facebook group for all the parents. You can’t help it if you have to use a particular website in order to get some vital information. It’s part of our infrastructure now, particularly nowadays when everyone is suddenly relying on video calling and things like that so much more. These are our infrastructure, they are as used and as important as our roads, as our utilities, so we need to have them treated accordingly. And we can’t blame people for using them, especially if there aren’t any alternatives that are better.

Drew: When the suggestion is using these big platforms that it’s easy and it’s free, but is it free?

Laura: No, because you’re paying with your personal information. And I hear a lot of developers saying things like, “Oh well, I’m not interesting, I don’t really care, it’s not really a problem for me.” And we have to think about the fact that we’re often in quite a privileged group. What about people that are more vulnerable? We think about people who have parts of their identity that they don’t necessarily want to share publicly, they don’t want to be outed by platforms to their employers, to their government. People who are in domestic abuse situations, we think about people who are scared of their governments and don’t want to be spied on. That’s a huge number of people across the world, we can’t just say, “Oh well, it’s fine for me, so it has to be fine for everybody else,” it’s just not fair.

Drew: It doesn’t have to be a very big issue you’re trying to conceal from the world to be worried about what a platform might share about you.

Laura: Yeah. And the whole thing about privacy is that it isn’t about having something to hide, it’s about choosing what you want to share. So you might not feel like you have anything in particular that you want to hide, but it doesn’t necessarily mean you put a camera in your bedroom and broadcast it 24 hours, there’s things we do and don’t want to share.

Drew: Because there are risks as well in sharing social content, things like pictures of family and friends. That we could be sacrificing other peoples privacy without them really being aware, is that a risk?

Laura: Yeah. And I think that that applies to a lot of different things as well. So it’s not just if you’re uploading things of people you know and then they’re being added to facial recognition databases, which is happening quite a lot of the time. These very dodgy databases, they’ll scrape social media sites to make their facial recognition databases. So Clearview is an example of a company that’s done that, they’ve scraped images off Facebook and used those. But also things like email, you might choose… I’m not going to use Gmail because I don’t want Google to have access to everything in my email, which is everything I’ve signed up for, every event I’m attending, all of my personal communication, so I decide not to use it. But if I’m communicating with someone who uses Gmail, well, they’ve made that decision on my behalf, that everything I email them will be shared with Google.

Drew: You say that, often from a privileged position, we think okay, we’re getting all this technology, all these platforms are being given to us for free, we’re not having to pay for it, all we got to do is… We’re giving up a little bit of privacy, but that’s okay, that’s an acceptable trade-off. But is it an acceptable trade-off?

Laura: No. It’s certainly not an acceptable trade-off. But I think it’s also because you don’t necessarily immediately see the harms that are caused by giving these things up. You might feel like you’re in a safe situation today, but you may not be tomorrow. I think a good example is Facebook, they’ve actually got a patent for approving or denying loans based on the financial status of your friends on Facebook. So thinking, oh well, if your friend owes lots of money, and a lot of your friends owe lots of money, you’re more likely to be in that same situation as them. So all these systems, all of these algorithms, they are making decisions and influencing our lives and we have no say in them. So it’s not necessarily about what we’re choosing to share and what we’re choosing not to share in terms of something we put in a status, or a photo, or a video, but it’s also about all of this information that is derived about us from our activity on these platforms.

Laura: Things about our locations or whether we have a tendency to be out late at night, the kinds of people that we tend to spend our time with, all of this information can be collected by these platforms too and then they’ll make decisions about us based on that information. And we not only don’t have access to what’s being derived about us, we have no way of seeing it, we have no way of changing it, we have no way of removing it, bar a few things that we could do if we’re in the EU based on GDPR, if you’re in California based on their regulation there that you can go in and ask companies what data they have on you and ask them to delete it. But then what data counts under that situation? Just the data they’ve collected about you? What about the data they’ve derived and created by combining your information with other people’s information and the categories they’ve put you in, things like that. We have no transparency on that information.

Drew: People might say that this is paranoia, this is tinfoil hat stuff. And really all that these companies are doing is collecting data to show us different ads. And okay, there’s the potential for these other things, but they’re not actually doing that. All they’re doing is just tailoring ads to us. Is that the case or is this data actually actively being used in more malicious ways than just showing ads?

Laura: No. We’ve seen on many, many occasions how this information is being used in ways other than just ads. And even if one company decides to just collect it based on ads, they then later might get sold to or acquired by an organization that decides to do something different with that data, and that’s part of the problem with collecting the data at all in the first place. And it’s also a big risk to things like hacking, if you’re creating a big centralized database with people’s information, their phone numbers, their email addresses, even just the most simple stuff, that’s really juicy data for hackers. And that’s why we see massive scale hacks that result in a lot of people’s personal information ending up being publicly available. It’s because a company decided it was a good idea to collect all of that information in one place in the first place.

Drew: Are there ways then that we can use these platforms, interact with friends and family that are also on these platforms, Facebook is the obvious example where you might have friends and family all over the world and Facebook is the place where they communicate. Are there ways that you can participate in that and not be giving up privacy or is it just something that if you want to be on that platform, you just have to accept?

Laura: I think there’s different layers, depending on what we would call your threat model. So depending on how vulnerable you are, but also your friends and family, and what your options are. So yeah, the ultimate thing is to not use these platforms at all. But if you do, try to use them more than they use you. So if you have things that you’re communicating one-on-one, don’t use Messenger for that because there are plenty of alternatives for one-on-one direct communication that can be end-to-end encrypted or are private and you don’t have to worry about Facebook listening in on it. And there’s not really much you can do about things like sharing your location data and stuff like that, which is really valuable information. It’s all of your meta information that’s so valuable, it’s not even necessarily the content of what you’re saying, but who you’re with and where you are when you’re saying it. That’s the kind of stuff that’s useful that companies would use to put you in different categories and be able to sell things to you accordingly or group you accordingly.

Laura: So I think we can try to use them as little as possible. I think it’s important to seek alternatives, particularly if you’re a person who is more technically savvy in your group of friends and family, you can always encourage other people to join other things as well. So use Wire for messaging, that’s a nice little platform that’s available in lots of places and is private. Or Signal is another option that’s just like WhatsApp but it’s end-to-end encrypted as well. And if you can be that person, I think there’s two points that we have to really forget about. One is the idea that everyone needs to be on a platform for it to be valuable. The benefit is that everyone’s on Facebook, that’s actually the downside as well, that everyone’s on Facebook. You don’t need everyone you know to suddenly be on the same platform as you. As long as you have those few people you want to communicate with regularly on a better platform, that’s a really good start.

Laura: And the other thing that we need to embrace, we’re not going to find an alternative to a particular platform that does everything that platform does as well. You’re not going to find an alternative to Facebook that does messaging, that has status updates, that has groups, that has events, that has live, that has all of this stuff. Because the reason Facebook can do that is because Facebook is massive, Facebook has these resources, Facebook has a business model that really makes a lot out of all that data and so it’s really beneficial to provide all those services to you. And so we have to change our expectations and maybe be like, “Well okay, what’s the one function I need? To be able to share a photo. Well, let’s find the thing that I can do that will help me just share that photo.” And not be expecting just another great big company to do the right thing for us.

Drew: Is this something that RSS can help us with? I tend to think RSS is the solution to most problems, but I was thinking here if you have a service for photo sharing, and something else for status updates, and something else for all these different things is RSS the solution that brings it all together to create a virtual… That encompasses all these services?

Laura: I’m with you on that for lots of things. I, myself, I’ve built into my own site, I have a section for photos, a section for status updates, as well as my blog and stuff. So that I can allow people to, if they don’t follow me on social media platforms, if I’m posting the same stuff to my site, they can use RSS to access it and they’re not putting themselves at risk. And that’s one of the ways that I see as just a fairly ordinary designer/developer that I can not force other people to use those platforms in order to join in with me. And RSS is really good for that. RSS can have tracking, I think people can do stuff with it, but it’s rare and it’s not the point of it. That’s what I think RSS is a really good standard for.

Drew: As a web developer, I’m aware when I’m building sites that I’m frequently being required to add JavaScript from Google for things like analytics or ads, and from Facebook for like and share actions, and all that sort of thing, and from various other places, Twitter, and you name it. Are those something that we need to worry about in terms of developers or as users of the web? That there’s this code executing whose origin is on Google’s or Facebook’s servers?

Laura: Yes. Absolutely. I think Google is a good example here of things like web fonts and libraries and stuff like that. So people are encouraged to use them because they’re told well, it’s going to be very performant, it’s on Google servers, Google will grab it from the closest part of the world, you’ll have a brilliant site just by using, say, a font off Google rather than embedding it, self-hosting it on your own site. There’s a reason why Google offers up all of those fonts for free and it’s not out of the goodness of their Googley little hearts, it is because they get something out of it. And what they get is, they get access to your visitors on your website when you include their script on your website. So I think it’s not just something we should be worried about as developers, I think that it’s our responsibility to know what our site is doing and know what a third party script is doing or could do, because they could change it and you don’t necessarily have control over that as well. Know what their privacy policies are and things like that before we use them.

Laura: And ideally, don’t use them at all. If we can self-host things, self-host things, a lot of the time it’s easier. If we don’t need to provide a login with Google or Facebook, don’t do it. I think we can be the gatekeepers in this situation. We as the people who have the knowledge and the skills in this area, we can be the ones that can go back to our bosses or our managers and say, “Look, we can provide this login with Facebook or we could build our own login, it will be private, it would be safer. Yeah, it might take a little bit more work but actually we’ll be able to instill more trust in what we’re building because we don’t have that association with Facebook.” Because what we’re seeing now, over time, is that even mainstream media is starting to catch up with the downsides of Facebook, and Google, and these other organizations.

Laura: And so we end up being guilty by association even if we’re just trying to make the user experience easier by adding a login where someone doesn’t have to create a new username and password. And so I think we really do need to take that responsibility and a lot of it is about valuing people’s rights and respecting their rights and their privacy over our own convenience. Because of course it’s going to be much quicker just to add that script to the page, just to add another package in without investigating what it actually does. We’re giving up a lot when we do that and I think that we need to take responsibility not to.

Drew: As web developers are there other things that we should be looking out for when it comes to protecting the privacy of our own customers in the things that we build?

Laura: We shouldn’t be collecting data at all. And I think most of the time, you can avoid it. Analytics is one of my biggest bugbears because I think that a lot of people get all these analytics scripts, all these scripts that can see what people are doing on your website and give you insights and things like that, but I don’t think we use them particularly well. I think we use them to confirm our own assumptions and all we’re being taught about is what is already on our site. It’s not telling us anything that research and actually talking to people who use our websites… We could really benefit more from that than just looking at some numbers go up and down, and guessing what the effect of that is or why it’s happening. So I think that we need to be more cautious around anything that we’re putting on our sites and anything that we’re collecting. And I think nowadays we’re also looking at regulatory and legal risks as well when we’re starting to collect people’s data.

Laura: Because when we look at things like the GDPR, we’re very restricted in what we are allowed to collect and the reasons why we’re allowed to collect it. And that’s why we’re getting all of these consent notifications and things like that coming up now. Because companies have to have your explicit consent for collecting any data that is not associated with vital function for the website. So if you’re using something like a login, you don’t need to get permission to store someone’s email and password for a login because that is implied by logging in, you need that. But things like analytics and stuff like that, you actually need to get explicit consent in order to be able to spy on the people visiting the website. So this is why we see all of these consent boxes, this is why we should actually be including them on our websites if we’re using analytics and other tools that are collecting data that aren’t vital to the functioning of the page.
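The consent-first approach Laura describes can be sketched in a few lines of JavaScript. This is only an illustration: the storage key name and the script loader callback are invented, and a real implementation would sit behind a proper consent banner rather than a bare flag.

```javascript
// Minimal sketch of consent-gated script loading. The 'analytics-consent'
// key and the loadScript callback are illustrative, not from any real
// consent library.
function hasAnalyticsConsent(storage) {
  // Under the GDPR, only explicit, affirmative consent counts.
  return storage.getItem('analytics-consent') === 'granted';
}

function maybeLoadAnalytics(storage, loadScript) {
  if (!hasAnalyticsConsent(storage)) {
    return false; // no consent given: the tracking script is never injected
  }
  loadScript('/analytics.js');
  return true;
}
```

The important property is the default: when no consent has been recorded, nothing loads, rather than loading and asking forgiveness later.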

Drew: I think about some of even just the side projects and things that I’ve launched, that just almost as a matter of routine I’ve put Google analytics on there. I think, “Oh, I need to track how many people are visiting.” And then I either never look at it or I only look at it to gain an understanding of the same things that I could’ve just got from server logs like we used to do in the old days, just by crunching over their web access logs.

Laura: Exactly. And yet Google is sitting there going, “Thank you very much.” Because you’ve installed another input for them on the website. And I think once you start thinking about it, once you adjust your brain to taking this other way of looking at it, it’s much easier to start seeing the vulnerabilities. But we do have to train ourselves to think in that way, to think about how we could harm people with what we’re building, who could lose out from this, and try to build things that are a bit more considerate of people.

Drew: There’s an example, actually, that I can think of where Google analytics itself was used to breach somebody’s privacy. And that was the author of Belle de Jour, The Secret Diary of a Call Girl, who was a London call girl who kept a blog for years and it was all completely anonymous. And she diarized her daily life. And it was incredibly successful, and it became a book, and a TV series, and what have you. She was intending to be completely anonymous, but she was eventually found out. Her identity was revealed because she used the same Google analytics tracking user id on her personal blog where she was her professional self and on the call girl blog as well. And that’s how she was identified, just-

Laura: So she did it to herself in that way as well.

Drew: She did it to herself. Yeah. She leaked personal data there that she didn’t mean to leak. She didn’t even know it was personal data, I suspect. There are so many implications that you just don’t think of. And so I think it pays to start thinking of it.

Laura: Yeah. And not doing things because you feel that that’s what we always did, and that’s what we always do, or that’s what this other organization that I admire, they do it, so I should, I think. And a lot of the time it is about being a bit more restrictive and maybe not jumping on the bandwagon of I’m going to use this service like everybody else is. And stopping, reading their privacy policy, which is not something I recommend doing for fun, because it’s really tedious, and I have to do a lot of it when I’m looking into trackers for Better. But you can see a lot of red flags if you read privacy policies. You see the kinds of language that means that they’re trying to make it free and easy for them to do whatever they want with your information. And there’s a reason why I say to designers and developers, if you’re making your own projects, don’t just copy the privacy policy from somebody else. Because you might be opening yourself up to more issues and you might actually be making yourself look suspicious.

Laura: It’s much better to be transparent and clear about what you’re doing. Not everything needs to be written in legalese in order for you to be clear about what you’re doing with people’s information.

Drew: So, in almost anything, people say that the solution to it is to use the JAMstack. Is the JAMstack a solution, is it a good answer, is it going to help us out of accidentally breaching the privacy of our customers?

Laura: There’s a lot of stuff I like about the JAMstack stuff, but I would say I like the “JMstack”, because it’s the APIs bit that worries me. Because if we’re taking control over our own sites, we’re building static sites, and we’re generating it all on our machines, and we’re not using servers, that’s great; we’ve taken away a lot of potential issues there. But then if we’re adding back in all of the third party functionality using APIs, we may as well be adding script tags to our pages all over again. We may as well have it on somebody else’s platform. Because we’re losing that control again. Every time we add something from a third party, we lose control over a little bit of our site. So I think that a lot of static site generators and things like that have a lot of value, but we still need to be cautious.

Laura: And I think one of the reasons why we love the JAMstack stuff is because, again, it’s allowed us to knock up a site really quickly, deploy it really quickly, have a development environment set up really quickly, and we’re valuing, again, our developer experience over that of the people that are using the websites.

Drew: So I guess the key there is to just be hyperaware of what every API you’re using is doing. What data you could be sending to them, what their individual privacy policies are.

Laura: Yeah. And I think we have to be cautious about being loyal to companies. We might have people that we are friends with and think are great and things like that, that are working for these companies. We might think that they are producing some good work, they’re doing good blogs, they’re introducing some interesting new technologies into the world. But at the end of the day, businesses are businesses. And they all have business models. And we have to know what their business models are. How are they making their money? Who is behind the money? Because a lot of venture capital backed organizations end up having to deal in personal data, and profiling, and things like that, because it’s an easy way to make money. And it is hard to build a sustainable business on technology, particularly if you’re not selling a physical product; it’s really hard to make a business sustainable. And if an organization has taken a huge amount of money and they’re paying a huge amount of employees, they’ve got to make some money back somehow.

Laura: And that’s what we’re seeing now: so many businesses doing what Shoshana Zuboff refers to as surveillance capitalism, tracking people, profiling them, and monetizing that information, because it’s the easiest way to make money on the web. And I think that the rest of us have to try to resist it, because it can be very tempting to jump in and do what everyone else is doing and make big money, and make a big name. But I think that we’re realizing too slowly the impact that that has on the rest of our society. Cambridge Analytica only came about because Facebook was collecting massive amounts of people’s information, and Cambridge Analytica was just using that information to target people with, essentially, propaganda in order to swing referendums and elections their way. And that’s terrifying, that’s a really scary effect that’s come out of what you might think is an innocuous little banner ad.

Drew: Professionally, many people are transitioning into building client sites or helping their clients to build their own sites on platforms like Squarespace and that sort of thing, online site builders where sites are then completely hosted on that service. Is that an area that they should also be worried about in terms of privacy?

Laura: Yeah. Because you’re very much subject to the privacy policies of those platforms. And a lot of them are paid platforms, so just because it’s a platform doesn’t necessarily mean that they are tracking you. But the inverse is also true: just because you’re paying for it doesn’t mean they’re not tracking you. I’d use Spotify as an example of this. People pay Spotify a lot of money for their accounts. And Spotify does that brilliant thing where it shows off how much it’s tracking you by telling people all of this incredible information about them on a yearly basis, and giving them playlists for their moods, and things like that. And then you realize, oh, actually, Spotify knows what my mood is because I’m listening to a playlist that’s made for this mood that I’m in. And Spotify is with me when I’m exercising. And Spotify knows when I’m working. And Spotify knows when I’m trying to sleep. And whatever other playlists you’ve set up for it, whatever other activities you’ve done.

Laura: So I think we just have to look at everything that a business is doing in order to work out whether it’s a threat to us and really treat everything as though it could possibly cause harm to us, and use it carefully.

Drew: You’ve got a fantastic personal website where you collate all the things that you’re working on and things that you share socially. I see that your site is built using Site.js. What’s that?

Laura: Yes. So it’s something that we’ve been building. So what we do at the Small Technology Foundation, or what we did when we were called, which was the UK version of the Small Technology Foundation, is that we’re trying to work out how we can help in this situation. How do we help in a world where technology is not respecting people’s rights? And we’re a couple of designers and developers, so what are our skills? And the way we see it is we have to do a few different things. We have to, first of all, prevent some of the worst harms if we can. And one of the ways we do that is having a tracker blocker, something that blocks trackers on the web, in the browser. And another thing we do is, we try to help inform things like regulation, and we campaign for better regulation and well informed regulation that is not encouraging authoritarian governments and is trying to restrict businesses from collecting people’s personal information.

Laura: And the other thing we can do is, we can try to build alternatives. Because one of the biggest problems with technology and with the web today is that there’s not actually much choice when you want to build something. A lot of things are built in the same way. And we’ve been looking at different ways of doing this for quite a few years now. And the idea behind Site.js is to make it really easy to build and deploy a personal website that is secure, that has all the HTTPS stuff going on and everything, really, really easily. So it’s something that really benefits the developer experience, but doesn’t threaten the visitor’s experience at the same time. So it’s something that is also going to keep being rights-respecting, that you have full ownership and control over as the developer of your own personal website as well. And so that’s what Site.js does.

Laura: So we’re just working on ways for people to build personal websites with the idea that in the future, hopefully those websites will also be able to communicate easily with each other. So you could use them to communicate with each other and it’s all in your own space as well.

Drew: You’ve put a lot of your expertise in this area to use with Better Blocker. You must see some fairly wild things going on there as you’re updating it and…

Laura: Yeah. You can always tell when I’m working on Better because that’s when my tweets get particularly angry and cross, because it makes me so irritated when I see what’s going on. And it also really annoys me because I spend a lot of time looking at websites, and working out what the scripts are doing, and what happens when something is blocked. One of the things that really annoys me is how developers don’t have fallbacks in their code. The number of times that I block something, an analytics script, say, and all the links stop working on the page… You’re probably not using the web properly if you need JavaScript for a link to work. And so I wish that developers would bear that in mind, especially when they think about maybe removing these scripts from their sites. But the stuff I see is they…

Laura: I’ve seen, like The Sun tabloid newspaper, everybody hates it, it’s awful. They have about 30 different analytics scripts on every page load. And to some degree I wonder whether performance would be such a hot topic in the industry if we weren’t all sticking so much junk on our webpages all the time. Because, actually, you look at a website that doesn’t have a bunch of third party tracking scripts on, tends to load quite quickly. Because you’ve got to do a huge amount to make a webpage heavy if you haven’t got all of that stuff as well.

Drew: So is it a good idea for people who build for the web to be running things like tracker blockers and ad blockers or might it change our experience of the web and cause problems from a developer point of view?

Laura: I think in the same way that we test things across different browsers, and we might have a browser that we use for our own consumer style, I hate the word consumer, use, just our own personal use, like our shopping and our social stuff, and things like that. And we wouldn’t only test webpages in that browser, we test webpages in as many browsers as we can get our hands on, because that’s what makes us good developers. And I think the same goes for if you’re using a tracker blocker or an ad blocker in your day-to-day, then yeah, you should try it without as well. Like I keep Google Chrome on my computer for browser testing, but you can be sure that I will not be using that browser for any of my personal stuff, ever, it’s horrible. So yeah, you’ve got to be aware of what’s going on in the world around you as part of your responsibility as a developer.

Drew: It’s almost just like another browser combination, isn’t it? To be aware of the configurations that the audience your site or your product might have and then testing with those configurations to find any problems.

Laura: Yeah. And also developing more robust ways of writing your code, so that your code can work without certain scripts and things like that. So not everything is hinging off one particular script unless it is absolutely necessary. Things completely fall apart when people are using third party CDNs, for example. I think that’s a really interesting thing, that so many people decided to use a third party CDN, but you have very little control over its uptime and stuff like that. And if you block the third party CDN, what happens? Suddenly you have no images, no content, no videos, or you have no functionality because all of your functional JavaScript is coming from a third party CDN?
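The robustness being asked for here can be sketched as "try the third-party CDN first, fall back to a self-hosted copy if it is blocked or down". This is only an outline: `tryLoad` and both URLs are placeholders for however a real project loads and verifies its scripts.

```javascript
// Sketch of a CDN fallback. `tryLoad` stands in for whatever mechanism
// loads a script and reports whether it succeeded; the URLs are placeholders.
function loadWithFallback(tryLoad, cdnUrl, localUrl) {
  if (tryLoad(cdnUrl)) {
    return cdnUrl; // CDN worked: nothing else to do
  }
  // CDN blocked, offline, or failed: serve the self-hosted copy instead,
  // so the page keeps working for visitors who block third parties.
  return tryLoad(localUrl) ? localUrl : null;
}
```

The design point is that the self-hosted copy is the floor: the third party is an optimization, never a single point of failure.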

Drew: As a web developer or designer, if I’d not really thought about privacy concerns about the sites I’m producing up until this point, if I wanted to make a start, what should be the first thing that I do to look at the potential things I’m exposing my customers to?

Laura: I’d review one of your existing pages or one of your existing sites. And you can take it on a component by component basis even. I think any small step is better than no step. And it’s the same way you’d approach learning anything new. It’s the same way I think about accessibility as well. You start by asking, okay, what is one thing I can take away? What is one thing I can change that will make a difference? And then you start building up that way of thinking, that way of looking at how you’re doing your work. And eventually that will build up into being much more well informed about things.

Drew: So I’ve been learning a lot about online privacy. What have you been learning about lately?

Laura: One of the things I’ve been learning about is Hugo, which is a static site generator that is written using Go. And I use it for my personal site already, but right now for Site.js, I’ve been writing a starter blog theme so that people can set up a site really easily and don’t necessarily have to know a lot about Hugo. Because Hugo is interesting, it’s very fast, but the templating is quite tricky and the documentation is not the most accessible. And so I’m trying to work my way through that to understand it better, and I think I’ve finally got over the initial hurdle, where I understand what I’m doing now and I can make it better. But it’s hard learning this stuff, isn’t it?

Drew: It really is.

Laura: It reminds you how inadequate you are sometimes.

Drew: If you, dear listener, would like to hear more from Laura, you can find her on the web at and Small Technology Foundation at. Thanks for joining us today, Laura. Do you have any parting words?

Laura: I’d say, I think we should always just be examining what we’re doing and our responsibility in the work that we do. And what can we do that can make things better for people? And what we can do to make things slightly less bad for people as well.


Rapid Image Layers Animation

A while back, I came across this tweet by Twitter Marketing, which shows a video with a really nice intro animation. So I made a quick demo, trying to implement that effect. The idea is to animate some fullscreen images rapidly, like a sequence of covering layers. It’s a nice idea for an intro splash or even a page transition.

The way this can be achieved is by using the reveal trick described in the tutorial How to Create and Animate Rotated Overlays.

Basically, we have a parent wrap set to overflow: hidden that we translate up (or down) while we translate its child in the opposite direction. It looks the same as if we were animating a clip-path that cuts off the image. (We could do that too, but clip-path is not as performant to animate as transforms are.)
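As a rough sketch of that trick, here it is using the browser's built-in Web Animations API rather than GSAP (the demo itself uses GSAP; this version only assumes the wrapper element already has overflow: hidden in the CSS):

```javascript
// Sketch of the reveal trick: the clipped wrapper slides one way while its
// child slides the opposite way by the same amount, so the image appears
// to be uncovered in place rather than moving across the screen.
function revealLayer(wrapper, inner, duration) {
  const timing = { duration, easing: 'ease-in-out', fill: 'forwards' };
  // Wrapper moves up from below the viewport...
  wrapper.animate(
    [{ transform: 'translateY(100%)' }, { transform: 'translateY(0%)' }],
    timing
  );
  // ...while the inner image counter-translates in the opposite direction.
  inner.animate(
    [{ transform: 'translateY(-100%)' }, { transform: 'translateY(0%)' }],
    timing
  );
}
```

Staggering the `duration` (or adding a delay per layer) over a set of such wrappers gives the rapid covering-layers sequence.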

When doing that animation on a set of layers, we get that really nice effect seen in the video.

For this demo, I created an initial (dummy) menu layout where you can click on the middle item to trigger the animation.

By setting a fitting animation duration and delay (I’m using GSAP here), the effect can be adjusted to look as smooth or as fast as needed. Have a look at the result:

The custom cursor animation can be found here: Animated Custom Cursor Effects

I really hope you find this little demo useful!


Images by Minh Ngoc

Rapid Image Layers Animation was written by Mary Lou and published on Codrops.

5 Essential Skills for Web Developers Today

Let’s pretend for a moment that you’re mentoring someone who is new to web development. They want to learn the necessary skills for becoming a professional, but aren’t sure where to focus. What would you tell them?

Now, whether you’re a newbie or a veteran of the industry, the skills you need to succeed are always evolving. Yet, there are still some foundational things that everyone should know – regardless of specialty.

Today, we’ll train our focus on a bit of both the new and traditional. Let’s take a look at the five essential skills for web developers in the present day.




HTML and CSS

Surprised to see these two on the list? That’s understandable to some degree, as there are certainly more “exciting” technologies out there. But that doesn’t mean we should dismiss them.

To the contrary, both HTML and CSS continue to be the building blocks of the web. HTML is as important as ever, even if you’re using more robust languages such as PHP or JavaScript. Its role has evolved from something we used to style or lay out a page (though it was never intended for that purpose) to helping us build a semantic and accessible web.

CSS has also seen quite the evolution in its own right. The rise of CSS Grid and Flexbox has transformed how we create layouts. And it has also become a staple of animation, along with responsive design and advanced styling techniques. In some cases, it even serves as a solid replacement for JavaScript.

So, just like a house needs a solid foundation, web developers need to understand HTML and CSS inside-out. It would be difficult to accomplish other advanced functionalities without them.



JavaScript

JavaScript has also seen its own evolution. It started as a language that was often used to manipulate DOM elements and add a bit of functionality to websites. And it’s still quite adept for this purpose.

However, we’re now seeing entire interfaces being built with JavaScript as the main ingredient. This has a lot to do with some powerful frameworks that have come along in recent years. React and Vue, in particular, have led the way in this area.

While we haven’t seen these UIs take over the web just yet, it’s a segment that should continue to grow. That alone makes it worth digging into a framework or two.

Another area of growth is coming from WordPress and its Gutenberg block editor. It makes heavy use of React, which also happens to be a requirement for creating custom blocks natively.

Put it all together and you have lots of valid reasons for focusing on your JavaScript skills.

A man reading a JavaScript book.

The Command Line

Everyone loves a good GUI. It just seems more comforting to point-and-click or drag-and-drop your way to accomplishing your goals. Still, the command line remains very relevant.

The funny thing is that even the latest buzzworthy technologies rely on the command line, or at least recommend its use. Take GatsbyJS, for example. The static site generator is all the rage these days and requires the command line to both build and maintain websites.

WordPress is the world’s most popular CMS and also has a wonderful CLI tool. It’s not required, but can perform the same functions as the visually-oriented Dashboard. And it also does some things the Dashboard can’t do, like large-scale search and replace, which makes it perfect for multisite installations and enterprise-level usage.

If you’re getting into version control, Git is another tool where the command line is recommended. There are some visual tools as well, but commands generally allow for more advanced usage.

Even if you don’t feel giddy at the sight of a terminal window, it’s still important that you know your way around one. Otherwise, you may not be able to accomplish everything your projects require.

A woman sitting at a computer.

How to Work with APIs

These days, websites don’t just depend on local files or databases. They often pull data from a number of outside sources. Providers such as social media platforms, cloud services and content delivery networks (CDNs) are powering a lot of essential functionality.

In many cases, websites interface with these outside sources via an API (Application Programming Interface). This allows for accessing a service or application’s data and features through a specific set of procedures – usually via code.

APIs are not one-size-fits-all, however. They can be proprietary – so what works for one service probably won’t work for others. Tapping into one usually requires digging into a particular API’s documentation.

Therefore, it’s important to learn the details behind whichever APIs you want to work with. Whether that’s Twitter, Amazon AWS or Google Maps, you’ll have to study up to get the most out of them.
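One step almost every REST-style API shares, whatever the provider, is building a request URL from a base, a resource path and some query parameters. A small sketch follows; the endpoint and parameter names are invented for illustration, and the real paths and parameters always come from the provider's documentation.

```javascript
// Illustrative only: builds a request URL from a base, a path, and query
// parameters. The endpoint and parameter names here are hypothetical;
// consult each provider's API documentation for the real ones.
function buildRequestUrl(base, path, params) {
  const url = new URL(path, base);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value); // handles encoding for us
  }
  return url.toString();
}
```

From there, the provider-specific parts, authentication, rate limits, and response shapes, are exactly what the documentation-reading is for.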

Sometimes we can get lucky and the API functionality we need is already there for us. Often, you’ll find it in something like a WordPress plugin. But there will be times when you have to work directly with a provider to accomplish what you need.

A woman using a laptop computer, standing near servers - Web Developer Skills

The Ability to Adapt

It seems like there is always some new tool, technique or code breakthrough looking for our attention. So, our last essential skill in this roundup is all about adapting to an ever-changing web.

One of the biggest fears in web design and development is that of falling behind. You don’t want to be left holding the bag while others seize on the latest and greatest trends.

That’s why it’s vitally important to adapt to new ways of doing things and seeing how they fit into your workflow. This will enable you to stay relevant in the marketplace and potentially book some exciting projects.

There’s a catch, though. Not every new thing is going to be worth your time. The challenge is in finding skills to add to your repertoire that fit the types of projects you want to work on.

Perhaps the best thing to do is keep an eye on industry trends. If you see something that can benefit your career (or looks interesting), take time to learn more about it. Once you determine it’s a good fit, you can dig deeper.

A desk with design books and a laptop computer - Web Developer Skills

Learn the Basics, Then Move Forward

There is a lot of pressure on developers to absorb libraries worth of knowledge. But the truth is that you don’t need to know every detail.

Each one of the skills mentioned here is vast. And it’s unlikely that any living soul knows everything there is to know about them. The key is in learning the foundational aspects first and foremost.

By familiarizing yourself with the basics, you will have the opportunity to add depth to your knowledge over time. Quite often, we learn how to do x, y and z because we’re working on a project that requires it. That’s a natural benefit of experience.

So, if there are some areas on this list you don’t know much about – don’t worry. Start small and work your way up. Eventually, you’ll have the skills necessary to succeed.

All the best free Photoshop brushes

The appeal of Photoshop brushes is that they save you time, enabling you to create your own unique work quickly and easily. Using the brushes that others have already created for you means that you don't need to create design elements from scratch, you simply need to select your favourite Photoshop brush and start creating. 

Get Adobe Creative Cloud now

If you're just starting out, the search for the perfect Photoshop brush may feel overwhelming, as there's a huge spectrum available. Brushes range from those that mimic traditional media such as pen and pencil, to more experimental grunge brushes, to those that will help you achieve cloud, sun or even lightning effects, or recreate fur and grass. While Photoshop does ship with a set of brushes pre-installed, they only scratch the surface of what's possible with the brush engine.

To make things a bit clearer, we've split our selection into four categories to help you find the perfect Photoshop brush: 

Photoshop brushes for painting – for mimicking a traditional art effect
Natural brushes – everything from hair to clouds, trees, fire and water effects
Grunge Photoshop brushes – for when you want a distressed or aged effect
Fantasy and comic brushes – including half-tone brushes and sparkle effects

Whether you're using an older version of Photoshop or have recently joined Creative Cloud, you can grab the free Photoshop brush downloads below and start creating stunning design flourishes in your artwork. Please note that you need to double check the licence terms of any brush you are downloading and using.

Need some help getting started? You'll find lots of handy advice in our list of top Photoshop tutorials. If you're not sure that Photoshop is for you, see our list of the best Photoshop alternatives.

Photoshop brushes for painting
01. Photoshop and GIMP brushes

Photoshop brushes

Get a range of textures with this freebie

Designer: Obsidian Dawn
Usage: Free for personal and commercial use but see terms
Download here

These Photoshop brushes are actually textures, meaning you can create some interesting effects that are…well… textured. They're great for backgrounds and for experimentation in general. Check the terms for all uses as you need to credit the artist. But if you cannot provide credit, then a commercial license is only $3. 

02. Abstract paintbrushes

Photoshop brushes: Abstract brushes

Have fun with this messy brush set

Designer: Darrian Lynx
Usage: Free for non-commercial use
Download here

There are a range of options to explore in this abstract paintbrush set. It is totally free for non-commercial use and perfect for creating a bright, messy, modern paint effect. 

03. Wavenwater Photoshop brushes

Photoshop brushes: Wavenwater

This set features lots of options 

Designer: Michael Guimont
Usage: Free for personal use (contact artist for commercial licence)
Download here

This comprehensive set of Wavenwater Photoshop brushes comes from freelance concept artist and illustrator Michael Guimont. We haven't counted exactly how many brushes are included in this set, but there are lots of options to add serious flair to your artwork. 

04. Sakimichan – Photoshop Brushes for painting

Photoshop brushes: Sakimichan

These brushes work best at 70-100 percent opacity

Designer: Sakimichan
Usage: Free for commercial and personal use
Download here

Deviant Art member sakimichan has made 56 of her favourite custom Photoshop brushes for painting available to download for free in this big bundle. She recommends painting at 70-100 percent opacity with the pressure option on, and says that the brushes are already set up for this. 

05. Photoshop paintbrushes

Photoshop brushes: paint

Griffin is a pro illustrator and concept artist offering up the brushes he uses, for free

Designer: Aaron Griffin
Usage: Free for commercial and personal use
Download here

Aaron Griffin is a self-taught illustrator and concept artist known especially for his figure paintings (his work even graced the cover of our sister magazine ImagineFX). He's generously offering up the Photoshop brushes he uses to create his digital paintings, free of charge. 

06. Free Photoshop brushes: Thick acrylic paint strokes

Photoshop brushes: Thick acrylic paint strokes

Quickly add authentic paint strokes to your work

Designer: Creative Nerds
Usage: Free for commercial and personal use
Download here

The second instalment of a popular set of free Photoshop brushes from Creative Nerds, Thick Acrylic Paint Strokes volume 2 lets you quickly add an authentic paint effect to your illustrations. The brushes are free for both personal and commercial work – but you're not permitted to redistribute or modify them for resale.

07. Dry brush strokes for Photoshop

Photoshop brushes: Dry brush strokes

These brushes are amazingly detailed

Designer: Chris Spooner
Usage: Free for personal and commercial use
Download here

Dry Brush Strokes are a set of 12 excellent free Photoshop brushes from Chris Spooner. These high-resolution dry brushes are fantastically detailed, bristly and texture-rich. Featuring wispy lines and detailed edges, they're perfect for roughing up your artwork or distressing your edges.

08. Free Photoshop brushes: dry brushes

Photoshop brushes: dry brushes

The dry brushes are dynamic

Artist: Kirk Wallace
Usage: Free for personal and commercial use
Download here

Artist Kirk Wallace created these Dry Brush Photoshop brushes at home using ink and paper, and offers them to you for free. Perfect for creating rough, harsh textures, they're also dynamic – you can click and drag to span larger areas without getting an ugly repeat effect, or you can paint with them.

09. Free Photoshop brushes: spray paint

Photoshop brushes: spray paint

These brushes can add a distressed, street art look to your designs

Designer: Creative Nerds
Usage: Free for personal and commercial use
Download here

Creative Nerds is offering this spray paint effect Photoshop brush set completely free. The pack includes four high-res brushes (2500px each). Use them to add a distressed effect to your paintings.

10. Speedpainting set

Photoshop brushes: speedpainting

Give the illusion of speedpainting with these free brushes

Artist: Darek Zabrocki
Usage: Free for personal and commercial use
Download here

Concept artist Darek Zabrocki created this speed painting brush set. He has worked on some of the biggest projects in the fantasy art world, including Assassin's Creed, Magic: The Gathering and Halo Wars 2, and is generously offering the set of Photoshop brushes he uses for his speed paintings as a free download.

Watercolour Photoshop brushes
11. Watercolour brushes #2

photoshop brushes: watercolour

Designer: Snezhana Switzer
Usage: Free for personal use
Download here

This extensive pack of watercolour Photoshop brushes is by Snezhana Switzer. It contains 40 Photoshop brushes, perfect for mimicking watercolours. If you like what you see, you can purchase her even bigger pack on Creative Market. 

12. Furry watercolour Photoshop brush

Photoshop brushes: Furry watercolour brush

Soften things up with this choice of brush

Designer: Heygrey
Usage: Free for personal and commercial use
Download here

If you're looking to create a soft, hazy aesthetic in your work, try this free furry watercolour Photoshop brush from Heygrey. The creator suggests using it to create hazy backgrounds, and we're especially impressed with the realistic watercolour effect it achieves.

13. Watercolour Photoshop brush: spray

Photoshop brushes: Watercolour paint spray

The creator says this brush was a pleasure to create

Designer: Creative Nerds
Usage: Free for personal and commercial use
Download here

This large-scale Photoshop brush is handy for creating a watercolour spray effect in your digital artwork. The creator has achieved an impressively authentic effect, which you can apply to your own work with ease.

14. Watercolour splatter: free Photoshop brushes

Photoshop brushes: Watercolour splatters

There are 32 high-res brushes in the pack

Designer: pstutorialsws
Usage: Free for personal and commercial use
Download here

These watercolour splatters were created with the help of professional-quality watercolour paint on cold press watercolour paper. There are 32 high-res Photoshop brushes in the pack – they work with Photoshop 7, CS, CS2, CS3, CS4, CS5, CS6 and CC – and you can download the lot for free.

Pen, ink, charcoal and pencil Photoshop brushes 
15. Free Photoshop illustration brush set

Designer: Matt Heath
Usage: Free for personal and commercial use
Download here

This set of free Photoshop brushes was created by designer Matt Heath using an 8B Staedtler pencil and custom settings, giving a natural feel and a wide variety of textures. They're available from Heath's Gumroad page – simply enter $0 to get them for free (donations are, of course, appreciated), and if you want more, you can get a huge set of art brushes right here.

16. Ink brushes

Photoshop brushes: Free Ink

Featuring big slabs, thin strokes, ink splotches and everything in between 

Designer: Brittney Murphy
Usage: Free for personal and commercial use
Download here

Introducing designer Brittney Murphy's set of ink Photoshop brushes. Among the impressive 192 brushes included in the set, you'll find big slabs, thin strokes, ink splotches and everything in between. Murphy generously offers these brushes for free, with no attribution necessary; however, she does ask that they're not redistributed.

17. Pencil Photoshop brush

Photoshop brushes: Pencil brush

This brush is one of the most realistic out there

Designer: Andantonius
Usage: Free for personal and commercial use
Download here

Create the effect of a soft pencil sketch, but without the grubby hands and smudged paper. This pencil-effect Photoshop brush is one of the most realistic we've seen, and you can download it for free on DeviantArt, courtesy of professional digital artist Andantonius, aka Jon Neimeister.

18. Realistic charcoal Photoshop brush

Photoshop brushes: Realistic charcoal

Avoid the mess but keep the effect with this digital charcoal

Designer: WojtekFus
Usage: Free for personal and commercial use
Download here

Charcoal's an essential part of any artist's toolkit, but it's undoubtedly the messiest as well. Get those soft charcoal lines – without getting charcoal all over your hands and everything else – with these excellent charcoal brushes.

19. Real markers: free Photoshop brushes

Designer: Eilert Janßen
Usage: Free for personal and commercial use
Download here

Perfect for fashion illustrations, industrial design and storyboarding, this set of 12 free real marker brushes by Eilert Janßen enables you to create lively imagery that looks like it's been sketched out with marker pens. If you like what you see, you can buy more of Janßen's brushes on his website.

Next page: Natural brushes

On this page of our ultimate free Photoshop brushes collection, you’ll find a wide range of natural and nature-inspired resources to add realism and depth to your artwork. 

From Photoshop brushes to help you draw people (think: hair, skin and eyelashes) to brushes for drawing weather (cloud Photoshop brushes, snow, rain and lightning), landscapes (trees, grass, flowers) and water, you’ll find nearly every nature-inspired brush you can think of on this page. And the best part? These Photoshop brushes are all free.

Photoshop brushes for hair and fur
20. Hair brush set

Photoshop brushes: Hair brush set

Mix these brushes together to create more variety 

Designer: para-vine
Usage: Free for personal and commercial use
Download here

Create realistic hair effects with this set of free Photoshop hair brushes. Mix them together for extra variety and to create different effects. This set comes courtesy of digital artist para-vine, aka Lee Alex Pearce. Please credit the artist where possible.

21. Fur brushes

Photoshop brushes: fur brush set

Create fur with these brushes

Designer: Nathie
Usage: Free for personal and commercial use
Download here

These brushes will help you to create realistic fur for your projects. The designer asks that you do not redistribute them, though you are free to use them in your commercial and personal projects.

Skin Photoshop brushes
22. 11 Human skin Photoshop brushes

Photoshop brushes: Skin

These are great for retouching and make-up

Designer: env1ro
Usage: Free for personal use; contact env1ro about commercial use
Download here

There are 11 texture-like tools in this collection of free Photoshop brushes for painting human skin. Polish artist env1ro, who created them, says they’re compatible with Photoshop 7 and upwards, and they’re "great for retouching and make-up". 

23. Skin Photoshop brushes

Photoshop brushes: skin

This set of brushes is created by an artist who specialises in portraits

Designer: Marta Dahlig
Usage: Free for personal and commercial use
Download here

Freelance Polish artist and illustrator Marta Dahlig has been creating digital brushes for years. She specialises in portraits, and her set of skin Photoshop brushes is an amazing boost to any digital artist's armoury.

24. Eyelash Photoshop brushes

Photoshop brushes: eyelash

These eyelash brushes are at different stages of open and closed

Designer: eriikaa
Usage: Free for personal and commercial use, with a credit
Download here

DeviantArt user eriikaa has shared 22 free Photoshop brushes for drawing eyelashes at different stages of the eyes being open or closed. She asks for a credit if you use them, and to let her know if – and how – you use them. 

Weather and cloud Photoshop brushes
25. Cloud Photoshop brushes

Photoshop brushes - Clouds

If you need some help with creating clouds, these brushes are for you

Designer: Helenartathome
Usage: Free for personal and commercial use
Download here

This collection of crisp, clean cloud brushes will fulfil all your cloud brush needs. The set comes with 14 high-res cloud brushes, plus two rays and sunbursts. Ideal for adding more detail to a scene.

26. High res sunshine Photoshop brushes

Photoshop brushes - sunshine

These high-res brushes are ideal for web projects

Designer: Artistmef
Usage: Free for personal and commercial use
Download here

This set of 15 high-quality, photorealistic sunshine effect brushes will add natural, realistic light to help illuminate a scene. These hi-res brushes have a resolution of 2500px, making them ideal for both print and web projects. 

27. Snow Photoshop brushes

Photoshop brushes: snow

Designer: Brusheezy
Usage: Free for personal and commercial use, with a credit
Download here

These snow brushes will add a chill to your designs with a flurry. This pack of free Photoshop brushes contains 15 effects, which you can mix up to create realistic variation in your scene. Again, make sure you follow the attribution instructions on the download page if you use them commercially.

28. Rain Photoshop brushes

Photoshop brushes: rain

There are four brushes in this set

Designer: amorphisss
Usage: Free for personal and commercial use
Download here

Rain is notoriously tricky to draw and paint. That’s where these fantastic free rain Photoshop brushes from DeviantArt user amorphisss come in. There are four brushes in this set, and for each you can determine which way the rain is falling, and use the Motion Blur filter to emphasise the motion effect. 

29. Lightning strikes Photoshop brushes

Lightning bolt Photoshop brushes

Electrify the viewer with these lightning strikes

Designer: SparkleStock
Usage: Free for personal and commercial use
Download here

Electrify your work with this collection of stunning lightning strikes. Tileable and available not only as Photoshop brushes but also as patterns and JPEG images, there are 18 to choose from in this set – all free.

Landscape Photoshop brushes
30. Plant Photoshop brushes

Photoshop brushes: plant

Remember to credit the owner if you use this set of brushes

Designer: B Silvia
Usage: Free for personal and commercial use, with credit
Download here

Want to create beautiful plants with ease? This set of 23 high resolution plant Photoshop brushes from graphic designer and illustrator B Silvia will help you do just that. These are free for both personal and commercial use, but please remember to credit the owner.  

31. Tree borders Photoshop brushes

Photoshop brushes: tree borders

These brushes are perfect for trees

Designer: ForestGirl
Usage: Free for personal use only
Download here

This is a nice set of Photoshop brushes that enables you to introduce tree and bush silhouettes to the edges of your composition. DeviantArt user ForestGirl, aka Julia Popova, asks for a link to any personal work you use them in.

32. Leaf brushes

Photoshop brushes: leaf

This set features seven isolated leaf images

Designer: jschill
Usage: Free for personal and commercial use
Download here

Great for creating organic, textured backgrounds, this set of high-resolution leaf Photoshop brushes features seven isolated leaf images with intricate details and textures. They're free for personal and commercial use, but make sure you attribute them according to the Creative Commons guidelines – you'll find full details on the Brusheezy site. 

33. Grass or fur brushes

Photoshop brushes: Grass or fur

Changing the colours transforms the grass into fur

Designer: s1088
Usage: Free for personal and commercial use
Download here

This set of 10 grass or fur brushes is ideal for adding grassy details to your Photoshop paintings. There's a variety of styles to choose from, so you can create everything from scrubby dry patches of grass to lush meadows. A bonus tip from the creator is that if you switch up the colours, they also make great fur. 

34. Nature silhouettes Photoshop brushes

Photoshop brushes: Nature silhouette

There are 19 free nature Photoshop brushes in this pack

Designer: pinkonhead
Usage: Free for personal and commercial use
Download here

This is a really useful set of 19 different nature silhouettes, each featuring a different plant, ranging from trees to grasses. They're free for personal and commercial use, but the designer says that any references back to her website would be highly appreciated.

35. Environment brushes

Photoshop brushes: Environment

Deck out your environment

Designer: Syntetyc
Usage: Free for personal and commercial use
Download here

This massive set of free environment Photoshop brushes should have you covered for all your environment painting needs. All are high-res, and all are specifically suited for creating realistic natural environments in Photoshop.

Water Photoshop brushes
36. Water brushes vol. 4

Photoshop brushes: Water

The creator has three other sets of brushes to check out as well

Designer: Webdesigner Lab
Usage: Free for personal and commercial use
Download here

There are 20 high-res water Photoshop brushes in this pack, including splashes, spills, ripples and water drops. Compatible with Photoshop CS3 and above, these realistic water tools aren't the only free Photoshop brushes released by this designer. He also has three other popular sets of water effect brushes, so if you can't find what you want in this pack, check out the others using the link above. 

Assorted effects
37. Smoke brushes

Photoshop brushes: Smoke

The creator has added some fabulous examples of what these brushes can do

Designer: Niño Batitis
Usage: Free for personal use
Download here

This set of 13 high-quality smoke brushes will make a great addition to any designer's toolkit. Designer Niño Batitis is the man behind these smoking hot Photoshop brushes. 

38. Feathers and birds

Photoshop brushes: Feathers and birds

Create detailed feather effects with this set of brushes

Designer: Discopada
Usage: Free for personal and commercial use, with a credit
Download here

There’s a total of 12 individual Photoshop brushes for drawing birds and feather effects in this pack from DeviantArt user Discopada. Each brush comes with a stand-alone piece of artwork, ranging from detailed feather illustrations to whimsical birds-on-a-branch.

39. Tie-dye Photoshop brushes

Photoshop brushes: tie-dyed photoshop brush pack

Add a splash of tie dye to your work

Designer: Diego Sanchez
Usage: Various, see licensing rules
Download here

These brushes are tricky to categorise, but we've put them in the natural category to reflect their hippy vibes. The 15 brushes are based on real tie dye shapes with a mix of solid and transparent areas. We think they'd make fun backgrounds for all sorts of projects.

40. Simple fabric brushes

Photoshop brushes: Simple fabric

Use these brushes for natural surfacing

Designer: Bitbox
Usage: Free for personal and commercial use
Download here

Straightforward fabric textures, these free fabric Photoshop brushes are high resolution (2500×2500) – so they're great for use in both print and web. You can use them to add some natural surfacing to your work. 

Next page: Grunge Photoshop brushes

On this page of our ultimate collection of free Photoshop brushes, you’ll find the best free grunge brushes the internet has to offer. You can use these brushes to add age, depth and distressed effects to your artwork.

41. Rough paint strokes

Photoshop brushes: Rough paint stroke brushes

Designer: Creative Nerds
Usage: Free for personal and commercial use
Download here

If you're looking for a rough paint stroke, these brilliant grunge Photoshop brushes from Creative Nerds should do the trick. They're high res and free to download, and can be used for personal and commercial work. You'll need to subscribe to Creative Nerds to access them. 

42. Shattered glass

Photoshop brushes: Shattered glass

You can fully customise the shattered glass effect

Designer: UCreative
Usage: Free for personal and commercial use
Download here

This set consists of 12 free, high-resolution (2500px x 2500px) and high-quality Photoshop brushes for creating an intricate shattered glass effect. They're easy to customise – you can edit the opacity, blending modes or mask out different parts of the brushes to create textured effects.

43. Distressed halftone brush strokes

Photoshop brushes: distressed

Perfect for when you can’t decide between a halftone or distressed brushstroke

Designer: Designer Candies
Usage: Free for personal and commercial use
Download here

If you can't decide between a distressed brush stroke and a halftone brush stroke, why not have both? This set of 21 Photoshop brushes is perfect for adding a vintage, worn or retro effect to your work.

44. Mixergraph Grunge Brushes

Mixergraph Grunge Brushes

Designer: Marc Pallàs
Usage: Free for personal and commercial use
Download here

Handmade, digitised and individually edited by Marc Pallàs, this set of five grunge brushes will transform your illustrations and designs with a gloriously rough-and-ready look, making them seem like they're hot off the photocopier.

45. Sponge party

Photoshop brushes: Sponge

The sponge textures are beautiful

Designer: Melissa
Usage: Free for personal and commercial use
Download here

Sponge party is a collection of eight medium-resolution Photoshop brushes featuring some beautiful textures, including excellent sponge brush marks – great for adding texture to collage work.

46. Scorched and burned

Photoshop brushes: Scorched and burned

There are 10 different designs to choose from

Designer: WeGraphics
Usage: Free for personal and commercial use
Download here

Scorched and burned is another great set of brushes from WeGraphics. This pack features realistic scorch and burn effects in 10 different designs. You can use them directly to create burn marks, or in a more abstract way to distress your artwork. 

47. Scar face

Photoshop brushes: Scar face

Add scarring to portraits

Designer: NatalieHijazi
Usage: Free for personal and commercial use
Download here

Scar face is a collection of 12 textured Photoshop brushes that is ideal if you want to introduce some scarring to portraits. But you can also use them simply to generate beautifully textured background elements, and add age and depth to your work.

48. Grunge and smooth floral brushes

Grunge and smooth floral brushes for Photoshop

Designer: KeepWaiting
Usage: Free for personal use
Download here

This is a great set of crisp, clean mixed-media grunge and smooth Photoshop brushes with a grungy floral theme. Created in Photoshop 7, the brushes range in size from 100px to 800px wide. DeviantArt member KeepWaiting says they're free for non-commercial use only.

49. Antique postcards

Photoshop brushes: Antique postcard

A great starting point for further design work

Designer: BitBox
Usage: Free for personal and commercial use
Download here

This wonderful collection of six hi-res antique postcard designs provides an excellent starting point for further design work. Each brush can be used as a template, and features text and a delightful patina.

50. Spray splatter

Photoshop brushes: Spray splatter

Spray splatter includes 12 spray patterns

Designer: Dimitar Tsankov
Usage: Free for personal and commercial use
Download here

This is a brilliant collection of 12 spray splatter Photoshop brushes that, happily, are high-res at 2500px each. This set features a range of spray patterns suitable for generating dirty backgrounds and textures, or bringing typography to life.

Next page: Sci-fi and comic brushes

Whether you’re looking for free Photoshop brushes to add a fantasy, sci-fi or comic book-inspired effect to your work, we’ve got you covered here. Scroll down for our favourite star Photoshop brushes, particle effects, blood, halftones and more.

51. Dust particle brushes

Photoshop brushes: dust particle

Great for creating a fantasy effect

Designer: Nathan Brown
Usage: Free for personal and commercial use
Download here

This is a really useful set of dust particle brushes. They can be used to add instant sparkle, depth and richness to your designs, and are ideal for creating a fantasy effect.

52. Dynamic light brushes

Photoshop brushes: Dynamic light

This set creates a great special light effect

Designer: Nathan Brown
Usage: Free for personal and commercial use
Download here

Create special lighting effects by using these Dynamic light brushes in combination with layer blending modes such as screen or vivid light.

53. Star brushes

Photoshop brushes: Star

The artist wants you to have fun with these star brushes

Designer: DemosthenesVoice
Usage: Free for personal and commercial use
Download here

Here are six high-resolution star Photoshop brushes from DeviantArt member DemosthenesVoice, aka Austin Pickrell. "Just have fun," says the artist. "I would love to see what people do with them… and if you make millions from your piece, I want a helicopter."

54. Stardust brushes

Photoshop brushes: Stardust brushes

Follow the attribution instructions from the website

Designer: Brusheezy
Usage: Free for personal and commercial use
Download here

This set of 20 Photoshop Stardust brushes will add a sprinkle of diffused light orbs and bring a Disney-esque magic sparkle to your work. These hi-res brushes can be used for personal and commercial work, just make sure you follow the instructions on the site for giving attribution.

55. Night sky brushes

Photoshop brushes: Night sky

Use stars, moons and space dust

Designer: Webdesigner Lab
Usage: Free for personal and commercial use
Download here

This set of Night Sky Photoshop brushes includes 13 different night sky elements, including space dust, stars and moons. They're particularly good for fantasy scenes or adding sparkle to your artwork.

56. Magic spells

Photoshop brushes: Magic spells

The 21 brushes have a moon theme

Designer: TreehouseCharms
Usage: Free for personal and commercial use, with a credit
Download here

DeviantArt member TreehouseCharms created Magic Spells. This is a quirky set of 21 Photoshop brushes, each related to an overall moon theme and featuring a mythological bias. They're great for adding some whimsy to your designs, or accenting original illustrations. Donations are appreciated.

57. Fairy tales brush set

Photoshop brushes: Fairy tales

Create an other-worldly fantasy

Designer: raysheaf
Usage: Free for personal and commercial use
Download here

Fairy Tales is a useful collection of fractal renders at up to 2500px, gathered together under the theme of fairy tales due to their other-worldly appearance. These free Photoshop brushes are suitable for quickly creating fantasy backgrounds and textures, like rocks, cave and catacomb walls, and alien metals.

58. Blood drip brushes

Photoshop brushes: Blood drips

Incorporate drips, drops, splats or spurts

Designer: Falln-Brushes
Usage: Free for personal use, with credit and a link
Download here

These mildly gory blood drip brushes are perfect for comic-style horror and murderous artwork. So whether you're incorporating drips, drops, splats or spurts, you should find something to your liking here – and they're free in return for a credit and a link.

59. Blood splatter Photoshop brush

Photoshop brush: blood spatter

Includes a range of large-scale brushes

Designer: AnnFrost-stock
Usage: Free for personal use, with credit
Download here

Add a splash of gore to your digital paintings with this free blood splatter Photoshop brush set. It includes a range of different large-scale brushes you can use to incorporate realistic blood stains into your designs. They're free for personal use, or $10 for commercial use.

60. Circular halftone brush set

Photoshop brush set: Circular halftone

This set is great for comic book designs

Designer: Creative Nerds
Usage: Free for personal and commercial use
Download here

If you're painting comic art, this free halftone circular brush set will come in handy: you can use it for comic-style shading in your designs. This high-res brush collection can be used for commercial and personal projects. You'll need to subscribe to access these Photoshop brushes.

61. Sketchy cartography brushes

Photoshop brushes: Sketchy cartography

The pack includes mountains, buildings and trees

Designer: StarRaven
Usage: Free for personal use; ask permission for commercial use
Download here

Create a Hobbit-style map with these sketchy cartography Photoshop brushes from DeviantArt user StarRaven. The pack includes mountains, buildings, trees, grasses and a range of symbols. As well as a brush file, the download includes a transparent PNG file containing all the images. 

62. Concept art brush pack

Photoshop brushes - concept art

These brushes are ideal for creating sci-fi and fantasy worlds

Designer: SoldatNordsken
Usage: Free for non-commercial use
Download here

This collection of concept art brushes is perfect for all sorts of design work. It's ideal for game and film concept art, matte painting, album cover artwork, fantasy art and much more, and includes textures, vegetation, rocks and particles.

Related articles:

How to Photoshop someone into a picture
All the Photoshop shortcuts you need to know
The best Photoshop plugins

Affinity Designer Review: An Affordable Tool For Creative Designs

Finding software that is easy to use, has advanced cross-platform functionality, and allows you to explore different options as a graphic designer is essential. Affinity Designer is a must-have if you're a vector artist who likes to work on the go. Below is a detailed Affinity Designer review with all the features, pros, and cons. […]
