React Hooks: How to Get Started & Build Your Own

Original Source: https://www.sitepoint.com/react-hooks/?utm_source=rss

Hooks have been taking the React world by storm. In this tutorial, we’ll take a look at what hooks are and how you use them. I’ll introduce you to some common hooks that ship with React, as well as showing you how to write your own. By the time you’ve finished, you’ll be able to use hooks in your own React projects.

What Are React Hooks?

React Hooks are special functions that allow you to “hook into” React features in function components. For example, the useState Hook allows you to add state, whereas useEffect allows you to perform side effects. Previously, side effects were implemented using lifecycle methods. With Hooks, this is no longer necessary.
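For a quick taste of what that looks like, here's a minimal sketch (the Counter component is just an illustration, not part of this tutorial's code) in which useEffect takes over the work that componentDidMount and componentDidUpdate would normally do:

import React, { useState, useEffect } from "react";

function Counter() {
  const [count, setCount] = useState(0);

  // Runs after render, replacing componentDidMount/componentDidUpdate
  useEffect(() => {
    document.title = `You clicked ${count} times`;
  }, [count]);

  return <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>;
}

export default Counter;

Don't worry about the details yet; we'll explore useState thoroughly below.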

This means you no longer need to define a class when constructing a React component. It turns out that the class architecture used in React is the cause of a lot of challenges that React developers face every day. We often find ourselves writing large, complex components that are difficult to break up. Related code is spread over several lifecycle methods, which becomes tricky to read, maintain and test. In addition, we have to deal with the this keyword when accessing state, props and methods. We also have to bind methods to this to ensure they’re accessible within the component. Then we have the excessive prop drilling problem — also known as wrapper hell — when dealing with higher-order components.

In a nutshell, Hooks are a revolutionary feature that will simplify your code, making it easy to read, maintain, test in isolation and re-use in your projects. It will only take you an hour to get familiar with them, but this will make you think differently about the way you write React code.

React Hooks were first announced at a React conference held in October 2018, and they were officially made available in React 16.8. The feature is still under development, and a number of React class features are yet to be migrated into Hooks. The good news is that you can start using them now. You can still use React class components if you want to, but I doubt you’ll want to after reading this introductory guide.

If I’ve piqued your curiosity, let’s dive in and see some practical examples.

Prerequisites

This tutorial is intended for people who have a basic understanding of what React is and how it works. If you’re a React beginner, please check out our getting started with React tutorial before proceeding here.

If you wish to follow along with the examples, you should have a React app already set up. The easiest way to do this is with the Create React App tool. To use this, you’ll need Node and npm installed. If you haven’t, head to the Node.js download page and grab the latest version for your system (npm comes bundled with Node). Alternatively, you can consult our tutorial on installing Node using a version manager.

With Node installed, you can create a new React app like so:

npx create-react-app myapp

This will create a myapp folder. Change into this folder and start the development server like so:

cd myapp
npm start

Your default browser will open and you’ll see your new React app. For the purposes of this tutorial, you can work in the App component, which is located at src/App.js.

You can also find the code for this tutorial on GitHub, as well as a demo of the finished code at the end of this tutorial.

The useState Hook

Now let’s look at some code. The useState Hook is probably the most common Hook that ships with React. As the name suggests, it lets you use state in a function component.

Consider the following React class component:

import React from "react";

export default class ClassDemo extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      name: "Agata"
    };
    this.handleNameChange = this.handleNameChange.bind(this);
  }

  handleNameChange(e) {
    this.setState({
      name: e.target.value
    });
  }

  render() {
    return (
      <section>
        <form autoComplete="off">
          <section>
            <label htmlFor="name">Name</label>
            <input
              type="text"
              name="name"
              id="name"
              value={this.state.name}
              onChange={this.handleNameChange}
            />
          </section>
        </form>
        <p>Hello {this.state.name}</p>
      </section>
    );
  }
}

If you’re following along with Create React App, just replace the contents of App.js with the above.

This is how it looks:

React Hooks Class Name

Give yourself a minute to understand the code. In the constructor, we’re declaring a name property on our state object, as well as binding a handleNameChange function to the component instance. We then have a form with an input, whose value is set to this.state.name. The value held in this.state.name is also output to the page in the form of a greeting.

When a user types anything into the input field, the handleNameChange function is called, which updates state and consequently the greeting.

Now, we’re going to write a new version of this code using the useState Hook. Its syntax looks like this:

const [state, setState] = useState(initialState);

When you call the useState function, it returns two items:

state: the current value of your state — the equivalent of this.state.name or this.state.location in a class component.
setState: a function for setting a new value for your state. Similar to this.setState({name: newValue}).

The initialState is the default value you give to your newly declared state during the state declaration phase. Now that you have an idea of what useState is, let’s put it into action:

import React, { useState } from "react";

export default function HookDemo(props) {
  const [name, setName] = useState("Agata");

  function handleNameChange(e) {
    setName(e.target.value);
  }

  return (
    <section>
      <form autoComplete="off">
        <section>
          <label htmlFor="name">Name</label>
          <input
            type="text"
            name="name"
            id="name"
            value={name}
            onChange={handleNameChange}
          />
        </section>
      </form>
      <p>Hello {name}</p>
    </section>
  );
}

Take note of the differences between this function version and the class version. It’s already much more compact and easier to understand than the class version, yet they both do exactly the same thing. Let’s go over the differences:

The entire class constructor has been replaced by the useState Hook, which only consists of a single line.
Because the useState Hook outputs local variables, you no longer need to use the this keyword to reference your function or state variables. Honestly, the this keyword is a major pain for most JavaScript developers, as it’s not always clear when it should be used.
The JSX code is now cleaner as you can reference local state values without using this.state.

I hope you’re impressed by now! You may be wondering what to do when you need to declare multiple state values. The answer is quite simple: just call another useState Hook. You can declare as many times as you want, provided you’re not overcomplicating your component.

Note: when using React Hooks, make sure to declare them at the top of your component and never inside a conditional.
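To make that rule concrete, here's a small sketch of the anti-pattern (my own illustration, not code we'll use later). React relies on hooks being called in the same order on every render, so wrapping a hook call in a condition breaks it:

// Wrong: the hook call disappears on some renders,
// so the call order is no longer stable
function BrokenDemo(props) {
  if (props.greet) {
    const [name, setName] = useState("Agata");
  }
  // ...
}

// Right: always call the hook at the top level,
// and apply the condition to how you use the value instead
function FixedDemo(props) {
  const [name, setName] = useState("Agata");
  return <p>{props.greet ? `Hello ${name}` : null}</p>;
}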

Multiple useState Hooks

But what if we want to declare more than one property in state? No problem. Just use multiple calls to useState.

Here’s an example of a component with multiple useState Hooks:

import React, { useState } from "react";

export default function HookDemo(props) {
  const [name, setName] = useState("Agata");
  const [location, setLocation] = useState("Nairobi");

  function handleNameChange(e) {
    setName(e.target.value);
  }

  function handleLocationChange(e) {
    setLocation(e.target.value);
  }

  return (
    <section>
      <form autoComplete="off">
        <section>
          <label htmlFor="name">Name</label>
          <input
            type="text"
            name="name"
            id="name"
            value={name}
            onChange={handleNameChange}
          />
        </section>
        <section>
          <label htmlFor="location">Location</label>
          <input
            type="text"
            name="location"
            id="location"
            value={location}
            onChange={handleLocationChange}
          />
        </section>
      </form>
      <p>
        Hello {name} from {location}
      </p>
    </section>
  );
}

Quite simple, isn’t it? Doing the same thing in the class version would require you to use the this keyword even more.

Now, let’s move on to the next basic React Hook.

Continue reading
React Hooks: How to Get Started & Build Your Own
on SitePoint.

The Ultimate 10 UX Influencers to Follow

Original Source: https://www.webdesignerdepot.com/2020/10/the-ultimate-10-ux-influencers-to-follow/

The digital world is a place of constant change. Just as you get used to a new design trend, another one appears, forcing you to rethink the way that you approach each client project. 

As a web designer, it’s up to you to make sure that you have your finger on the pulse of the latest transformations in the industry. However, it can be challenging to know for sure which trends you should be taking seriously, and which you can simply ignore.

One option to refine and enhance your design journey is to pay attention to influencers. 

Influencers aren’t just there to guide customers into making purchasing decisions. These people are thought-leaders in their field. They spend all of their time tracking down ideas and concepts that really work. That way, they can maintain a successful reputation online.

Sourcing information and motivation from the following UX influencers could help you to create some truly amazing websites in 2020: 

1. Andrew Kucheriavy 

Andrew Kucheriavy is the phenomenal co-founder and CEO of a company named Intechnic. Andrew was one of the first people in the world to be given the “Master in User Experience” award. This means that he’s an excellent person to pay attention to if you want help understanding the ins and outs of user experience design. 

As one of the leading visionaries in UX, business strategy, and inbound marketing, Andrew has a lot of useful information to offer professionals and learners alike. Andrew is particularly active on Twitter, where he’s constantly sharing insights on design and marketing. You can also find input from Andrew on the Intechnic blog. 

2. Jeff Veen 

Another must-follow for designers who want to learn more about understanding their audience and their position in the marketplace, Jeff Veen is a leader in UX and product design. Veen got his start with the founding team for Wired, before he created the Adaptive Path company for UX consulting. Jeff Veen is also known for being responsible for various aspects of Google Analytics. 

Over the years, Jeff has expanded his knowledge in the design space, and mentored various companies, from WordPress to Medium. He also has a fantastic podcast that you can listen to for guidance when you’re on the go. 

3. Jared Spool 

Jared Spool has been tackling the most common issues of user experience since before the term “UX” was even a thing. Excelling in the design world since 1978, Jared has become one of the biggest and most recognizable names in the user experience environment. He’s the founder of the User Interface Engineering consulting firm. The company concentrates on helping companies to improve their site and product usability. 

Jared offers plenty of handy information to stock up on in his Twitter feed. Additionally, you can find plenty of helpful links to blogs and articles that he has published around the web on Twitter too. He’s followed by Hubgets, PICUS, and many other leading brands. Make sure that you check out his collection of industry-leading talks on UIE. 

4. Jen Romano Bergstrom

An experimental psychologist, User Experience Research coach, and UX specialist, Jen is one of the most impressive women in the web design world. She helped to create the unique experiences that customers can access on Instagram and Facebook. Additionally, she has a specialist knowledge of eye-tracking on the web. You can even check out Jen’s books on eye-tracking and usability testing. 

When she’s not writing books or researching user experience, Jen is blogging and tweeting about usability and researching new strategies in the web design space. It’s definitely worth keeping up with Jen on Twitter, particularly if you want to be the first to know about her upcoming seminars and learning sessions. 

5. Katie Dill 

Katie Dill is the former Director of Experience for Airbnb, so you know that she knows her way around some unique experiences. With expertise in working with companies that harness new technologies and UX design, Katie Dill is at the forefront of the user experience landscape. Dill attends various UX conferences throughout the year, and publishes a range of fantastic videos on YouTube.

You can find blogs and articles from Katie published on the web; however, you’ll be able to get the most input from her by following Katie on her Twitter account. 

6. Khoi Vinh 

Khoi Vinh is one of the most friendly and unique UX bloggers and influencers on the market today. He knows how to talk to people in a way that’s interesting and engaging – even about more complicated topics in UX design. Vinh is a principal designer at Adobe, and he has his own podcast called Wireframe. However, he still finds time to keep his followers engaged on Twitter.

Over the years, Khoi has worked as a Design Director for Etsy and the New York Times. Vinh also wrote a book called “Ordering Disorder” which examines grid principles in web design. According to Fast Company, he’s one of the most influential designers in America. Additionally, Khoi has a brilliant blog where you can check out all of his latest insights into UX design. 

7. Cory Lebson

Cory Lebson is a veteran in the world of web design and user experience. With more than 2 decades of experience in the landscape, Cory has his own dedicated UX consulting firm named Lebsontech. Lebson and his company concentrate on offering UX training, mentoring, and user experience strategy support to customers. Cory also regularly speaks on topics regarding UX career development, user experience, information architecture and more. 

Cory is an excellent influencer to follow on Twitter, where you’ll find him sharing various UX tricks and tips. You can also check out Cory’s handbook on UX careers, or find him publishing content on the Lebsontech blog too. 

8. Lizzie Dyson

Another amazing woman in the industry of UX, Lizzie Dyson is changing the experience landscape as we know it. Although she’s a relatively new figure in the web design world, she’s recognized world-wide for her amazing insights into the world of web development. Lizzie also helped to create a new group specifically for women that want to get involved in web design. 

The Ladies that UX monthly meet-up welcomes a community of women into the digital landscape, helping them to learn and expand their skills. Lizzie regularly publishes content online as part of Ladies that UX. Additionally, she appears on the Talk UX feed – an annual design and tech conference held for women around the world. 

9. Chris Messina 

Chris Messina is a product designer and a technical master who understands what it takes to avoid disappointing your users. With more than a decade of experience in the UX design landscape, Messina has worked for a variety of big-name brands, including Google and Uber. He is best known as the inventor of the hashtag!

Chris is a highly skilled individual who understands the unique elements that engage customers and keep people coming back for more on a website. You can see Chris speaking at a selection of leading conferences around the world. Check out some of his talks on YouTube or track down his schedule of upcoming talks here. Chris also has a variety of fantastic articles on Medium to read too. 

10. Elizabeth Churchill

Last, but definitely not least, Elizabeth Churchill is a UX leader with an outstanding background in psychology, research science, artificial intelligence, cognitive science, human-computer interaction and more. She knows her way around everything from cognitive economics to everyday web design. Churchill also acts as the director of UX for Google Material Design.

A powerhouse of innovation and information, Churchill has more than 50 patents to her name. She’s also the vice president of the Association for Computing Machinery. When she’s not sharing information on Twitter, Elizabeth writes a regular column in ACM’s Interactions magazine.

Who Are You Following in 2020?

Whether you’re looking for inspiration, guidance, or information, the right influencers can deliver some excellent insights into the world of web design. There are plenty of thought leaders out there in the realm of user experience that can transform the way that you approach your client projects. You might even discover a new favourite podcast to listen to, or an amazing series of videos that help you to harness new talents. 

Influencers are more than just tools for digital marketing; they’re an excellent source of guidance for growing UX designers too.

 

Featured image via Pexels.


Build a Node.js CRUD App Using React and FeathersJS

Original Source: https://www.sitepoint.com/crud-app-node-react-feathersjs/?utm_source=rss

An operator sat at an old-fashioned telephone switchboard

Building a modern project requires splitting the logic into front-end and back-end code. The reason behind this move is to promote code re-usability. For example, we may need to build a native mobile application that accesses the back-end API. Or we may be developing a module that will be part of a large modular platform.


The popular way of building a server-side API is to use Node.js with a library like Express or Restify. These libraries make creating RESTful routes easy. The problem with these libraries is that we’ll find ourselves writing a ton of repetitive code. We’ll also need to write code for authorization and other middleware logic.

To escape this dilemma, we can use a framework like Feathers to help us generate an API in just a few commands.

What makes Feathers amazing is its simplicity. The entire framework is modular and we only need to install the features we need. Feathers itself is a thin wrapper built on top of Express, where they’ve added new features — services and hooks. Feathers also allows us to effortlessly send and receive data over WebSockets.
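To give you a feel for what services and hooks look like before we generate anything, here's a minimal, hand-written sketch (purely illustrative; the contacts service we generate later is backed by Mongoose instead):

// Illustrative sketch of a plain Feathers service and hook
const feathers = require('@feathersjs/feathers');

const app = feathers();

// A service is an object exposing standard methods (find, get, create, ...)
app.use('messages', {
  async find() {
    return [{ id: 1, text: 'Hello from a Feathers service' }];
  },
  async create(data) {
    return { id: Date.now(), ...data };
  }
});

// A hook runs before or after any service method
app.service('messages').hooks({
  before: {
    create: [context => {
      context.data.createdAt = new Date();
      return context;
    }]
  }
});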

Prerequisites

To follow along with this tutorial, you’ll need the following things installed on your machine:

Node.js v12+ and an up-to-date version of npm. Check this tutorial if you need help getting set up.
MongoDB v4.2+. Check this tutorial if you need help getting set up.
Yarn package manager — installed using npm i -g yarn.

It will also help if you’re familiar with the following topics:

How to write modern JavaScript
Flow control in modern JavaScript (e.g. async … await)
The basics of React
The basics of REST APIs

Also, please note that you can find the completed project code on GitHub.

Scaffold the App

We’re going to build a CRUD contact manager application using Node.js, React, Feathers and MongoDB.

In this tutorial, I’ll show you how to build the application from the bottom up. We’ll kick-start our project using the popular Create React App tool.

You can install it like so:

npm install -g create-react-app

Then create a new project:

# scaffold a new react project
create-react-app react-contact-manager
cd react-contact-manager

# delete unnecessary files
rm src/logo.svg src/App.css src/serviceWorker.js

Use your favorite code editor and remove all the content in src/index.css. Then open src/App.js and rewrite the code like this:

import React from 'react';

const App = () => {
  return (
    <div>
      <h1>Contact Manager</h1>
    </div>
  );
};

export default App;

And in src/index.js, change the code like so:

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';

ReactDOM.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>,
  document.getElementById('root')
);

Run yarn start from the react-contact-manager directory to start the project. Your browser should automatically open http://localhost:3000 and you should see the heading “Contact Manager”. Quickly check the console tab to ensure that the project is running cleanly with no warnings or errors, and if everything is running smoothly, use Ctrl + C to stop the server.

Build the API Server with Feathers

Let’s proceed with generating the back-end API for our CRUD project using the feathers-cli tool:

# Install Feathers command-line tool
npm install @feathersjs/cli -g

# Create directory for the back-end code
# Run this command in the `react-contact-manager` directory
mkdir backend
cd backend

# Generate a feathers back-end API server
feathers generate app

? Do you want to use JavaScript or TypeScript? JavaScript
? Project name backend
? Description Contacts API server
? What folder should the source files live in? src
? Which package manager are you using (has to be installed globally)? Yarn
? What type of API are you making? REST, Realtime via Socket.io
? Which testing framework do you prefer? Mocha + assert
? This app uses authentication No
? Which coding style do you want to use? ESLint

# Ensure Mongodb is running
sudo service mongod start
sudo service mongod status

● mongod.service - MongoDB Database Server
   Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-09-18 14:42:12 CEST; 4s ago
     Docs: https://docs.mongodb.org/manual
 Main PID: 31043 (mongod)
   CGroup: /system.slice/mongod.service
           └─31043 /usr/bin/mongod --config /etc/mongod.conf

# Generate RESTful routes for Contact Model
feathers generate service

? What kind of service is it? Mongoose
? What is the name of the service? contacts
? Which path should the service be registered on? /contacts
? What is the database connection string? mongodb://localhost:27017/contactsdb

# Install email and unique field validation
yarn add mongoose-type-email

Let’s open backend/config/default.json. This is where we can configure our MongoDB connection parameters and other settings. Change the default paginate value to 50, since front-end pagination won’t be covered in this tutorial:

{
  "host": "localhost",
  "port": 3030,
  "public": "../public/",
  "paginate": {
    "default": 50,
    "max": 50
  },
  "mongodb": "mongodb://localhost:27017/contactsdb"
}

Open backend/src/models/contact.model.js and update the code as follows:

require('mongoose-type-email');

module.exports = function (app) {
  const modelName = 'contacts';
  const mongooseClient = app.get('mongooseClient');
  const { Schema } = mongooseClient;
  const schema = new Schema({
    name: {
      first: {
        type: String,
        required: [true, 'First Name is required']
      },
      last: {
        type: String,
        required: false
      }
    },
    email: {
      type: mongooseClient.SchemaTypes.Email,
      required: [true, 'Email is required']
    },
    phone: {
      type: String,
      required: [true, 'Phone is required'],
      validate: {
        validator: function(v) {
          return /^\+(?:[0-9] ?){6,14}[0-9]$/.test(v);
        },
        message: '{VALUE} is not a valid international phone number!'
      }
    }
  }, {
    timestamps: true
  });

  // This is necessary to avoid model compilation errors in watch mode
  // see https://mongoosejs.com/docs/api/connection.html#connection_Connection-deleteModel
  if (mongooseClient.modelNames().includes(modelName)) {
    mongooseClient.deleteModel(modelName);
  }

  return mongooseClient.model(modelName, schema);
};

Mongoose provides a timestamps option, which adds two new fields, createdAt and updatedAt, to each document. These two fields will be populated automatically whenever we create or update a record. We’ve also installed the mongoose-type-email plugin to perform email validation on the server.

Now, open backend/src/mongoose.js and change this line:

{ useCreateIndex: true, useNewUrlParser: true }

to:

{
  useCreateIndex: true,
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useFindAndModify: false,
}

This will squash a couple of annoying deprecation warnings.

Open a new terminal and execute yarn test inside the backend directory. You should have all the tests running successfully. Then, go ahead and execute yarn start to start the back-end server. Once the server has initialized, it should print ‘Feathers application started on localhost:3030’ to the console.

Launch your browser and access the URL http://localhost:3030/contacts. You should expect to receive the following JSON response:

{"total":0,"limit":50,"skip":0,"data":[]}

Test the API with Hoppscotch

Now let’s use Hoppscotch (formerly Postwoman) to confirm all of our endpoints are working properly.

First, let’s create a contact. This link will open Hoppscotch with everything set up to send a POST request to the /contacts endpoint. Make sure Raw input is set to on, then press the green Send button to create a new contact. The response should be something like this:

{
  "_id": "5f64832c20745f4f282b39f9",
  "name": {
    "first": "Tony",
    "last": "Stark"
  },
  "phone": "+18138683770",
  "email": "tony@starkenterprises.com",
  "createdAt": "2020-09-18T09:51:40.021Z",
  "updatedAt": "2020-09-18T09:51:40.021Z",
  "__v": 0
}

Now let’s retrieve our newly created contact. This link will open Hoppscotch ready to send a GET request to the /contacts endpoint. When you press the Send button, you should get a response like this:

{
  "total": 1,
  "limit": 50,
  "skip": 0,
  "data": [
    {
      "_id": "5f64832c20745f4f282b39f9",
      "name": {
        "first": "Tony",
        "last": "Stark"
      },
      "phone": "+18138683770",
      "email": "tony@starkenterprises.com",
      "createdAt": "2020-09-18T09:51:40.021Z",
      "updatedAt": "2020-09-18T09:51:40.021Z",
      "__v": 0
    }
  ]
}

We can show an individual contact in Hoppscotch by sending a GET request to http://localhost:3030/contacts/<_id>. The _id field will always be unique, so you’ll need to copy it out of the response you received in the previous step. This is the link for the above example. Pressing Send will show the contact.

We can update a contact by sending a PUT request to http://localhost:3030/contacts/<_id> and passing it the updated data as JSON. This is the link for the above example. Pressing Send will update the contact.

Finally we can remove our contact by sending a DELETE request to the same address — that is, http://localhost:3030/contacts/<_id>. This is the link for the above example. Pressing Send will delete the contact.
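If you’d rather exercise the same endpoints from code instead of Hoppscotch, a rough equivalent using the browser’s fetch API might look like the sketch below (run it from the browser console; the _id is just the example value from the responses above, so substitute your own):

// Rough fetch equivalents of the requests described above
const base = 'http://localhost:3030/contacts';
const id = '5f64832c20745f4f282b39f9'; // use the _id from your own response

// Update (PUT) the contact
await fetch(`${base}/${id}`, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    name: { first: 'Tony', last: 'Stark' },
    phone: '+18138683770',
    email: 'tony@starkenterprises.com'
  })
});

// Delete the contact
await fetch(`${base}/${id}`, { method: 'DELETE' });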

Hoppscotch is a very versatile tool and I encourage you to use it to satisfy yourself that your API is working as expected, before moving on to the next step.

Build the User Interface

Originally, I had wanted to use Semantic UI for the styling, but at the time of writing, it hasn’t been updated in over two years. Fortunately, the open-source community has managed to keep the project alive by creating a popular fork, Fomantic-UI, and this is what we’ll use. There are plans to merge one back into the other when active development of Semantic UI resumes.

We’ll also use Semantic UI React to quickly build our user interface without having to define lots of class names. Fortunately, this project has been kept up to date as well.

Finally, we’ll be using React Router to handle the routing.

With that out of the way, open a new terminal in the react-contact-manager directory and enter the following commands:

# Install Fomantic UI CSS and Semantic UI React
yarn add fomantic-ui-css semantic-ui-react

# Install React Router
yarn add react-router-dom

Update the project structure by adding the following directories and files to the src directory:

src
├── App.js
├── App.test.js
├── components                 #(new)
│   ├── contact-form.js        #(new)
│   └── contact-list.js        #(new)
├── index.css
├── index.js
├── pages                      #(new)
│   ├── contact-form-page.js   #(new)
│   └── contact-list-page.js   #(new)
├── serviceWorker.js
└── setupTests.js

From the terminal:

cd src
mkdir pages components
touch components/contact-form.js components/contact-list.js
touch pages/contact-form-page.js pages/contact-list-page.js

Let’s quickly populate the JavaScript files with some placeholder code.

The ContactList component will be a functional component (a plain JavaScript function which returns a React element):

// src/components/contact-list.js

import React from 'react';

const ContactList = () => {
  return (
    <div>
      <p>No contacts here</p>
    </div>
  );
}

export default ContactList;

For the top-level containers, I’m using pages. Let’s provide some code for the ContactListPage component:

// src/pages/contact-list-page.js

import React from 'react';
import ContactList from '../components/contact-list';

const ContactListPage = () => {
  return (
    <div>
      <h1>List of Contacts</h1>
      <ContactList />
    </div>
  );
};

export default ContactListPage;

The ContactForm component will need to be smart, since it’s required to manage its own state, specifically form fields. We’ll be doing this with React hooks:

// src/components/contact-form.js

import React from 'react';

const ContactForm = () => {
  return (
    <div>
      <p>Form under construction</p>
    </div>
  )
}

export default ContactForm;

Populate the ContactFormPage component with this code:

// src/pages/contact-form-page.js

import React from 'react';
import ContactForm from '../components/contact-form';

const ContactFormPage = () => {
  return (
    <div>
      <ContactForm />
    </div>
  );
};

export default ContactFormPage;

Now let’s create the navigation menu and define the routes for our App. App.js is often referred to as the “layout template” for a single-page application:

// src/App.js

import React from 'react';
import { NavLink, Route } from 'react-router-dom';
import { Container } from 'semantic-ui-react';
import ContactListPage from './pages/contact-list-page';
import ContactFormPage from './pages/contact-form-page';

const App = () => {
  return (
    <Container>
      <div className="ui two item menu">
        <NavLink className="item" activeClassName="active" exact to="/">
          Contacts List
        </NavLink>
        <NavLink
          className="item"
          activeClassName="active"
          exact
          to="/contacts/new"
        >
          Add Contact
        </NavLink>
      </div>
      <Route exact path="/" component={ContactListPage} />
      <Route path="/contacts/new" component={ContactFormPage} />
      <Route path="/contacts/edit/:_id" component={ContactFormPage} />
    </Container>
  );
};

export default App;

The above code uses React Router. If you’d like a refresher on this, please consult our tutorial.

Finally, update the src/index.js file with this code, where we import Fomantic-UI for styling and the BrowserRouter component for using the HTML5 history API, which will keep our app in sync with the URL:

// src/index.js

import React from 'react';
import ReactDOM from 'react-dom';
import { BrowserRouter } from 'react-router-dom';
import App from './App';
import 'fomantic-ui-css/semantic.min.css';
import './index.css';

ReactDOM.render(
  <BrowserRouter>
    <App />
  </BrowserRouter>,
  document.getElementById('root')
);

Make sure that the create-react-app server is running (if not, start it using yarn start), then visit http://localhost:3000. You should have a similar view to the screenshot below:

Screenshot of the empty list of contacts

Manage State with React Hooks and the Context API

Previously, one might have reached for Redux when tasked with managing state in a React app. However, as of React v16.8.0, it’s possible to manage global state in a React application using React Hooks and the Context API.

Using this new technique, you’ll write less code that’s easier to maintain. We’ll still use the Redux pattern, but just using React Hooks and the Context API.

Next, let’s look at hooking up the Context API.

Define a Context Store

This will be like our store for handling global state for contacts. Our state will consist of multiple variables, including a contacts array, a loading state, and a message object for storing error messages generated from the back-end API server.

In the src directory, create a context folder that contains a contact-context.js file:

cd src
mkdir context
touch context/contact-context.js

And insert the following code:

import React, { useReducer, createContext } from 'react';

export const ContactContext = createContext();

const initialState = {
  contacts: [],
  contact: {}, // selected or new
  message: {}, // { type: 'success|fail', title: 'Info|Error', content: 'lorem ipsum' }
};

function reducer(state, action) {
  switch (action.type) {
    case 'FETCH_CONTACTS': {
      return {
        ...state,
        contacts: action.payload,
      };
    }
    default:
      throw new Error();
  }
}

export const ContactContextProvider = props => {
  const [state, dispatch] = useReducer(reducer, initialState);
  const { children } = props;

  return (
    <ContactContext.Provider value={[state, dispatch]}>
      {children}
    </ContactContext.Provider>
  );
};

As you can see, we’re using the useReducer hook, which is an alternative to useState. useReducer is suitable for handling complex state logic involving multiple sub-values. We’re also using the Context API to allow sharing of data with other React components.
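As a quick preview of how a component can consume this store (the component below is only a sketch; the real list and form components are wired up in the rest of the article), useContext gives us back the [state, dispatch] pair provided above:

// Sketch: consuming the contact store from any child of ContactContextProvider
import React, { useContext, useEffect } from 'react';
import { ContactContext } from '../context/contact-context';

const ContactCount = () => {
  const [state, dispatch] = useContext(ContactContext);

  useEffect(() => {
    // Dispatch an action, Redux-style; the reducer above handles it
    dispatch({ type: 'FETCH_CONTACTS', payload: [] });
  }, [dispatch]);

  return <p>{state.contacts.length} contacts loaded</p>;
};

export default ContactCount;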

Continue reading
Build a Node.js CRUD App Using React and FeathersJS
on SitePoint.

Getting Started with React: A Beginner’s Guide

Original Source: https://www.sitepoint.com/getting-started-react-beginners-guide/?utm_source=rss

React is a remarkable JavaScript library that’s taken the development community by storm. In a nutshell, it’s made it easier for developers to build interactive user interfaces for web, mobile and desktop platforms. Today, thousands of companies worldwide are using React, including big names such as Netflix and Airbnb.

In this guide, I’ll introduce you to React and several of its fundamental concepts. We’ll get up and running quickly with the Create React App tool, then we’ll walk step-by-step through the process of building out a simple React application. By the time you’re finished, you’ll have a good overview of the basics and will be ready to take the next step on your React journey.

Prerequisites

Before beginning to learn React, it makes sense to have a basic understanding of HTML, CSS and JavaScript. It will also help to have a basic understanding of Node.js, as well as the npm package manager.

To follow along with this tutorial, you’ll need both Node and npm installed on your machine. To do this, head to the Node.js download page and grab the version you need (npm comes bundled with Node). Alternatively, you can consult our tutorial on installing Node using a version manager.

What is React?

React is a JavaScript library for building UI components. Unlike more complete frameworks such as Angular or Vue, React deals only with the view layer, so you’ll need additional libraries to handle things such as routing, state management, and so on. In this guide, we’ll focus on what React can do out of the box.

React applications are built using reusable UI components that can interact with each other. A React component can be a class-based component or a so-called function component. Class-based components are defined using ES6 classes, whereas function components are basic JavaScript functions. These tend to be defined using an arrow function, but they can also use the function keyword. Class-based components will implement a render function, which returns some JSX (React’s extension of regular JavaScript, used to create React elements), whereas function components will return JSX directly. Don’t worry if you’ve never heard of JSX, as we’ll take a closer look at this later on.

React components can further be categorized into stateful and stateless components. A stateless component’s work is simply to display data that it receives from its parent React component. If it receives any events or inputs, it can simply pass these up to its parent to handle.

A stateful component, on the other hand, is responsible for maintaining some kind of application state. This might involve data being fetched from an external source, or keeping track of whether a user is logged in or not. A stateful component can respond to events and inputs to update its state.

As a rule of thumb, you should aim to write stateless components where possible. These are easier to reuse, both across your application and in other projects.
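As a rough illustration (these two components are mine, not part of the app we’ll build in a moment), a stateless component simply renders the props it receives, while a stateful one owns and updates its own data:

import React from 'react';

// Stateless: just displays what its parent passes in
function Greeting(props) {
  return <p>Hello, {props.name}</p>;
}

// Stateful: owns a piece of state and updates it in response to events
class Toggle extends React.Component {
  state = { on: false };

  handleClick = () => {
    this.setState(prev => ({ on: !prev.on }));
  };

  render() {
    return (
      <button onClick={this.handleClick}>
        {this.state.on ? 'On' : 'Off'}
      </button>
    );
  }
}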

Understanding the Virtual DOM

Before we get to coding, you need to be aware that React uses a virtual DOM to handle page rendering. If you’re familiar with jQuery, you know that it can directly manipulate a web page via the HTML DOM. In a lot of cases, this direct interaction poses few if any problems. However, for certain cases, such as the running of a highly interactive, real-time web application, performance can take quite a hit.

To counter this, the concept of the Virtual DOM (an in-memory representation of the real DOM) was invented, and is currently being applied by many modern UI frameworks including React. Unlike the HTML DOM, the virtual DOM is much easier to manipulate, and is capable of handling numerous operations in milliseconds without affecting page performance. React periodically compares the virtual DOM and the HTML DOM. It then computes a diff, which it applies to the HTML DOM to make it match the virtual DOM. This way, React ensures that your application is rendered at a consistent 60 frames per second, meaning that users experience little or no lag.

Start a Blank React Project

As per the prerequisites, I assume you already have a Node environment set up, with an up-to-date version of npm (or optionally Yarn).

Next, we’re going to build our first React application using Create React App, an official utility script for creating single-page React applications.

Let’s install this now:

npm i -g create-react-app

Then use it to create a new React app.

create-react-app message-app

Depending on the speed of your internet connection, this might take a while to complete if this is your first time running the create-react-app command. A bunch of packages get installed along the way, which are needed to set up a convenient development environment — including a web server, compiler and testing tools.

If you’d rather not install too many packages globally, you can also use npx, which allows you to download and run a package without installing it:

npx create-react-app message-app

Running either of these commands should output something similar to the following:


Success! Created react-app at C:\Users\mike\projects\github\message-app
Inside that directory, you can run several commands:

yarn start
Starts the development server.

yarn build
Bundles the app into static files for production.

yarn test
Starts the test runner.

yarn eject
Removes this tool and copies build dependencies, configuration files
and scripts into the app directory. If you do this, you can’t go back!

We suggest that you begin by typing:

cd message-app
yarn start

Happy hacking!

Once the project setup process is complete, execute the following commands to launch your React application:

cd message-app
npm start

You should see the following output:

….

Compiled successfully!

You can now view react-app in the browser.

Local: http://localhost:3000
On Your Network: http://192.168.56.1:3000

Note that the development build is not optimized.
To create a production build, use yarn build.

Your default browser should launch automatically, and you should get a screen like this:

Create React App

Now that we’ve confirmed our starter React project is running without errors, let’s have a look at what’s happened beneath the hood. You can open the folder message-app using your favorite code editor. Let’s start with package.json file:

{
  "name": "message-app",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "@testing-library/jest-dom": "^4.2.4",
    "@testing-library/react": "^9.3.2",
    "@testing-library/user-event": "^7.1.2",
    "react": "^16.13.1",
    "react-dom": "^16.13.1",
    "react-scripts": "3.4.3"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  },
  "eslintConfig": {
    "extends": "react-app"
  },
  "browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version"
    ]
  }
}

As you can see, Create React App has installed several dependencies for us. The first three are related to the React Testing Library which (as you might guess) enables us to test our React code. Then we have react and react-dom, the core packages of any React application, and finally react-scripts, which sets up the development environment and starts a server (which you’ve just seen).

Then come four npm scripts, which are used to automate repetitive tasks:

start starts the dev server
build creates a production-ready version of your app
test runs the tests mentioned above
eject will expose your app’s development environment

This final command is worth elaborating on. The Create React App tool provides a clear separation between your actual code and the development environment. If you run npm run eject, Create React App will stop hiding what it does under the hood and dump everything into your project’s package.json file. While that gives you a finer grained control over your app’s dependencies, I wouldn’t recommend you do this, as you’ll have to manage all the complex code used in building and testing your project. If it comes to it, you can use customize-cra to configure your build process without ejecting.

Create React App also comes with support for ESLint (as can be seen from the eslintConfig property) and is configured to use the react-app ESLint rules.

The browserslist property of the package.json file allows you to specify a list of browsers that your app will support. This configuration is used by PostCSS tools and transpilers such as Babel.

One of the coolest features you’ll love about Create React App is that it provides hot reloading out of the box. This means any changes we make on the code will cause the browser to automatically refresh. Changes to JavaScript code will reload the page, while changes to CSS will update the DOM without reloading.

For now, let’s first stop the development server by pressing Ctrl + C. Once the server has stopped, delete everything except the serviceWorker.js and setupTests.js files in the src folder. If you’re interested in finding out what service workers do, you can learn more about them here.

Other than that, we’ll create all the code from scratch so that you can understand everything inside the src folder.

Introducing JSX Syntax

Defined by the React docs as a “syntax extension to JavaScript”, JSX is what makes writing your React components easy. Using JSX we can pass around HTML structures, or React elements as if they were standard JavaScript values.

Here’s a quick example:

import React from 'react';

export default function App() {
  const message = <h1>I'm a heading</h1>; // JSX FTW!
  return ( message );
}

Notice the line const message = <h1>I'm a heading</h1>;. That’s JSX. If you tried to run that in a web browser, it would give you an error. However, in a React app, JSX is interpreted by a transpiler, such as Babel, and rendered to JavaScript code that React can understand.

Note: you can learn more about JSX in our tutorial “An Introduction to JSX”.

In the past, React JSX files used to come with a .jsx file extension. Nowadays, the Create React App tool generates React files with a .js file extension. While the .jsx file extension is still supported, the maintainers of React recommend using .js. However, there’s an opposing group of React developers, including myself, who prefer to use the .jsx extension, for the following reasons:

In VS Code, Emmet works out of the box for .jsx files. You can, however, configure VS Code to treat all .js files as JavaScriptReact to make Emmet work on those files.
There are different linting rules for standard JavaScript and React JavaScript code.

However, for this tutorial, I’ll abide by what Create React App gives us and stick with the .js file ending.

Hello, World! in React

Let’s get down to writing some code. Inside the src folder of the newly created message-app, create an index.js file and add the following code:

import React from 'react';
import ReactDOM from 'react-dom';

ReactDOM.render(<h1>Hello World</h1>, document.getElementById('root'));

Start the development server again using npm start or yarn start. Your browser should display the following content:

Hello React

This is the most basic “Hello World” React example. The index.js file is the root of your project where React components will be rendered. Let me explain how the code works:

Line 1: The React package is imported to handle JSX processing.
Line 2: The ReactDOM package is imported to render the root React component.
Line 3: Call to the render function, passing in:

<h1>Hello World</h1>: a JSX element
document.getElementById(‘root’): an HTML container (the JSX element will be rendered here).

The HTML container is located in the public/index.html file. On line 31, you should see <div id="root"></div>. This is known as the root DOM node because everything inside it will be managed by the React virtual DOM.

While JSX does look a lot like HTML, there are some key differences. For example, you can’t use a class attribute, since it’s a JavaScript keyword. Instead, className is used in its place. Also, events such as onclick are spelled onClick in JSX. Let’s now modify our Hello World code:

const element = <div>Hello World</div>;
ReactDOM.render(element, document.getElementById('root'));

I’ve moved the JSX code out into a constant variable named element. I’ve also replaced the h1 tags with div tags. For JSX to work, you need to wrap your elements inside a single parent tag.

Take a look at the following example:

const element = <span>Hello,</span> <span>Jane</span>;

The above code won’t work. You’ll get a syntax error indicating you must enclose adjacent JSX elements in an enclosing tag. Something like this:

const element = <div>
  <span>Hello, </span>
  <span>Jane</span>
</div>;

How about evaluating JavaScript expressions in JSX? Simple. Just use curly braces like this:

const name = "Jane";
const element = <p>Hello, {name}</p>

… or like this:

const user = {
  firstName: 'Jane',
  lastName: 'Doe'
}
const element = <p>Hello, {user.firstName} {user.lastName}</p>

Update your code and confirm that the browser is displaying “Hello, Jane Doe”. Try out other examples such as { 5 + 2 }. Now that you’ve got the basics of working with JSX, let’s go ahead and create a React component.

Declaring React Components

The above example was a simplistic way of showing you how ReactDOM.render() works. Generally, we encapsulate all project logic within React components, which are then passed to the ReactDOM.render function.

Inside the src folder, create a file named App.js and type the following code:

import React, { Component } from 'react';

class App extends Component {

  render() {
    return (
      <div>
        Hello World Again!
      </div>
    )
  }
}

export default App;

Here we’ve created a React Component by defining a JavaScript class that’s a subclass of React.Component. We’ve also defined a render function that returns a JSX element. You can place additional JSX code within the <div> tags. Next, update src/index.js with the following code in order to see the changes reflected in the browser:

import React from 'react';
import ReactDOM from 'react-dom';

import App from './App';

ReactDOM.render(<App/>, document.getElementById('root'));

First we import the App component. Then we render App using JSX format, like so: <App/>. This is required so that JSX can compile it to an element that can be pushed to the React DOM. After you’ve saved the changes, take a look at your browser to ensure it’s rendering the correct message.

Next, we’ll look at how to apply styling.

Continue reading
Getting Started with React: A Beginner’s Guide
on SitePoint.

Astounding Examples Of Three.js In Action

Original Source: http://feedproxy.google.com/~r/1stwebdesigner/~3/Z7vNm8Ml_Yw/

Three.js is a cross-browser JavaScript library and API used to create and display animated 3D computer graphics in a web browser using WebGL. You can learn more about it here. In today’s post we are sharing some amazing examples of this library in action for your inspiration and learning. Let’s get to it!


Particles & Waves

A very nicely done animation that responds to the mouse position.

See the Pen three.js canvas – particles – waves by deathfang (@deathfang) on CodePen.

Procedurally Generated Minimal Environment

A rotating mountain terrain grid animation.

See the Pen Procedurally generated minimal environment by Marc Tannous (@marctannous) on CodePen.

Particle Head

Similar to the particles and waves animation above, this one shows particles in the 3D shape of a head that moves with your mouse.

See the Pen WebGL particle head by Robert Bue (@robbue) on CodePen.

The Cube

Try to not spend hours playing this addictive game!

See the Pen The Cube by Boris Šehovac (@bsehovac) on CodePen.

Three.js Particle Plane and Universe

Here’s another one to play with using mouse movements, clicks and arrow keys.

See the Pen Simple Particle Plane & Universe 🙂 by Unmesh Shukla (@unmeshpro) on CodePen.

Text Animation

A somewhat mind-boggling text animation that can also be controlled by your mouse.

See the Pen THREE Text Animation #1 by Szenia Zadvornykh (@zadvorsky) on CodePen.

Distortion Slider

Cool transition animation between slides. Click on the navigation dots to check it out.

See the Pen WebGL Distortion Slider by Ash Thornton (@ashthornton) on CodePen.

Torus Tunnel

This one will probably hurt your eyes if you look too long.

See the Pen Torus Tunnel by Mombasa (@Mombasa) on CodePen.

Three.js Round

This one is a beautifully captivating animation.

See the Pen three.js round 1 by alex baldwin (@cubeghost) on CodePen.

3D Icons

Nice animation of icons flying into becoming various words.

See the Pen Many Icons in 3D using Three.js by Yasunobu Ikeda a.k.a @clockmaker (@clockmaker) on CodePen.

WormHole

A great sci-fi effect featuring an infinite worm hole.

See the Pen WormHole by Josep Antoni Bover (@devildrey33) on CodePen.

Three.js + TweenMax Experiment

Another captivating animation that is difficult to walk away from.

See the Pen Three.js + TweenMax (Experiment) by Noel Delgado (@noeldelgado) on CodePen.

Three.js Point Cloud Experiment

Another particle-type animation that responds to mouse movements.

See the Pen Three Js Point Cloud Experiment by Sean Dempsey (@seanseansean) on CodePen.

Gravity

More 3D particles in a hypnotizing endless movement.

See the Pen Gravity (three.js / instancing / glsl) by Martin Schuhfuss (@usefulthink) on CodePen.

Rushing rapid in a forest by Three.js

For our last example, check out this somewhat simple geometric scene with an endlessly flowing waterfall.

See the Pen 33 | Rushing rapid in a forest by Three.js by Yiting Liu (@yitliu) on CodePen.

Are You Already Using Three.js In Your Projects?

Whether you are already using Three.js in your projects, are in the process of learning how to use it, or have been inspired to start learning it now, these examples should help you with further inspiration or to get a glimpse of how it can be done. Be sure to check out our other collections for more inspiration and insight into web design and development!


Windows 10 PowerToys: 8 Must-have Tools for Power Users

Original Source: https://www.hongkiat.com/blog/windows-10-powertoys-for-powerusers/

PowerToys was first released for Windows 95 as a collection of free tools for power users. It was later released for Windows XP, which offered new and better utilities for customizing the operating…

Visit hongkiat.com for full content.

Getting Started with Gatsby: Build Your First Static Site

Original Source: https://www.sitepoint.com/gatsby-guide/?utm_source=rss

Thinking about getting on the Jamstack bandwagon? If your answer is Yes, then Gatsby, one of the hottest platforms around, could be just what you’re looking for.

JAM stands for JavaScript, APIs, and Markup. In other words, while the dynamic parts of a site or app during the request/response cycle are taken care of by JavaScript in the client, all server-side processes take place using APIs accessed over HTTPS by JavaScript, and templated markup is prebuilt at deploy time, often using a static site generator. That’s the Jamstack. It’s performant, inexpensive to scale and offers better security and a smooth developer experience.

Why Use a Static Site

The static site model doesn’t fit all kinds of projects, but when it does it has a number of advantages. Here are a few of them.

Speed

The time it takes a website to load in the browser as the request is made for the first time is an important factor for user experience. Users get impatient very quickly, and things can only get worse on slow connections. A lack of database calls and the content being pre-generated make static sites really fast-loading.

A static site is made of static files which can be easily served all over the world using content delivery networks (CDNs). This makes it possible to leverage the data center that’s closer to where the request is being made.

Simplified Hosting

Hosting for static sites can be set up in a snap. Because there’s no database or server-side code, special languages or frameworks to support, all the hosting has to do is to serve static files.

Better Security

Without server-side code or a database, there isn’t anything for hackers to hack. There’s no hassle keeping the server up to date with security fixes and patches. All this means a lot more peace of mind when it comes to the security of your website.

Better Developer Experience

Setting up your static website with a hosting company like Netlify or Vercel is straightforward and, with continuous deployment, you just push your changes to your code repo of choice and they’re immediately reflected in the live version.

What Is Gatsby?

Gatsby is one of the most popular tools for building websites today. It’s more than a static site generator. In fact, it is a “React-based, open-source framework for creating websites and apps.” As Gatsby is built on top of React, all the React goodness is at your fingertips, which enables you to take advantage of this powerful library to build interactive components right into your static website. Gatsby is also built with GraphQL, so you can query data and display it on your website any way you want.
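For example, a component can pull data from Gatsby’s GraphQL layer with the useStaticQuery hook. Here’s a small sketch that queries the site title which the default starter defines in gatsby-config.js (the SiteTitle component itself is just an illustration):

import React from "react"
import { useStaticQuery, graphql } from "gatsby"

const SiteTitle = () => {
  // Query the data layer at build time
  const data = useStaticQuery(graphql`
    query {
      site {
        siteMetadata {
          title
        }
      }
    }
  `)

  return <h1>{data.site.siteMetadata.title}</h1>
}

export default SiteTitle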

Installing Gatsby and Creating Your Project

Gatsby is put together using webpack, but you don’t need to worry about complicated set-up maneuvers; Gatsby CLI will take care of everything for you.

For this tutorial, I’ll assume you have Node.js installed locally. If this isn’t the case, then head over to the Node home page and download the correct binaries for your system. Alternatively, you might consider using a version manager to install Node. We have a tutorial on using a version manager here.

Node comes bundled with npm, the Node package manager, which we’re going to use to install some of the libraries we’ll be using. You can learn more about using npm here.

You can check that both are installed correctly by issuing the following commands from the command line:

node -v
> 12.18.4

npm -v
> 6.14.8

The first thing you need to do is install the Gatsby CLI. This is an npm package that lets you create a Gatsby site in a few seconds. In your terminal, write:

npm install -g gatsby-cli

With the Gasby CLI installed on your machine, you can go ahead and create your website. I’ll call it sitepoint-demo, but you’re free to call it whatever you like. In your terminal, type:

gatsby new sitepoint-demo

Once Gatsby CLI has installed all the necessary files and configured them appropriately, you’ll have a fully functioning Gatsby website ready for you to customize and build upon. To access it, move into the sitepoint-demo folder:

cd sitepoint-demo

and start the local server:

gatsby develop

Finally, open a window on http://localhost:8000 where you’ll find your shiny Gatsby site looking something like this:

Gatsby default template

To quickly get a website up and running, Gatsby takes advantage of several official starter boilerplates as well as starters offered by the strong community around it. The site you’ve just created uses the Gatsby default starter, but you can find plenty more on the Gatsby website.

If you’d like to use a different starter from the default one, you need to specify its URL in the command line, following this pattern:

gatsby new [SITE_DIRECTORY_NAME] [URL_OF_STARTER_GITHUB_REPO]

For instance, let’s say you’d like a Material Design look and feel for your site. The quickest way to create it is to use Gatsby Material Starter by typing the following command in your terminal:

gatsby new sitepoint-demo https://github.com/Vagr9K/gatsby-material-starter

Great! Now let’s take a look at the files inside your brand new Gatsby project.

A Tour Inside Your Gatsby Site

A good place to start is the /src/ directory. Here’s what you’ll find.

pages Directory

The /src/pages/ directory contains your site’s pages. Each page is a React component. For instance, your site’s home-page code is located in /pages/index.js and looks like this:

import React from "react"
import { Link } from "gatsby"
import Layout from "../components/layout"
import Image from "../components/image"
import SEO from "../components/seo"

const IndexPage = () => (
<Layout>
<SEO title="Home" />
<h1>Hi people</h1>
<p>Welcome to your new Gatsby site.</p>
<p>Now go build something great.</p>
<div style={{ maxWidth: `300px`, marginBottom: `1.45rem` }}>
<Image />
</div>
<Link to="/page-2/">Go to page 2</Link>
<Link to="/using-typescript/">Go to "Using TypeScript"</Link>
</Layout>
)

export default IndexPage

That’s the typical code for a React component.

Components let you split the UI into independent, reusable pieces, and think about each piece in isolation. … Conceptually, components are like JavaScript functions. They accept arbitrary inputs (called “props”) and return React elements describing what should appear on the screen. — React docs.

components Directory

The /src/components/ directory is where you find general components for your website. The default starter comes with the following components: Header (header.js), Image (image.js), Layout (layout.js), and SEO (seo.js). You’re free to customize these components and add your own to the same directory.

Now you’re ready to start making changes to your new site and customize it to your taste.

How to Make Changes to Your Gatsby Site

Let’s have a go at modifying the message displayed on the home page. Open pages/index.js in your code editor and replace the two paragraphs below the <h1> tag with this paragraph:

<p>Welcome to my SitePoint Demo Site!</p>

Of course, you can add any text you want between the <p> tags.

As soon as you hit Save, your changes are displayed in the browser thanks to Gatsby’s hot reloading development environment. This means that when you develop a Gatsby site, pages are being watched in the background so that when you save your work, changes will be immediately visible without needing a page refresh or a browser restart.

Gatsby makes it easy to add new pages. For instance, let’s add an About page by creating a new file, about.js, inside the /pages/ directory and entering this content:

import React from "react"

const AboutPage = () => <h1>About Me</h1>

export default AboutPage

The code above is a React functional component which displays some text.

Save your work and navigate to http://localhost:8000/about and you should see the About Me <h1> title on your screen.

You can quickly link to your new About page from the home page using the Gatsby Link component. To see how it works, open index.js in your code editor and locate this bit of code just before the </Layout> closing tag:

<Link to="/page-2/">Go to page 2</Link>

Next, replace the value of the to property with /about/ and the Go to page 2 text with About:

<Link to="/about/">About</Link>

Save your work and you should see your new link on the screen. Click on the About link and instantly you’re on the About page.

Gatsby uses the Link component for internal links. For external links, you should use the good old <a> tag, like you would on a regular vanilla HTML website.
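
As a quick side-by-side sketch (the component name and the external URL here are purely illustrative):

import React from "react"
import { Link } from "gatsby"

const Nav = () => (
  <nav>
    {/* Internal page: handled client-side by Gatsby's router */}
    <Link to="/about/">About</Link>
    {/* External site: a plain anchor tag */}
    <a href="https://www.sitepoint.com/">SitePoint</a>
  </nav>
)

export default Nav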

Now, let’s experiment with your Gatsby site’s look and feel by changing a few styles.

Styling Your Gatsby Site

Gatsby offers a number of options for applying style rules to your website.

Global Stylesheet

A familiar choice is to use a global .css file which contains rules that apply to the entire website. To get started, add a /styles/ directory inside the /src/ directory and add a global.css file to it: /src/styles/global.css. You’re free to choose any name you like, both for the directory and the stylesheet file. Inside global.css, add the following CSS declaration, which is going to be applied to the entire website:

body {
background-color: yellow;
}

Now, save your work. Oops, nothing happened! Not yet, anyway. To make it work you need to take an extra step. Open gatsby-browser.js in your code editor and import the stylesheet you’ve just created:

import "./src/styles/global.css"

Head back to your browser and you should see that the background color of your website has turned bright yellow. Not ideal as a color choice, but it works!

Continue reading Getting Started with Gatsby: Build Your First Static Site on SitePoint.

Recreating the “100 Days of Poetry” Effect with Shader, ScrollTrigger and CSS Grid

Original Source: http://feedproxy.google.com/~r/tympanus/~3/Y5DrGJdHHmo/

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions that he records twice a month plus the demo he creates during the session!

For this episode of ALL YOUR HTML, I decided to replicate the effect seen on 100 DAYS OF POETRY with the technologies I know. So I did a fragment shader for the background effect, then coded a dummy CSS grid, and connected both of them with the power of GSAP’s ScrollTrigger plugin. This is of course a really simplified version of the website, but I hope you will learn something from the process of recreating this.

This coding session was streamed live on Oct 4, 2020.

Check out the live demo.

Original website: https://100daysofpoetry.gallery/

By Keita Yamada https://twitter.com/p5_keita

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Recreating the “100 Days of Poetry” Effect with Shader, ScrollTrigger and CSS Grid appeared first on Codrops.

Collective #625

Original Source: http://feedproxy.google.com/~r/tympanus/~3/_Wvt5il7qJc/

Inspirational Website of the Week: Celebrating 10 Years with Blind Barber

A design celebration with compelling layouts and individual effects. Our pick this week.

Get inspired

Our Sponsor
Bring Your Vision To Life With a Design-Oriented Website Builder

Turn your vision into a stunning reality with Elementor – a super-efficient, design-oriented WordPress website builder.

Try for Free

The failed promise of Web Components

Lea Verou explains why the potential of Web Components got lost in practice and shows a way this could be fixed.

Read it

Emulate vision deficiencies in DevTools

Addy Osmani shows how you can emulate vision deficiencies in Chrome DevTools to see how users who experience color blindness or blurred vision might see your site.

Read it

a11yresources

a11yresources is a growing list of more than 200 hand-curated accessibility plugins, tools, articles, case studies, design patterns, assistive technologies, design resources and accessibility standards.

Check it out

Declarative Shadow DOM

Learn about Declarative Shadow DOM, a new way to implement and use Shadow DOM directly in HTML.

Read it

Keyboard Simulator

Design and test virtual keyboards with this wonderful Three.js based tool.

Check it out

Accessible Animation

Una Kravets demonstrates how to use the “prefers-reduced-motion” media query to progressively enhance animation on your website and shows how to build a simple “reduce animation” switch.

Watch it

urlcat

A URL builder library for JavaScript that makes building URLs very convenient and prevents common mistakes.

Check it out

IconPark

Transform an SVG icon into multiple themes, and generate React icons or Vue icons easily.

Check it out

Galaxy Simulation

A colliding galaxy simulation using direct and Barnes-Hut algorithms and built with Rust, ThreeJS, and WebAssembly. Check out the GitHub repo.

Check it out

HitCount

HitCount lets you add a hit counter as simply as adding an image to your website.

Check it out

3 things about CSS variables you might not know

Some interesting things about CSS variables that will come in handy.

Check it out

Introducing visx from Airbnb

Read about the collection of expressive, low-level visualization primitives for React from Airbnb.

Read it

Dreaming of Jupiter

A beautiful Three.js demo by isladjan.

Check it out

Ballpoint

Ballpoint is an experimental new vector graphics editor. It’s available as a free-to-use web application.

Check it out

Modfy

A purely browser-based, privacy-first video tool capable of performing tasks like conversion and compression without uploading your files.

Check it out

Checkboxes make excellent buttons

Christian Heilmann explains why he likes to use checkboxes as buttons.

Read it

Detached window memory leaks

Learn how to find and fix tricky memory leaks caused by detached windows.

Check it out

From Our Blog
Creating Mirrors in React-Three-Fiber and Three.js

A brief walk-through of how to create a mirror scene based on the react-three-fiber ecosystem.

Check it out

The post Collective #625 appeared first on Codrops.

Understanding TypeScript Generics

Original Source: https://smashingmagazine.com/2020/10/understanding-typescript-generics/

In this article, we’ll be learning the concept of Generics in TypeScript and examining how Generics can be used to write modular, decoupled, and reusable code. Along the way, we’ll briefly discuss how they fit into better testing patterns, approaches to error handling, and domain/data-access separation.

A Real-World Example

I want to enter into the world of Generics not by explaining what they are, but rather by providing an intuitive example for why they are useful. Suppose you’ve been tasked with building a feature-rich dynamic list. You could call it an array, an ArrayList, a List, a std::vector, or whatever, depending upon your language background. Perhaps this data structure must have in-built or swappable buffer systems as well (like a circular buffer insertion option). It will be a wrapper around the normal JavaScript array so that we can work with our structure instead of plain arrays.

The immediate issue you’ll come across is that of constraints imposed by the type system. You can’t, at this point, accept any type you want into a function or method in a nice clean way (we’ll revisit this statement later).

The only obvious solution is to replicate our data structure for all different types:

const intList = IntegerList.create();
intList.add(4);

const stringList = StringList.create();
stringList.add('hello');

const userList = UserList.create();
userList.add(new User('Jamie'));

The .create() syntax here might look arbitrary, and indeed, new SomethingList() would be more straightforward, but you’ll see why we use this static factory method later. Internally, the create method calls the constructor.

This is terrible. We have a lot of logic within this collection structure, and we’re blatantly duplicating it to support different use cases, completely breaking the DRY Principle in the process. When we decide to change our implementation, we’ll have to manually propagate/reflect those changes across all structures and types we support, including user-defined types, as in the latter example above. Suppose the collection structure itself was 100 lines long — it would be a nightmare to maintain multiple different implementations where the only difference between them is types.

An immediate solution that might come to mind, especially if you have an OOP mindset, is to consider a root “supertype” if you will. In C#, for example, there exists a type by the name of object, which is an alias for the System.Object class. In C#’s type system, all types, be they predefined or user-defined and be they reference types or value types, inherit either directly or indirectly from System.Object. This means that any value can be assigned to a variable of type object (without getting into stack/heap and boxing/unboxing semantics).

In this case, our issue appears solved. We can just use a type like any and that will allow us to store anything we want within our collection without having to duplicate the structure, and indeed, that’s very true:

const intList = AnyList.create();
intList.add(4);

const stringList = AnyList.create();
stringList.add('hello');

const userList = AnyList.create();
userList.add(new User('Jamie'));

Let’s look at the actual implementation of our list using any:

class AnyList {
private values: any[] = [];

private constructor (values: any[]) {
this.values = values;

// Some more construction work.
}

public add(value: any): void {
this.values.push(value);
}

public where(predicate: (value: any) => boolean): AnyList {
return AnyList.from(this.values.filter(predicate));
}

public select(selector: (value: any) => any): AnyList {
return AnyList.from(this.values.map(selector));
}

public toArray(): any[] {
return this.values;
}

public static from(values: any[]): AnyList {
// Perhaps we perform some logic here.
// …

return new AnyList(values);
}

public static create(values?: any[]): AnyList {
return new AnyList(values ?? []);
}

// Other collection functions.
// …
}

All the methods are relatively simple, but we’ll start with the constructor. Its visibility is private, for we’ll assume that our list is complex and we wish to disallow arbitrary construction. We also may want to perform logic prior to construction, so for these reasons, and to keep the constructor pure, we delegate these concerns to static factory/helper methods, which is considered a good practice.

The static methods from and create are provided. The from method accepts an array of values, performs custom logic, and then uses them to construct the list. The create static method takes an optional array of values in case we want to seed our list with initial data. The “nullish coalescing operator” (??) is used to construct the list with an empty array in the event that one is not provided. If the left-hand operand is null or undefined, we fall back to the right-hand operand; in this case, values is optional, and thus may be undefined. You can learn more about nullish coalescing at the relevant TypeScript documentation page.

I’ve also added a select and a where method. These methods just wrap JavaScript’s map and filter respectively. select permits us to project an array of elements into a new form based on the provided selector function, and where permits us to filter out certain elements based on the provided predicate function. The toArray method simply converts the list to an array by returning the array reference we hold internally.

Finally, suppose that the User class contains a getName method which returns a name and also accepts a name as its first and only constructor argument.
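
A minimal sketch of such a User class might look like this (the exact implementation is assumed rather than shown here):

class User {
  // The name is the first and only constructor argument.
  constructor (private name: string) {}

  public getName(): string {
    return this.name;
  }
}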

Note: Some readers will recognize Where and Select from C#’s LINQ, but keep in mind, I’m trying to keep this simple, thus I’m not worried about laziness or deferred execution. Those are optimizations that could and should be made in real life.

Furthermore, as an interesting note, I want to discuss the meaning of “predicate”. In Discrete Mathematics and Propositional Logic, we have the concept of a “proposition”. A proposition is some statement that can be considered true or false, such as “four is divisible by two”. A “predicate” is a proposition that contains one or more variables, thus the truthfulness of the proposition depends upon that of those variables. You can think about it like a function, such as P(x) = x is divisible by two, for we need to know the value of x to determine if the statement is true or false. You can learn more about predicate logic here.
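
In code, a predicate is simply a function from a value to a boolean. For example:

// P(x) = "x is divisible by two"
const isDivisibleByTwo = (x: number): boolean => x % 2 === 0;

isDivisibleByTwo(4); // true
isDivisibleByTwo(7); // false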

There are a few issues that are going to arise from the use of any. The TypeScript compiler knows nothing about the elements inside the list/internal array, thus it won’t provide any help inside of where or select or when adding elements:

// Providing seed data.
const userList = AnyList.create([new User('Jamie')]);

// This is fine and expected.
userList.add(new User('Tom'));
userList.add(new User('Alice'));

// This is an acceptable input to the TS Compiler,
// but it's not what we want. We'll definitely
// be surprised later to find strings in a list
// of users.
userList.add('Hello, World!');

// Also acceptable. We have a large tuple
// at this point rather than a homogeneous array.
userList.add(0);

// This compiles just fine despite the spelling mistake (extra 's'):
// The type of `users` is any.
const users = userList.where(user => user.getNames() === 'Jamie');

// Property `ssn` doesn't even exist on a `user`, yet it compiles.
users.toArray()[0].ssn = '000000000';

// `usersWithId` is, again, just `any`.
const usersWithId = userList.select(user => ({
id: newUuid(),
name: user.getName()
}));

// Oops, it's "id" not "ID", but TS doesn't help us.
// We compile just fine.
console.log(usersWithId.toArray()[0].ID);

Since TypeScript only knows that the type of all array elements is any, it can’t help us at compile time with the non-existent properties or the getNames function that doesn’t even exist, thus this code will result in multiple unexpected runtime errors.

To be honest, things are beginning to look quite dismal. We tried implementing our data structure for each concrete type we wished to support, but we quickly realized that that was not in any way maintainable. Then, we thought we were getting somewhere by using any, which is analogous to depending upon a root supertype in an inheritance chain from which all types derive, but we concluded that we lose type-safety with that method. What’s the solution, then?

It turns out that, at the beginning of the article, I lied (kind of):

“You can’t, at this point, accept any type you want into a function or method in a nice clean way.”

You actually can, and that’s where Generics come in. Notice I said “at this point”, for I was assuming we didn’t know about Generics at that point in the article.

I’ll start by showing the full implementation of our List structure with Generics, and then we’ll take a step back, discuss what they actually are, and determine their syntax more formally. I’ve named it TypedList to differentiate from our earlier AnyList:

class TypedList<T> {
private values: T[] = [];

private constructor (values: T[]) {
this.values = values;
}

public add(value: T): void {
this.values.push(value);
}

public where(predicate: (value: T) => boolean): TypedList<T> {
return TypedList.from<T>(this.values.filter(predicate));
}

public select<U>(selector: (value: T) => U): TypedList<U> {
return TypedList.from<U>(this.values.map(selector));
}

public toArray(): T[] {
return this.values;
}

public static from<U>(values: U[]): TypedList<U> {
// Perhaps we perform some logic here.
// …

return new TypedList<U>(values);
}

public static create<U>(values?: U[]): TypedList<U> {
return new TypedList<U>(values ?? []);
}

// Other collection functions.
// ..
}

Let’s try making the same mistakes as earlier once again:

// Here's the magic. TypedList will operate on objects
// of type User due to the <User> syntax.
const userList = TypedList.create<User>([new User('Jamie')]);

// The compiler expects this.
userList.add(new User('Tom'));
userList.add(new User('Alice'));

// Argument of type '0' is not assignable to parameter
// of type 'User'. ts(2345)
userList.add(0);

// Property 'getNames' does not exist on type 'User'.
// Did you mean 'getName'? ts(2551)
// Note: TypeScript infers the type of users to be
// TypedList<User>
const users = userList.where(user => user.getNames() === 'Jamie');

// Property 'ssn' does not exist on type 'User'. ts(2339)
users.toArray()[0].ssn = '000000000';

// TypeScript infers usersWithId to be of type
// TypedList<{ id: string, name: string }>
const usersWithId = userList.select(user => ({
id: newUuid(),
name: user.getName()
}));

// Property 'ID' does not exist on type '{ id: string; name: string; }'.
// Did you mean 'id'? ts(2551)
console.log(usersWithId.toArray()[0].ID)

As you can see, the TypeScript compiler is actively aiding us with type-safety. All of those comments are errors I receive from the compiler when attempting to compile this code. Generics have permitted us to specify a type that we wish to permit our list to operate on, and from that, TypeScript can tell the types of everything, all the way down to the properties of individual objects within the array.

The types we provide can be as simple or complex as we want them to be. Here, you can see we can pass both primitives and complex interfaces. We could also pass other arrays, or classes, or anything:

const numberList = TypedList.create<number>();
numberList.add(4);

const stringList = TypedList.create<string>();
stringList.add('Hello, World');

// Example of a complex type
interface IAircraft {
apuStatus: ApuStatus;
inboardOneRPM: number;
altimeter: number;
tcasAlert: boolean;

pushBackAndStart(): Promise<void>;
ilsCaptureGlidescope(): boolean;
getFuelStats(): IFuelStats;
getTCASHistory(): ITCASHistory;
}

const aircraftList = TypedList.create<IAircraft>();
aircraftList.add(/* … */);

// Aggregate and generate report:
const stats = aircraftList.select(a => ({
...a.getFuelStats(),
...a.getTCASHistory()
}));

The peculiar uses of T and U and <T> and <U> in the TypedList<T> implementation are examples of Generics in action. Having fulfilled our directive of constructing a type-safe collection structure, we’ll leave this example behind for now, and we’ll return to it once we understand what Generics actually are, how they work, and their syntax. When I’m learning a new concept, I always like to begin by seeing a complex example of the concept in use, so that when I start learning the basics, I can make connections between the basic topics and the existing example I have in my head.

What Are Generics?

A simple manner by which to understand Generics is to consider them as relatively analogous to placeholders or variables, but for types. That’s not to say that you can perform the same operations upon a generic type placeholder as you can upon a variable, but a generic type variable could be thought of as some placeholder that represents a concrete type that will be used in the future. That is, using Generics is a method of writing programs in terms of types that are to be specified at a later point in time. This is useful because it allows us to build data structures that are reusable across the different types they operate upon (that is, type-agnostic).

That’s not particularly the best of explanations, so to put it in simpler terms, as we’ve seen, it is common in programming that we might need to build a function/class/data structure that will operate upon a certain type, but it is equally common that such a data structure needs to work across a variety of different types as well. If we were stuck in a position where we had to statically declare the concrete type upon which a data structure would operate at the time when we design the data structure (at compile time), we’d very quickly find that we need to rebuild those structures in an almost-exactly-the-same manner for every type we wish to support, as we saw in the examples above.

Generics help us to solve this problem by permitting us to defer the requirement for a concrete type until it is actually known.

Generics In TypeScript

We now have somewhat of an organic idea for why Generics are useful and we’ve seen a slightly complicated example of them in practice. For most, the TypedList<T> implementation probably already makes a lot of sense, especially if you come from a statically-typed language background, but I can remember having a difficult time understanding the concept when I was first learning, thus I want to build up to that example by starting with simple functions. Concepts related to abstraction in software can be notoriously difficult to internalize, so if the notion of Generics has not quite clicked yet, that’s completely fine, and hopefully, by this article’s closing, the idea will be at least somewhat intuitive.

To build up to being able to understand that example, let’s work up from simple functions. We’ll start with the “Identity Function”, which is what most articles, including the TypeScript documentation itself, like to use.

An “Identity Function”, in mathematics, is a function that maps its input directly to its output, such as f(x) = x. What you put in is what you get out. We can represent that, in JavaScript, as:

function identity(input) {
return input;
}

Or, more tersely:

const identity = input => input;

Trying to port this to TypeScript brings back the same type system issues we saw before. The solutions are typing with any, which we know is seldom a good idea, duplicating/overloading the function for each type (breaks DRY), or using Generics.

With the latter option, we can represent the function as follows:

// ES5 Function
function identity<T>(input: T): T {
return input;
}

// Arrow Function
const identity = <T>(input: T): T => input;

console.log(identity<number>(5)); // 5
console.log(identity<string>('hello')); // hello

The <T> syntax here declares this function as Generic. Just like a function allows us to pass an arbitrary input parameter into its argument list, with a Generic function, we can pass an arbitrary type parameter as well.

The <T> part of the signature of identity<T>(input: T): T and <T>(input: T): T in both cases declares that the function in question will accept one generic type parameter named T. Just like how variables can be of any name, so can our Generic placeholders, but it’s a convention to use a capital letter “T” (“T” for “Type”) and to move down the alphabet as needed. Remember, T is a type, so we also state that we will accept one function argument of name input with a type of T and that our function will return a type of T. That’s all the signature is saying. Try letting T = string in your head — replace all the Ts with string in those signatures. See how nothing all that magical is going on? See how similar it is to the non-generic way you use functions every day?

Keep in mind what you already know about TypeScript and function signatures. All we’re saying is that T is an arbitrary type that the user will provide when calling the function, just like input is an arbitrary value that the user will provide when calling the function. In this case, input must be whatever that type T is when the function is called in the future.

Next, in the “future”, in the two log statements, we “pass in” the concrete type we wish to use, just like we do a variable. Notice the switch in verbiage here — in the initial form of the <T> signature, when declaring our function, it is generic — that is, it works on generic types, or types to be specified later. That’s because we don’t know what type the caller will wish to use when we actually write the function. But, when the caller calls the function, they know exactly what type(s) they want to work with, which are string and number in this case.

You can imagine the idea of having a function declared this way in a third-party library — the library author has no idea what types the developers who use the lib will want to use, so they make the function generic, essentially deferring the need for concrete types until they are actually known.

I want to stress that you should think of this process in a similar fashion that you do the notion of passing a variable to a function for the purposes of gaining a more intuitive understanding. All we’re doing now is passing a type too.

At the point where we called the function with the number parameter, the original signature, for all intents and purposes, could be thought of as identity(input: number): number. And, at the point where we called the function with the string parameter, again, the original signature might just as well have been identity(input: string): string. You can imagine that, when making the call, every generic T gets replaced with the concrete type you provide at that moment.
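
As a sketch, the two calls behave as if these specialized (and purely illustrative) signatures existed:

// identity<number>(5) behaves as if declared:
// function identity(input: number): number
const five = identity<number>(5); // `five` is a number

// identity<string>('hello') behaves as if declared:
// function identity(input: string): string
const greeting = identity<string>('hello'); // `greeting` is a string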

Exploring Generic Syntax

There are different syntaxes and semantics for specifying generics in the context of ES5 Functions, Arrow Functions, Type Aliases, Interfaces, and Classes. We’ll explore those differences in this section.

Exploring Generic Syntax — Functions

You’ve seen a few examples of generic functions by now, but it’s important to note that a generic function can accept more than one generic type parameter, just like it can variables. You could choose to ask for one, or two, or three, or however many types you want, all separated by commas (again, just like input arguments).

This function accepts three inputs of three (potentially different) types and randomly returns one of them:

function randomValue<T, U, V>(
one: T,
two: U,
three: V
): T | U | V {
// This is a tuple if you’re not familiar.
const options: [T, U, V] = [
one,
two,
three
];

const rndNum = getRndNumInInclusiveRange(0, 2);

return options[rndNum];
}

// Calling the function.
// `value` has type `string | number | IAircraft`
const value = randomValue<
string,
number,
IAircraft
>(
myString,
myNumber,
myAircraft
);

You can also see that the syntax is slightly different depending on whether we use an ES5 Function or an Arrow Function, but both declare the type parameters in the signature:

const randomValue = <T, U, V>(
one: T,
two: U,
three: V
): T | U | V => {
// This is a tuple if you’re not familiar.
const options: [T, U, V] = [
one,
two,
three
];

const rndNum = getRndNumInInclusiveRange(0, 2);

return options[rndNum];
}

Keep in mind that there is no “uniqueness constraint” forced on the types — you could pass in any combination you wish, such as two strings and a number, for instance. Additionally, just like the input arguments are “in scope” for the body of the function, so are the generic type parameters. The former example demonstrates that we have full access to T, U, and V from within the body of the function, and we used them to declare a local 3-tuple.
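
For instance, nothing stops us from instantiating two of the three type parameters with the same type. A quick sketch, reusing the randomValue function above:

// T = string, U = string, V = number.
// `result` is inferred as `string | number`.
const result = randomValue<string, string, number>('yes', 'no', 42);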

You can imagine that these generics operate over a certain “context” or within a certain “lifetime”, and that depends on where they’re declared. Generics on functions are in scope within the function signature and body (and closures created by nested functions), while generics declared on a class or interface or type alias are in scope for all members of the class or interface or type alias.

The notion of generics on functions is not limited to “free functions” or “floating functions” (functions not attached to an object or class, a C++ term); they can also be used on functions attached to other structures.

We can place that randomValue in a class and we can call it just the same:

class Utils {
public randomValue<T, U, V>(
one: T,
two: U,
three: V
): T | U | V {
// …
}

// Or, as an arrow function:
public randomValue = <T, U, V>(
one: T,
two: U,
three: V
): T | U | V => {
// …
}
}

We could also place a definition within an interface:

interface IUtils {
randomValue<T, U, V>(
one: T,
two: U,
three: V
): T | U | V;
}

Or within a type alias:

type Utils = {
randomValue<T, U, V>(
one: T,
two: U,
three: V
): T | U | V;
}

Just like before, these generic type parameters are “in scope” for that particular function — they are not class, or interface, or type alias-wide. They live only within the particular function upon which they’re specified. To share a generic type across all members of a structure, you must annotate the structure’s name itself, as we’ll see below.

Exploring Generic Syntax — Type Aliases

With Type Aliases, the generic syntax is used on the alias’s name.

For instance, some “action” function that accepts a value, possibly mutates that value, but returns void could be written as:

type Action<T> = (val: T) => void;

Note: This should be familiar to C# developers who understand the Action<T> delegate.
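
A quick usage sketch (the logging callback here is purely illustrative):

// An Action<string> must accept a string and return nothing.
const logMessage: Action<string> = (message) => {
  console.log(`LOG: ${message}`);
};

logMessage('Server started');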

Or, a callback function that accepts both an error and a value could be declared as such:

type CallbackFunction<T> = (err: Error, data: T) => void;

const usersApi = {
get(uri: string, cb: CallbackFunction<User>) {
/// …
}
}

With our knowledge of function generics, we could go further and make the function on the API object generic too:

type CallbackFunction<T> = (err: Error, data: T) => void;

const api = {
// `T` is available for use within this function.
get<T>(uri: string, cb: CallbackFunction<T>) {
/// …
}
}

Now, we’re saying that the get function accepts some generic type parameter, and whatever that is, CallbackFunction receives it. We’ve essentially “passed” the T that goes into get as the T for CallbackFunction. Perhaps this would make more sense if we change the names:

type CallbackFunction<TData> = (err: Error, data: TData) => void;

const api = {
get<TResponse>(uri: string, cb: CallbackFunction<TResponse>) {
// …
}
}

Prefixing type params with T is merely a convention, just like prefixing interfaces with I or member variables with _. What you can see here is that CallbackFunction accepts some type (TData) which represents the data payload available to the function, while get accepts a type parameter that represents the HTTP Response data type/shape (TResponse). The HTTP Client (api), similar to Axios, uses whatever that TResponse is as the TData for CallbackFunction. This allows the API caller to select the data type they’ll be receiving back from the API (suppose somewhere else in the pipeline we have middleware that parses the JSON into a DTO).

If we wanted to take this a little further, we could modify the generic type parameters on CallbackFunction to accept a custom error type as well:

type CallbackFunction<TData, TError> = (err: TError, data: TData) => void;

And, just like you can make function arguments optional, you can with type parameters too. In the event that the user does not provide an error type, we’ll set it to the error constructor by default:

type CallbackFunction<TData, TError = Error> = (err: TError, data: TData) => void;

With this, we can now specify a callback function type in multiple ways:

const apiOne = {
// Error is used by default for CallbackFunction.
get<TResponse>(uri: string, cb: CallbackFunction<TResponse>) {
// …
}
};

apiOne.get<string>('uri', (err: Error, data: string) => {
// …
});

const apiTwo = {
// Override the default and use HttpError instead.
get<TResponse>(uri: string, cb: CallbackFunction<TResponse, HttpError>) {
// …
}
};

apiTwo.get<string>('uri', (err: HttpError, data: string) => {
// …
});

This idea of default type parameters applies across functions, classes, interfaces, and so on — it’s not just limited to type aliases. In all the examples we’ve seen so far, we could have assigned any type parameter we wanted to a default value. Type Aliases, just like functions, can take as many generic type parameters as you wish.

Exploring Generic Syntax — Interfaces

As you’ve seen, a generic type parameter can be provided to a function on an interface:

interface IUselessFunctions {
// Not generic
printHelloWorld();

// Generic
identity<T>(t: T): T;
}

In this case, T lives only for the identity function as its input and return type.

We can also make a type parameter available to all members of an interface, just like with classes and type aliases, by specifying that the interface itself accepts a generic. We’ll talk about the Repository Pattern a little later when we discuss more complex use cases for generics, so it’s alright if you’ve never heard of it. The Repository Pattern permits us to abstract away our data storage so as to make our business logic persistence-agnostic. If we wished to create a generic repository interface that operates on unknown entity types, we could type it as follows:

interface IRepository<T> {
add(entity: T): Promise<void>;
findById(id: string): Promise<T>;
updateById(id: string, updated: T): Promise<void>;
removeById(id: string): Promise<void>;
}

Note: There are many different thoughts around Repositories, from Martin Fowler’s definition to the DDD Aggregate definition. I’m merely attempting to show a use case for generics, so I’m not too concerned with being fully correct implementation-wise. There’s definitely something to be said for not using generic repositories, but we’ll talk about that later.

As you can see here, IRepository is an interface that contains methods for storing and retrieving data. It operates on some generic type parameter named T, and T is used as input to add and updateById, as well as the promise resolution result of findById.

Keep in mind that there’s a very big difference between accepting a generic type parameter on the interface name as opposed to allowing each function itself to accept a generic type parameter. The former, as we’ve done here, ensures that each function within the interface operates on the same type T. That is, for an IRepository<User>, every method that uses T in the interface is now working on User objects. With the latter method, each function would be allowed to work with whatever type it wants. It would be very peculiar to only be able to add Users to the Repository but be able to receive Policies or Orders back, for instance, which is the potential situation we’d find ourselves in if we couldn’t enforce that the type is uniform across all methods.
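
To make the contrast concrete, here is a hypothetical per-method version in which nothing ties the T accepted by add to the T returned by findById, so the uniformity guarantee is lost:

// Hypothetical counter-example: each method declares its own type parameter.
interface ILooseRepository {
  add<T>(entity: T): Promise<void>;
  findById<T>(id: string): Promise<T>;
}

// With the interface-wide parameter instead, IRepository<User> fixes T = User
// for every method: we can only add Users and we only ever get Users back.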

A given interface can contain not only a shared type, but also types unique to its members. For instance, if we wanted to mimic an array, we could type an interface like this:

interface IArray<T> {
forEach(func: (elem: T, index: number) => void): this;
map<U>(func: (elem: T, index: number) => U): IArray<U>;
}

In this case, both forEach and map have access to T from the interface name. As stated, you can imagine that T is in scope for all members of the interface. Despite that, nothing stops individual functions within from accepting their own type parameters as well. The map function does, with U. Now, map has access to both T and U. We had to name the parameter a different letter, like U, because T is already taken and we don’t want a naming collision. Quite like its name, map will “map” elements of type T within the array to new elements of type U. It maps Ts to Us. The return value of this function is the interface itself, now operating on the new type U, so that we can somewhat mimic JavaScript’s fluent chainable syntax for arrays.

We’ll see an example of the power of Generics and Interfaces shortly when we implement the Repository Pattern and discuss Dependency Injection. Once again, an interface can accept as many generic type parameters as we like, as well as one or more default parameters stacked at the end.

Exploring Generic Syntax — Classes

Much the same as we can pass a generic type parameter to a type alias, function, or interface, we can pass one or more to a class as well. Upon doing so, that type parameter will be accessible to all members of that class as well as extended base classes or implemented interfaces.

Let’s build another collection class, but a little simpler than TypedList above, so that we can see the interop between generic types, interfaces, and members. We’ll see an example of passing a type to a base class and interface inheritance a little later.

Our collection will merely support basic CRUD functions in addition to a map and forEach method.

class Collection<T> {
private elements: T[] = [];

constructor (elements: T[] = []) {
this.elements = elements;
}

add(elem: T): void {
this.elements.push(elem);
}

contains(elem: T): boolean {
return this.elements.includes(elem);
}

remove(elem: T): void {
this.elements = this.elements.filter(existing => existing !== elem);
}

forEach(func: (elem: T, index: number) => void): void {
return this.elements.forEach(func);
}

map<U>(func: (elem: T, index: number) => U): Collection<U> {
return new Collection<U>(this.elements.map(func));
}
}

const stringCollection = new Collection<string>();
stringCollection.add('Hello, World!');

const numberCollection = new Collection<number>();
numberCollection.add(3.14159);

const aircraftCollection = new Collection<IAircraft>();
aircraftCollection.add(myAircraft);

Let’s discuss what’s going on here. The Collection class accepts one generic type parameter named T. That type becomes accessible to all members of the class. We use it to define a private array of type T[], which we could also have denoted in the form Array<T> (See? Generics again for normal TS array typing). Further, most member functions utilize that T in some way, such as by controlling the types that are added and removed or checking if the collection contains an element.

Finally, as we’ve seen before, the map method requires its own generic type parameter. We need to define in the signature of map that some type T is mapped to some type U through a callback function, thus we need a U. That U is unique to that function in particular, which means we could have another function in that class that also accepts some type named U, and that’d be fine, because those types are only “in scope” for their functions and not shared across them, thus there are no naming collisions. What we can’t do is have another function that accepts a generic parameter named T, for that’d conflict with the T from the class signature.

You can see that when we call the constructor, we pass in the type we want to work with (that is, what type each element of the internal array will be). In the calling code at the bottom of the example, we work with strings, numbers, and IAircrafts.

How could we make this work with an interface? What if we have different collection interfaces that we might want to swap out or inject into calling code? To get that level of reduced coupling (low coupling and high cohesion is what we should always aim for), we’ll need to depend on an abstraction. Generally, that abstraction will be an interface, but it could also be an abstract class.

Our collection interface will need to be generic, so let’s define it:

interface ICollection<T> {
add(t: T): void;
contains(t: T): boolean;
remove(t: T): void;
forEach(func: (elem: T, index: number) => void): void;
map<U>(func: (elem: T, index: number) => U): ICollection<U>;
}

Now, let’s suppose we have different kinds of collections. We could have an in-memory collection, one that stores data on disk, one that uses a database, and so on. By having an interface, the dependent code can depend upon the abstraction, permitting us to swap out different implementations without affecting the existing code. Here is the in-memory collection.

class InMemoryCollection<T> implements ICollection<T> {
private elements: T[] = [];

constructor (elements: T[] = []) {
this.elements = elements;
}

add(elem: T): void {
this.elements.push(elem);
}

contains(elem: T): boolean {
return this.elements.includes(elem);
}

remove(elem: T): void {
this.elements = this.elements.filter(existing => existing !== elem);
}

forEach(func: (elem: T, index: number) => void): void {
return this.elements.forEach(func);
}

map<U>(func: (elem: T, index: number) => U): ICollection<U> {
return new InMemoryCollection<U>(this.elements.map(func));
}
}

The interface describes the public-facing methods and properties that our class is required to implement, expecting you to pass in a concrete type that those methods will operate upon. However, at the time of defining the class, we still don’t know what type the API caller will wish to use. Thus, we make the class generic too — that is, InMemoryCollection expects to receive some generic type T, and whatever it is, it’s immediately passed to the interface, and the interface methods are implemented using that type.

Calling code can now depend on the interface:

// Using type annotation to be explicit for the purposes of the
// tutorial.
const userCollection: ICollection<User> = new InMemoryCollection<User>();

function manageUsers(userCollection: ICollection<User>) {
userCollection.add(new User());
}

With this, any kind of collection can be passed into the manageUsers function as long as it satisfies the interface. This is useful for testing scenarios — rather than dealing with over-the-top mocking libraries, in unit and integration test scenarios, I can replace my SqlServerCollection<T> (for example) with InMemoryCollection<T> and perform state-based assertions rather than interaction-based assertions. This setup makes my tests agnostic to implementation details, which means they are, in turn, less likely to break when refactoring the SUT.
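
A small sketch of what that swap looks like in a test (the SqlServerCollection name is an assumption used only for illustration):

// Production wiring might use a database-backed implementation, e.g.
// const users: ICollection<User> = new SqlServerCollection<User>(connection);

// Test wiring: swap in the in-memory implementation and assert on state.
const users: ICollection<User> = new InMemoryCollection<User>();
const user = new User('Jamie');

users.add(user);

// State-based assertion: no mocking library required.
console.assert(users.contains(user));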

At this point, we should have worked up to the point where we can understand that first TypedList<T> example. Here it is again:

class TypedList<T> {
private values: T[] = [];

private constructor (values: T[]) {
this.values = values;
}

public add(value: T): void {
this.values.push(value);
}

public where(predicate: (value: T) => boolean): TypedList<T> {
return TypedList.from<T>(this.values.filter(predicate));
}

public select<U>(selector: (value: T) => U): TypedList<U> {
return TypedList.from<U>(this.values.map(selector));
}

public toArray(): T[] {
return this.values;
}

public static from<U>(values: U[]): TypedList<U> {
// Perhaps we perform some logic here.
// …

return new TypedList<U>(values);
}

public static create<U>(values?: U[]): TypedList<U> {
return new TypedList<U>(values ?? []);
}

// Other collection functions.
// ..
}

The class itself accepts a generic type parameter named T, and all members of the class are provided access to it. The instance method select and the two static methods from and create, which are factories, accept their own generic type parameter named U.

The create static method permits the construction of a list with optional seed data. It accepts some type named U to be the type of every element in the list as well as an optional array of U elements, typed as U[]. When it calls the list’s constructor with new, it passes that type U as the generic parameter to TypedList. This creates a new list where the type of every element is U. It is exactly the same as how we could call the constructor of our collection class earlier with new Collection<SomeType>(). The only difference is that the generic type is now passing through the create method rather than being provided and used at the top-level.

I want to make sure this is really, really clear. I’ve stated a few times that we can think about passing around types in a similar way that we do variables. It should already be quite intuitive that we can pass a variable through as many layers of indirection as we please. Forget generics and types for a moment and think about an example of the form:

class MyClass {
private constructor (t: number) {}

public static create(u: number) {
return new MyClass(u);
}
}

const myClass = MyClass.create(2.17);

This is very similar to what is happening with the more-involved example, the difference being that we’re working on generic type parameters, not variables. Here, 2.17 becomes the u in create, which ultimately becomes the t in the private constructor.

In the case of generics:

class MyClass<T> {
private constructor () {}

public static create<U>() {
return new MyClass<U>();
}
}

const myClass = MyClass.create<number>();

The U passed to create is ultimately passed in as the T for MyClass<T>. When calling create, we provided number as U, thus now U = number. We put that U (which, again, is just number) into the T for MyClass<T>, so that MyClass<T> effectively becomes MyClass<number>. The benefit of generics is that we’re opened up to be able to work with types in this abstract and high-level fashion, similar to how we can with normal variables.

The from method constructs a new list that operates on an array of elements of type U. It uses that type U, just like create, to construct a new instance of the TypedList class, now passing in that type U for T.

The where instance method performs a filtering operation based upon a predicate function. There’s no mapping happening, thus the types of all elements remain the same throughout. The filter method available on JavaScript’s array returns a new array of values, which we pass into the from method. So, to be clear, after we filter out the values that don’t satisfy the predicate function, we get an array back containing the elements that do. All those elements are still of type T, which is the original type that the caller passed to create when the list was first created. Those filtered elements get given to the from method, which in turn creates a new list containing all those values, still using that original type T. The reason why we return a new instance of the TypedList class is to be able to chain new method calls onto the return result. This adds an element of “immutability” to our list.
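
Because each call returns a new TypedList, operations can be chained. A short sketch, reusing the User type from earlier:

const names = TypedList.create<User>([new User('Jamie'), new User('Tom')])
  .where(user => user.getName() !== 'Tom') // still a TypedList<User>
  .select(user => user.getName())          // now a TypedList<string>
  .toArray();                              // ['Jamie']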

Hopefully, this all provides you with a more intuitive example of generics in practice, and their reason for existence. Next, we’ll look at a few of the more advanced topics.

Generic Type Inference

Throughout this article, in all cases where we’ve used generics, we’ve explicitly defined the type we’re operating on. It’s important to note that in most cases, we do not have to explicitly define the type parameter we pass in, for TypeScript can infer the type based on usage.

If I have some function that returns a random number, and I pass the return result of that function to identity from earlier without specifying the type parameter, it will be inferred automatically as number:

// `value` is inferred as type `number`.
const value = identity(getRandomNumber());

To demonstrate type inference, I’ve removed all the technically extraneous type annotations from our TypedList structure earlier, and TSC still infers all types correctly:

TypedList without extraneous type declarations:

class TypedList<T> {
private values: T[] = [];

private constructor (values: T[]) {
this.values = values;
}

public add(value: T) {
this.values.push(value);
}

public where(predicate: (value: T) => boolean) {
return TypedList.from(this.values.filter(predicate));
}

public select<U>(selector: (value: T) => U) {
return TypedList.from(this.values.map(selector));
}

public toArray() {
return this.values;
}

public static from<U>(values: U[]) {
// Perhaps we perform some logic here.
// …

return new TypedList(values);
}

public static create<U>(values?: U[]) {
return new TypedList(values ?? []);
}

// Other collection functions.
// ..
}

Based on function return values, and based on the input types passed into from and the constructor, TSC understands all type information. Hovering over each member in Visual Studio Code’s TypeScript language extension (and thus the compiler) shows every type being inferred correctly.

Generic Constraints

Sometimes, we want to put a constraint around a generic type. Perhaps we can’t support every type in existence, but we can support a subset of them. Let’s say we want to build a function that returns the length of some collection. As seen above, we could have many different types of arrays/collections, from the default JavaScript Array to our custom ones. How do we let our function know that some generic type has a length property attached to it? Similarly, how do we restrict the concrete types we pass into the function to those that contain the data we need? An example such as this one, for instance, would not work:

function getLength<T>(collection: T): number {
// Error. TS does not know that a type T contains a length property.
return collection.length;
}

The answer is to utilize Generic Constraints. We can define an interface that describes the properties we need:

interface IHasLength {
length: number;
}

Now, when defining our generic function, we can constrain the generic type to be one that extends that interface:

function getLength<T extends IHasLength>(collection: T): number {
// Restricting `collection` to be a type that contains
// everything within the `IHasLength` interface.
return collection.length;
}

Real-World Examples

In the next couple of sections, we’ll discuss some real-world examples of generics that create more elegant and easy-to-reason-about code. We’ve seen a lot of trivial examples, but I want to discuss some approaches to error handling, data access patterns, and front-end React state/props.

Real-World Examples — Approaches To Error Handling

JavaScript contains a first-class mechanism for handling errors, as do most programming languages — try/catch. Despite that, I’m not a very big fan of how it looks when used. That’s not to say I don’t use the mechanism (I do), but I tend to hide it as much as I can. By abstracting try/catch away, I can also reuse error-handling logic across likely-to-fail operations.

Suppose we’re building some Data Access Layer. This is a layer of the application that wraps the persistence logic for dealing with the data storage method. If we’re performing database operations, and if that database is used across a network, particular DB-specific errors and transient exceptions are likely to occur. Part of the reason for having a dedicated Data Access Layer is to abstract away the database from the business logic. Due to that, we can’t be having such DB-specific errors being thrown up the stack and out of this layer. We need to wrap them first.

Let’s look at a typical implementation that would use try/catch:

async function queryUser(userID: string): Promise<User> {
try {
const dbUser = await db.raw(`
SELECT * FROM users WHERE user_id = ?
`, [userID]);

return mapper.toDomain(dbUser);
} catch (e) {
switch (true) {
case e instanceof DbErrorOne:
return Promise.reject(new WrapperErrorOne());
case e instanceof DbErrorTwo:
return Promise.reject(new WrapperErrorTwo());
case e instanceof NetworkError:
return Promise.reject(new TransientException());
default:
return Promise.reject(new UnknownError());
}
}
}

Switching over true is merely a way to use switch case statements for my error-checking logic as opposed to having to declare a chain of if/else if — a trick I first heard from @Jeffijoe.

If we have multiple such functions, we have to replicate this error-wrapping logic, which is a very bad practice. It looks quite good for one function, but it’ll be a nightmare with many. To abstract away this logic, we can wrap it in a custom error handling function that will pass through the result, but catch and wrap any errors should they be thrown:

async function withErrorHandling<T>(
dalOperation: () => Promise<T>
): Promise<T> {
try {
// This unwraps the promise and returns the type `T`.
return await dalOperation();
} catch (e) {
switch (true) {
case e instanceof DbErrorOne:
return Promise.reject(new WrapperErrorOne());
case e instanceof DbErrorTwo:
return Promise.reject(new WrapperErrorTwo());
case e instanceof NetworkError:
return Promise.reject(new TransientException());
default:
return Promise.reject(new UnknownError());
}
}
}

To ensure this makes sense, we have a function named withErrorHandling that accepts some generic type parameter T. This T represents the type of the successful resolution value of the promise we expect returned from the dalOperation callback function. Usually, since we’re just returning the return result of the async dalOperation function, we wouldn’t need to await it, for that would wrap the function in a second extraneous promise, and we could leave the awaiting to the calling code. In this case, we need to catch any errors, thus await is required.

We can now use this function to wrap our DAL operations from earlier:

async function queryUser(userID: string) {
return withErrorHandling<User>(async () => {
const dbUser = await db.raw(`
SELECT * FROM users WHERE user_id = ?
`, [userID]);

return mapper.toDomain(dbUser);
});
}

And there we go. We have a type-safe and error-safe user query function.

Additionally, as you saw earlier, if the TypeScript Compiler has enough information to infer the types implicitly, you don’t have to explicitly pass them. In this case, TSC knows that the return result of the function is what the generic type is. Thus, if mapper.toDomain(dbUser) returned a type of User, you wouldn’t need to pass the type in at all:

async function queryUser(userID: string) {
    return withErrorHandling(async () => {
        const dbUser = await db.raw(`
            SELECT * FROM users WHERE user_id = ?
        `, [userID]);

        return mapper.toDomain(dbUser);
    });
}

Another approach to error handling that I tend to like is that of Monadic Types. The Either Monad is an algebraic data type of the form Either<T, U>, where T can represent an error type and U a success type. Using Monadic Types hearkens to functional programming, and a major benefit is that errors become type-safe — a normal function signature doesn’t tell the API caller anything about the errors that function might throw. Suppose we throw a NotFound error from inside queryUser. A signature of queryUser(userID: string): Promise<User> doesn’t tell us anything about that, but a signature like queryUser(userID: string): Promise<Either<NotFound, User>> absolutely does. I won’t explain how monads like the Either Monad work in this article because they can be quite complex, and there are a variety of methods they must have to be considered monadic, such as mapping/binding. If you’d like to learn more about them, I’d recommend two of Scott Wlaschin’s NDC talks, here and here, as well as Daniel Chamber’s talk here. This site as well as these blog posts may be useful too.
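
Just to give a feel for the shape of such a signature, here’s a deliberately minimal, illustrative Either type reusing the db and mapper from the earlier snippets (a real implementation would come from a library and provide map/bind and friends):

// A deliberately minimal Either type; real libraries offer a much richer API.
type Left<T> = { kind: 'left'; value: T };
type Right<U> = { kind: 'right'; value: U };
type Either<T, U> = Left<T> | Right<U>;

const left = <T>(value: T): Left<T> => ({ kind: 'left', value });
const right = <U>(value: U): Right<U> => ({ kind: 'right', value });

// A stand-in for the NotFound error mentioned above.
class NotFound {}

// The signature now tells the caller exactly which failure can occur.
async function queryUser(userID: string): Promise<Either<NotFound, User>> {
    const dbUser = await db.raw(`
        SELECT * FROM users WHERE user_id = ?
    `, [userID]);

    return dbUser
        ? right(mapper.toDomain(dbUser))
        : left(new NotFound());
}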

Real-World Examples — Repository Pattern

Let’s take a look at another use case where Generics might be helpful. Most back-end systems are required to interface with a database in some manner — this could be a relational database like PostgreSQL, a document database like MongoDB, or perhaps even a graph database, such as Neo4j.

Since, as developers, we should aim for loosely coupled and highly cohesive designs, it’s fair to consider what the ramifications of migrating database systems might be. It’s also fair to consider that different data access needs might call for different data access approaches (this starts to get into CQRS a little, a pattern for separating reads and writes. See Martin Fowler’s post and the MSDN listing if you wish to learn more. The books “Implementing Domain Driven Design” by Vaughn Vernon and “Patterns, Principles, and Practices of Domain-Driven Design” by Scott Millet are good reads as well). We should also consider automated testing. The majority of tutorials that explain how to build back-end systems with Node.js intermingle data access code with business logic and routing. That is, they tend to use MongoDB with the Mongoose ODM, taking an Active Record approach, without a clean separation of concerns. Such techniques are frowned upon in large applications; the moment you decide you’d like to swap one database system for another, or the moment you realize that you’d prefer a different approach to data access, you have to rip out the old data access code, replace it with new code, and hope you didn’t introduce any bugs into routing and business logic along the way.

Sure, you might argue that unit and integration tests will prevent regressions, but if those tests find themselves coupled and dependent upon implementation details to which they should be agnostic, they too will likely break in the process.

A common approach to solving this issue is the Repository Pattern. The idea is that, to calling code, our data access layer should look like a mere in-memory collection of objects or domain entities. In this way, we let the business drive the design rather than the database (the data model). For large applications, an architectural pattern called Domain-Driven Design becomes useful. Repositories, in the Repository Pattern, are components, most commonly classes, that encapsulate all the logic for accessing data sources. With this, we can centralize data access code into one layer, making it easily testable and easily reusable. Further, we can place a mapping layer in between, permitting us to map database-agnostic domain models to a series of one-to-one table mappings. Each function available on the Repository could optionally use a different data access method if you so choose.

There are many different approaches and semantics to Repositories, Units of Work, database transactions across tables, and so on. Since this is an article about Generics, I don’t want to get into the weeds too much, thus I’ll illustrate a simple example here, but it’s important to note that different applications have different needs. A Repository for DDD Aggregates would be quite different than what we’re doing here, for instance. How I portray the Repository implementations here is not how I implement them in real projects, for there is a lot of missing functionality and less-than-desired architectural practices in use.

Let’s suppose we have Users and Tasks as domain models. These could just be POTOs — Plain-Old TypeScript Objects. There is no notion of a database baked into them, thus, you wouldn’t call User.save(), for instance, as you would using Mongoose. Using the Repository Pattern, we might persist a user or delete a task from our business logic as follows:

// Querying the DB for a User by their ID.
const user: User = await userRepository.findById(userID);

// Deleting a Task by its ID.
await taskRepository.deleteById(taskID);

// Deleting a Task by its owner’s ID.
await taskRepository.deleteByUserId(userID);

Clearly, you can see how all the messy and transient data access logic is hidden behind this repository facade/abstraction, making business logic agnostic to persistence concerns.

Let’s start by building a few simple domain models. These are the models that the application code will interact with. They’re anemic here, but in the real world they would hold their own logic to satisfy business invariants; that is, they wouldn’t be mere data bags.

interface IHasIdentity {
    id: string;
}

class User implements IHasIdentity {
    public constructor (
        private readonly _id: string,
        private readonly _username: string
    ) {}

    public get id() { return this._id; }
    public get username() { return this._username; }
}

class Task implements IHasIdentity {
    public constructor (
        private readonly _id: string,
        private readonly _title: string
    ) {}

    public get id() { return this._id; }
    public get title() { return this._title; }
}

You’ll see in a moment why we extract the identity typing information into an interface. This way of defining domain models and passing everything through the constructor isn’t how I’d do it in the real world. Additionally, relying on an abstract domain model class would be preferable to the interface, so as to get the id implementation for free.
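
As a rough sketch of that alternative (not something we’ll use below, and redefining User purely for illustration), an abstract base class could supply the id plumbing once:

// A sketch only: an abstract base class supplying the `id` plumbing once.
abstract class DomainEntity implements IHasIdentity {
    protected constructor(private readonly _id: string) {}

    public get id() { return this._id; }
}

// `User` now gets its `id` getter for free.
class User extends DomainEntity {
    public constructor(id: string, private readonly _username: string) {
        super(id);
    }

    public get username() { return this._username; }
}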

For the Repository, since, in this case, we expect that many of the same persistence mechanisms will be shared across different domain models, we can abstract our Repository methods to a generic interface:

interface IRepository<T> {
    add(entity: T): Promise<void>;
    findById(id: string): Promise<T>;
    updateById(id: string, updated: T): Promise<void>;
    deleteById(id: string): Promise<void>;
    existsById(id: string): Promise<boolean>;
}

We could go further and create a Generic Repository as well, to reduce duplication. For brevity, I won’t do that here, and I should note that Generic Repository interfaces such as this one, and Generic Repositories in general, tend to be frowned upon, since you might have certain entities that are read-only, or write-only, or that can’t be deleted, or similar. It depends on the application. Also, we don’t have a notion of a “unit of work” for sharing a transaction across tables, a feature I would implement in the real world, but, again, since this is a small demo, I don’t want to get too technical.

Let’s start by implementing our UserRepository. I’ll define an IUserRepository interface which holds methods specific to users, thus permitting calling code to depend on that abstraction when we dependency inject the concrete implementations:

interface IUserRepository extends IRepository<User> {
    existsByUsername(username: string): Promise<boolean>;
}

class UserRepository implements IUserRepository {
    // There are 6 methods to implement here, all using the
    // concrete type of `User` – five from IRepository<User>
    // and the one above.
}

The Task Repository would be similar but would contain different methods as the application sees fit.
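
For example, it might look something like this (findByUserId is purely a hypothetical addition, while deleteByUserId matches the usage we sketched earlier):

interface ITaskRepository extends IRepository<Task> {
    // Task-specific queries the application might need:
    findByUserId(userId: string): Promise<Task[]>;
    deleteByUserId(userId: string): Promise<void>;
}

class TaskRepository implements ITaskRepository {
    // Seven methods to implement here: five from IRepository<Task>
    // plus the two task-specific ones above.
}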

Here, we’re defining an interface that extends a generic one, thus we have to pass the concrete type we’re working on. As you can see from both interfaces, we have the notion that we send these POTO domain models in and we get them out. The calling code has no idea what the underlying persistence mechanism is, and that’s the point.

The next consideration to make is that depending on the data access method we choose, we’ll have to handle database-specific errors. We could place Mongoose or the Knex Query Builder behind this Repository, for example, and in that case, we’ll have to handle those specific errors — we don’t want them to bubble up to business logic for that would break separation of concerns and introduce a larger degree of coupling.

Let’s define a Base Repository for the data access methods we wish to use that can handle errors for us:

class BaseKnexRepository {
    // A constructor.

    /**
     * Wraps a likely-to-fail database operation within a function that handles
     * errors by catching them and wrapping them in a domain-safe error.
     *
     * @param dalOp The operation to perform upon the database.
     */
    public async withErrorHandling<T>(dalOp: () => Promise<T>) {
        try {
            return await dalOp();
        } catch (e) {
            // Use a proper logger:
            console.error(e);

            // Handle errors properly here.
        }
    }
}

Now, we can extend this Base Class in the Repository and access that Generic method:

interface IUserRepository extends IRepository<User> {
    existsByUsername(username: string): Promise<boolean>;
}

class UserRepository extends BaseKnexRepository implements IUserRepository {
    private readonly dbContext: Knex | Knex.Transaction;

    public constructor (private knexInstance: Knex | Knex.Transaction) {
        super();
        this.dbContext = knexInstance;
    }

    // Example findById implementation:
    public async findById(id: string): Promise<User> {
        return this.withErrorHandling<User>(async () => {
            const dbUser = await this.dbContext<DbUser>()
                .select()
                .where({ user_id: id })
                .first();

            // Maps type DbUser to User
            return mapper.toDomain(dbUser);
        });
    }

    // There are 5 more methods to implement here, all using the
    // concrete type of User.
}

Notice that our function retrieves a DbUser from the database and maps it to a User domain model before returning it. This is the Data Mapper pattern and it’s crucial to maintaining separation of concerns. DbUser is a one-to-one mapping to the database table — it’s the data model that the Repository operates upon — and is thus highly dependent on the data storage technology used. For this reason, DbUsers will never leave the Repository and will be mapped to a User domain model before being returned. I didn’t show the DbUser implementation, but it could just be a simple class or interface.
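
To make the mapping step concrete, here’s a minimal, illustrative take on what DbUser and the mapper might look like (an assumption for this demo, not a prescribed implementation):

// A one-to-one mapping of the users table; it never leaves the Repository.
interface DbUser {
    user_id: string;
    username: string;
}

// A hypothetical Data Mapper translating the data model into the domain model.
const mapper = {
    toDomain(dbUser: DbUser): User {
        return new User(dbUser.user_id, dbUser.username);
    }
};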

Thus far, using the Repository Pattern, powered by Generics, we’ve managed to abstract away data access concerns into small units as well as maintain type-safety and re-usability.

Finally, for the purposes of Unit and Integration Testing, let’s say that we’ll keep an in-memory repository implementation so that, in a test environment, we can inject that repository and perform state-based assertions against it rather than mocking with a mocking framework. This method forces everything to rely on the public-facing interfaces rather than permitting tests to be coupled to implementation details. Since the only differences between each repository are the methods they choose to add under the ISomethingRepository interface, we can build a generic in-memory repository and extend it within type-specific implementations:

class InMemoryRepository<T extends IHasIdentity> implements IRepository<T> {
    protected entities: T[] = [];

    public findById(id: string): Promise<T> {
        const entityOrNone = this.entities.find(entity => entity.id === id);

        return entityOrNone
            ? Promise.resolve(entityOrNone)
            : Promise.reject(new NotFound());
    }

    // Implement the rest of the IRepository<T> methods here.
}

The purpose of this base class is to perform all the logic for handling in-memory storage so that we don’t have to duplicate it within in-memory test repositories. Due to methods like findById, this repository has to have an understanding that entities contain an id field, which is why the generic constraint on the IHasIdentity interface is necessary. We saw this interface before — it’s what our domain models implemented.
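
For completeness, here’s one possible way the remaining generic methods could be filled in (a sketch only; a production version would also worry about defensive copying and the like):

class InMemoryRepository<T extends IHasIdentity> implements IRepository<T> {
    protected entities: T[] = [];

    public add(entity: T): Promise<void> {
        this.entities.push(entity);
        return Promise.resolve();
    }

    public findById(id: string): Promise<T> {
        const entityOrNone = this.entities.find(entity => entity.id === id);
        return entityOrNone
            ? Promise.resolve(entityOrNone)
            : Promise.reject(new NotFound());
    }

    public updateById(id: string, updated: T): Promise<void> {
        this.entities = this.entities.map(entity => entity.id === id ? updated : entity);
        return Promise.resolve();
    }

    public deleteById(id: string): Promise<void> {
        this.entities = this.entities.filter(entity => entity.id !== id);
        return Promise.resolve();
    }

    public existsById(id: string): Promise<boolean> {
        return Promise.resolve(this.entities.some(entity => entity.id === id));
    }
}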

With this, when it comes to building the in-memory user or task repository, we can just extend this class and get most of the methods implemented automatically:

class InMemoryUserRepository extends InMemoryRepository<User> {
    public async existsByUsername(username: string): Promise<boolean> {
        const userOrNone = this.entities.find(entity => entity.username === username);
        return Boolean(userOrNone);

        // or, return !!userOrNone;
    }

    // And that’s it here. InMemoryRepository implements the rest.
}

Here, our InMemoryRepository needs to know that entities have fields such as id and username, so we pass User as the generic parameter. User already implements IHasIdentity, so the generic constraint is satisfied, and it also gives us the username property.

Now, when we wish to use these repositories from the Business Logic Layer, it’s quite simple:

class UserService {
    public constructor (
        private readonly userRepository: IUserRepository,
        private readonly emailService: IEmailService
    ) {}

    public async createUser(dto: ICreateUserDTO) {
        // Validate the DTO:
        // ...

        // Create a User Domain Model from the DTO
        const user = userFactory(dto);

        // Persist the Entity
        await this.userRepository.add(user);

        // Send a welcome email
        await this.emailService.sendWelcomeEmail(user);
    }
}

(Note that in a real application, we’d probably move the call to emailService into a job queue, so as not to add latency to the request and in the hope of being able to perform idempotent retries on failures — not that email sending is particularly idempotent in the first place. Further, passing the whole user object to the service is questionable too. The other issue to note is that we could find ourselves in a position where the server crashes after the user is persisted but before the email is sent. There are mitigation patterns to prevent this, but for the purposes of pragmatism, human intervention with proper logging will probably work just fine.)

And there we go — using the Repository Pattern with the power of Generics, we’ve completely decoupled our DAL from our BLL and have managed to interface with our repository in a type-safe manner. We’ve also developed a way to rapidly construct equally type-safe in-memory repositories for the purposes of unit and integration testing, permitting true black-box and implementation-agnostic tests. None of this would have been possible without Generic types.

As a disclaimer, I want to once again note that this Repository implementation is lacking in a lot. I wanted to keep the example simple since the focus is the utilization of generics, which is why I didn’t handle the duplication or worry about transactions. Decent repository implementations would take an article all by themselves to explain fully and correctly, and the implementation details change depending on whether you’re doing N-Tier Architecture or DDD. That means that if you wish to use the Repository Pattern, you should not look at my implementation here as in any way a best practice.

Real-World Examples — React State & Props

The state, ref, and the rest of the hooks for React Functional Components are Generic too. If I have an interface containing properties for Tasks, and I want to hold a collection of them in a React Component, I could do so as follows:

import React, { useState } from 'react';

export const MyComponent: React.FC = () => {
    // An empty array of tasks as the initial state.
    // Assumes a `Task` type (with at least an `id`) is defined elsewhere.
    const [tasks, setTasks] = useState<Task[]>([]);

    // A counter:
    // Notice, type of `number` is inferred automatically.
    const [counter, setCounter] = useState(0);

    return (
        <div>
            <h3>Counter Value: {counter}</h3>
            <ul>
                {
                    tasks.map(task => (
                        <li key={task.id}>
                            <TaskItem {...task} />
                        </li>
                    ))
                }
            </ul>
        </div>
    );
};

Additionally, if we want to pass a series of props into our function, we can use the generic React.FC<T> type and get access to props:

import React from 'react';

interface IProps {
    id: string;
    title: string;
    description: string;
}

export const TaskItem: React.FC<IProps> = (props) => {
    return (
        <div>
            <h3>{props.title}</h3>
            <p>{props.description}</p>
        </div>
    );
};

The type of props is inferred automatically to be IProps by the TS Compiler.
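
At the call site, the compiler then checks those props against IProps. Here’s a quick, made-up usage example (the import path and values are hypothetical):

import React from 'react';
import { TaskItem } from './TaskItem'; // Hypothetical path for this example.

export const TaskList: React.FC = () => (
    <ul>
        <li>
            {/* These props are type-checked against IProps at compile time. */}
            <TaskItem id="1" title="Write the report" description="Due on Friday" />
        </li>
        {/* Omitting `description`, for instance, would be a compile-time error. */}
    </ul>
);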

Conclusion

In this article, we’ve seen many different examples of Generics and their use cases, from simple collections, to error handling approaches, to data access layer isolation, and so on. In the simplest of terms, Generics permit us to build data structures without needing to know, at compile time, the concrete types upon which they will operate. Hopefully, this helps to open up the subject a little more, make the notion of Generics a little more intuitive, and bring across their true power.