Creating Voice Skills For Google Assistant And Amazon Alexa


Tris Tolliday


Over the past decade, there has been a seismic shift towards conversational interfaces, as people reach 'peak screen' and even begin to scale back their device usage, with digital wellbeing features being baked into most operating systems.

To combat screen fatigue, voice assistants have entered the market to become a preferred option for quickly retrieving information. A well-repeated stat states that 50% of searches will be done by voice in 2020. As adoption rises, it's up to developers to add "Conversational Interfaces" and "Voice Assistants" to their tool belts.

Designing The Invisible

For many, embarking on a voice UI (VUI) project can be a bit like entering the Unknown. Find out more about the lessons learned by William Merrill when designing for voice. Read article →

What Is A Conversational Interface?

A Conversational Interface (sometimes shortened to CUI) is any interface in a human language. It is tipped to be a more natural interface for the general public than the Graphical User Interface (GUI), which front-end developers are accustomed to building. A GUI requires humans to learn its specific syntax (think buttons, sliders, and drop-downs).

This key difference in using human language makes CUI more natural for people; it requires little knowledge and puts the burden of understanding on the device.

Commonly, CUIs come in two guises: chatbots and voice assistants. Both have seen a massive rise in uptake over the last decade thanks to advances in Natural Language Processing (NLP).

Understanding Voice Jargon



Skill
A voice application, which can fulfill a series of intents.

Intent
An intended action for the skill to fulfill; what the user wants the skill to do in response to what they say.

Utterance
The sentence a user says, or utters.

Wake Word
The word or phrase used to start a voice assistant listening, e.g. 'Hey Google', 'Alexa' or 'Hey Siri'.

Slot
The pieces of contextual information within an utterance that help the skill fulfill an intent, e.g. 'today', 'now', 'when I get home'.

What Is A Voice Assistant?

A voice assistant is a piece of software capable of NLP (Natural Language Processing). It receives a voice command and returns an answer in audio format. In recent years the scope of how you can engage with an assistant is expanding and evolving, but the crux of the technology is natural language in, lots of computation, natural language out.

For those looking for a bit more detail:

1. The software receives an audio request from a user and processes the sound into phonemes, the building blocks of language.
2. By the magic of AI (specifically Speech-To-Text), these phonemes are converted into a string of the approximated request. This is kept within a JSON file which also contains extra information about the user, the request, and the session.
3. The JSON is then processed (usually in the cloud) to work out the context and intent of the request.
4. Based on the intent, a response is returned, again within a larger JSON response, either as a string or as SSML (more on that later).
5. The response is processed back using AI (naturally the reverse: Text-To-Speech), which is then returned to the user.
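The JSON passed around in steps 2 to 4 can be sketched in code. Note this is a simplified, hypothetical shape to illustrate the flow; real payloads differ per platform, and the field names here are illustrative, not an exact schema:

```javascript
// A simplified, hypothetical request to illustrate the round trip.
const request = {
  session: { sessionId: "abc-123", isNewSession: true },
  user: { userId: "user-456" },
  request: {
    type: "IntentRequest",
    intent: "GetWeatherIntent",
    query: "what is the weather like today"
  }
};

// The fulfillment logic inspects the intent and builds a response object,
// which the platform then turns back into speech.
function fulfill(req) {
  if (req.request.intent === "GetWeatherIntent") {
    return { outputSpeech: { type: "SSML", ssml: "<speak>It is sunny today.</speak>" } };
  }
  return { outputSpeech: { type: "PlainText", text: "Sorry, I didn't get that." } };
}

console.log(fulfill(request).outputSpeech.type); // "SSML"
```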

There's a lot going on there, most of which doesn't require a second thought. But each platform does this differently, and it's the nuances of the platform that require a bit more understanding.


Voice-Enabled Devices

The requirements for a device to have a voice assistant baked in are pretty low. They require a microphone, an internet connection, and a speaker. Smart speakers like the Nest Mini and Echo Dot provide this kind of low-fi voice control.

Next up in the ranks is voice + screen. This is known as a 'Multimodal' device (more on these later), and includes devices like the Nest Hub and the Echo Show. As smartphones have this functionality, they can also be considered a type of multimodal voice-enabled device.

Voice Skills

First off, every platform has a different name for its voice skills. Amazon goes with 'skills', which I will be sticking with as a universally understood term; Google opts for 'Actions'; and Samsung goes for 'capsules'.

Each platform has its own baked-in skills, like asking the time, weather and sports games. Developer-made (third-party) skills can be invoked with a specific phrase, or, if the platform likes it, can be implicitly invoked, without a key phrase.

Explicit Invocation: "Hey Google, talk to <app name>."

It is explicitly stated which skill is being asked for.

Implicit Invocation: "Hey Google, what is the weather like today?"

It is implied by the context of the request what service the user wants.

What Voice Assistants Are There?

In the western market, voice assistants are very much a three-horse race. Apple, Google and Amazon have very different approaches to their assistants, and as such, appeal to different types of developers and customers.

Apple’s Siri

Device Names: "HomePod, iPhone, iPad, Mac"

Wake Phrase: ”Hey Siri”

Siri has over 375 million active users, but for the sake of brevity, I am not going into too much detail for Siri. While it may be globally well adopted and baked into most Apple devices, it requires developers to already have an app on one of Apple's platforms, and is written in Swift (whereas the others can be written in everyone's favorite: JavaScript). Unless you are an app developer who wants to expand their app's offering, you can currently skip past Apple until they open up their platform.

Google Assistant

Device Names: ”Google Home, Nest”

Wake Phrase: ”Hey Google”

Google has the most devices of the big three, with over 1 billion worldwide. This is mostly due to the mass of Android devices that have Google Assistant baked in; with regards to their dedicated smart speakers, the numbers are a little smaller. Google's overall mission with its assistant is to delight users, and they have always been very good at providing light and intuitive interfaces.

Their primary aim on the platform is usage time, with the idea of becoming a regular part of customers' daily routines. As such, they primarily focus on utility, family fun, and delightful experiences.

Skills built for Google are best when they are engagement pieces and games, focusing primarily on family-friendly fun. Their recent addition of Canvas for games is a testament to this approach. The Google platform is much stricter about skill submissions, and as such, their directory is a lot smaller.

Amazon Alexa

Device Names: “Amazon Fire, Amazon Echo”

Wake Phrase: “Alexa”

Amazon surpassed 100 million devices in 2019. This predominantly comes from sales of their smart speakers and smart displays, as well as their 'Fire' range of tablets and streaming devices.

Skills built for Amazon tend to be aimed at in-skill purchasing. If you are looking for a platform to expand your e-commerce/service or offer a subscription, then Amazon is for you. That being said, ISP isn't a requirement for Alexa skills; they support all sorts of uses and are much more open to submissions.

The Others

There are even more Voice assistants out there, such as Samsung’s Bixby, Microsoft’s Cortana, and the popular open-source voice assistant Mycroft. All three have a reasonable following, but are still in the minority compared to the three Goliaths of Amazon, Google and Apple.

Building On Amazon Alexa

Amazon's ecosystem for voice has evolved to allow developers to build all of their skills within the Alexa console, so as a simple example, I am going to use its built-in features.


Alexa deals with the Natural Language Processing and then finds an appropriate Intent, which is passed to our Lambda function to deal with the logic. This returns some conversational bits (SSML, text, cards, and so on) to Alexa, which converts those bits to audio and visuals to show on the device.

Working on Amazon is relatively simple, as they allow you to create all parts of your skill within the Alexa Developer Console. The flexibility is there to use AWS or an HTTPS endpoint, but for simple skills, running everything within the Dev console should be sufficient.

Let’s Build A Simple Alexa Skill

1. Head over to the Amazon Alexa console, create an account if you don't have one, and log in.
2. Click Create Skill, then give it a name.
3. Choose custom as your model.
4. Choose Alexa-Hosted (Node.js) for your backend resource.

Once it is done provisioning, you will have a basic Alexa skill: it will have your intent built for you, and some back-end code to get you started.

If you click on the HelloWorldIntent in your Intents, you will see some sample utterances already set up for you. Let's add a new one at the top. Our skill is called Hello World, so add Hello World as a sample utterance. The idea is to capture anything the user might say to trigger this intent. This could be "Hi World", "Howdy World", and so on.
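Behind the console UI, these samples live in the skill's interaction model JSON. Here is a trimmed sketch of that idea (simplified from the real interaction model schema, with a toy matcher to show how an utterance maps to an intent):

```javascript
// A trimmed, illustrative sketch of how sample utterances map to an intent.
// The real Alexa interaction model JSON is more deeply nested than this.
const interactionModel = {
  intents: [
    {
      name: "HelloWorldIntent",
      samples: ["hello world", "hi world", "howdy world"]
    }
  ]
};

// Alexa's NLP matches what the user said against these samples
// (vastly simplified here to an exact, case-insensitive lookup):
const matches = (utterance) =>
  interactionModel.intents.find((i) =>
    i.samples.includes(utterance.toLowerCase()));

console.log(matches("Hello World").name); // "HelloWorldIntent"
```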

What’s Happening In The Fulfillment JS?

So what is the code doing? Here is the default code:

const HelloWorldIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'HelloWorldIntent';
    },
    handle(handlerInput) {
        const speakOutput = 'Hello World!';
        return handlerInput.responseBuilder
            .speak(speakOutput)
            .getResponse();
    }
};
This is utilizing the ask-sdk-core and is essentially building JSON for us. canHandle lets ask know it can handle intents, specifically 'HelloWorldIntent'. handle takes the input and builds the response. What this generates looks like this:


{
    "body": {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "SSML",
                "ssml": "<speak>Hello World!</speak>"
            }
        },
        "sessionAttributes": {},
        "userAgent": "ask-node/2.3.0 Node/v8.10.0"
    }
}

We can see that speak outputs SSML in our JSON, which is what the user will hear spoken by Alexa.
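The same responseBuilder chain can also keep the session open by adding a reprompt before getResponse(). The sketch below uses the real ask-sdk-core method names (.speak, .reprompt, .getResponse), but with a tiny stub builder so it runs standalone without the SDK installed; in a real skill, handlerInput.responseBuilder provides these methods:

```javascript
// Stub of the responseBuilder chain, so this sketch runs without ask-sdk-core.
function makeResponseBuilder() {
  const response = {};
  return {
    speak(text) {
      response.outputSpeech = { type: "SSML", ssml: `<speak>${text}</speak>` };
      return this;
    },
    reprompt(text) {
      response.reprompt = {
        outputSpeech: { type: "SSML", ssml: `<speak>${text}</speak>` }
      };
      return this;
    },
    getResponse() { return response; }
  };
}

const response = makeResponseBuilder()
  .speak("Hello World!")
  .reprompt("Say hello world to hear it again.")
  .getResponse();

console.log(response.outputSpeech.ssml); // "<speak>Hello World!</speak>"
```

Adding a reprompt tells Alexa to keep listening and nudge the user if they stay silent, rather than ending the session after the first response.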

Building For Google Assistant


The simplest way to build Actions on Google is to use their AoG console in combination with Dialogflow. You can extend your skills with Firebase, but as with the Amazon Alexa tutorial, let's keep things simple.

Google Assistant uses three primary parts: AoG, which deals with the NLP; Dialogflow, which works out your intents; and Firebase, which fulfills the request and produces the response that will be sent back to AoG.

Just like with Alexa, Dialogflow allows you to build your functions directly within the platform.

Let’s Build An Action On Google

There are three platforms to juggle at once with Google’s solution, which are accessed by three different consoles, so tab up!

Setting Up Dialogflow

Let’s start by logging into the Dialogflow console. Once you have logged in, create a new agent from the dropdown just below the Dialogflow logo.

Give your agent a name, and in the 'Google Project' dropdown, keep "Create a new Google project" selected.

Click the create button and let it do its magic. It will take a little bit of time to set up the agent, so be patient.

Setting Up Firebase Functions

Right, now we can start to plug in the Fulfillment logic.

Head on over to the Fulfillment tab. Tick to enable the inline editor, and use the JS snippets below:


'use strict';

// So that you have access to the dialogflow and conversation object
const { dialogflow } = require('actions-on-google');

// So you have access to the request response stuff >> functions.https.onRequest(app)
const functions = require('firebase-functions');

// Create an instance of dialogflow for your app
const app = dialogflow({debug: true});

// Build an intent to be fulfilled by firebase,
// the name is the name of the intent that dialogflow passes over
app.intent('Default Welcome Intent', (conv) => {
    // Any extra logic goes here for the intent, before returning a response for firebase to deal with
    return conv.ask(`Welcome to a firebase fulfillment`);
});

// Finally we export as dialogflowFirebaseFulfillment so the inline editor knows to use it
exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);


“name”: “functions”,
“description”: “Cloud Functions for Firebase”,
“scripts”: {
“lint”: “eslint .”,
“serve”: “firebase serve –only functions”,
“shell”: “firebase functions:shell”,
“start”: “npm run shell”,
“deploy”: “firebase deploy –only functions”,
“logs”: “firebase functions:log”
“engines”: {
“node”: “10”
“dependencies”: {
“actions-on-google”: “^2.12.0”,
“firebase-admin”: “~7.0.0”,
“firebase-functions”: “^3.3.0”
“devDependencies”: {
“eslint”: “^5.12.0”,
“eslint-plugin-promise”: “^4.0.1”,
“firebase-functions-test”: “^0.1.6”
“private”: true

Now head back to your intents, go to Default Welcome Intent, and scroll down to Fulfillment. Make sure 'Enable webhook call for this intent' is checked for any intents you wish to fulfill with JavaScript. Hit Save.


Setting Up AoG

We are getting close to the finish line now. Head over to the Integrations Tab, and click Integration Settings in the Google Assistant Option at the top. This will open a modal, so let’s click test, which will get your Dialogflow integrated with Google, and open up a test window on Actions on Google.

On the test window, we can click Talk to my test app (we will change this in a second), and voilà, we have the message from our JavaScript showing on a Google Assistant test.

We can change the name of the assistant in the Develop tab, up at the top.

So What’s Happening In The Fulfillment JS?

First off, we are using two npm packages: actions-on-google, which provides all the fulfillment that both AoG and Dialogflow need, and firebase-functions, which, you guessed it, contains helpers for Firebase.

We then create the ‘app’ which is an object that contains all of our intents.

Each intent that is created is passed 'conv', which is the conversation object Actions on Google sends. We can use the content of conv to detect information about previous interactions with the user (such as their ID and information about their session with us).

We return a 'conv.ask' object, which contains our return message to the user, ready for them to respond with another intent. We could use 'conv.close' instead if we wanted to end the conversation there.
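To illustrate the difference, here is a sketch of an intent that ends the conversation with conv.close. The 'conv' used here is a minimal stand-in object so the example runs on its own; in a real fulfillment, Actions on Google supplies the conversation object with these same ask/close methods:

```javascript
// An intent handler that ends the conversation instead of continuing it.
function goodbyeIntent(conv) {
  // conv.close delivers a final message; no further user response is expected
  return conv.close(`Thanks for chatting, goodbye!`);
}

// Minimal stand-in mirroring the two conv methods used in this article:
const makeConv = () => ({
  responses: [],
  expectUserResponse: null,
  ask(msg) { this.responses.push(msg); this.expectUserResponse = true; },
  close(msg) { this.responses.push(msg); this.expectUserResponse = false; }
});

const conv = makeConv();
goodbyeIntent(conv);
console.log(conv.expectUserResponse); // false
```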

Finally, we wrap everything up in a firebase HTTPS function, that deals with the server-side request-response logic for us.

Again, if we look at the response that is generated:


{
    "payload": {
        "google": {
            "expectUserResponse": true,
            "richResponse": {
                "items": [
                    {
                        "simpleResponse": {
                            "textToSpeech": "Welcome to a firebase fulfillment"
                        }
                    }
                ]
            }
        }
    }
}

We can see that conv.ask has had its text injected into the textToSpeech area. If we had chosen conv.close, the expectUserResponse would be set to false and the conversation would close after the message had been delivered.

Third-Party Voice Builders

Much like the app industry, as voice gains traction, third-party tools have started popping up in an attempt to alleviate the load on developers, allowing them to build once and deploy twice.

Jovo and Voiceflow are currently the two most popular, especially since PullString's acquisition by Apple. Each platform offers a different level of abstraction, so it really just depends on how simplified you'd like your interface to be.

Extending Your Skill

Now that you have gotten your head around building a basic ‘Hello World’ skill, there are bells and whistles aplenty that can be added to your skill. These are the cherry on top of the cake of Voice Assistants and will give your users a lot of extra value, leading to repeat custom, and potential commercial opportunity.


SSML

SSML stands for Speech Synthesis Markup Language and operates with a similar syntax to HTML, the key difference being that you are building up a spoken response, not content on a webpage.

'SSML' as a term is a little misleading; it can do so much more than speech synthesis! You can have voices going in parallel, you can include ambient noises, speechcons (worth a listen to in their own right; think emojis for famous phrases), and music.

When Should I Use SSML?

SSML is great; it makes for a much more engaging experience for the user, but what it also does is reduce the flexibility of the audio output. I recommend using it for more static areas of speech. You can use variables in it for names and so on, but unless you intend on building an SSML generator, most SSML is going to be pretty static.

Start with simple speech in your skill, and once it is complete, enhance areas which are more static with SSML, but get your core right before moving on to the bells and whistles. That being said, a recent report says 71% of users prefer a human (real) voice over a synthesized one, so if you have the facility to do so, go out and do it!
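As a sketch of what that static-but-templated SSML might look like, here is a response string mixing speech, a pause, and an audio clip. The <break> and <audio> tags are standard SSML; the name variable and the audio URL are placeholder assumptions for illustration:

```javascript
// Building an SSML string for the mostly static parts of a response.
// The user's name is the one dynamic slot; everything else stays fixed.
const name = "Tris";
const ssml = `<speak>
  Welcome back, ${name}.
  <break time="500ms"/>
  <audio src="https://example.com/chime.mp3"/>
  What would you like to do today?
</speak>`;

console.log(ssml.startsWith("<speak>")); // true
```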


In-Skill Purchases

In-skill purchases (or ISP) are similar to the concept of in-app purchases. Skills tend to be free, but some allow for the purchase of 'premium' content/subscriptions within the app; these can enhance the experience for a user, unlock new levels in games, or allow access to paywalled content.


Multimodal Responses

Multimodal responses cover so much more than voice; this is where voice assistants can really shine, with complementary visuals on devices that support them. The definition of multimodal experiences is much broader, essentially meaning multiple inputs (keyboard, mouse, touchscreen, voice, and so on).

Multimodal skills are intended to complement the core voice experience, providing extra complementary information to boost the UX. When building a multimodal experience, remember that voice is the primary carrier of information. Many devices don’t have a screen, so your skill still needs to work without one, so make sure to test with multiple device types; either for real or in the simulator.



Multilingual

Multilingual skills are skills that work in multiple languages, opening up your skill to multiple markets.

The complexity of making your skill multilingual is down to how dynamic your responses are. Skills with relatively static responses, e.g. returning the same phrase every time, or only using a small bucket of phrases, are much easier to make multilingual than sprawling dynamic skills.

The trick with multilingual is to have a trustworthy translation partner, whether that is an agency or a translator on Fiverr. You need to be able to trust the translations provided, especially if you don't understand the language being translated into. Google Translate will not cut the mustard here!


Conclusion

If there was ever a time to get into the voice industry, it would be now. The industry is both in its prime and its infancy, and the big nine are plowing billions into growing it and bringing voice assistants into everybody's homes and daily routines.

Choosing which platform to use can be tricky, but based on what you intend to build, the right platform should shine through. Failing that, utilize a third-party tool to hedge your bets and build on multiple platforms, especially if your skill is less complicated with fewer moving parts.

I, for one, am excited about the future of voice as it becomes ubiquitous: screen reliance will reduce, and customers will be able to interact naturally with their assistants. But first, it's up to us to build the skills that people will want from their assistant.

Smashing Editorial
(dm, il)

Chaos engineering: What it is and how to use it


Netflix is the birthplace of chaos engineering, an increasingly significant approach to how complex modern technology architectures are developed. It essentially means that as you're binging on your favourite Netflix show, the platform is testing its software while you watch.

The practice of chaos engineering began when Netflix’s core business was online DVD rentals. A single database corruption meant a big systems outage, which delayed the shipping of DVDs for three days. This prompted Netflix’s engineers to migrate from a monolithic on-premises software stack to a distributed cloud-based architecture running on Amazon Web Services (AWS).

While users of a distributed architecture and hundreds of micro-services benefitted from the elimination of a single point of failure, it created a much more complex system to manage and maintain. This consequently resulted in the counterintuitive realisation that in order to avoid any possibility of failure, the Netflix engineering team needed to get used to failing regularly!


Enter Chaos Monkey: Netflix's unique tool that roams across its intricate architecture, causing failures in random places and at arbitrary intervals throughout the systems. Through its implementation, the team was able to quickly verify whether the services were robust and resilient enough to overcome unplanned incidents.

This was the beginning of chaos engineering – the practice of experimenting on a distributed system to build confidence in the system’s capability to withstand turbulent conditions in production and unexpected failures.

Chaos Monkey’s open source licence permits a growing number of organisations like Amazon, Google and Nike to use chaos engineering in their architectures. But how chaotic can chaos engineering really get?


Chaos Monkey is used by an increasing number of organisations

Successful chaos engineering includes a series of thoughtful, planned and controlled experiments, designed to demonstrate how your systems behave in the face of failure.

Ironically, this sounds like the opposite of chaos. However, practitioners must keep in mind that the goal is learning in order to prepare for the unexpected. Modern software systems are often too complex to fully interpret, so this discipline is about performing experiments to expose all elements of the unknown. A chaos engineering experiment expands our knowledge about systemic weaknesses.

Before chaos engineering can be put into practice, you must first have some level of steadiness in your systems. We do not recommend inducing chaos if you are constantly fighting fires. If that’s in place, here are some key tips for conducting successful chaos engineering experiments:

01. Figure out steady systems

Begin by identifying metrics that indicate your systems are healthy and functioning as they should. Netflix uses ‘streams per second’ – the rate at which customers press the play button on a video streaming device – to measure its steady state.

02. Create a hypothesis

Every experiment needs a hypothesis to test. As you're trying to disrupt the steady state, your hypothesis should look something like: 'When we do X, there should be no change in the steady state of this system.' All chaos engineering activities should involve real experiments, using real unknowns.
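That hypothesis can be sketched as a simple measure-inject-verify loop. This is purely illustrative: the metric, failure injection, and tolerance below are toy stand-ins, whereas real tools like Chaos Monkey inject failures into live infrastructure:

```javascript
// Illustrative sketch of a chaos experiment: measure the steady state,
// inject a failure, then check whether the hypothesis still holds.
function runExperiment({ steadyStateMetric, injectFailure, tolerance }) {
  const baseline = steadyStateMetric();
  injectFailure();
  const afterFailure = steadyStateMetric();
  // Hypothesis: "when we do X, there is no change in the steady state"
  const hypothesisHolds = Math.abs(afterFailure - baseline) <= tolerance;
  return { baseline, afterFailure, hypothesisHolds };
}

// Toy example: a service whose request rate dips slightly when one node dies.
let healthyNodes = 10;
const result = runExperiment({
  steadyStateMetric: () => healthyNodes * 100, // e.g. requests per second
  injectFailure: () => { healthyNodes -= 1; }, // kill one random instance
  tolerance: 150
});

console.log(result.hypothesisHolds); // true: losing one node stayed within tolerance
```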

03. Consider real world scenarios

For optimal results, think: ‘What could go wrong?’ and then simulate that. Ensure you prioritise potential errors too. Chaos engineering might seem scary at first but when done in a controlled way, it can be invaluable for understanding how complex modern systems can be made more resilient and robust. Learning to embrace organised chaos will help your teams fully understand the efficiency and resiliency of your systems against hazardous conditions.

This article was originally published in issue 324 of net, the world's best-selling magazine for web designers and developers.


Ways to get a 3D effect of a product image


You are reading our new article on Ghost Mannequin Services. You might know that a mannequin is employed to create 3D effects for various apparel products like cardigans, T-shirts, jeans, polo shirts, sweaters, jackets and swimming costumes. Many e-commerce sellers even use this for visualizing their products correctly instead of a human model. But if […]


6 Tips for Writing Content Regularly


Okay, you need three to five new ideas for articles, all on the same general topic. Go…

Now here’s where we separate the people who have to come up with regular content all the time from the people who don’t. The people who have things they want to write about but can never get around to it can probably start listing ideas off the top of their head. The people who have to do this for a living, or at least as a part of their job, just groaned in mock agony.

Generating new ideas regularly can be rough, whether you’re doing it for blog posts, social media posts, videos, memes, newsletters, art, or anything else you can imagine. Waiting for inspiration to strike is, for the most part, a sucker’s game. If you want that lightning, you’re going to have to make like a scientist and shoot off a few rockets, metaphorically speaking.

1. Make it Your Livelihood

Okay, this is the nuclear option. You probably shouldn’t start here. But, I’d be remiss if I didn’t talk about just how fast your brain can work when you don’t have a choice. When your rent is on the line, you’ve got a special kind of motivation to pump out ideas. You start to consider ideas you might have passed up as “too weird”, or “too boring” before.

Then you find a way to make those articles work. And you find new ways to come up with ideas. You find yourself doing some of your best—and sometimes worst—work in the middle of the night, because that’s when things finally clicked.

2. Dedicate Some Time to Idea Generation

Looking for new ideas is a mindset, not just something you do. Some people are in this state of mind at all times, every day. For those of us who aren’t Elon Musk, it takes a bit more scheduling.

Set aside some time to look for new ideas: Search the Internet; see what other people have been writing about; stop and ask yourself questions like, “What most interests me about my chosen theme, these days?”. Put yourself in an inquisitive, creative mindset.

Now, you may or may not come up with all of the ideas you need in one sitting. However, dedicating some time to getting your brain in gear can help you come up with new ideas throughout the next day or so. Then you just come back and write them down as soon as you can. Having a note taking app on your phone can help with this.

3. Write From the Heart

As a writer, as a reader, and occasionally as an editor, I prefer articles with passion in them. It's far more entertaining to read articles by people who clearly feel strongly about their chosen topic. Sure, those strong feelings can lead to biased opinions, but I like it better when they actually have opinions. Writing (or vlogging, meme-ing, or whatever) about topics that get your brain moving at full speed is a good way to make great content.

That’s not to say that you can’t write about things that are less interesting to you, or that you don’t have a lot of experience with. If it’s your job, you might have to. This is where thorough research will save your rear end. But ideally, write what you know and love.

4. Follow the Trends From a Distance

Okay, trends can always give you something to write about. However, don’t just repeat what other people have said. Try to add something new to the conversation. Reference what others have to say on the subject, and add your own insights. Or come at the subject from a completely different angle.

Wait for other people to have the knee-jerk reactions, and write the hot takes. It’s probable that other people will always do this faster than you, so be patient. Take advantage of others’ immediate reactions and the extra time to build a more nuanced perspective on any given issue.

Of course, the Internet is a big place, and chances are good that you’ll end up saying something rather similar to what everyone else says. That’s kind of inevitable. But it’s worth trying to say something new.

5. Get Off the Internet Once in a While

Some of your best ideas hit you in the shower because that's what happens when you give your brain a break. Your subconscious mind needs time to make connections, and it often works best when you're doing other things that don't require as much concentration. This is why career writers spend half their work day (and maybe longer, if they write fiction) pacing, staring out windows, and making coffee.

Also, talk to people. I know I’ve mentioned this before in other recent articles of mine, but… just talk to them. Whether you’re getting ideas from your conversation with them, or using them as a sounding board, never underestimate the value of a good conversation partner.

6. Never Stop Learning

The more you know, the more you’ll have to say. Learning new things, whether in your industry and chosen writing theme or not, will give you more to talk about. It will broaden your perspective by introducing you to new ways of thinking. As in, learning new things literally changes the way your brain works a little bit, which can lead to new ideas.

Plus, if you can draw parallels between what you’ve learned and the topic you write about, there’s an article idea right there. I mean, I managed to compare cat behavior to principles of UX design not so long ago. There are good ideas out there. Just go looking.


Featured image via Unsplash.



It’s Time to Start Making Your Web Apps Reactive


This article was created in partnership with Manning Publications. Thank you for supporting the partners who make SitePoint possible.

You've heard of the principle of "survival of the fittest", and you know that it's especially true in web development. Your users expect split-second performance and bug-free interfaces — and if you can't deliver them, you can be sure they'll go straight to a competitor who can. But when it comes to survival, it's important to remember the full principle of evolution: the best way to thrive is to be adaptable to change.

That’s where reactive programming comes in. Reactive applications are created to be adaptable to their environments by design. Right from the start, you’re building something made to react to load, react to failure, and react to your users. Whatever being deployed to production throws at your application, reactive programming will mean it can handle it.

How does reactive programming achieve this? It embeds sound programming principles into your application right from the very beginning.

Reactive Applications Are Message-driven

In reactive programming, data is pushed, not pulled. Rather than making requests for data that may or may not be available, client recipients await the arrival of messages with instructions only when data is ready. The designs of sender and recipient aren't affected by how you propagate your messages, so you can design your system in isolation without needing to worry about how messages are transmitted. This also means that data recipients are only consuming resources when they're active, rather than bogging down your application with requests for unavailable data.

Reactive Applications Are Elastic

Reactive applications are designed to elastically scale up or scale down, based on the amount of workload they have to deal with. A reactive system can increase or decrease the resources it gives to its inputs, working without bottlenecks or contention points to more easily shard components and then distribute resources among them. Not only does this save you money on unused computing power, but even more importantly, it means that your application can easily service spikes in user activity.

Reactive Applications Are Responsive

Reactive applications must react to their users, and to their users’ behavior. It’s essential that the system responds in a timely manner, not only for improved user experience, but so that problems can be quickly identified and (hopefully!) solved. With rapid response times and a consistent quality of service, you’ll find that your application has simpler error handling as well as much greater user confidence.
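One concrete way to keep responses timely is to put a deadline on every call and degrade gracefully when it fires. This sketch uses Python’s standard `concurrent.futures` module; `slow_service`, the sleep, and the timeout values are stand-ins invented for illustration:

```python
# Sketch of "respond in a timely manner": bound every call with a
# deadline and fall back rather than hang.
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def slow_service():
    time.sleep(0.5)        # pretend this backend is overloaded
    return "live data"

def respond(timeout=0.05):
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(slow_service)
        try:
            return future.result(timeout=timeout)
        except TimeoutError:
            # Degrade gracefully instead of leaving the user waiting.
            return "cached data"

print(respond())  # prints: cached data
```

The user always gets *an* answer within the deadline, and the timeout itself is a signal that the backend needs attention — exactly the rapid problem identification described above.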

Reactive Applications Are Resilient

Reactive applications need to respond, adapt, and be flexible in the face of failure. Because a system can fail at any time, reactive applications are designed to boost resiliency through distribution. If there’s a single point of failure, it’s just that — singular. The rest of your reactive application keeps running, because it’s been built to work without relying on any one part.
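The idea of containing failure can be sketched as a tiny supervisor: each component runs in isolation, so one crashing doesn’t stop the others. The component names below are invented for illustration:

```python
# Sketch of resilience through isolation: run each component under a
# supervisor so a single failure stays singular.

def billing():
    raise RuntimeError("database connection lost")

def search():
    return "search ok"

def supervise(components):
    results = {}
    for name, component in components.items():
        try:
            results[name] = component()
        except Exception as err:
            # Record the failure; the other components keep running.
            results[name] = f"degraded: {err}"
    return results

status = supervise({"billing": billing, "search": search})
print(status["search"])   # prints: search ok
```

Production systems achieve the same containment with separate processes or services rather than a try/except, but the principle — no component relies on every other component surviving — is the same.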

Further Resources

Reactive programming can be challenging to master. Fortunately, there are lots of resources to help you out. Some of the best are the books and videos of Manning Publications, publishers of the highest quality tech books and videos you can buy today.

Exploring Modern Web Development is a 100% free guide to the most common tools for reactive programming. With this well-rounded sampler, you’ll have a firm foundation for developing awesome web apps with all the modern reactive features and functions today’s users expect.

SitePoint users can get 40% off top Manning reactive programming and web development books and videos with the coupon code NLSITEPOINT40. Check out popular bestsellers here.

The post It’s Time to Start Making Your Web Apps Reactive appeared first on SitePoint.

Free HTML Templates for Photographers

Every photographer should have an online portfolio, but it’s not exactly easy to make one yourself. If you’re not familiar with web development, it’s going to be very hard to make a site from scratch.

Thanks to template designers, you won’t have to. You can find hundreds of totally free HTML templates online. All you have to do is add your own text and images, and you can even use them as a base to add your own HTML elements.

Want to try it? We’ve collected some of the best, free, photography-focused HTML templates. Give your photos the gorgeous presentation they deserve.



Example of Studio

Studio is an absolutely stunning photography theme jam-packed with opportunities to show off your photos. The homepage is dedicated to a slider of your work, and nearly every other page includes at least a small scrolling gallery.


Example of Sentra

Professional photographers and videographers will love this one. The header features an animated photography slideshow that scrolls seamlessly into a lightbox gallery. There’s room for a blog and contact form too. This is a template that’s sure to help you attract more customers.


Example of Snapshot

Snapshot is a dark theme for photo agencies. The professional, modern look is enhanced by beautiful scrolling animations and plenty of room to show off photos. There’s a contact form, testimonial slider, and a small portfolio gallery with lightbox support too.


Example of Shutter

Shutter is all about the photography. The dark homepage is free of fluff and completely dedicated to a gallery featuring your work. Fixed sidebar navigation leads you to another gallery, bio page, contact form, and even a blog.


Example of Dimension

This really cool one-page design uses a card-based layout. Each link opens a popup card with room for text and images. It’s very simple, but with the full-screen photo background, it can make an effective small portfolio for photographers.


Example of Photon

Photon uses a scrolling slider to hook new users, with a call to action to lead them right to your gallery pages. Once they’re done exploring your work, they’ll definitely want to visit your services or contact page to learn more.


Example of Strata

This simplistic design features a fixed sidebar with social links, avatar, and a small about section. Scrolling reveals a beautiful lightbox gallery and contact form so clients can reach out. One nice touch here is the subtle parallax effect on the left as you scroll.


Example of Earth

Earth has a pretty unique design: the site is made entirely of a full-screen gallery, which swipes to show a second gallery slider, contact form, or about page as you click. It’s definitely an interesting idea and it’s implemented very well.


Example of Louie

Neatly and carefully designed, Louie is the perfect portfolio for photographers. It uses a fixed left sidebar that follows you across the whole site, with multiple bite-sized pages to concisely show off your work.


Example of Photography

This beautiful homepage portfolio is great for photographers of all calibers. Featuring a gorgeous full-screen header, collapsible fixed navigation, a moderately sized lightbox gallery, a contact form, and support for a blog, you’ve got all the elements you’ll need to make your debut as a professional photographer.

The Card

Example of The Card

Sometimes simple is best, and a picture is worth a thousand words. This design uses small cards for a minimal amount of text, and otherwise relies on the full-screen background and lightbox gallery page to prove that you’re worth hiring. This site is like a big slideshow for your best work.

Gorgeous HTML Photography Themes

Free HTML templates have made creating a website much more accessible. Now you don’t need to learn advanced web design skills or hire a developer to make a portfolio site; just upload a file and tweak some text.

You can have a beautiful, stand-out website with just a little work, and you can get back to focusing on your photography career.

Apple’s latest Mac Pro is easy to fix (just expensive to buy)

The latest Apple Mac Pro may look like a cheese grater, but it is one hell of a machine. It also carries a hefty price tag: the Mac Pro will cost you at least $6,000, and you could spend over $52,000 on the highest spec version.

With such a big outlay, you'd want it to be reliable. No one wants to splash out such a wad of cash only to have to pay again to get it fixed. But according to iFixit – a site that teaches people how to fix almost anything – the latest Mac Pro is a "Fixmas miracle". It's "a masterclass in repairability". Though iFixit sadly didn't test out how to fix those $400 Mac Pro wheels you can buy as an extra.

Mac Pro

Looks easy to fix, doesn’t it?

So what makes it easy to repair? Well, you can fix it using "only standard tools" and, more interestingly, in some cases with "no tools at all", meaning that some parts are repairable with just your fingers.

Another plus point is the adoption of "industry-standard sockets and interfaces" for its major components. This means that they're easy to get hold of (see Apple’s list of approved repairs) and replace.  

But, be warned, there are a few drawbacks as noted by iFixit. The SSDs (solid-state drives) used by Apple are custom-made, and replacing these does require an Apple technician.

We love that the Mac Pro is offering peace of mind (well sort of) and making its desktops easier to fix. But, when you are paying a premium for a top of the range model, you really shouldn't be worrying about having to fix it. You should simply be enjoying what it brings to your everyday existence. And if you'd spent $6,000 or more on a machine, would you be brave enough to fix it yourself?

Read iFixit's full breakdown here.


How to Quickly and Easily Remove a Background in Photoshop


This article on how to remove a background in Photoshop remains one of our most popular posts and was updated in 2019 for Adobe Photoshop 2020.

Photoshop offers many different techniques for removing a background from an image. For simple backgrounds, using the standard magic wand tool to select and delete the background may well be more than adequate. For more complicated backgrounds, you might use the Background Eraser tool.

The Background Eraser Tool

The Background Eraser tool samples the color at the center of the brush and then deletes pixels of a similar color as you “paint”. The tool isn’t too difficult to get the hang of. Let me show you how it works.

Remove a Background, Step 1: Open your Image

Start by grabbing an image that you want to remove the background from. I’ll be using the image below, as it features areas that range from easy removal through to more challenging spots. I snagged this one for free from Unsplash.

The example image: man standing against lattice background

Now let’s open it in Photoshop.

The example image opened in Photoshop

Remove a Background, Step 2: Select Background Eraser

Select the Background Eraser tool from the Photoshop toolbox. It may be hidden beneath the Eraser tool. If it is, simply click and hold the Eraser tool to reveal it. Alternatively, you can press Shift + E to cycle through all the eraser tools to get to the Background Eraser. If you had the default Eraser tool selected, press Shift + E twice to select the Background Eraser Tool.

choosing the background eraser tool

Remove a Background, Step 3: Tune Your Tool Settings

On the tool options bar at the top of the screen, select a round, hard brush. The most appropriate brush size will vary depending on the image you’re working on. Use the square bracket keys ([ and ]) to quickly scale your brush size.

selecting a brush

Alternatively, you can right-click your mouse anywhere on the artboard to change the size and hardness of your brush too.

alternative way to change brush size

Next, on the tool options bar, make sure Sampling is set to Continuous (it’s the first of the three icons), Limits is set to Find Edges, and Tolerance is in the range of 20–25%.

sampling, limits and tolerance

Note: a lower tolerance means the eraser will pick up on fewer color variations, while a higher tolerance expands the range of colors your eraser will select.

Remove a Background, Step 4: Begin Erasing

Bring your brush over your background and begin to erase. You should see a brush-sized circle with small crosshairs in the center. The crosshairs show the “hotspot” and delete that color wherever it appears inside the brush area. It also performs smart color extraction at the edges of any foreground objects to remove “color halos” that might otherwise be visible if the foreground object is overlaid onto another background.

beginning the process

When erasing, zoom in on your work area and try to keep the crosshairs from overlapping the edge of your foreground. It’s likely that you’ll need to reduce the size of the brush in some places to ensure that you don’t accidentally erase part of your foreground subject.

The post How to Quickly and Easily Remove a Background in Photoshop appeared first on SitePoint.

5 Best Mozilla Firefox Privacy-focused Add-ons

If you’re a privacy enthusiast, you might be using one of the best open-source web browsers: Mozilla Firefox. Even if you don’t care about online privacy, it’s best to opt for…

Visit for full content.

How We Can Solve the Cryptocurrency Energy Usage Problem

Cryptocurrencies and Energy Usage: Problems and Solutions

Bitcoin is still the most important cryptocurrency people know about, and it serves as the entry point of the crypto space. However, every innovative project has to pay its price. For Bitcoin, it is its high carbon footprint created by mining.

Bitcoin mining works by solving cryptographic puzzles, also referred to as Proof of Work (PoW). The miner that’s first to find the solution receives a Bitcoin reward. However, this race towards finding the solution comes with high energy usage, as it’s a resource-intensive process requiring a lot of electricity.
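A toy sketch makes the mechanics concrete. This is drastically simplified compared to real Bitcoin (which uses double SHA-256 against a 256-bit target, with miners searching in parallel), but it shows why the race burns electricity: every failed guess is a hash computation spent.

```python
# Toy Proof of Work: race to find a nonce whose hash has enough
# leading zeros. Vastly simplified from Bitcoin's actual scheme.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Return the first nonce whose SHA-256 digest starts with
    `difficulty` zero hex characters."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1  # each failed guess is work (and electricity) spent

nonce = mine("block #1", difficulty=4)
print(nonce)  # the winning nonce; verifying it takes just one hash
```

The asymmetry is the key economic property: finding the nonce takes enormous effort on average, while anyone can verify the result with a single hash.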

Currently, Bitcoin mining uses 58.93 TWh per year. An online tool by the University of Cambridge showed that Bitcoin uses as much energy as the whole of Switzerland. More important is Bitcoin’s carbon footprint: generating the electricity that powers the Bitcoin network emits around 22 megatons of CO2 per year, comparable to the footprint of a city like Kansas City (US).

This article will cover the following topics:

how the amount of energy consumed by each blockchain project differs depending on the implemented consensus algorithm
possible solutions for the high energy usage of Bitcoin
the effect of the Bitcoin network using a lot of excess and green energy.

To get started, let’s discuss whether Bitcoin’s energy usage really is a problem.

Are We Thinking the Wrong Way about Bitcoin’s Energy Usage?

Let’s take a moment to think about where the energy for Bitcoin mining comes from. It’s worth questioning whether the electricity Bitcoin nodes use actually harms the environment.

Many countries have an excess of electricity, especially when it comes to green energy solutions. The energy coming from green solutions like wind farms or solar plants is often hard to store or sell when supply outweighs demand. This is true for many countries, especially China, which is responsible for 70 percent of the world’s Bitcoin mining.

As Bitcoin mining requires a lot of energy, node operators look for countries with cheap electricity prices. Reuters reported that “wasted [Chinese] wind power amounted to around 12 percent of total generation in 2017”. This means that node operators often end up in countries with an excess of energy. In those countries, Bitcoin mining plays an important role in neutralizing the energy market. Besides that, without Bitcoin mining, this excess electricity would otherwise be wasted.

Is it safe to say that Bitcoin does not contribute to environmental CO2 production? No, it does contribute for sure. However, the energy usage and CO2 pollution Bitcoin is actually responsible for are much lower than we tend to assume.

Think about making a credit card payment. Every time you pull out your credit card to make a transaction, you also contribute to environmental pollution. You are not aware of the gigantic server plants of up to 100,000 square feet that store and process all your transactions, not to mention other things like offices, payment terminals, and bank vaults.

It’s easy to attack Bitcoin for its energy usage, so it’s important to know that there is also an enormous hidden energy cost behind the VISA network. On the other hand, the Bitcoin network processes only 100 million transactions per year, whereas the financial industry reaches up to 500 billion transactions per year.

The post How We Can Solve the Cryptocurrency Energy Usage Problem appeared first on SitePoint.