Getting Started With DOMmy.js

Original Source: https://www.webdesignerdepot.com/2018/11/getting-started-with-dommy-js/

DOMmy.js is a super-lightweight, standalone JavaScript library, designed to make it easy to work with the DOM and to produce powerful CSS3 animations via JS.

Full disclosure: I developed DOMmy.js. And in this tutorial I want to demonstrate how it can be used to keep your webpages nice and light.

DOMmy.js has a very shallow learning curve; it’s even shallower if you have ever used an old-generation framework such as jQuery or Prototype.

DOMmy.js isn’t a next-generation framework like Vue.js, React, or Angular; those are tools that use new technologies such as the virtual DOM, dynamic templating, and data binding; you use next-generation tools to build asynchronous applications.

DOMmy.js is a JavaScript framework for writing “classic” JavaScript code, working with the DOM at the core level. A JavaScript framework like jQuery does a similar job, with three big differences:

jQuery uses a proprietary, internal engine to work with selectors and to produce animations. This engine is entirely JavaScript-based. Conversely, DOMmy.js lets you select any element in the DOM and create powerful animations by using the modern, super-powerful features of both JavaScript and CSS3. I didn’t need to write a JavaScript engine to work with the DOM and animations; the cross-browser, flexible and powerful tools that allow you to do so are already available. I just wanted a JavaScript structure that assists developers in writing DOM controls and CSS3 animations using the JavaScript language.
DOMmy.js is a JavaScript structure that looks to the future. It is written to be compatible with the latest versions of the major browsers, but I didn’t want my code to be compatible with very old software like IE6/7 and the like.
jQuery and Prototype both have complete APIs based on an internal engine; DOMmy.js provides controls for just two main things: DOM operations and animations. Other tasks can easily be accomplished with vanilla JavaScript or by extending the DOMmy.js central structure.

So, DOMmy.js is a cross-browser, super-lightweight (the minified version weighs only 4KB), super-easy to learn, super-fast to execute JavaScript library. In a nutshell, with DOMmy.js you can:

navigate throughout the DOM, by selecting and working with HTML elements and collections of elements;
create powerful CSS3 animations and collections of animations;
add (multiple) events, CSS properties and attributes to elements;
use an element storage to store and retrieve specific content;
work with a coherent this structure;
rely on a cross-browser DOMReady implementation, so you do not need to wait for resources (like images and videos) to load completely in order to work with the DOM.

Installing DOMmy.js

Implementing DOMmy.js into your web page is simple. You only need to include the script through the script tag, and you’ll be ready to start. You can download the script and use it locally or load it through the project’s website:

<script src="https://www.riccardodegni.com/projects/dommy/dommy-min.js"></script>
<script>
  // use dommy.js
  $$$(function() {
    // ...
  });
</script>
The DOM is Ready!

As I said before, with DOMmy.js we don’t need to wait for the resources of the page to load in order to work with the DOM. To do this, we use the $$$ function. The content placed inside this handy function will be executed when the DOM structure (and not the “page”) is ready. Writing code with DOMmy.js is super-fast. I wanted to create a snippet that allowed me to write as little code as possible, so I guess that nothing is faster than writing:

$$$(function() {
  // when DOM is ready do this
});

…in a standalone fashion. Of course, you can use as many DOMReady blocks as you want or need:

// block 1
$$$(function() {
  // when DOM is ready do this
});

// block 2
$$$(function() {
  // when DOM is ready do this
});

// block 3
$$$(function() {
  // when DOM is ready do this
});
Select DOM Elements

So now we can start to work with our DOM structure. You can select the element you want by using an HTML “id”. This is done with the $ function:

// select an element by ID.
// In this case you select the element with ID "myElement"
$('myElement');

And you can select the collection/list of elements you want by using a CSS selector. This is done with the $$ function:

// select a collection of elements by CSS selector
$$('#myid div.myclass p')

Of course you can select multiple elements by using multiple selectors, too:

// a selection of HTML elements
$$('#myfirstelement, #mysecondelement')

// another selection of HTML elements
$$('#myfirstelement div.myclass a, #mysecondelement span')

There are no limits to DOM selection. The elements will be included in the final collection, which you can then manipulate with the DOMmy.js methods.

Adding Events

Adding events to elements (in a cross-browser fashion) is very simple. Just call the on method on the element or collection you want to attach the event(s) to, passing the specific event:

// add an event to an element that fires when you click the element
$('myElement').on('click', function() {
  log('Hey! You clicked on me!');
});

Note: the log function is a built-in function that works as a global, cross-browser shortcut for console.log. If the browser does not support the console object, the result will be printed in a global alert box.
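
Such a fallback might be implemented along these lines (a sketch for illustration, not DOMmy.js’s actual source):

function log() {
  if (window.console && console.log) {
    // Forward all arguments to the native console
    console.log.apply(console, arguments);
  } else {
    // Very old browsers: fall back to a global alert box
    alert(Array.prototype.slice.call(arguments).join(' '));
  }
}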

You can add multiple events at once, of course:

// add multiple events to an element
$$('#myElement p').on({
  // CLICK event
  'click': function() {
    log('Hey, you clicked here!');
  },

  // MOUSEOUT event
  'mouseout': function() {
    log('Hey, you moused out of here!');
  }
});

As you can see, you don’t need to apply the DOMmy.js methods to each element. You apply the methods directly to the result of the DOM selection and the internal engine will properly iterate through the HTML elements.

You can access the “current” element in the iteration simply by using the this keyword:

$('demo').on({
  'click': function() {
    this.css({'width': '300px'})
        .html('Done!');
  }
});
Working With Attributes

In the same way, you can add, edit and retrieve the values of HTML attributes:

// get an attribute
var title = $('myElement').attr('title');

// set an attribute
$('myElement').attr('title', 'my title');

// set multiple attributes
$('myElement').attr({'title': 'my title', 'alt': 'alternate text'});

The attr method works in three different ways:

it returns the value of the specified attribute if the argument you provided is a string;
it sets an HTML attribute to a new value if you pass two arguments;
it sets a collection of HTML attributes if you pass an object of key/value pairs representing the element’s attributes.

Setting CSS Styles

Just like HTML attributes, you can set and get CSS values by means of the css method:

// set single CSS
$('myElement').css('display', 'block');

// set multiple CSS
$('myElement').css({'display': 'block', 'color': 'white'});

// get single CSS
$('myElement').css('display');

// get multiple CSS
$('myElement').css(['display', 'color']);

As you can see, with the powerful css method you can:

set a single CSS property to a new value, if you pass two strings;
get the value of a CSS property, if you pass one string;
set multiple CSS properties, if you pass an object of key/value pairs;
get an array of values, if you pass an array of strings representing CSS properties.

Getting and Setting HTML Content

With the html method you can set and get the element’s HTML value:

// set html
$('myElement').html('new content');

// get html
var content = $('myElement').html();

// logs 'new content'
log(content);

Iteration

If you select more than one element, you can apply a DOMmy.js method to every element just in one call.
However, sometimes you want to work with each element individually, for example when retrieving contents (i.e. HTML content or stored content). In this case, you can use the handy forEach function in the following way:

// get all divs
var myels = $$('div');

// set a stored content
myels.set('val', 10);

// ITERATE through each single div and print its attributes
myels.forEach(function(el, i) {
  log(el.attr('id') + el.get('val') + '\n');
});

The forEach function is the preferred way to iterate through collections of HTML elements with DOMmy.js. When applied to a DOMmy.js collection, its callback receives two parameters:

element: the current DOMmy.js element in the iteration. You can apply every DOMmy.js method to it;
index: an index representing the position of the element in the collection of elements.

Storage

The storage is a space, belonging to each element, where you can store as many values as you want and retrieve them at the desired moment. You can work with the storage by using the set and get methods:

// set storage
var myVal = "hello";
$('myElement').set('myVal', myVal);

// multiple storage
var mySecondVal = "everybody";
$('myElement').set({'myVal': myVal, 'mySecondVal': mySecondVal});

// get
$('myElement').get('myVal') + ' ' + $('myElement').get('mySecondVal');
// "hello everybody"

As you can see, you can store a single item or multiple items at once. The items you store belong to the element you are selecting.
Note: remember that if you are selecting multiple elements, the item will be stored in each of these elements, even if the CSS selectors used are slightly different, because DOMmy.js recognizes each specific element:

// set an item to div#a and div#b
$$('div#a, div#b').set('myStoredValue', 10);

// get from #a, that of course is the same as div#a
$('a').get('myStoredValue'); // 10

Of course, DOMmy.js’s internal mechanics identify “div#a” and “a” / “#a” as pointers to the same element, so you can safely work with storage and other DOMmy.js methods in a coherent way.

If you store the DOM element in a single variable, which is the best way to work with HTML elements, you avoid repeated lookups and save memory:

const myEl = $("div#a div");

// store data
myEl.set('myStoredValue', 10);

// get data
myEl.get('myStoredValue'); // 10
CSS3 Animations

The crown jewel of DOMmy.js is its animation engine. It is built on CSS3 animations, so it works with all the major browsers. Animations are generated through the fx method, which accepts the following arguments:

an object, representing the CSS properties to animate;
a number, representing the duration of the animation, in seconds (the default value is 5 seconds);
a function, representing a callback that will be called once the animation is done;
a boolean, representing whether to chain concurrent animations (the default is false).

Let’s see how to use the fx method, by creating two simple animations.

// simple animation
$('myel').fx({'width': '300px', 'height': '300px'}, 2);

Here we simply animate the CSS properties width and height of #myel over 2 seconds. In the following example we create the same animation with a duration of 1 second and with a callback function that sets the HTML content of the element to the string “Completed!”.

You can access the current element by using the this keyword:

// simple animation with callback
$('myel').fx({'width': '300px', 'height': '300px'}, 1, function() {
  this.html('Completed!');
});
Chaining

You can create magic with “animation chaining”: by passing true as the fourth parameter, you can chain as many animations as you want. To do this, simply call the fx method more than once on a specific selector. In the following example we change the width of all HTML elements that match the “.myel” selector multiple times:

var callBack = function() {
  // do something cool
};

// queue animations
$$('.myel').fx({'width': '400px'}, 2, callBack, true)
  .fx({'width': '100px'}, 4, callBack, true)
  .fx({'width': '50px'}, 6, callBack, true)
  .fx({'width': '600px'}, 8, callBack, true);

Of course you can chain everything. DOMmy.js’s structure allows you to make concurrent calls on elements:

// multiple calls
$$('div#e, #d')
  .fx({'font-size': '40px', 'color': 'yellow'}, 1)
  .fx({'font-size': '10px', 'color': 'red'}, 1)
  .attr('title', 'thediv')
  .attr('class', 'thediv')
  .attr({'lang': 'en', 'dir': 'ltr'});

Remember that the chained calls will be executed immediately. If you want to chain something at the end of a specific animation you have to set a callback for that animation.

Create an Event Handler That Fires Animations

Now, we want to set up a snippet that produces an animation on a specific element. This animation will fire when the user moves the mouse over the element and again when the mouse leaves it. At the end of each step, appropriate HTML content will be set:

$('myElement').on({
  'mouseover': function() {
    this.fx({'width': '300px'}, 1, function() {
      this.html('Completed!');
    });
  },
  'mouseout': function() {
    this.fx({'width': '100px'}, 1, function() {
      this.html('Back again!');
    });
  }
});

As you can see, with DOMmy.js it is super-easy to work with CSS3 animations. Always remember that this refers to the current element.

Now, we want to produce a chained animation that alters the CSS style of an element in four different steps, using four different callbacks, and fires when the user clicks the element:

var clicked = false;

$('myElement').on({
  'click': function() {
    if( !clicked ) {
      clicked = true;
      this.fx({'width': '300px', 'height': '300px', 'background-color': 'red', 'border-width': '10px'}, 1, function() {
        this.html('1');
      }, true)
      .fx({'height': '50px', 'background-color': 'yellow', 'border-width': '4px'}, 1, function() {
        this.html('2');
      }, true)
      .fx({'width': '100px', 'background-color': 'blue', 'border-width': '10px'}, 1, function() {
        this.html('3');
      }, true)
      .fx({'height': '100px', 'background-color': '#3dac5f', 'border-width': '2px'}, 1, function() {
        this.html('4');
        clicked = false;
      }, true);
    }
  }
});

You can see these snippets in action directly in the Demo section of the DOMmy.js project.


Happy First Anniversary, Smashing Members!

Original Source: https://www.smashingmagazine.com/2018/11/smashing-membership-first-anniversary/

Bruce Lawson


Doesn’t time fly? And don’t ships sail? A year ago, we launched our Smashing Membership programme so that members of the Smashing readership could support us for a small amount of money (most people pay $5 or $9 a month, and can cancel at any time). In return, members get access to our ebooks, members-only webinars, discounts on printed books and conferences, and other benefits.

We did this because we wanted to reduce advertising on the site; ad revenues were declining, and the tech-savvy Smashing audience was becoming increasingly aware of the security and privacy implications of ads. And we were inspired by the example of The Guardian, a British newspaper that decided to keep its content outside a paywall but ask readers for support. Just last week, the Guardian’s editor-in-chief revealed that they have the financial support of 1 million people.

Welcome aboard — we’re celebrating! It’s the first year of Smashing Membership (or Smashing Members’ Ship… get it?)!

Into Year Two

We recently welcomed Bruce Lawson to the team as our Membership Commissioning Editor. Bruce is well known for his work on accessibility and web standards, as well as his fashion blog and world-class jokes.

So now that the team is larger, we’ll be bringing you more content — going up to three webinars a month. The price stays the same. And, of course, we’d love your input on subjects or speakers — let us know on Slack.

When we set up Membership, we promised that it would be an inclusive place where lesser-heard voices (in addition to big names) would be beamed straight to your living room/ home office/ sauna over Smashing TV. Next month, for example, Bruce is pleased to host a webinar by Eka, Jing, and Sophia from Indonesia, Singapore, and the Philippines to tell us about the state of the web in South East Asia. Perhaps you’d like to join us?

Please consider becoming a Smashing Member. Your support allows us to bring you great content, pay all our contributors fairly, and reduce advertising on the site.

Thank you so much to all who have helped to make it happen! We sincerely appreciate it.


Image Reveal Hover Effects

Original Source: http://feedproxy.google.com/~r/tympanus/~3/0VgRsK13dTU/

Today we’d like to share a set of link hover effects with you. The main idea is to reveal a thumbnail image with a special effect when hovering over a link. The inspiration for this idea comes from the effect seen on Fuge’s website, where you can see a thumbnail showing when hovering over the underlined links. More effect inspiration comes from Louis Ansa’s portfolio and Zhenya Rynzhuk’s Dribbble shot “Blown Art Works and News Platform”.


The animations are made using TweenMax.

Attention: Note that we use modern CSS properties that might not be supported in older browsers.

Have a look at some of the effects in the demos.


We hope you like these little effects and find them inspirational!

References and Credits

Images from Unsplash.com
TweenMax by Greensock
imagesLoaded by Dave DeSandro

Image Reveal Hover Effects was written by Mary Lou and published on Codrops.

An Extensive Guide To Progressive Web Applications

Original Source: https://www.smashingmagazine.com/2018/11/guide-pwa-progressive-web-applications/


Ankita Masand


It was my dad’s birthday, and I wanted to order a chocolate cake and a shirt for him. I headed over to Google to search for chocolate cakes and clicked on the first link in the search results. There was a blank screen for a few seconds; I didn’t understand what was happening. After a few seconds of staring patiently, my mobile screen filled with delicious-looking cakes. As soon as I clicked on one of them to check its details, I got an ugly fat popup, asking me to install an Android application so that I could get a silky smooth experience while ordering a cake.

That was disappointing. My conscience didn’t allow me to click on the “Install” button. All I wanted to do was order a small cake and be on my way.

I clicked on the cross icon at the very right of the popup to get out of it as soon as I could. But then the installation popup sat at the bottom of the screen, occupying one-fourth of the space. And with the flaky UI, scrolling down was a challenge. I somehow managed to order a Dutch cake.

After this terrible experience, my next challenge was to order a shirt for my dad. As before, I searched Google for shirts. I clicked on the first link, and in a blink, the entire content was right in front of me. Scrolling was smooth. No installation banner. I felt as if I was browsing a native application. There was a moment when my terrible internet connection gave up, but I was still able to see the content instead of a dinosaur game. Even with my janky internet, I managed to order a shirt and jeans for my dad. Most surprising of all, I was getting notifications about my order.

I would call this a silky smooth experience. These people were doing something right. Every website should do it for their users. It’s called a progressive web app.

As Alex Russell states in one of his blog posts:

“It happens on the web from time to time that powerful technologies come to exist without the benefit of marketing departments or slick packaging. They linger and grow at the peripheries, becoming old-hat to a tiny group while remaining nearly invisible to everyone else. Until someone names them.”

A Silky Smooth Experience On The Web, Sometimes Known As A Progressive Web Application

Progressive web applications (PWAs) are more of a methodology that involves a combination of technologies to make powerful web applications. With an improved user experience, people will spend more time on websites and see more advertisements. They tend to buy more, and with notification updates, they are more likely to visit often. The Financial Times abandoned its native apps in 2011 and built a web app using the best technologies available at the time. Now, the product has grown into a full-fledged PWA.

But why, after all this time, would you build a web app when a native app does the job well enough?

Let’s look into some of the metrics shared in Google IO 17.


Five billion devices are connected to the web, making the web the biggest platform in the history of computing. On the mobile web, 11.4 million monthly unique visitors go to the top 1,000 web properties, and 4 million go to the top 1,000 apps. The mobile web garners around four times as many users as native applications. But this number drops sharply when it comes to engagement.

A user spends an average of 188.6 minutes in native apps and only 9.3 minutes on the mobile web. Native applications leverage the power of operating systems to send push notifications to give users important updates. They deliver a better user experience and boot more quickly than websites in a browser. Instead of typing a URL in the web browser, users just have to tap an app’s icon on the home screen.

Most visitors on the web are unlikely to come back, so developers came up with the workaround of showing them banners to install native applications, in an attempt to keep them deeply engaged. But then, users would have to go through the tiresome procedure of installing the binary of a native application. Forcing users to install an application is annoying and further reduces the chance that they will install it in the first place. The opportunity for the web is clear.


If web applications come with a rich user experience, push notifications, offline support and instant loading, they can conquer the world. This is what a progressive web application does.

A PWA delivers a rich user experience because it has several strengths:

Fast
The UI is not flaky. Scrolling is smooth. And the app responds quickly to user interaction.

Reliable
A normal website forces users to wait, doing nothing, while it is busy making round trips to the server. A PWA, meanwhile, loads data instantaneously from the cache. A PWA works seamlessly, even on a 2G connection. Every network request to fetch an asset or piece of data goes through a service worker (more on that later), which first verifies whether the response for a particular request is already in the cache. When users get real content almost instantly, even on a poor connection, they trust the app more and view it as more reliable.

Engaging
A PWA can earn a place on the user’s home screen. It offers a native app-like experience by providing a full-screen work area. It makes use of push notifications to keep users engaged.

Now that we know what PWAs bring to the table, let’s get into the details of what gives PWAs an edge over native applications. PWAs are built with technologies such as service workers, web app manifests, push notifications and IndexedDB/local data structure for caching. Let’s look into each in detail.

Service Workers

A service worker is a JavaScript file that runs in the background without interfering with the user’s interactions. All GET requests to the server go through a service worker. It acts like a client-side proxy. By intercepting network requests, it takes complete control over the response being sent back to the client. A PWA loads instantly because service workers eliminate the dependency on the network by responding with data from the cache.

A service worker can only intercept a network request that is in its scope. For example, a root-scoped service worker can intercept all of the fetch requests coming from a web page. A service worker operates as an event-driven system. It goes into a dormant state when it is not needed, thereby conserving memory. To use a service worker in a web application, we first have to register it on the page with JavaScript.

(function main () {

  /* navigator is a Web API that allows scripts to register themselves and carry out their activities. */
  if ('serviceWorker' in navigator) {
    console.log('Service Worker is supported in your browser')
    /* The register method takes in the path of the service worker file and returns a promise, which resolves to the registration object */
    navigator.serviceWorker.register('./service-worker.js').then (registration => {
      console.log('Service Worker is registered!')
    })
  } else {
    console.log('Service Worker is not supported in your browser')
  }

})()

We first check whether the browser supports service workers. To register a service worker in a web application, we provide its URL as a parameter to the register function, available in navigator.serviceWorker (navigator is a web API that allows scripts to register themselves and carry out their activities). A service worker is registered only once. Registration does not happen on every page load. The browser downloads the service worker file (./service-worker.js) only if there is a byte difference between the existing activated service worker and the newer one or if its URL has changed.

The above service worker will intercept all requests coming from the root (/). To limit the scope of a service worker, we would pass an optional second parameter, an object containing a scope key.

if ('serviceWorker' in navigator) {
  /* The register method takes an optional second parameter, an object. To restrict the scope of a service worker, provide the scope key.
  scope: '/books' will intercept requests with '/books' in the URL. */
  navigator.serviceWorker.register('./service-worker.js', { scope: '/books' }).then(registration => {
    console.log('Service Worker for scope /books is registered', registration)
  })
}

The service worker above will intercept requests that have /books in the URL. For example, it will not intercept requests with /products, but it could very well intercept requests with /books/products.

As mentioned, a service worker operates as an event-driven system. It listens for events (install, activate, fetch, push) and accordingly calls the respective event handler. Some of these events are a part of the life cycle of a service worker, which goes through these events in sequence to get activated.

Installation

Once a service worker has been registered successfully, an installation event is fired. This is a good place to do the initialization work, like setting up the cache or creating object stores in IndexedDB. (IndexedDB will make more sense to you once we get into its details. For now, we can just say that it’s a key-value pair structure.)

self.addEventListener('install', (event) => {
  let CACHE_NAME = 'xyz-cache'
  let urlsToCache = [
    '/',
    '/styles/main.css',
    '/scripts/bundle.js'
  ]
  event.waitUntil(
    /* The open method available on caches takes the name of the cache as its first parameter. It returns a promise that resolves to the instance of the cache.
    All the URLs above can be added to the cache using the addAll method. */
    caches.open(CACHE_NAME)
      .then (cache => cache.addAll(urlsToCache))
  )
})

Here, we’re caching some of the files so that the next load is instant. self refers to the service worker instance. event.waitUntil makes the service worker wait until all of the code inside it has finished execution.

Activation

Once a service worker has been installed, it cannot yet listen for fetch requests. Rather, an activate event is fired. If no active service worker is operating on the website in the same scope, then the installed service worker gets activated immediately. However, if a website already has an active service worker, then the activation of a new service worker is delayed until all of the tabs operating on the old service worker are closed. This makes sense because the old service worker might be using the instance of the cache that is now modified in the newer one. So, the activation step is a good place to get rid of old caches.

self.addEventListener('activate', (event) => {
  let cacheWhitelist = ['products-v2'] // products-v2 is the name of the new cache

  event.waitUntil(
    caches.keys().then (cacheNames => {
      return Promise.all(
        cacheNames.map( cacheName => {
          /* Deleting all the caches except the ones that are in the cacheWhitelist array */
          if (cacheWhitelist.indexOf(cacheName) === -1) {
            return caches.delete(cacheName)
          }
        })
      )
    })
  )
})

In the code above, we’re deleting the old caches. If the name of a cache doesn’t match the cacheWhitelist, it is deleted. To skip the waiting phase and immediately activate the service worker, we use self.skipWaiting().

self.addEventListener('activate', (event) => {
  self.skipWaiting()
  // The usual stuff
})

Once a service worker is activated, it can listen for fetch requests and push events.

Fetch Event Handler

Whenever a web page fires a fetch request for a resource over the network, the fetch event from the service worker gets called. The fetch event handler first looks for the requested resource in the cache. If it is present in the cache, then it returns the response with the cached resource. Otherwise, it initiates a fetch request to the server, and when the server sends back the response with the requested resource, it puts it to the cache for subsequent requests.

/* Fetch event handler for responding to GET requests with the cached assets */
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.open('products-v2')
      .then (cache => {
        /* Checking if the request is already present in the cache. If it is present, sending it directly to the client */
        return cache.match(event.request).then (response => {
          if (response) {
            console.log('Cache hit! Fetching response from cache', event.request.url)
            return response
          }
          /* If the request is not present in the cache, we fetch it from the server and then put it in cache for subsequent requests. */
          return fetch(event.request).then (response => {
            cache.put(event.request, response.clone())
            return response
          })
        })
      })
  )
})

event.respondWith lets the service worker send a customized response to the client.

Offline-first is now a thing. For any non-critical request, we must serve the response from the cache, instead of making a round trip to the server. If an asset is not present in the cache, we get it from the server and then cache it for subsequent requests.

Service workers only work on HTTPS websites because they have the power to manipulate the response of any fetch request. Someone with malicious intent might tamper with the response for a request on an HTTP website. So, hosting a PWA on HTTPS is mandatory. Service workers do not interrupt the normal functioning of the DOM. They cannot communicate directly with the web page; to send a message to a web page, a service worker makes use of post messages.
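
Although the article doesn’t show it, that messaging channel is worth a quick sketch. The message name used here is purely illustrative:

/* service-worker.js: broadcast a message to every controlled page, e.g. after a cache update */
self.clients.matchAll().then (clients => {
  clients.forEach (client => {
    client.postMessage({ type: 'cache-updated' }) // 'cache-updated' is an illustrative message type
  })
})

/* page script: listen for messages coming from the service worker */
navigator.serviceWorker.addEventListener('message', (event) => {
  console.log('Message from service worker:', event.data)
})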

Web Push Notifications

Let’s suppose you’re busy playing a game on your mobile, and a notification pops up telling you of a 30% discount on your favorite brand. Without any further ado, you click on the notification and shop your breath out. Getting live updates on, say, a cricket or football match, or getting important emails and reminders as notifications, is a big deal when it comes to engaging users with a product. This feature was only available in native applications until PWAs came along. A PWA makes use of web push notifications to compete with this powerful feature that native apps provide out of the box. A user will still receive a web push notification even if the PWA is not open in any browser tab, and even if the browser is not open.

A web application has to ask the user’s permission to send them push notifications.

Browser prompt asking permission for web push notifications.

Once the user confirms by clicking the “Allow” button, a unique subscription token is generated by the browser. This token is unique for this device. The format of the subscription token generated by Chrome is as follows:

{
  "endpoint": "https://fcm.googleapis.com/fcm/send/c7Veb8VpyM0:APA91bGnMFx8GIxf__UVy6vJ-n9i728CUJSR1UHBPAKOCE_SrwgyP2N8jL4MBXf8NxIqW6NCCBg01u8c5fcY0kIZvxpDjSBA75sVz64OocQ-DisAWoW7PpTge3SwvQAx5zl_45aAXuvS",
  "expirationTime": null,
  "keys": {
    "p256dh": "BJsj63kz8RPZe8Lv1uu-6VSzT12RjxtWyWCzfa18RZ0-8sc5j80pmSF1YXAj0HnnrkyIimRgLo8ohhkzNA7lX4w",
    "auth": "TJXqKozSJxcWvtQasEUZpQ"
  }
}

The endpoint contained in the token above will be unique for every subscription. On an average website, thousands of users would agree to receive push notifications, and for each of them, this endpoint would be unique. So, with the help of this endpoint, the application is able to target these users in the future by sending them push notifications. The expirationTime is the amount of time that the subscription is valid for a particular device. If the expirationTime is 20 days, it means that the push subscription of the user will expire after 20 days and the user won’t be able to receive push notifications on the older subscription. In this case, the browser will generate a new subscription token for that device. The auth and p256dh keys are used for encryption.

Now, to send push notifications to these thousands of users in the future, we first have to save their respective subscription tokens. It’s the job of the application server (the back-end server, maybe a Node.js script) to send push notifications to these users. This might sound as simple as making a POST request to the endpoint URL with the notification data in the request payload. However, it should be noted that if a user is not online when a push notification intended for them is triggered by the server, they should still get that notification once they come back online. The server would have to take care of such scenarios, along with sending thousands of requests to the users. A server keeping track of the user’s connection sounds complicated. So, something in the middle would be responsible for routing web push notifications from the server to the client. This is called a push service, and every browser has its own implementation of a push service. The browser has to tell the following information to the push service in order to send any notification:

The time to live
This is how long a message should be queued, in case it is not delivered to the user. Once this time has elapsed, the message will be removed from the queue.
Urgency of the message
This is so that the push service preserves the user’s battery by sending only high-priority messages.

The push service routes the messages to the client. Because push has to be received by the client even if its respective web application is not open in the browser, push events have to be listened to by something that continuously monitors in the background. You guessed it: That’s the job of the service worker. The service worker listens for push events and does the job of showing notifications to the user.

So, now we know that the browser, push service, service worker and application server work in harmony to send push notifications to the user. Let’s look into the implementation details.

Web Push Client

Asking permission of the user is a one-time thing. If a user has already granted permission to receive push notifications, we shouldn’t ask again. The permission value is saved in Notification.permission.

/* Notification.permission can have one of these three values: default, granted or denied. */
if (Notification.permission === 'default') {
  /* The Notification.requestPermission() method shows a notification permission prompt to the user. It returns a promise that resolves to the value of permission */
  Notification.requestPermission().then (result => {
    if (result === 'denied') {
      console.log('Permission denied')
      return
    }

    if (result === 'granted') {
      console.log('Permission granted')
      /* This means the user has clicked the Allow button. We're to get the subscription token generated by the browser and store it in our database.

      The subscription token can be fetched using the getSubscription method available on the pushManager of the serviceWorkerRegistration object. If a subscription is not available, we subscribe using the subscribe method available on pushManager. The subscribe method takes in an object.
      */

      serviceWorkerRegistration.pushManager.getSubscription()
        .then (subscription => {
          if (!subscription) {
            const applicationServerKey = '' // The VAPID public key goes here
            serviceWorkerRegistration.pushManager.subscribe({
              userVisibleOnly: true, // All push notifications from the server should be displayed to the user
              applicationServerKey // VAPID public key
            }).then (newSubscription => saveSubscriptionInDB(newSubscription, userId)) // Save the new subscription token in the database
          } else {
            saveSubscriptionInDB(subscription, userId) // A method to save the subscription token in the database
          }
        })
    }
  })
}

In the subscribe method above, we’re passing userVisibleOnly and applicationServerKey to generate a subscription token. The userVisibleOnly property should always be true because it tells the browser that any push notification sent by the server will be shown to the client. To understand the purpose of applicationServerKey, let’s consider a scenario.

If some person gets ahold of thousands of your subscription tokens, they could very well send notifications to the endpoints contained in these subscriptions. There would be no way to link the endpoints to your unique identity. To provide a unique identity to the subscription tokens generated on your web application, we make use of the VAPID protocol. With VAPID, the application server voluntarily identifies itself to the push service while sending push notifications. We generate two keys like so:

const webpush = require('web-push')
const vapidKeys = webpush.generateVAPIDKeys()

web-push is an npm module. vapidKeys will have one public key and one private key. The application server key used above is the public key.
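
A practical note: the keys should be generated once and then reused, not regenerated on every server start (a new key pair would invalidate existing subscriptions). A minimal sketch of that one-time step:

/* generate-keys.js: run once, then keep the output somewhere safe */
const webpush = require('web-push')

const vapidKeys = webpush.generateVAPIDKeys()
console.log('Public key:', vapidKeys.publicKey) // goes to the client as the applicationServerKey
console.log('Private key:', vapidKeys.privateKey) // stays on the server, e.g. in an environment variable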

Web Push Server

The job of the web push server (application server) is straightforward. It sends a notification payload to the subscription tokens.

const options = {
  TTL: 24*60*60, // TTL is the time to live, the time that the notification will be queued in the push service
  vapidDetails: {
    subject: 'mailto:email@example.com', // web-push expects a mailto: or https: contact URL
    publicKey: '',
    privateKey: ''
  }
}
const data = {
  title: 'Update',
  body: 'Notification sent by the server'
}
webpush.sendNotification(subscription, JSON.stringify(data), options) // The payload must be a string or buffer

It uses the sendNotification method from the web-push library.

Service Workers

The service worker shows the notification to the user as such:

self.addEventListener('push', (event) => {
  /* event.data is a PushMessageData object; json() parses the JSON payload sent by the server */
  let data = event.data.json()
  let options = {
    body: data.body,
    icon: 'images/example.png',
  }
  event.waitUntil(
    /* The showNotification method is available on the registration object of the service worker.
    The first parameter to the showNotification method is the title of the notification, and the second parameter is an object */
    self.registration.showNotification(data.title, options)
  )
})

So far, we’ve seen how a service worker makes use of the cache to store requests and make a PWA fast and reliable, and we’ve seen how web push notifications keep users engaged.

To store a bunch of data on the client side for offline support, we need a giant data structure. Let’s look into the Financial Times PWA. You’ve got to witness the power of this data structure for yourself. Load the URL in your browser, and then switch off your internet connection. Reload the page. Gah! Is it still working? It is. (Like I said, offline is the new black.) Data is not coming from the wires. It is being served from the house. Head over to the “Applications” tab of Chrome Developer Tools. Under “Storage”, you’ll find “IndexedDB”.

IndexedDB on the Financial Times PWA.

Check out the “Articles” object store, and expand any of the items to see the magic for yourself. The Financial Times has stored this data for offline support. This data structure that lets us store a massive amount of data is called IndexedDB. IndexedDB is a JavaScript-based, object-oriented database for storing structured data. We can create different object stores in this database for various purposes. For example, as we can see in the image above, “Resources”, “ArticleImages” and “Articles” are object stores. Each record in an object store is uniquely identified with a key. IndexedDB can even be used to store files and blobs.

Let’s try to understand IndexedDB by creating a database for storing books.

let openIdbRequest = window.indexedDB.open('booksdb', 1)

If the database booksdb doesn’t already exist, the code above will create a booksdb database. The second parameter to the open method is the version of the database. Specifying a version takes care of the schema-related changes that might happen in the future. For example, booksdb now has only one table, but when the application grows, we intend to add two more tables to it. To make sure our database is in sync with the updated schema, we’ll specify a higher version than the previous one.

Calling the open method doesn’t open the database right away. It’s an asynchronous request that returns an IDBOpenDBRequest object. This object has success and error properties; we’ll have to write appropriate handlers for these properties to manage the state of our connection.

let dbInstance
openIdbRequest.onsuccess = (event) => {
  dbInstance = event.target.result
  console.log('booksdb is opened successfully')
}

openIdbRequest.onerror = (event) => {
  console.log('There was an error in opening booksdb database')
}

openIdbRequest.onupgradeneeded = (event) => {
  let db = event.target.result
  let objectstore = db.createObjectStore('books', { keyPath: 'id' })
}

To manage the creation or modification of object stores (object stores are analogous to SQL-based tables — they have a key-value structure), the onupgradeneeded handler is called on the openIdbRequest object. The onupgradeneeded handler is invoked whenever the version changes. In the code snippet above, we’re creating a books object store with id as its unique key.

Let’s say that, after deploying this piece of code, we have to create one more object store, called users. So, now the version of our database will be 2.

let openIdbRequest = window.indexedDB.open('booksdb', 2) // New version: 2

/* Success and error event handlers remain the same.
The onupgradeneeded handler gets called when the version of the database changes. */
openIdbRequest.onupgradeneeded = (event) => {
  let db = event.target.result
  if (!db.objectStoreNames.contains('books')) {
    let objectstore = db.createObjectStore('books', { keyPath: 'id' })
  }

  let oldVersion = event.oldVersion
  let newVersion = event.newVersion

  /* The users table should be added for version 2. If the existing version is 1, it will be upgraded to 2, and the users object store will be created. */
  if (oldVersion === 1) {
    db.createObjectStore('users', { keyPath: 'id' })
  }
}

We’ve cached dbInstance in the success event handler of the open request. To retrieve or add data in IndexedDB, we’ll make use of dbInstance. Let’s add some book records to our books object store.

let transaction = dbInstance.transaction('books', 'readwrite') // 'readwrite' mode is needed to add records
let objectstore = transaction.objectStore('books')

let bookRecord = {
  id: '1',
  name: 'The Alchemist',
  author: 'Paulo Coelho'
}
let addBookRequest = objectstore.add(bookRecord)

addBookRequest.onsuccess = (event) => {
  console.log('Book record added successfully')
}

addBookRequest.onerror = (event) => {
  console.log('There was an error in adding book record')
}

We make use of transactions, especially while writing records on object stores. A transaction is simply a wrapper around an operation to ensure data integrity. If any of the actions in a transaction fails, then no action is performed on the database.
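
To make that guarantee visible, here is a small sketch (my own addition) using the lifecycle events of IDBTransaction:

let txn = dbInstance.transaction('books', 'readwrite')

txn.oncomplete = (event) => {
  /* Every request in the transaction succeeded; the changes are committed */
  console.log('Transaction completed')
}

txn.onabort = (event) => {
  /* A failed request (or an explicit txn.abort()) rolled back all changes */
  console.log('Transaction aborted; nothing was written')
}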

Let’s modify a book record with the put method:

let modifyBookRequest = objectstore.put(bookRecord) // put method takes in the updated object as the parameter
modifyBookRequest.onsuccess = (event) => {
  console.log('Book record updated successfully')
}

Let’s retrieve a book record with the get method:

let transaction = dbInstance.transaction('books')
let objectstore = transaction.objectStore('books')

/* get method takes in the id of the record */
let getBookRequest = objectstore.get('1')

getBookRequest.onsuccess = (event) => {
  /* event.target.result contains the matched record */
  console.log('Book record', event.target.result)
}

getBookRequest.onerror = (event) => {
  console.log('Error while retrieving the book record.')
}
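
Records can also be read in bulk. Modern browsers support the getAll method on object stores; a brief sketch:

let transaction = dbInstance.transaction('books')
let objectstore = transaction.objectStore('books')

/* getAll resolves with an array of all records in the store */
let getAllBooksRequest = objectstore.getAll()

getAllBooksRequest.onsuccess = (event) => {
  console.log('All book records', event.target.result)
}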

Adding Icon On Home Screen

Now that there is hardly any distinction between a PWA and a native application, it makes sense to offer the PWA a prime position. If your website fulfills the basic criteria of a PWA (hosted on HTTPS, integrating with service workers and having a manifest.json), then after the user has spent some time on the web page, the browser will invoke a prompt at the bottom, asking the user to add the app to their home screen, as shown below:

Prompt to add the Financial Times PWA to the home screen.

When a user clicks on “Add FT to Home screen”, the PWA gets to set its foot on the home screen, as well as in the app drawer. When a user searches for any application on their phone, any PWAs that match the search query will be listed. They will also be seen in the system settings, which makes it easy for users to manage them. In this sense, a PWA behaves like a native application.

PWAs make use of manifest.json to provide this feature. Let’s look into a simple manifest.json file.

{
  "name": "Demo PWA",
  "short_name": "Demo",
  "start_url": "/?standalone",
  "background_color": "#9F0C3F",
  "theme_color": "#fff1e0",
  "display": "standalone",
  "icons": [{
    "src": "/lib/img/icons/xxhdpi.png?v2",
    "sizes": "192x192"
  }]
}

The short_name appears on the user’s home screen and in the system settings. The name appears in the Chrome prompt and on the splash screen. The splash screen is what the user sees when the app is getting ready to launch. The start_url is the main screen of your app. It’s what users get when they tap an icon on the home screen. The background_color is used on the splash screen. The theme_color sets the color of the toolbar. The standalone value for display mode says that the app is to be operated in full-screen mode (hiding the browser’s toolbar). When a user installs a PWA, its size is merely in kilobytes, rather than the megabytes of native applications.
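
One detail the example above doesn’t show: the manifest must be referenced from the HTML document for the browser to pick it up. Assuming the file is served at /manifest.json, a single link tag does the job:

<link rel="manifest" href="/manifest.json">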

Service workers, web push notifications, IndexedDB, and the home screen position add up to offline support, reliability, and engagement. It should be noted that a service worker doesn’t come to life and start doing its work on the very first load. The first load will still be slow until all of the static assets and other resources have been cached. We can implement some strategies to optimize the first load.

Bundling Assets

All of the resources, including the HTML, style sheets, images and JavaScript, are to be fetched from the server. The more files, the more HTTP requests needed to fetch them. We can use bundlers like webpack to bundle our static assets, hence reducing the number of HTTP requests to the server. webpack does a great job of further optimizing the bundle by using techniques such as code-splitting (i.e. bundling only those files that are required for the current page load, instead of bundling all of them together) and tree shaking (i.e. removing dependencies that are imported but not used in the code).
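
As an illustration, here is a minimal webpack configuration sketch (the entry and output paths are assumptions); in application code, a dynamic import('./module.js') would create an additional on-demand split point:

// webpack.config.js: a minimal sketch
const path = require('path')

module.exports = {
  entry: './src/index.js',
  output: {
    filename: '[name].[contenthash].js', // content-based names play well with long-term caching
    path: path.resolve(__dirname, 'dist')
  },
  optimization: {
    // Extract shared dependencies into separate, cacheable chunks
    splitChunks: { chunks: 'all' }
  }
}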

Reducing Round Trips

One of the main reasons for slowness on the web is network latency. The time it takes for a byte to travel from A to B varies with the network connection. For example, a particular round trip might take 50 milliseconds over Wi-Fi, 500 milliseconds on a 3G connection, and 2500 milliseconds on a 2G connection. These requests are sent using the HTTP protocol, which means that while a particular connection is being used for a request, it cannot be used for any other requests until the response of the previous request is served. A website can make six asynchronous HTTP requests at a time, because six connections are available to a website for making HTTP requests. An average website makes roughly 100 requests; so, with a maximum of six connections available, those requests need about 17 sequential round trips (100 ÷ 6 ≈ 16.7), which at 50 milliseconds per round trip comes to roughly 833 milliseconds of latency alone. With HTTP/2 in place, the turnaround time is drastically reduced: HTTP/2 multiplexes requests over a single connection without blocking on the head of the line, so multiple requests can be sent simultaneously.

Most HTTP responses contain Last-Modified and ETag headers. The Last-Modified header is the date when the file was last modified, and an ETag is a unique value based on the contents of the file; it changes only when the contents of the file change. Both of these headers can be used to avoid downloading the file again if a cached version is already locally available. If the browser has a version of this file locally available, it can add either of these two headers in the request, as such:

Adding the ETag and Last-Modified headers prevents re-downloading of valid cached assets.

The server can check whether the contents of the file have changed. If the contents of the file have not changed, then it responds with a status code of 304 (not modified).

The If-None-Match header prevents downloading of valid cached assets.

This indicates to the browser to use the locally available cached version of the file. By doing all of this, we’ve prevented the file from being downloaded.
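
To make the exchange concrete, here is a rough Node/Express sketch of a server honoring If-None-Match (the route, file contents and hashing choice are all assumptions; Express can also handle ETags automatically):

const express = require('express')
const crypto = require('crypto')
const app = express()

app.get('/asset.js', (req, res) => {
  const body = 'console.log("asset")' // stands in for the real file contents
  const etag = crypto.createHash('md5').update(body).digest('hex')

  // If the browser's cached copy matches, answer 304 with no body
  if (req.headers['if-none-match'] === etag) {
    res.status(304).end()
    return
  }

  res.set('ETag', etag)
  res.send(body)
})

app.listen(3000)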

Faster responses are now in place, but our job is not done yet. We still have to parse the HTML, load the style sheets and make the web page interactive. It makes sense to show some empty boxes with a loader to the user, instead of a blank screen. While the HTML document is being parsed, whenever it comes across <script src='asset.js'></script>, it will make a synchronous HTTP request to the server to fetch asset.js, and the whole parsing process will be paused until the response comes back. Imagine having a dozen synchronous static asset references. These could very well be managed just by making use of the async keyword in script references, like <script src='asset.js' async></script>. With the introduction of the async keyword here, the browser will make an asynchronous request to fetch asset.js without hindering the parsing of the HTML. If a script file is required at a later stage, we can defer the downloading of that file until the entire HTML has been parsed. A script file can be deferred by using the defer keyword, like <script src='asset.js' defer></script>.

Conclusion

We’ve learned a lot of new things that make for a cool web application. Here’s a summary of all of the things we’ve explored in this article:

how service workers make good use of the cache to speed up the loading of assets;
how web push notifications work under the hood;
how IndexedDB is used to store a massive amount of data;
some of the optimizations for an instant first load, like using HTTP/2 and adding headers like ETag, Last-Modified and If-None-Match to prevent the re-downloading of valid cached assets.

That’s all, folks!


OptinMonster Black Friday / Cyber Monday – 35% OFF all plans

Original Source: https://inspiredm.com/optinmonster-black-friday-cyber-monday-35-off-all-plans/

Here’s a quick one. Guess what digital marketers consider to be the most difficult part of their job.

Ok, the fact is 65% of businesses believe that generating site visitors, and subsequently leads, is exceptionally difficult. It’s undoubtedly the one task you don’t want assigned to you.

But then again, generating leads is oddly satisfying. Especially when you systematically source them from your site’s organic traffic.

And you know what? Believe it or not, the most challenging bit is arguably attracting a consistent flow of traffic. That’s why it would be extremely painful if you ended up losing your visitors without converting a substantial fraction of them.

Interestingly, however, many marketers consider that a real possibility because of the simple fact that 96% of a website’s visitors are usually not ready to purchase yet.

And then there’s a special group of marketers who’ve managed to overcome that by strategically leveraging lead generation tools. I’m talking about the elite squad that powers onsite lead generation with effective opt-in lead generation solutions like OptinMonster and the like.

Astonishing, isn’t it? I guess that’s why I’m particularly fond of seeking ways to beat the system. And this time around, I’ll let you in on a juicy one I stumbled upon. It just so happens that OptinMonster is offering an exclusive 35% discount on all its plans this Black Friday and Cyber Monday, from November 19th to 26th.

To top it off, it turns out the discount is applicable to a full year, not just one month. So, let’s find out more about this, starting with the features.

Feature Highlights of OptinMonster Lead Generation
Drag-and-Drop Builder

OptinMonster, for starters, provides a dynamic drag-and-drop builder for creating your own uniquely striking customized optin forms.

Then get this: it works entirely without any form of programming. The tool basically allows users to build from scratch using the canvas, or to proceed with pre-designed templates that can be customized further. In the end, you’re able to combine various eye-catching forms with a wide selection of sound effects and animations.

Multiple Campaign Types

Businesses often use varying approaches when it comes to lead generation and marketing. OptinMonster makes a good attempt at facilitating all the possible strategies through an extensively flexible optin framework. In essence, it comes with the following campaigns:

- Content Locker: for changing articles into gated elements to facilitate opt-ins.

- Inline Forms: for attaching forms to your webpage’s content.

- Sidebar Forms: for placing forms on either side of your site’s pages.

- Countdown Timer: for encouraging visitors to urgently submit their opt-in information.

- Floating Bar: for flexible forms that follow visitors around as they surf through a page.

- Slide-In Scroll Box: for special forms that shoot up from one of the screen’s corners.

- Fullscreen Welcome Mat: for momentarily displaying forms that take up the entire screen.

- Lightbox Popup: for standard forms that pop up on the screen.

Campaign Triggers

OptinMonster heavily uses artificial intelligence to analyze visitor behavior and display forms at the right time, based on pre-set trigger parameters. The triggering systems are:

- Campaign Scheduling: establish periods for showing selected campaigns.

- Timed Display Control: define specific times for displaying campaigns.

- InactivitySensor: use special campaigns that specifically show up when visitors are inactive.

- MonsterLinks 2-Step Optins: use ideal images and links as pathways to opt-in forms.

- Scroll Trigger: unleash selected opt-in forms as soon as visitors scroll to a certain point on the web page.

- Exit-Intent Technology: drop campaign forms on surfers exiting your web page.

Targeted Campaigns

The AI also comes in handy when you need to engage a defined set of visitors based on their exact demographics. The systems used to target them include:

- AdBlock Detection: for engaging visitors who are locking out potential ad income with their AdBlockers.

- Device-Based Targeting: for reaching out to surfers based on their respective devices.

- Cookie Retargeting: for unleashing forms aligned to visitors’ cookies.

- Geo-location Targeting: for showing forms according to visitors’ geographical positions.

- Onsite Retargeting: for engaging traffic that has visited your site before.

- Onsite Follow Up Campaigns: for launching systematic messages aligned with actions taken by visitors.

- Page-Level Targeting: for determining forms as they apply to special site zones accessed by visitors.

- Referrer Detection: for displaying forms based on the traffic source.

Seamless Integrations

Lead generation is only the first stage of the conversion pipeline. There’s still a long way to go to successfully trigger purchases, and the bulk of it involves a thorough conversion process, complete with the relevant engagement tools. Since OptinMonster doesn’t handle all of that itself, it conveniently integrates with a wide array of third-party email marketing services, plus website and ecommerce platforms.

Actionable Insights

All things considered, lead generation is a holistic operation with multiple interconnected variables. Their collective impact depends on the complex set of decisions made at every stage, regarding each distinct parameter. To help you with that, OptinMonster keeps you informed through its system of actionable insights, involving conversion analytics, A/B testing, and real-time behavior automation.

Normal OptinMonster Pricing

OptinMonster user subscription plans usually cost:

- $19 per month, or $108 per year, for the Basic Plan.
- $39 per month, or $228 per year, for the Plus Plan.
- $59 per month, or $348 per year, for the Pro Plan.
- $99 per month, or $588 per year, for the Growth Plan.

Special OptinMonster Black Friday/Cyber Monday Discount Pricing

The good news? With the 35% discount applied across the year, you get to pay:

- $70.20 instead of $108 for Basic
- $148.20 instead of $228 for Plus
- $226.20 instead of $348 for Pro
- $382.20 instead of $588 for Growth

That essentially translates to 12 discounted months. Quite a hack, right?

What To Do

And now for the revelation…

Just proceed to OptinMonster’s main site and enter BF2018 as the discount code. Then have all the fun generating leads at a substantially reduced price.


Avoiding The Pitfalls Of Automatically Inlined Code

Original Source: https://www.smashingmagazine.com/2018/11/pitfalls-automatically-inlined-code/


Leonardo Losoviz

2018-11-26

Inlining is the process of including the contents of files directly in the HTML document: CSS files can be inlined inside a style element, and JavaScript files can be inlined inside a script element:

<style>
/* CSS contents here */
</style>

<script>
/* JS contents here */
</script>

By printing the code directly in the HTML output, inlining avoids render-blocking requests and executes the code before the page is rendered. As such, it is useful for improving the perceived performance of the site (i.e. the time it takes for a page to become usable). For instance, we can use the buffer of data delivered immediately when loading the site (around 14kb) to inline the critical styles, including styles of above-the-fold content (as had been done on the previous Smashing Magazine site), plus font sizes and layout widths and heights, to avoid a jumpy layout re-rendering when the rest of the data is delivered.

However, when overdone, inlining can also hurt the performance of the site: Because the code is not cacheable, the same content is sent to the client repeatedly, and it can’t be pre-cached through Service Workers, or cached and accessed from a Content Delivery Network. In addition, inline scripts are considered unsafe when implementing a Content Security Policy (CSP). A sensible strategy, then, is to inline only those critical portions of CSS and JS that make the site load faster, and to avoid inlining everywhere else.

With the objective of avoiding inlining, in this article we will explore how to convert inline code into static assets: instead of printing the code in the HTML output, we save it to disk (effectively creating a static file) and add the corresponding <script> or <link> tag to load the file.

Let’s get started!

Recommended reading: WordPress Security As A Process


When To Avoid Inlining

There is no magic recipe for establishing whether some code must be inlined or not; however, it can be pretty evident when some code must not be inlined: when it involves a big chunk of code, and when it is not needed immediately.

As an example, WordPress sites inline the JavaScript templates to render the Media Manager (accessible in the Media Library page under /wp-admin/upload.php), printing a sizable amount of code:

JavaScript templates inlined by the WordPress Media Manager.

Occupying a full 43kb, the size of this piece of code is not negligible, and since it sits at the bottom of the page it is not needed immediately. Hence, it would make plenty of sense to serve this code through static assets instead of printing it inside the HTML output.

Let’s see next how to transform inline code into static assets.

Triggering The Creation Of Static Files

If the contents (the ones to be inlined) come from a static file, then there is not much to do other than simply request that static file instead of inlining the code.

For dynamic code, though, we must plan how/when to generate the static file with its contents. For instance, if the site offers configuration options (such as changing the color scheme or the background image), when should the file containing the new values be generated? We have the following opportunities for creating the static files from the dynamic code:

On request
When a user accesses the content for the first time.
On change
When the source for the dynamic code (e.g. a configuration value) has changed.

Let’s consider on request first. The first time a user accesses the site, let’s say through /index.html, the static file (e.g. header-colors.css) doesn’t exist yet, so it must be generated then. The sequence of events is the following:

The user requests /index.html;
When processing the request, the server checks if the file header-colors.css exists. Since it does not, it obtains the source code and generates the file on disk;
It returns a response to the client, including the tag <link rel="stylesheet" type="text/css" href="/staticfiles/header-colors.css">;
The browser fetches all the resources included in the page, including header-colors.css;
By then this file exists, so it is served.
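
For illustration, the on-request check could look roughly like this in PHP (a minimal sketch; generate_header_colors_css is a hypothetical helper standing in for the dynamic source):

// Hypothetical on-request generation, run while processing /index.html:
// create the static file only if it does not exist yet
$file_path = WP_CONTENT_DIR . '/staticfiles/header-colors.css';
if (!file_exists($file_path)) {

// Obtain the dynamic source code (hypothetical helper)
file_put_contents($file_path, generate_header_colors_css());
}

// Then print the tag referencing the (now existing) file
printf(
'<link rel="stylesheet" type="text/css" href="%s">',
esc_url(content_url('/staticfiles/header-colors.css'))
);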

However, the sequence of events could also be different, leading to an unsatisfactory outcome. For instance:

The user requests /index.html;
This file is already cached by the browser (or some other proxy, or through Service Workers), so the request is never sent to the server;
The browser fetches all the resources included in the page, including header-colors.css. This file is, however, not cached in the browser, so the request is sent to the server;
The server hasn’t generated header-colors.css yet (e.g. it was just restarted);
It will return a 404.

Alternatively, we could generate header-colors.css not when requesting /index.html, but when requesting /header-colors.css itself. However, since this file initially doesn’t exist, the request is already treated as a 404. Even though we could hack our way around it, altering the headers to change the status code to a 200 and returning the content of the file, this is a terrible way of doing things, so we will not entertain this possibility (we are much better than this!)

That leaves only one option: generating the static file after its source has changed.

Creating The Static File When The Source Changes

Please notice that we can create dynamic code from both user-dependent and site-dependent sources. For instance, if the theme allows the site’s background image to be changed, and that option is configured by the site’s admin, then the static file can be generated as part of the deployment process. On the other hand, if the site allows its users to change the background image for their profiles, then the static file must be generated at runtime.

In a nutshell, we have these two cases:

User Configuration
The process must be triggered when the user updates a configuration.
Site Configuration
The process must be triggered when the admin updates a configuration for the site, or before deploying the site.

If we considered the two cases independently, for #2 we could design the process on any technology stack we wanted. However, we don’t want to implement two different solutions, but a unique solution which can tackle both cases. And because in #1 the process to generate the static file must be triggered on the running site, it is compelling to design this process around the same technology stack the site runs on.

When designing the process, our code will need to handle the specific circumstances of both #1 and #2:

Versioning
The static file must be accessed with a “version” parameter, in order to invalidate the previous file upon the creation of a new static file. While #2 could simply have the same versioning as the site, #1 needs to use a dynamic version for each user, possibly saved in the database.
Location of the generated file
#2 generates a unique static file for the whole site (e.g. /staticfiles/header-colors.css), while #1 creates a static file for each user (e.g. /staticfiles/users/leo/header-colors.css).
Triggering event
While for #1 the process must run at runtime, for #2 it can also run as part of a build process in our staging environment.
Deployment and distribution
Static files in #2 can be seamlessly integrated inside the site’s deployment bundle, presenting no challenges; static files in #1, however, cannot, so the process must handle additional concerns, such as multiple servers behind a load balancer (will the static files be created in 1 server only, or in all of them, and how?).

Let’s design and implement the process next. For each static file to be generated we must create an object containing the file’s metadata, calculate its content from the dynamic sources, and finally save the static file to disk. As a use case to guide the explanations below, we will generate the following static files:

header-colors.css, with some style from values saved in the database
welcomeuser-data.js, containing a JSON object with user data under some variable: window.welcomeUserData = {name: "Leo"};.

Below, I will describe the process to generate the static files for WordPress, for which we must base the stack on PHP and WordPress functions. The function to generate the static files before deployment can be triggered by loading a special page executing the shortcode [create_static_files], as I have described in a previous article.
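
A minimal sketch of how that shortcode could be wired up (an assumption for illustration, using the ResourceGenerator::save function defined further below; the linked article describes the full approach):

// Hypothetical wiring: loading a special page containing [create_static_files]
// regenerates all site-configuration resources before deployment
add_shortcode('create_static_files', function() {

$fileObjects = array(
new HeaderColorsSiteResource(),
// ... other site-configuration resources
);
foreach ($fileObjects as $fileObject) {
ResourceGenerator::save($fileObject);
}
return 'Static files created.';
});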

Further recommended reading: Making A Service Worker: A Case Study

Representing The File As An Object

We must model a file as a PHP object with all corresponding properties, so we can both save the file on disk in a specific location (e.g. either under /staticfiles/ or /staticfiles/users/leo/) and know how to request the file afterwards. For this, we create an interface Resource returning both the file’s metadata (filename, dir, type: “css” or “js”, version, and dependencies on other resources) and its content.

interface Resource {

function get_filename();
function get_dir();
function get_type();
function get_version();
function get_dependencies();
function get_content();
}

In order to make the code maintainable and reusable we follow the SOLID principles, for which we set an object inheritance scheme for resources to gradually add properties, starting from the abstract class ResourceBase from which all our Resource implementations will inherit:

abstract class ResourceBase implements Resource {

function get_dependencies() {

// By default, a file has no dependencies
return array();
}
}

Following SOLID, we create subclasses whenever properties differ. As stated earlier, the location of the generated static file, and the versioning to request it will be different depending on the file being about the user or site configuration:

abstract class UserResourceBase extends ResourceBase {

function get_dir() {

// A different file and folder for each user
$user = wp_get_current_user();
return "/staticfiles/users/{$user->user_login}/";
}

function get_version() {

// Save the resource version for the user under their user meta.
// When the file is regenerated, we must execute `update_user_meta` to increase the version number
$user_id = get_current_user_id();
$meta_key = "resource_version_".$this->get_filename();
return get_user_meta($user_id, $meta_key, true);
}
}

abstract class SiteResourceBase extends ResourceBase {

function get_dir() {

// All files are placed in the same folder
return "/staticfiles/";
}

function get_version() {

// Same versioning as the site, assumed defined under a constant
return SITE_VERSION;
}
}
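
As the comment in get_version notes, regenerating a user file must also bump the stored version so that the new file gets requested. A possible helper for that step (an assumption for illustration; the article leaves it implicit):

// Hypothetical helper: increase the version stored in user meta so that
// the next enqueue requests the freshly regenerated file
function bump_user_resource_version($fileObject) {

$user_id = get_current_user_id();
$meta_key = "resource_version_".$fileObject->get_filename();
$version = (int) get_user_meta($user_id, $meta_key, true);
update_user_meta($user_id, $meta_key, $version + 1);
}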

Finally, at the last level, we implement the objects for the files we want to generate, adding the filename, the type of file, and the dynamic code through function get_content:

class HeaderColorsSiteResource extends SiteResourceBase {

function get_filename() {

return "header-colors";
}

function get_type() {

return "css";
}

function get_content() {

return sprintf("
.site-title a {
color: #%s;
}
", esc_attr(get_header_textcolor())
);
}
}

class WelcomeUserDataUserResource extends UserResourceBase {

function get_filename() {

return "welcomeuser-data";
}

function get_type() {

return "js";
}

function get_content() {

$user = wp_get_current_user();
return sprintf(
"window.welcomeUserData = %s;",
json_encode(
array(
"name" => $user->display_name
)
)
);
}
}

With this, we have modeled the file as a PHP object. Next, we need to save it to disk.

Saving The Static File To Disk

Saving a file to disk can be easily accomplished through the native functions provided by the language. In the case of PHP, this is accomplished through the function fwrite. In addition, we create a utility class ResourceUtils with functions providing the absolute path to the file on disk, and also its path relative to the site’s root:

class ResourceUtils {

protected static function get_file_relative_path($fileObject) {

return $fileObject->get_dir().$fileObject->get_filename().".".$fileObject->get_type();
}

static function get_file_path($fileObject) {

// Notice that we must add constant WP_CONTENT_DIR to make the path absolute when saving the file
return WP_CONTENT_DIR.self::get_file_relative_path($fileObject);
}
}

class ResourceGenerator {

static function save($fileObject) {

$file_path = ResourceUtils::get_file_path($fileObject);
$handle = fopen($file_path, "wb");
$numbytes = fwrite($handle, $fileObject->get_content());
fclose($handle);
}
}

Then, whenever the source changes and the static file needs to be regenerated, we execute ResourceGenerator::save passing the object representing the file as a parameter. The code below regenerates, and saves on disk, files “header-colors.css” and “welcomeuser-data.js”:

// When we need to regenerate header-colors.css, execute:
ResourceGenerator::save(new HeaderColorsSiteResource());

// When we need to regenerate welcomeuser-data.js, execute:
ResourceGenerator::save(new WelcomeUserDataUserResource());
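
What actually triggers these calls depends on the case: for the site configuration it can be a deployment step or an admin action, while for the user configuration it must happen at runtime. A rough sketch using two WordPress hooks (the hook choices are assumptions, not prescribed by the article):

// Site configuration: regenerate when the admin saves the Customizer,
// where the header text color lives (hook choice is an assumption)
add_action('customize_save_after', function() {
ResourceGenerator::save(new HeaderColorsSiteResource());
});

// User configuration: regenerate when the user's profile is updated,
// and bump the version to invalidate the old file
add_action('profile_update', function($user_id) {
$resource = new WelcomeUserDataUserResource();
ResourceGenerator::save($resource);
bump_user_resource_version($resource); // hypothetical helper sketched earlier
});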

Once they exist, we can enqueue files to be loaded through the <script> and <link> tags.

Enqueuing The Static Files

Enqueuing the static files is no different than enqueuing any resource in WordPress: through the functions wp_enqueue_script and wp_enqueue_style. Then, we simply iterate through all the object instances and use one function or the other depending on whether their get_type() value is "js" or "css".

We first add utility functions to provide the file’s URL, and to tell the type being either JS or CSS:

class ResourceUtils {

// Continued from above…

static function get_file_url($fileObject) {

// Add the site URL before the file path
return get_site_url().self::get_file_relative_path($fileObject);
}

static function is_css($fileObject) {

return $fileObject->get_type() == "css";
}

static function is_js($fileObject) {

return $fileObject->get_type() == "js";
}
}

An instance of class ResourceEnqueuer will contain all the files that must be loaded; when invoked, its functions enqueue_scripts and enqueue_styles will do the enqueuing, by executing the corresponding WordPress functions (wp_enqueue_script and wp_enqueue_style respectively):

class ResourceEnqueuer {

protected $fileObjects;

function __construct($fileObjects) {

$this->fileObjects = $fileObjects;
}

protected function get_file_properties($fileObject) {

$handle = $fileObject->get_filename();
$url = ResourceUtils::get_file_url($fileObject);
$dependencies = $fileObject->get_dependencies();
$version = $fileObject->get_version();

return array($handle, $url, $dependencies, $version);
}

function enqueue_scripts() {

// Filter (not map) so we get the matching resource objects back, not booleans
$jsFileObjects = array_filter($this->fileObjects, array(ResourceUtils::class, 'is_js'));
foreach ($jsFileObjects as $fileObject) {

list($handle, $url, $dependencies, $version) = $this->get_file_properties($fileObject);
wp_register_script($handle, $url, $dependencies, $version);
wp_enqueue_script($handle);
}
}

function enqueue_styles() {

$cssFileObjects = array_filter($this->fileObjects, array(ResourceUtils::class, 'is_css'));
foreach ($cssFileObjects as $fileObject) {

list($handle, $url, $dependencies, $version) = $this->get_file_properties($fileObject);
wp_register_style($handle, $url, $dependencies, $version);
wp_enqueue_style($handle);
}
}
}

Finally, we instantiate an object of class ResourceEnqueuer with a list of the PHP objects representing each file, and add a WordPress hook to execute the enqueuing:

// Initialize with the corresponding object instances for each file to enqueue
$fileEnqueuer = new ResourceEnqueuer(
array(
new HeaderColorsSiteResource(),
new WelcomeUserDataUserResource()
)
);

// Add the WordPress hooks to enqueue the resources
add_action('wp_enqueue_scripts', array($fileEnqueuer, 'enqueue_scripts'));
add_action('wp_print_styles', array($fileEnqueuer, 'enqueue_styles'));

That’s it: Being enqueued, the static files will be requested when loading the site in the client. We have succeeded in avoiding inline code, loading static resources instead.

Next, we can apply several improvements for additional performance gains.

Recommended reading: An Introduction To Automated Testing Of WordPress Plugins With PHPUnit

Bundling Files Together

Even though HTTP/2 has reduced the need for bundling files, bundling still makes the site faster, because the compression of files (e.g. through GZip) is more effective on a single larger file, and because browsers (such as Chrome) incur overhead when processing many resources.

By now, we have modeled a file as a PHP object, which allows us to treat this object as an input to other processes. In particular, we can repeat the same process above to bundle all files of the same type together and serve the bundled version instead of all the independent files. For this, we create a function get_content which simply concatenates the content of every resource under $fileObjects, producing the aggregation of all content from all resources:

abstract class SiteBundleBase extends SiteResourceBase {

protected $fileObjects;

function __construct($fileObjects) {

$this->fileObjects = $fileObjects;
}

function get_content() {

$content = "";
foreach ($this->fileObjects as $fileObject) {

$content .= $fileObject->get_content().PHP_EOL;
}

return $content;
}
}

We can bundle all files together into the file bundled-styles.css by creating a class for this file:

class StylesSiteBundle extends SiteBundleBase {

function get_filename() {

return "bundled-styles";
}

function get_type() {

return "css";
}
}

Finally, we simply enqueue these bundled files, as before, instead of all the independent resources. For CSS, we create a bundle containing the files header-colors.css, background-image.css, and font-sizes.css, by instantiating StylesSiteBundle with the PHP object for each of these files (and likewise we can create the JS bundle file):

$fileObjects = array(
// CSS
new HeaderColorsSiteResource(),
new BackgroundImageSiteResource(),
new FontSizesSiteResource(),
// JS
new WelcomeUserDataUserResource(),
new UserShoppingItemsUserResource()
);
// Filter (not map) the objects so we obtain the matching resources themselves
$cssFileObjects = array_filter($fileObjects, array(ResourceUtils::class, 'is_css'));
$jsFileObjects = array_filter($fileObjects, array(ResourceUtils::class, 'is_js'));

// Use this definition of $fileEnqueuer instead of the previous one
$fileEnqueuer = new ResourceEnqueuer(
array(
new StylesSiteBundle($cssFileObjects),
new ScriptsSiteBundle($jsFileObjects)
)
);

That’s it. Now we will be requesting only one JS file and one CSS file instead of many.
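
Note that the bundle files themselves must also be written to disk whenever any of their sources change; since a bundle is just another Resource, the same ResourceGenerator applies (a short sketch under that assumption):

// Regenerate the bundles with the same save mechanism used for
// individual files, e.g. after any of their sources change
ResourceGenerator::save(new StylesSiteBundle($cssFileObjects));
ResourceGenerator::save(new ScriptsSiteBundle($jsFileObjects));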

A final improvement for perceived performance involves prioritizing assets, by delaying loading those assets which are not needed immediately. Let’s tackle this next.

async/defer Attributes For JS Resources

We can add the attributes async and defer to the <script> tag to alter when the JavaScript file is downloaded, parsed, and executed, so as to prioritize critical JavaScript and push everything non-critical as late as possible, thus decreasing the site’s apparent loading time.

To implement this feature, following the SOLID principles, we could create a new interface JSResource (which inherits from Resource) containing the functions is_async and is_defer. However, this would close the door to <style> tags eventually supporting these attributes too. So, with adaptability in mind, we take a more open-ended approach: we simply add a generic method get_attributes to the interface Resource, keeping it flexible enough to support any attribute (either already existing ones or yet to be invented) for both <script> and <link> tags:

interface Resource {

// Continued from above…

function get_attributes();
}

abstract class ResourceBase implements Resource {

// Continued from above…

function get_attributes() {

// By default, no extra attributes
return '';
}
}

WordPress doesn’t offer an easy way to add extra attributes to the enqueued resources, so we do it in a rather hacky way, adding a hook that replaces a string inside the tag through function add_script_tag_attributes:

class ResourceEnqueuerUtils {

protected static $tag_attributes = array();

static function add_tag_attributes($handle, $attributes) {

self::$tag_attributes[$handle] = $attributes;
}

static function add_script_tag_attributes($tag, $handle, $src) {

// Only modify the tag if attributes were registered for this handle
if (isset(self::$tag_attributes[$handle])) {

$attributes = self::$tag_attributes[$handle];
$tag = str_replace(
" src='{$src}'>",
" src='{$src}' ".$attributes.">",
$tag
);
}

return $tag;
}
}

// Initialize by connecting to the WordPress hook
add_filter(
'script_loader_tag',
array(ResourceEnqueuerUtils::class, 'add_script_tag_attributes'),
PHP_INT_MAX,
3
);

We add the attributes for a resource when creating the corresponding object instance:

abstract class ResourceBase implements Resource {

// Continued from above…

function __construct() {

ResourceEnqueuerUtils::add_tag_attributes($this->get_filename(), $this->get_attributes());
}
}

Finally, if resource welcomeuser-data.js doesn’t need to be executed immediately, we can then set it as defer:

class WelcomeUserDataUserResource extends UserResourceBase {

// Continued from above…

function get_attributes() {

return "defer='defer'";
}
}

Because it is deferred, the script will be executed later, bringing forward the point in time at which the user can interact with the site. Concerning performance gains, we are all set now!

There is one issue left to resolve before we can relax: what happens when the site is hosted on multiple servers?

Dealing With Multiple Servers Behind A Load Balancer

If our site is hosted on several servers behind a load balancer, and a user-configuration-dependent file is regenerated, the server handling the request must, somehow, upload the regenerated static file to all the other servers; otherwise, the other servers will serve a stale version of that file from that moment on. How do we do this? Having the servers communicate with each other is not just complex, but may ultimately prove unfeasible: What happens if the site runs on hundreds of servers, in different regions? Clearly, this is not an option.

The solution I came up with is to add a level of indirection: instead of requesting the static files from the site URL, they are requested from a location in the cloud, such as from an AWS S3 bucket. Then, upon regenerating the file, the server will immediately upload the new file to S3 and serve it from there. The implementation of this solution is explained in my previous article Sharing Data Among Multiple Servers Through AWS S3.
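
For instance, with the official AWS SDK for PHP (v3) installed through Composer, the upload step could look roughly like this (a sketch; the bucket name, region, and credentials setup are assumptions):

use Aws\S3\S3Client;

// Hypothetical client configuration; region and credentials are assumptions
$s3 = new S3Client(array(
'version' => 'latest',
'region' => 'us-east-1',
));

// Upload the regenerated file right after saving it, keyed by its relative
// path, so every server behind the load balancer serves the same copy
$s3->putObject(array(
'Bucket' => 'my-staticfiles-bucket', // assumption
'Key' => ltrim($fileObject->get_dir(), '/').$fileObject->get_filename().'.'.$fileObject->get_type(),
'Body' => $fileObject->get_content(),
'ContentType' => ResourceUtils::is_css($fileObject) ? 'text/css' : 'application/javascript',
));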

Conclusion

In this article, we have considered that inlining JS and CSS code is not always ideal, because the code must be sent repeatedly to the client, which can hurt performance if the amount of code is significant. We saw, as an example, how WordPress inlines 43kb of scripts to print the Media Manager, which are pure JavaScript templates and could perfectly well be loaded as static resources.

Hence, we have devised a way to make the website faster by transforming the dynamic JS and CSS inline code into static resources. This enhances caching at several levels (in the client, Service Workers, CDN); it further allows us to bundle all files together into a single JS/CSS resource, improving the compression ratio (such as through GZip) and avoiding the overhead browsers incur when processing several resources concurrently (such as in Chrome); and it lets us add the attributes async or defer to the <script> tag to speed up user interactivity, thus improving the site’s apparent loading time.

As a beneficial side effect, splitting the code into static resources also makes the code more legible, dealing with units of code instead of big blobs of HTML, which can lead to better maintenance of the project.

The solution we developed was done in PHP and includes a few bits of code specific to WordPress; however, the code itself is extremely simple: barely a few interfaces defining properties, objects implementing those properties following the SOLID principles, and a function to save a file to disk. That’s pretty much it. The end result is clean and compact, straightforward to recreate for any other language and platform, and not difficult to introduce to an existing project — providing easy performance gains.


9 Free Data Visualization Tools

Original Source: http://feedproxy.google.com/~r/1stwebdesigner/~3/Vxx98XaxavI/

Data can be beautiful. And if you’ve ever had to work with graphs and statistics in your web design work, you know that visual reinforcement is much better than trying to figure out a bunch of numbers.

Whether you’re writing a blog post, creating a chart for a client’s project, or just trying to get your own personal data sorted, these free tools are just what you need to make charts, graphs, and infographics that are both pretty and easy to understand.

Chartist


“Simple responsive charts” – no more, no less. Download this tiny program and create vector pie charts, line graphs and more that will scale to any screen size. You can even add animations! The graphs are highly compatible with a majority of web browsers, so there’s no reason not to use it. However, you will need to learn some JavaScript and CSS. The documentation should be a big help if you’re new to these languages.

RAWGraphs


If you’re looking for variety, RAWGraphs has everything under the sun. Just paste in your spreadsheet data or upload a file, and you’ll be able to convert the numbers into anything from a bar graph to a bump chart! You can even add your own chart type if you’re familiar with JavaScript. When you’re done customizing, you can download the result as an SVG, PNG or JSON file, or just embed the vector into your site. This advanced program isn’t the easiest to use, but it has a lot of potential.

Datawrapper


If you have spreadsheet data you want to include in an article, Datawrapper makes it easy to turn it into a beautiful graph. It’s not hard at all, and the graphs are fully customizable, down to text alignment and color (there are even color-blind filters!). When you’re finished, you just need to sign up and you’ll get the embed code for the chart.

Tableau


Need something professional? Tableau Public is a downloadable tool that allows you to visualize data in a variety of ways. Suitable for anything from small charts to dedicated infographics, the app is a great choice for web designers.

ChartBlocks


“The world’s easiest chart builder” is exactly what it says. Insert some spreadsheet data, do a little tweaking, pick a theme and you’ve got yourself a chart in less than a minute. If you want to, you can tweak the appearance further. Or you can just download it, embed it or share it on social media!

Beam


Are all these programs too complicated? Just need to make a simple graph from a handful of data without importing a bunch of files? Just tweak a few names and data points, pick from four types of simple charts, and choose a color profile. Easy as that!

Visualize Free


This drag-and-drop program is a fantastic way to create beautiful infographics that involve a lot of data. You’ll need to sign up to use the app, but there are a bunch of examples on the homepage that you’re free to customize while getting a feel for the dashboard.

OpenHeatMap


This app is amazing simply because it’s so easy to use! Upload a spreadsheet or Excel file – get a beautiful heatmap with an embed code. The process is clean and perfectly streamlined.

Timeline


This one is a bit more difficult to set up, since you’ll need to follow the steps exactly and create a new spreadsheet. But if you need an embeddable interactive timeline for your project, this is the app you’re looking for.

Beautiful Data

Breaking up text with visually stimulating images is a great way to prevent a webpage from looking bland. With these free tools, you’ll be able to create, in minutes, interesting graphs and charts that will draw in anyone who sees them. No more spending hours trying to make your own infographics!


Black Friday Deals of 2018: Editor's Picks

Original Source: http://feedproxy.google.com/~r/abduzeedo/~3/VTdwJGcsMI0/black-friday-deals-2018-editors-picks


AoiroStudio
Nov 23, 2018

Today is the day when brands, friends, and more are kicking off their Black Friday & Cyber Monday deals for the long weekend of festivities. We have made an article entirely about it if you wanna check it out: Black Friday Deals for Designers, the Ultimate List!. Now let’s shift our conversation while still celebrating the spirit of Black Friday. I love Amazon, so I took a dive through their Black Friday deals and tried to find the ones with a decent amount of savings. And I made a roundup! It’s easy to get distracted on those sites and forget why you’re even there in the first place. So shall we?

More Links
Follow my tweets @aoirostudio
Follow my pictures on Instagram
My Best Black Friday Deals of 2018
ASUS Chromebook Flip 12.5-inch Touchscreen now $499.00


Seagate Expansion 4TB now $79.99


Echo Dot with Philips Hue White and Color Smart Light Bulb now $94.99


Blue Yeti USB Microphone now $89.00


CORSAIR K95 Gaming Keyboard now $139.99


Manfrotto Compact Tripod now $41.67


Logitech Wireless Mouse now $14.24


Samsung 34″ Thunderbolt now $749.99


Apple iPad (Wi-Fi, 32GB) now $249.00


Dell UltraSharp 27-Inch now $329.00


Logitech G933 Artemis Spectrum now $99.99


Sony Alpha a6000 Mirrorless with two lenses kit now $598.00


Fujifilm X-T2 Mirrorless kit lens now $1,499.00


Elsewhere
Made by Google

Save $200 on Pixel 3 or $250 on Pixel 3 XL. Save $300 on Pixelbook. Mini and mighty. Save $44 on Google Home Mini. Save $10 on Chromecast. Save $100 on Google Home Max.

Made by Google Products




Monthly Web Development Update 11/2018: Just-In-Time Design And Variable Font Fallbacks

Original Source: https://www.smashingmagazine.com/2018/11/monthly-web-development-update-11-2018/


Anselm Hannemann

2018-11-23

How much does design affect the perception of our products and the users who interact with them? To me, it’s getting clearer that design makes all the difference and that unifying designs to a standard model like the Google Material Design Kit doesn’t work well. By using it, you’ll get a decent design that works from a technical perspective, of course. But you won’t create a unique experience with it, an experience that lasts or that reaches people on a personal level.

Now think about which websites you visit and if you enjoy being there, reading or even contributing content to the service. In my opinion, that’s something that Instagram manages to do very well. Good design fits your company’s purpose and adjusts to what visitors expect, making them feel comfortable where they are and enabling them to connect with the product. Standard solutions, however, might be nice and convenient, but they’ll always have that anonymous feel to them which prevents people from really caring for your product. It’s in our hands to shape a better experience.

News

Yes, Firefox 63 is here, but what does it bring? Web Components support, including Custom Elements with built-in extends and Shadow DOM. The prefers-reduced-motion media query is supported now, too. Developer Tools have gotten a font editor to make playing with web typography easier, and the accessibility inspector is enabled by default. The img element now supports the decoding attribute, which can take sync, async, or auto values to hint the preferred decoding timing to the browser. Flexbox got some improvements as well, now supporting the gap (row-gap, column-gap) properties. And last but not least, the Media Capabilities API, the Async Clipboard API, and the SecurityPolicyViolationEvent interface (which allows us to send CSP violations) have also been added. Wow, what a release!
React 16.6 is out — that doesn’t sound like big news, does it? Well, this minor update brings React.lazy(), a method you can use to do code-splitting by wrapping a dynamic import in a call to React.lazy(). A huge step for better performance. There are also a couple of other useful new things in the update.
The latest Safari Tech Preview 68 brings <input type="color"> support and changes the default behavior of links that have target="_blank" to get the rel="noopener" as implied attribute. It also includes the new prefers-color-scheme media query which allows developers to adapt websites to the light or dark mode settings of macOS.
PageSpeed Insights, likely still the most commonly used performance analysis tool by Google, is now powered by Project Lighthouse, which many of you have already been using on top of it. A nice iteration of the tool that makes it way more accurate than before.


General

Explore structured learning paths to discover everything you need to know about building for the modern web. web.dev is the new resource by the Google Web team for developers.
No matter how you feel about Apple Maps (I guess most of us have experienced moments of frustration with it), this comparison of the maps data they used until now and the data they currently gather for their revamped Maps is fascinating. I’m sure that the increased level of detail will help a lot of people around the world. Imagine how landscape architects could make use of this, or how rescue helpers could profit from that level of detail after an earthquake, for example.

Web.dev

From fast load times to accessibility — web.dev helps you make your site better.

HTML & SVG

Andrea Giammarchi wrote a polyfill library for Custom Elements that allows us to extend built-in elements in Safari. This is super nice as it allows us to extend native elements with our own custom features — something that works in Chrome and Firefox already, and now there’s this little polyfill for other browsers as well.
Custom elements are still very new and browser support varies. That’s why this html-parsed-element project is useful as it provides a base custom element class with a reliable parsedCallback method.

JavaScript

Leonardo Maldonado compiled a collection of JavaScript concepts that are very useful to know for developers. The list includes both videos and articles so you can choose your preferred way of learning.
When a video doesn’t work on a website anymore and you’re using Service Workers, the problem might be the Range request. Phil Nash debugged this weird issue on his page and explains how you can, too.

UI/UX

How do you build a color palette? Steve Schoger from RefactoringUI shares a great approach that meets real-life needs.
Matthew Ström’s article “Just-in-time Design” mentions a solution to minimize the disconnection between product design and product engineering. It’s about adopting the Just-in-time method for design. Something that my current team was very excited about and I’m happy to give it a try.
HolaBrief looks promising. It’s a tool that improves how we create design briefs, keeping everyone on the same page during the process.
Mental models are explanations of how we see the world. Teresa Man wrote about how we can apply mental models to product design and why it matters.
Shelby Rogers shares how we can build better 404 error pages.

Building Your Color Palette

Steve Schoger looks into color palettes that really work. (Image credit)

Tooling

The color palette generator Palx lets you enter a base hex value and generates a full color palette based on it.

Security

This neat Python tool is a great XSS detection utility.
Svetlin Nakov wrote a book about Practical Cryptography for Developers which is available for free. If you ever wanted to understand or know more about how private/public keys, hashing, ciphers, or signatures work, this is a great place to start.
Facebook claimed that they’d reveal who pays for political ads. Now VICE researched this new feature and posed as every single one of the current 100 U.S. senators to run ads ‘paid by them’. Pretty scary to see how one security failure that gives users more power than intended can change world politics.

Privacy

I don’t like linking to paid, restricted articles, but this one made me think, and you don’t need the full story to follow me. When Tesla announced that they’d ramp up Model 3 production to 24/7, a lot of people wanted to verify this, and a company that makes money by providing geolocation data captured smartphone location data from workers around the Tesla factories to confirm whether this could be true. Another sad story of how easy it is to track someone without consent, even though this is more a case of mass surveillance than individual tracking.

Web Performance

Addy Osmani shares a performance case study of Netflix to improve Time-to-Interactive of the streaming service. This includes switching from React and other libraries to plain JavaScript, prefetching HTML, CSS, and (React) JavaScript and the usage of React.js on the server side. Quite interesting to see so many unconventional approaches and their benefits. But remember that what works for others doesn’t need to be the perfect approach for your project, so take it more as inspiration than blindly copying it.
Harry Roberts explains all the details that are important to know about CSS and Network Performance. A comprehensive collection that also provides some very interesting tips for when you have async scripts in your code.
I love the tiny ImageOptim app for batch optimizing my images for web distribution. But now there’s an impressive web app called “Squoosh” that lets you optimize images perfectly in your web browser and, as a bonus, you can also resize the image and choose which compression to use, including mozJPEG and WebP. Made by the Google Chrome team.

CSS

Oliver Schöndorfer shows how we can serve a Variable Font to modern browsers while providing a fallback web font for older browsers. This is especially interesting as Oliver goes deep into optimizing the fallback font and adjusting it via CSS in order to match the variable font as closely as possible in case a font swap happens during page load.
Andy Clarke shows what’s needed to redesign a product and website to support bright and dark modes which were introduced to several Operating Systems recently and will soon be supported via media queries by various browsers.
While background-clip is not super new, it hasn’t been very useful due to the lack of browser support. But as Sime Vidas shows, CSS Background Clip is now widely supported, giving us great opportunities to enhance the text styling on our websites.

Redesigning your product and website for dark mode

How to design for dark mode while maintaining accessibility, readability, and a consistent feel for your brand? Andy Clarke shares some valuable tips. (Image credit)

Work & Life

Stig Brautaset wrote about why he nearly failed to get into his job as a submarine sonar operator due to a silly hiring rule and how he made the best out of the situation and succeeded. A valuable lesson that shows that you shouldn’t stick too much to guidelines and rules when it comes to hiring people but trust your gut and listen to their stories instead.
In “People, Not Robots: Bringing the Humanity Back to Customer Support,” Kristin Aardsma shares why it’s important to rethink how customer support works.
Marcus Wermuth reflects on why becoming a manager is not a promotion but a career change.

Going Beyond…

Neil Stevenson on Steve Jobs, creativity and death and why this is a good story for life. Although copying Steve Jobs is likely not a good idea, Neil provides some different angles on how we might want to work, what to do with our lives, and why purpose matters for many of us.
Ryan Broderick reflects on what we did by inventing the internet. He concludes that all the radicalism in the world and those weird political views are largely due to the invention of social media, chat software, and the (not so sub-) culture of promoting and embracing all the bad things happening in our society. Remember 4chan, Reddit, and similar services, but also Facebook et al? They contribute and embrace not only good ideas but often stupid or even harmful ones. “This is how we radicalized the world” is a sad story to read, but well-written and with a lot of inspiring thoughts about how we shape society through technology.
I’m sorry, this is another link about Bitcoin’s energy consumption, but it shows that Bitcoin mining alone could raise global temperatures above the critical limit (2°C) by 2033. It’s time to abandon this inefficient type of cryptocurrency. Now.
Wilderness is something special. And our planet has less and less of it, as this article describes. The map reveals that only very few countries have a lot of wilderness these days, giving rare animals and species a place to live, giving humans a way to explore nature, to relax, to go on adventures.
We definitely live in exciting times, but it makes me sad to read that in the last forty years, wildlife population declined by 60%. That’s a pretty massive scale, and if this continues, the world will be another place when I’m old. A lot of animals I knew and saw in nature will not exist anymore by then, and the next generation of humans will not be able to see them other than in a museum. It’s not entirely clear what the reasons are, but climate change might be one, and the ever-growing expansion of humans into wildlife areas probably contributes a lot to it, too.

Smashing Editorial
(cm, il)

Black Friday deal: This top monitor calibrator is a total bargain

Original Source: http://feedproxy.google.com/~r/CreativeBloq/~3/e1U4z7qCW5s/pick-up-a-top-monitor-calibrator-for-half-price

All digital artists and graphic designers will know how vital it is to have complete colour accuracy on your machine. The best monitor calibrators don't come cheap, which is why this Black Friday 2018 deal is such a great find. DataColour has slashed the prices of its Spyder5 range, and they're flying off the shelves. 

This deal is so popular that the Spyder5PRO has already sold out, but fear not: you can still grab a Spyder5ELITE with a whopping £97 (€130) off. This calibrator offers expert colour accuracy, with room light monitoring and unlimited settings for gamma, white point, and advanced grey balancing, enabling you to take complete control of your colour workflow. It works on laptops, desktops, and projectors, too.

Please note, the online store price is in Euros, so the GBP value may fluctuate slightly depending on the exchange rate. On the hunt for some more top deals for artists and designers? Take a look at our pick of the best UK Black Friday gems (or, for those of you across the pond, US hidden gems).