Statically Compiled Go on Alibaba Cloud Container Service

Original Source: https://www.sitepoint.com/statically-compiled-go-on-alibaba-cloud-container-service/

The third prize of the Alibaba Cloud competition goes to David Banham. His winning entry is a succinct tutorial on statically compiling a Go program, and using Docker to containerize and distribute it.

Alibaba Cloud’s Container Service is a great example of how Docker and Kubernetes are revolutionising the cloud landscape. The curmudgeons will rail that it’s all still just some software running on someone else’s computer, but the transformative difference is that k8s and Docker provide what is effectively a platform-agnostic management API. If you build your DevOps pipelines against k8s you have the lowest possible switching friction between AWS, Google Cloud, Azure and Alibaba. The closer we can get to the dream of write once, run anywhere, the better!

Another tool I love for enhancing software portability is the Go language. Cross compilation in Go is as easy as falling off a log. I develop software on my Linux laptop and in the blink of an eye I can have binaries built for Windows, OSX, WASM, etc! Here’s the Makefile snippet I use to do it:

name = demo

PLATFORMS := linux/amd64 windows/amd64 linux/arm darwin/amd64

temp = $(subst /, ,$@)
os = $(word 1, $(temp))
arch = $(word 2, $(temp))

release:
	make -l inner_release

inner_release: $(PLATFORMS)

$(PLATFORMS):
	@-mkdir -p ../web/api/clients/$(os)-$(arch)
	@-rm ../web/api/clients/$(os)-$(arch)/*
	GOOS=$(os) GOARCH=$(arch) go build -o '../web/api/clients/$(os)-$(arch)/$(name)' .
	@chmod +x ../web/api/clients/$(os)-$(arch)/$(name)
	@if [ $(os) = windows ]; then mv ../web/api/clients/$(os)-$(arch)/$(name) ../web/api/clients/$(os)-$(arch)/$(name).exe; fi
	zip --junk-paths ../web/api/clients/$(os)-$(arch)/$(name)$(os)-$(arch).zip ../web/api/clients/$(os)-$(arch)/*
	@if [ $(os) = windows ]; then cp ../web/api/clients/$(os)-$(arch)/$(name).exe ../web/api/clients/$(os)-$(arch)/$(name); fi

Neat! That will get you a tidy little binary that will run on your target operating systems. Even that is overkill if you’re targeting a Docker host like Cloud Container Service. All you need to do there is just GOOS=linux GOARCH=amd64 go build and you’re done! Then, you just need to choose your favorite Linux distribution and build that into the Dockerfile.

What if we wanted to simplify our lives even further, though? What if we could do away with the Linux distribution entirely?

Go supports the compilation of statically linked binaries. That means we can write code that doesn’t rely on any external shared libraries at all. Observe this magic Dockerfile:
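Here is a minimal sketch of what such a Dockerfile can look like, assuming a binary named demo built on the host with CGO_ENABLED=0 so that the result is fully static:

# Build the static binary on the host first:
#   CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o demo .
FROM scratch
COPY demo /demo
ENTRYPOINT ["/demo"]

The FROM scratch base is an empty image, so the resulting container holds nothing but the binary itself.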


Three Ways to Create Your Own WordPress Theme

Original Source: https://www.sitepoint.com/creating-wordpress-themes-overview/

It’s common for WordPress users to pick a ready-made theme. But you can also create a theme of your own. This article covers various ways to go about this.

Options range from making edits to an existing theme, to creating your own WordPress theme completely from scratch. In between these extremes are various other options that include duplicating and modifying themes, and using a range of tools to help you build your own theme.

WordPress themes are made up of a collection of files, all contained within a single folder that lives inside the wp-content/themes/ directory.

An example of the files contained within a WordPress theme

Option 1: Modify an Existing Theme

Modifying an existing theme is perhaps the easiest option. You may just want to make some minor changes, like colors, font sizes, or simple layout changes.

In this case your best option is to create a child theme. A child theme references an existing theme, modifying just the bits you want to change. Using a child theme has the advantage that, when the parent theme is updated, your changes won’t be wiped away.

To create a child theme, create a new folder inside your /themes/ folder. A handy tip is to use the name of the parent theme with -child appended, as this makes it clear what the child theme is. So, if you’re creating a child theme of the Twenty Seventeen theme, your child theme folder might be called /twentyseventeen-child/.

Minimum default files for a child theme

In this child folder, you need at a minimum a style.css file and a functions.php file. In these files you need to add certain code to tell WordPress which is the parent theme, where the stylesheets are, and any other new functionality you want in your child theme.
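As a rough sketch (assuming Twenty Seventeen as the parent; the Template header must match the parent theme’s folder name), style.css starts with a header like this:

/*
Theme Name: Twenty Seventeen Child
Template:   twentyseventeen
*/

And functions.php enqueues the parent theme’s stylesheet:

<?php
// Load the parent theme's stylesheet before the child theme's own styles.
add_action( 'wp_enqueue_scripts', function () {
	wp_enqueue_style( 'parent-style', get_template_directory_uri() . '/style.css' );
} );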

The last step for getting your child theme up and running is to enter the WordPress admin panel and go to Appearance > Themes to activate your child theme.

For a complete guide to this process, visit the WordPress Codex. For help with setting up a child theme, you can also use the WordPress Child Theme Configurator utility.

Option 2: Adapt an Existing Theme

If you’re keen to dig into WordPress code a bit more, you can duplicate an existing theme and bend it to your will.

That might involve things like deleting all of the current styles and creating your own. You can also dig into the other theme files and remove elements you don’t need and add others. For example, you might want to alter the HTML structure of the theme. To do so, you’ll need to open various files such as header.php, index.php and footer.php and update the HTML parts with your own template elements.

A new folder containing files to be edited

Along the way, you might decide there are lots of features in the copied theme you no longer need, such as post comments and various sidebar elements such as categories and bookmarks. You’ll find PHP snippets for these elements in the various theme files, and you can simply delete them or move them around to other locations.

It can take a bit of searching around to work out which files contain the elements you want to delete or move, but digging into the files like this is a good way to get familiar with your WordPress theme.

Another option here, rather than duplicating an existing theme, is to start with a “starter theme”, which we look at below.

Option 3: Building a Theme from Scratch

A more daunting option — but more fun, too! — is to create your own theme completely from scratch. This is actually simpler than it sounds, because at a minimum you only need two files — style.css and index.php.

The minimum required files for building your own theme

That, however, would result in a pretty limited theme! In practice, you’d probably also want a functions.php file for custom functions, and perhaps several other template files for the various sections of the site, such as a 404.php template file for showing 404 pages.

In this example, we’ve created a folder in our themes directory called /scratch-theme/. (You’ll want to choose a spiffier name than that, of course.) The style.css file will serve as the main stylesheet of our WordPress theme. In that CSS file, we first need to add some header text. This is a basic example:

/*
Theme Name: My Scratch Theme
Theme URI: https://sitepoint.com
Author: Sufyan bin Uzayr
Author URI: http://sufyanism.com
Description: Just a fancy starter theme for SitePoint
Version: 1
License: GNU General Public License v2 or later
License URI: http://www.gnu.org/licenses/gpl-2.0.html
Tags: clean, starter
*/
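Alongside style.css, the index.php file can start life as a bare-bones version of the WordPress loop. A minimal sketch:

<?php get_header(); ?>

<main>
<?php
if ( have_posts() ) {
	while ( have_posts() ) {
		the_post();
		the_title( '<h2>', '</h2>' );
		the_content();
	}
}
?>
</main>

<?php get_footer(); ?>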

You can now head to the WordPress admin Themes section, where we’ll see our new theme listed.

Our new theme listed as an option

At this point, it doesn’t offer any custom styles and layouts, but can still be activated and used nonetheless. (You can add a thumbnail image for the theme by uploading an image file named “screenshot” to the theme’s root folder, preferably 880 x 660px.)

For more in-depth guidance of WordPress theme development, check out the WordPress Codex theme development guide.

It’s fairly straightforward to create a very basic WordPress theme from scratch, but going beyond that is more challenging. Before deciding this is a bit outside your wheelhouse, let’s check out some of the tools that are available to help you along.


Tailor Brands Review: Instant Custom-Made Logos for Small Business Owners and Startups

Original Source: https://inspiredm.com/tailor-brands-review-instant-custom-made-logos-for-small-business-owners-and-startups/

Meta: This Tailor Brands review delves into all the details about its features, functionalities, pricing, and possible drawbacks.

Apple. Undoubtedly one of the biggest brands today. And in case you haven’t seen the news yet, it’s the first U.S. company to hit the $1 trillion market cap.

Well-deserved, I bet. And not just because it develops innovative phones and PCs. Not at all. Many companies are doing that already. And some are launching more revolutionary products. But still, Apple somehow managed to maneuver its way to the $1 trillion value ahead of the pack.

How, you ask?

Call it what you want, but the fact is: getting their branding right laid the foundation for everything. “iMac” essentially converted a regular PC into a special product with a name of its own. And the same applies to “iPhones”.

Now that’s clever branding. However, the most impressive element overall is the Apple logo. Isaac Newton’s light bulb moment, after getting hit by a falling apple, basically inspired one of the most captivating logos of both the 20th and 21st centuries.

So, you think you can beat that branding strategy?

Technically, you should be able to, considering all the branding tools available on the web. In fact, allow me to walk you through one of the seemingly dominant solutions today.

Tailor Brands Review: Overview

Of course, you completely understand the importance of business branding. Sadly, you’re too busy to enroll yourself in a graphic design course.

Besides, there are many professional and freelance graphic designers out there. Hiring someone to handle the whole branding thing might seem like the most convenient option, right?

Now, I don’t know about you. But, I believe that branding is best designed when business owners passionately commit to the entire process.

Let’s face it. You’re the only person who comprehensively understands your business inside out. In other words, you are the best asset to the brand design process.

And that’s one of the reasons why co-founders Nadav Shatz, Tom Lahat, and Yali Saar established Tailor Brands in 2014: to make it easier for business owners to design and manage their brands.

Admittedly, by then, there were numerous logo makers on the web. But, here’s the thing: none of them had implemented a holistically automated system.

Until Tailor Brands came along. It was the first solution to leverage artificial intelligence to facilitate automated logo design. And this has seen it attract more than 3 million users over the years, who’ve collectively produced over 100 million designs.

Tailor Brands

Today, Tailor Brands is much more than just a logo design engine. It sells itself as an all-in-one branding solution that combines logo design with critical tools for driving branding campaigns.

Sounds like something you’d want to check out?

Well, I handled that for you. And quite comprehensively, to be precise. So, I’ll give you the truth and nothing but the truth: this Tailor Brands review delves into all the details about its features, functionalities, pricing, and possible drawbacks.

Now let’s get to it…

Tailor Brands Review: Features
Logo Design Process

Ok. I’ll admit that quite a number of the design solutions I’ve tested out use a drag-and-drop editor. While this method is arguably considered user-friendly, let’s face it. It’s only simple when you compare it with complex alternatives like coding.

If you’ve tried one before, it probably took you some time to learn the ropes. It typically takes several rounds of edits to ultimately make up your mind about a good final design.

So, of course, I was fairly excited to see a different approach on Tailor Brands.

First off, you’ll be required to enter your brand’s name, followed by the industry. A drop-down list of suggestions comes in handy here to help you get it over and done with as fast as possible.

Then comes the preference stage. The system basically presents you with three options for your logo- either develop it from your brand name’s initials, or go with the entire name altogether, or proceed with an icon logo.

logo design

Let me guess. Since initials seem too boring, and a brand name logo is essentially a fancy replication of what you already have, I bet you’d choose the icon. Or you might as well skip the step and let the system run its algorithms to pick out an ideal design for your business.

Well, of course, I decided to proceed with the popular option here. And for my icon-logo, the system requested me to upload an image from my gallery, or have one developed from an abstract shape.

Neat and pleasantly straightforward, to say the least. But, what happens when you choose a name-based logo?

Good thing I’m a curious cat. So I went back and gave the name-based option a try. It turns out the system walks you through five or six rounds of picking out a favorite from a set of images.

The whole idea behind this, as I came to establish, is helping Tailor Brands’ algorithm get a feel for your overall aesthetic preferences. Quite a fine touch, as opposed to simply generating random logos based on popular favorites.

Now, I know what you’re thinking. Heck, doesn’t the system proceed to generate random guesses when you skip the logo type stage?

Oddly enough, it doesn’t. It actually does the same thing as the name-based logo process. You’ll still get numerous sets of logos, from where you should select the ones that feel like your cup of tea. Subsequently, the system runs structured assessments based on the underlying algorithms to comprehend what you like most.

Then it generates a wide range of possible logos for your brand, complete with corresponding previews of how they’d actually look on mobile, letterheads, T-shirts, business cards, social media, and other relevant platforms.

It’s like hiring a designer who keenly reviews your business, studies your overall preferences, then generates corresponding brand concepts, before helping you make a final decision.

But, is that all? There’s no editing at all?

The good news is: you can edit several elements of your logo, including spacing, font, color, and more. The bad news is: this is not open to trial users. You have to pay to edit the logo.

logo editing

Well, all things considered, it goes without saying that the entire process is exceedingly user-friendly. Although it doesn’t guarantee you a logo that’s unique all around, everything is simple and straightforward. Because otherwise, you obviously wouldn’t want to spend days figuring your way around complicated graphic design software just to create a simple logo.

Social Media Tools

Now, guess what? It doesn’t end with logo creation. Come to think of it, plenty of tools can do that. What sets Tailor Brands apart from them is the fact that it’s not just about logos. It seemingly takes the whole concept of branding pretty seriously.

That’s why it goes ahead to provide a range of social media tools to further empower your branding strategy. And it all makes sense since social media is currently the principal platform businesses are leveraging to hunt for prospects. In fact, 98% of small businesses have already tried social media marketing.

Good enough? Now, let’s dive into the details.

Tailor Brands’ Social Posts tool, for starters, helps you customize branded images to post on social media. This strategy alone has been proven to engage twice as many social media users as posts without images.

And speaking of strategy, another tool you get is the Social Strategy. This helps you organize and streamline your entire social media framework. In addition to an automated post scheduler, it comes with features for curating posts, saving pre-made posts, keeping track of all your campaigns, and more.

Then, when it comes to impressions, Tailor Brands offers you a tool called Social Cover Banners. This is exceptionally critical because the fundamental objective behind branding has always been trying to make solid impressions on the target market.

And when it comes to social media, your profile has got to be outstanding enough to leave a lasting impression. Social Cover Banners helps you achieve this by providing branded customized banners to place at the top of your LinkedIn, Twitter, and Facebook pages.

Online Brand Book

Using algorithms to develop a logo based on your overall preferences is impressive. But, let’s be honest here. Even when you’re fully satisfied with the outcome, the truth is- you’re just one element in an extensive network of audiences.

In other words, your target market does not participate in the initial design process. And that should be enough to get you worried, especially when you consider the fact that design is more of a subjective matter. So, in essence, your audience might not share the same sentiments about the brand as you.

Thankfully, Tailor Brands’ professional designers swing into action to help you avoid such a disaster. And they do so through an online brand book, which is offered along with your branding package.

Quite simply, the online brand book is like a Magna Carta of sorts. It contains valuable pointers about branding, which you can use to further refine your design. Some of the elements it delves into include primary and complementary colors, spacing, fonts, logo placement, plus sizes.

tailor brands online brand book

Over time, you’ll be able to maintain a consistent branding strategy even when you hire new members to join your team.

Analytics

By now, you’re probably already used to the standard business metrics you get on most web-based solutions. I’m talking about things like conversion rate, bounce rate, traffic size, etc.

Now, get this. Analytics on Tailor Brands are a bit different. Instead of monitoring your web traffic, the system assesses how the market is responding to your branding strategy.

And you know what? It goes beyond typical social media indicators like the number of post likes. To adequately establish how your audience is warming up to the brand, Tailor Brands maintains a keen eye on the engagement patterns, plus events happening around your brand, on both social media and the web.

Tailor Brands Review: Pricing

Tailor Brands has seemingly extended its simplicity concept to the pricing structure. There are only two plans for all types of users.

Well, you can proceed with the logo design process without paying a dime. But, unfortunately, that’s all you can do. You need an active subscription to buy a logo. And the best thing is: you’ll retain all the rights to your logo after purchasing.

That said, here are the subscription options:

Dynamic Logo: costs $9.99 per month, while annual prepay subscribers end up paying $2.99 per month.

Unlimited Backup
Weebly Website Builder
Social Strategy
Seasonal logos
Social Analytics
Commercial Use License
Transparent PNG
High-Resolution Logo

Premium: costs $49.99 per month, while annual prepay subscribers end up paying $10.99 per month.

Unlimited Backup
Weebly Website Builder
Facebook Ads
Branded Watermark Tool
Branded Business Deck
Branded Presentation
Online Brand Guidelines
Social Covers Design Tool
Weekly Facebook Posts
Business Card Design Tool
Logo Resize Tool
Make Changes and Re-Download
Social Strategy
Seasonal Logos
Social Analytics
Commercial Use License
Transparent PNG & Vector EPS Files
High-Resolution Logo

tailor brands pricing

Who Should Consider Using Tailor Brands?

Going by all the features we’ve covered, Tailor Brands’ primary strong point is simplicity and user-friendliness. Users who’ve never spent even a single second of their lives on any graphic design software can generate a decent logo in just a couple of minutes. All things considered, it’s ridiculously straightforward.

But then again, and rather ironically, that might also be its principal drawback for some users. I’m talking about designers who prefer dynamic graphics editing solutions that can customize everything.

So, let’s agree on one thing: Tailor Brands is best for small business owners and startups seeking a simple and straightforward platform that can systematically convert their ideas into logos and power their branding campaigns.


My Best Practices for Deploying a Web Application on Alibaba Cloud

Original Source: https://www.sitepoint.com/my-best-practices-for-deploying-a-web-application-on-alibaba-cloud/

This article was originally published on Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

In this article, I want to share the best practices I use when deploying a web application to Alibaba Cloud. I work as a freelancer, and recently one of my clients asked me to set up SuiteCRM for his small organization. Since I frequently write tutorials for Alibaba Cloud, I recommended that the client use the same cloud platform. For nearly 100 users and at least 30 concurrent users, here's the configuration I recommended.

ECS instance of 2 vCPUs and 4GB RAM to install Nginx with PHP-FPM.
ApsaraDB for RDS instance for MySQL with 1 core, 1 GB RAM, and 10 GB storage.
Direct Mail for sending emails.

The steps I followed are very simple and can be adopted for nearly all PHP based applications.

If you are new to Alibaba Cloud, you can use this link to sign up to Alibaba Cloud. You will get new user credit worth US$300 for free, which you can use to try out different Alibaba Cloud products.

Creating an ECS Instance

Alibaba Cloud has documented nearly everything you will require to get started with the cloud platform. You can use the Getting Started Tutorials or the Tech Share Blog to learn how to start using Alibaba Cloud. The Quick Start Guide covers the most obvious steps, so let me walk you through the best practices to use when creating the ECS instance.

Log in to your Alibaba Cloud console and go to Elastic Compute Service interface. You can easily create the instance by clicking the Create Instance button. Things to keep in mind are:

Region: Since Alibaba Cloud has data centers all around the globe, always choose the region which is geographically closer to the users of the application. As the data center is closer to the user, the website will load very fast due to the low latency of the network. In my case, I chose Mumbai region, as the organization was based in Mumbai itself.
Billing Method: If you are planning to continuously run the instance 24/7, you should always choose the monthly subscription as it will cut down the price to less than half compared to Pay-As-You-Go. For example, the monthly subscription cost of a shared type ECS instance of 2 vCPUs and 4GB RAM is $23 USD but the same instance in Pay-As-You-Go costs $0.103 USD per Hour. Monthly cost becomes $0.103*24*30 = $74.16 USD.
Instance Type: Choose the instance type according to your requirements. Resources can be increased later on demand.
Image: You may find the application you wish to install on your ECS instance on a Marketplace image but it is always recommended to install it yourself in a clean official image. Later, if your application encounters some error, you will know where to look.
Storage: System disks are deleted when the ECS instance is released. Use a data disk when possible, as it will be retained even if the instance is accidentally deleted.

Here's the configuration I used.

You can choose the VPC which is created by default. You can add as many as 4092 instances to it. I use a different security group for each ECS instance so that I can configure each one individually and make sure that no unused port is opened.

Another important thing is to use key-based authentication rather than using passwords. If you already have a key-pair, you can add the public key to Alibaba Cloud. If not, you can use Alibaba Cloud to create one. Make sure that key is stored in a very secure place, and the key itself is encrypted by a passphrase.

That's all the things to keep in mind while creating the ECS instance.

Setting Up the ECS Instance

Once you have created your instance and logged into the terminal, there are a few things I suggest you consider before you set up your website.

Rather than using the root account for executing commands, set up a sudo user on the first connection and always use the sudo user for running commands. You can also set up key-based authentication for the sudo user, and disable root login entirely (see the sketch after this list).
Always keep your base image updated.
Alibaba base images ship without unnecessary packages, so don’t install any package that’s not required.
If things go bad during installation, you can always reset the instance by changing the system disk. You don't need to delete the instance and recreate it.
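As a rough sketch of that first step on an Ubuntu image (deploy is just a placeholder username):

# run as root on the first connection
adduser deploy
usermod -aG sudo deploy
# copy the authorized SSH key across so key-based auth works for the new user
rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy
# disable root login over SSH, then restart the SSH service
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart ssh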

I created the sudo user and configured key-based auth for it. I updated the base image and set up unattended system upgrades. I followed a tutorial to install the Nginx web server, which is a lightweight production-grade web server. Further, I installed PHP 7.2 with PHP-FPM, the latest available version of PHP as of this writing. Using the latest software helps keep the system free of known bugs, and also gives faster processing and better stability. Finally, I downloaded the SuiteCRM archive from its official website and deployed the files into Nginx.

You can use the getting started tutorials or the tutorials written by Tech Share authors to install the applications.

Configuring Security Group Rules

It is very important to leave no unused port open in the security group of the ECS instance. Have a look at the security group rules I used for the SuiteCRM instance.

You can see that I have allowed only the ports 22, 80 and 443 along with all ICMP packets. Port 22 is used for SSH connection. Port 80 is the unsecured HTTP port, which in my case just redirects to the port 443 on HTTPS. ICMP packets are used to ping the host to check if it is alive or not. It's perfectly okay if you want to drop the ICMP packets as well — you just won't be able to ping your instance.

Creating the RDS Instance

The first question to ask before we create the RDS instance is why exactly we need it. We could install any open source database server such as MySQL, MariaDB, PostgreSQL or MongoDB server on the ECS instance itself.

The answer to the question is that ApsaraDB for RDS is optimized for speed and security. By default, the instance we create is accessible to whitelisted instances only.

Let's look at the things to keep in mind when we create the ECS instance.

Region: Always choose the same region for the database instance and the ECS instance. Also, make sure that they both are in the same VPC. This will enable you to leverage the free intranet data transfer between the hosts in the same network. Another advantage is that you will need to whitelist only the private IP address of the ECS instance. This increases the security of the database to a great extent.
Billing: Again, the cost of monthly subscription is less than that of the Pay-As-You-Go method. Choose according to your needs.
Capacity: You can start with a low-end configuration such as 1 Core, 1 GB instance, and 5 GB storage. Later on you can increase resources.
Accounts: Never create the Master account for the MySQL 5.6 instance unless required. You can create a database and a database user for each database.

Here's the RDS configuration I used for SuiteCRM.


Space Escape Illustration Series

Original Source: http://feedproxy.google.com/~r/abduzeedo/~3/TR_Ss0zT4zQ/space-escape-illustration-series


AoiroStudio
Oct 30, 2018

Prateek Vatash is a graphic artist based in Bangalore, India. He shared an illustration series entitled Space Escape on his Behance. It’s a beautiful series, truly inspired by the 80s but with the added flair of a broad range of vibrant colors. I published the entire collection, since every single piece has something special and unique about it. Make sure to check out more of his work. Enjoy!

More Links
Personal Site
Behance


50 Clean, Simple and Minimalist Website Designs

Original Source: https://www.hongkiat.com/blog/clean-simple-minimalist-website-design/

Minimalism has gained popularity in the past few years and has been among the top web design trends in 2017. Minimalist sites load faster, take fewer server resources, and are often faster to develop…

Visit hongkiat.com for full content.

Video Playback On The Web: Video Delivery Best Practices (Part 2)

Original Source: https://www.smashingmagazine.com/2018/10/video-playback-on-the-web-part-2/


Doug Sillars


In my previous post, I examined video trends on the web today, using data from the HTTP Archive. I found that many websites serve the same video content on mobile and desktop, and that many video streams are being delivered at bitrates that are too high to play back on 3G-speed connections. We also discovered that many websites automatically download video to mobile devices, damaging customers’ data plans and battery life for videos that might never be played.

TL;DR: In this post, we look at techniques to optimize the speed and delivery of video to your customers, and provide a list of 9 best practices to help you deliver your video assets.

Video Playback Metrics

There are 3 principal video playback metrics in use today:

Video Startup Time
Video Stalling
Video Quality

Since video files are large, optimizing the video to be as small as possible will lead to faster video delivery, speeding up video start, lowering the number of stalls, and minimizing the effect of the quality of the video delivered. Of course, we need to balance startup speed and stalling with the third metric of quality (and higher quality videos generally use more data).

Video Startup

When a user presses play on a video, they expect to be able to watch the video quickly. According to Conviva (a leader in video metric analysis), in Q1 of 2018, 14% of videos never started playing (that’s 2.4 billion video plays) after the user pressed play.

Pie chart showing that nearly 15% of all videos fail to play

Video Start Breakdown (Large preview)

2.3% of videos (400M video requests) failed to play after the user pressed the play button. 11.54% (2B plays) were abandoned by the user after pressing play. Let’s try to break down what might be causing these issues.


Video Playback Failure

Video playback failure accounted for 2.3% of all video plays. What could lead to this? In the HTTP Archive data, we see 0.3% of all video requests resulting in a 4xx or 5xx HTTP response — so some percentage fail due to bad URLs or server misconfigurations. Another potential issue (that is not observed in the HTTP Archive data) are videos that are blocked by geolocation (blocked based on the location of the viewer and the licensing of the provider to display the video in that locale).

Video Playback Abandonment

The Conviva report states that 11.5% of all video plays would play, but that the customer abandoned the playback before the video started playing. The issue here is that the video is not being delivered to the customer fast enough, and they give up. There are many studies on the mobile web where long delays cause abandonment of web pages, and it appears that the same effect occurs with video playback as well.

Research from Akamai shows that viewers will wait for 2 seconds, but for each subsequent second, 5.8% of viewers abandon the video.

Chart displaying the abandonment rate as startup time is longer.

Rate of abandonment over time (Large preview)

So what leads to video playback issues? In general, larger files take longer to download, so will delay playback. Let’s look at a few ways that one can speed up the playback of videos. To reduce the number of videos abandoned at startup, we should ‘slim’ down these files as best as possible, so they download (and begin playback) quickly.

MP4: Video Preload

To ensure fast playback on the web, one option is to preload the video onto the device in advance. That way, when your customer clicks ‘play’ the video is already downloaded, and playback will be fast. HTML offers a preload attribute with 3 possible options: auto, metadata and none.

preload="auto"

When your video is delivered with preload="auto", the browser downloads the entire video file and stores it locally. This permits a large performance improvement for video startup, since the video is available locally on the device, and no network interference will slow the startup.

However, preload="auto" should only be used if there is a high probability that the video will be viewed. If the video is simply resident on your webpage, and it is downloaded each time, this will add a large data penalty to your mobile users, as well as increase your server/CDN costs for delivering the entire video to all of your users.

This website has a section entitled “Video Gallery” with several videos. Each video in this section has preload set to auto, and we can visualize their download in the WebPageTest waterfall as green horizontal lines:

A WebPageTEst Waterfall chart

Waterfall of video preload (Large preview)

There is a section called “Video Gallery”, and the files for this small section of the website account for 14.6 MB (83%) of the page download. The odds that any one of the many videos will be played are probably pretty low, and so utilizing preload="auto" only generates a lot of data traffic for the site.

Pie Chart showing the high percentage (83%) of video usage.

Webpage data breakdown (Large preview)

In this case, it is unlikely that even one of these videos will be viewed, yet all of them are downloaded completely, adding 14.8 MB of content to the mobile site (83% of the content on the page). For videos that have a high probability of playback (perhaps >90% of page views result in video play), preloading the entire video is a very good idea. But for videos that are unlikely to be played, preload="auto" will only cause extra tonnage of content through your servers and to your customer’s mobile (and desktop) devices.

preload="metadata"

When the preload="metadata" attribute is used, an initial segment of the video is downloaded. This allows the player to know the size of the video window, and perhaps to have a second or two of video downloaded for immediate playback. The browser simply makes a 206 (Partial Content) request for the video content. By storing a small bit of video data on the device, video startup time is decreased, without a large impact on the amount of data transferred.

On Chrome, metadata is the default choice if no attribute is chosen.

Note: This can still lead to a large amount of video to be downloaded, if the video is large.

For example, on a mobile website with a video set at preload="metadata", we see just one request for video:

Webpage Test Waterfall chart

(Large preview)

And the request is a partial download, but it still results in 2.7 MB of video to be downloaded because the full video is 1080p, 150s long and 97 MB (we’ll talk about optimizing video size in the next sections).

Pie chart showing that 2.7 MB or 42% of data is still vide with preload=metadata.

KB usage with video metadata (Large preview)

So, I would recommend that preload="metadata" still only be used when there is a fairly high probability that the video will be viewed by your users, or if the video is small.

preload="none"

This is the most economical download option for videos, as no video files are downloaded when the page is loaded. This will potentially add a delay in playback, but will result in a faster initial page load. For sites with many videos on a single page, it may make sense to add a poster to the video window, and not download any of the video until it is expressly requested by the end user. All YouTube videos that are embedded on websites never download any video content until the play button is pressed, essentially behaving as if preload="none".
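For example, a poster image combined with no preloading might look like this (poster.jpg and movie.mp4 are placeholder file names):

<video preload="none" poster="poster.jpg" controls src="movie.mp4"></video>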

Preload Best Practice: Only use preload="auto" if there is a high probability that the video will be watched. In general, the use of preload="metadata" provides a good balance in data usage vs. startup time, but should be monitored for excessive data usage.

MP4 Video Playback Tips

Now that the video has started, how can we ensure that video playback is optimized so that it doesn’t stall and continues playing? Again, the trick is to make sure the video is as small as possible.

Let’s look at some tricks to optimize the size of video downloads. There are several dimensions of video that can be optimized to reduce the size of the video:

Audio

Video files are split into different “streams” — the most common being the video stream. The second most common stream is the audio track that syncs to the video. In some video playback applications, the audio stream is delivered separately; this allows for different languages to be delivered in a seamless manner.

If your video is played back in a silent manner (like a looping GIF, or a background video), removing the audio stream from the video is a quick and easy way to reduce the file size. In one example of a background video, the full file was 5.3 MB, but the audio track (which is never heard) was nearly 300 KB (5% of the file). By simply eliminating the audio, the file will be delivered quickly without wasting bytes.

42% of the MP4 files found on the HTTP Archive have no audio stream.

Best Practice: Remove the audio tracks from videos that are played silently.
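With ffmpeg, for instance, stripping the audio track is a one-liner: the -an flag drops the audio stream, and -c copy avoids re-encoding the video (file names are placeholders):

ffmpeg -i input.mp4 -c copy -an output-silent.mp4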

Video Encoding

When encoding a video, there are options to reduce the video quality (number of pixels per frame, or the frames per second). Reducing a high-quality video to be suitable for the web is easy to do, and generally does not affect the quality delivered to your end users. This article is not long enough for an in-depth discussion of all the various compression techniques available for video. In the x264 and x265 encoders, there is a term called the Constant Rate Factor (CRF). Using a CRF of 23-28 will generally give a good compression/quality trade-off, and is a great first step into the realm of video compression.
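As a starting point, an x264 re-encode with ffmpeg might look like this (CRF 26 sits in the middle of the suggested 23-28 range; file names are placeholders):

ffmpeg -i input.mp4 -c:v libx264 -crf 26 -preset medium -c:a copy output.mp4

A higher CRF means more compression and lower quality, so it is worth encoding a short sample at a few values and comparing them by eye.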

Video Size

Video size can be affected by many dimensions: length, width, and height (you could probably include audio here as well).

Video Duration

The length of the video is generally not a feature that a web developer can adjust. If the video is going to playback for three minutes, it is going to playback for three minutes. In cases in which the video is exceptionally long, tools like preload="none" or streaming the video can allow for a smaller amount of data to be downloaded initially to reduce page load time.

Video Dimensions

18% of all video found in the HTTP Archive is identical on mobile and desktop. Those who have worked with responsive web design know how optimizing images for different viewports can drastically reduce load times since the size of the images is much smaller for smaller screens.

The same holds for video. A website with a 30 MB 2560×1226 background video will have a hard time downloading the video on mobile (probably on desktop, too!). Resizing the video drastically decreases the files size, and might even allow for three or four different background videos to be served:

Width (px)   Video (MB)
1226         30
1080         8.1
720          4.3
608          3.3
405          1.76

Now, unfortunately, browsers do not support media queries for video in HTML, meaning that this just does not work:

<video preload="auto" autoplay muted controls
  source sizes="(max-width: 1400px) 100vw, 1400px"
  srcset="small.mp4 200w,
          medium.mp4 800w,
          large.mp4 1400w"
  src="large.mp4">
</video>

Therefore, we’ll need to create a small JS wrapper to deliver the videos we want to different screen sizes. But before we go there…

Downloading Video, But Hiding It From View

Another throwback to the early responsive web is to download full-size images, but to hide them on mobile devices. Your customers get all the delay for downloading the large images (and hit to mobile data plan, and extra battery drain, etc.), and none of the benefit of actually seeing the image. This occurs quite frequently with video on mobile. So, as we write our script, we can ensure that smaller screens never request the video that will not appear in the first place.

Retina Quality Videos

You may have different videos for different device screen densities. This can lead to added time to download the videos to your mobile customers. You may wish to prevent retina videos on smaller screen devices, or on devices with limited network bandwidth, falling back to standard quality videos for these devices. Tools like the Network Information API can provide you with the network throughput, and help you decide which video quality you’d like to serve to your customer.
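A sketch of that decision in JavaScript (the Network Information API is not available in every browser, so feature-detect first; the 1.6 Mbps threshold is just an example):

var connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
var useRetina = window.devicePixelRatio > 1;

if (connection) {
  // downlink is the estimated bandwidth in Mbps; saveData signals a metered connection
  if (connection.downlink < 1.6 || connection.saveData) {
    useRetina = false; // fall back to standard quality video
  }
}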

Downloading Different Video Types Based On Device Size And Network Quality

We’ve just covered a few ways to optimize the delivery of movies to smaller screens, and also noted the inability of the video tag to choose between video types, so here is a quick JS snippet that will use the screen width to:

Not deliver video on screens below 500px;
Deliver small videos for screens 500-1400;
Deliver a larger sized video to all other devices.

<html><body>
<div id="video"></div>
<div id="text"></div>
<script>
// get screen width and pixel ratio
var width = screen.width;
var dpr = window.devicePixelRatio;
// initialise 2 videos:
// "small" is 960 pixels wide (2.6 MB), "large" is 1920 pixels wide (10 MB)
var smallVideo = "http://res.cloudinary.com/dougsillars/video/upload/w_960/v1534228645/30s4kbbb_oblsgc.mp4";
var bigVideo = "http://res.cloudinary.com/dougsillars/video/upload/w_1920/v1534228645/30s4kbbb_oblsgc.mp4";
// TODO add logic on adding retina videos
if (width < 500) {
  console.log("this is a very small screen, no video will be requested");
} else if (width < 1400) {
  console.log("let's call this mobile sized");
  var videoTag = '<video preload="auto" width="100%" autoplay muted controls src="' + smallVideo + '"/>';
  console.log(videoTag);
  document.getElementById('video').innerHTML = videoTag;
  document.getElementById('text').innerHTML = "This is a small video.";
} else {
  var videoTag = '<video preload="auto" width="100%" autoplay muted controls src="' + bigVideo + '"/>';
  document.getElementById('video').innerHTML = videoTag;
  document.getElementById('text').innerHTML = "This is a big video.";
}
</script>
</body></html>

This script divides user’s screens into three options:

Under 500 pixels, no video is shown.
Between 500 and 1400, we have a smaller video.
For larger than 1400 pixel wide screens, we have a larger video.

Our page has a responsive video with two different sizes: one for mobile, and another for desktop-sized screens. Mobile users get great video quality, but the file is only 2.6 MB, compared to the 10MB video for desktop.

Animated GIFs

Animated GIFs are big files. While both aGIFs and video files compress the data across the width and height dimensions, only video files have compression along the (often larger) time axis. aGIFs are essentially “flipping through” static GIF images quickly. This lack of compression adds a significant amount of data. Thankfully, it is possible to replace aGIFs with a looping video, potentially saving MBs of data for each request.

<video loop autoplay muted playsinline src="pseudoGif.mp4">

In Safari, there is an even fancier approach: You can place a looping mp4 in the picture tag, like so:

<picture>
  <source type="video/mp4" loop autoplay srcset="loopingmp4.mp4">
  <source type="image/webp" srcset="animated.webp">
  <img src="animated.gif">
</picture>

In this case, Safari will play the looping MP4, while Chrome (and other browsers that support WebP) will play the animated WebP, with a fallback to the animated GIF. You can read more about this approach in Colin Bendell’s great post.

Third-Party Videos

One of the easiest ways to add video to your website is to simply copy/paste the code from a video sharing service and put it on your site. However, just like adding any third party to your site, you need to be vigilant about what kind of content is added to your page, and how that will affect page load. Many of these “simply paste this into your HTML” widgets add 100s of KB of JavaScript. Others will download the entire movie (think preload="auto"), and some will do both.

Third-Party Video Best Practice: Trust but verify. Examine how much content is added, and how much it affects your page load time. Also, the behavior might change, so track with your analytics regularly.

Streaming Startup

When a video stream is requested, the server supplies a manifest file to the player, listing every available stream (with dimensions and bitrate information). In HLS streaming, the player generally chooses the first stream in the list to begin playback. Therefore, the stream positioned first in the manifest file should be optimized for video startup on both mobile and desktop (or perhaps alternative manifest files should be delivered to mobile vs. desktop).

In most cases, the startup is optimized by using a lower quality stream to begin playback. Once the player downloads a few segments, it has a better idea of available throughput and can select a higher quality stream for later segments. As a user, you have likely seen this — where the first few seconds of a video looks very pixelated, but a few seconds into playback the video sharpens.
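In an HLS master manifest, that simply means ordering the variant streams so that a modest bitrate comes first. A hypothetical example:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2400000,RESOLUTION=1280x720
high/index.m3u8

Here, a player that defaults to the first entry starts on the 800 KBPS stream and can switch up once it has measured the real throughput.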

In examining 1,065 manifest files delivered to mobile devices from the HTTP Archive, we find that 59% of videos have an initial bitrate under 1.2 MBPS — and are likely to start streaming without much delay at 1.6 MBPS 3G data rates. 11% use a bitrate between 1.2 and 1.6 MBPS — which may slow the startup on 3G, and 30% have a bitrate above 1.6 MBPS — and are unable to play back at this bitrate on a 3G connection. Based on this data, it appears that ~41% of all videos will not be able to sustain the initial bitrate on mobile — adding to startup delay, and possibly an increased number of stalls during playback.

Column chart showing initial bitrates in streaming videos. Many videos have too high an initial bitrate to stream on mobile.

Initial bitrate for video streams (Large preview)

Streaming Startup Best Practice: Ensure your initial bitrate in the manifest file is one that will work for most of your customers. If the player has to change streams during startup, playback will be delayed and you will lose video views.

So, what happens when the video’s bitrate is near (or above) the available throughput? After a few seconds of download without a completed video segment ready for playback, the player stops the download and chooses a lower quality bitrate video, and begins the process again. The action of downloading a video segment and then abandoning it leads to additional startup delay, which will lead to video abandonment.

We can visualize this by building video manifests with different initial bitrates. We test 3 different scenarios: starting with the lowest (215 KBPS), middle (600 KBPS), and highest bitrate (2.6 MBPS).

When beginning with the lowest quality video, playback begins at 11s. After a few seconds, the player begins requesting a higher quality stream, and the picture sharpens.

When starting with the highest bitrate (testing on a 3G connection at 1.6 MBPS), the player quickly realizes that playback cannot occur, and switches to the lowest bitrate video (215 KBPS). The video starts playing at 17s. There is a 6-second delay, and the video quality is the same low quality delivered to in the first test.

Using the middle-quality video allows for a bit of a tradeoff; the video begins playing at 13s (2 seconds slower), but is high quality from the start, and there is no jump from pixelated to higher quality video.

Best Practice for Video Startup: For fastest playback, start with the lowest quality stream. For longer videos, you might consider using a ‘middle quality’ stream at start to deliver sharp video at startup (with a slightly longer delay).

Thumbnails of 3 pages with video loading.

WebPage Test Thumbnails (Large preview)

WebPageTest results: Initial video stream is low, middle and high (from top to bottom). The video starts the fastest with the lowest quality video. It is important to note that the high quality start video at 17s is the same quality as the low quality start at 11s.

Streaming: Continuing Playback

When the video player can determine the optimal video stream for playback and the stream is lower than the available network speed, the video will playback with no issues. There are tricks that can help ensure that the video will deliver in an optimal manner. If we examine the following manifest entry:

#EXT-X-STREAM-INF:BANDWIDTH=912912,PROGRAM-ID=1,CODECS="avc1.42c01e,mp4a.40.2",RESOLUTION=640x360,SUBTITLES="subs"
video/600k.m3u8

The information line reports that this stream has a 913 KBPS bitrate, and 640×360 resolution. If we look at the URL that this line points to, we see that it references a 600k video. Examining the video files shows that the video is 600 KBPS, and the manifest is overstating the bitrate.

Overstating The Video Bitrate

Pro: Overstating the bitrate will ensure that when the player chooses a stream, the video will download faster than expected, and the buffer will fill up faster than expected, reducing the possibility of a stall.

Con: By overstating the bitrate, the video delivered will be a lower quality stream. If we look at the entire list of reported vs. actual bitrates:

Reported (KBPS)   Actual (KBPS)   Resolution
913               600             640x360
142               64              320x180
297               180             512x288
506               320             512x288
689               450             412x288
1410              950             853x480
2090              1500            1280x720

For users on a 1.6 MBPS connection, the player will choose the 913 KBPS bitrate, serving the customer 600 KBPS video. However, if the bitrates had been reported accurately, the 950 KBPS bitrate would be used, and would likely have streamed with no issues. While the choices here prevent stalls, they also lower the quality of the delivered video to the consumer.

Best Practice: A small overstatement of video bitrate may be useful to reduce the number of stalls in playback. However, too large a value can lead to reduced quality playback.


Conclusion

In this post, we’ve walked through a number of ways to optimize the videos that you present on your websites. By following the best practices illustrated in this post:

preload="auto": only use this if there is a high probability that the video will be watched by your customers.
preload="metadata": the default in Chrome, but it can still lead to large video file downloads. Use with caution.
Silent videos (looping GIFs or background videos): strip out the audio channel.
Video dimensions: consider delivering differently sized video to mobile over desktop. The videos will be smaller, download faster, and your users are unlikely to see the difference (your server load will go down too!).
Video compression: don’t forget to compress the videos to ensure that they are delivered efficiently.
Don’t ‘hide’ videos: if the video will not be displayed, don’t download it.
Audit your third-party videos regularly.
Streaming: start with a lower quality stream to ensure fast startup. (For longer play videos, consider a medium bitrate for better quality at startup.)
Streaming: it’s OK to be conservative on bitrate to prevent stalling, but go too far, and the streams will deliver a lower quality video.

You will find that the video on your page is streamlined for optimal delivery and that your customers will not only delight in the video you present but also enjoy a faster page load time overall.


The CSS Working Group At TPAC: What’s New In CSS?

Original Source: https://www.smashingmagazine.com/2018/10/tpac-css-working-group-new/


Rachel Andrew


Last week, I attended W3C TPAC as well as the CSS Working Group meeting there. Various changes were made to specifications, and discussions were had which I feel are of interest to web designers and developers. In this article, I’ll explain a little bit about what happens at TPAC, and show some examples and demos of the things we discussed at TPAC for CSS in particular.

What Is TPAC?

TPAC is the Technical Plenary / Advisory Committee Meetings Week of the W3C. A chance for all of the various working groups that are part of the W3C to get together under one roof. The event is in a different part of the world each year, this year it was held in Lyon, France. At TPAC, Working Groups such as the CSS Working Group have their own meetings, just as we do at other times of the year. However, because we are all in one building, it means that people from other groups can more easily come as observers, and cross-working group interests can be discussed.

Attendees of TPAC are typically members of one or more of the Working Groups, working on W3C technologies. They will either be representatives of a member organization or Invited Experts. As with any other meetings of W3C Working Groups, the minutes of all of the discussions held at TPAC will be openly available, usually as IRC logs scribed during the meetings.

The CSS Working Group

The CSS Working Group meet face-to-face at TPAC and on at least two other occasions during the year; this is in addition to our weekly phone calls. At all of our meetings, the various issues raised on the specifications are discussed, and decisions made. Some issues are kept for face-to-face discussions due to the benefits of being able to have them with the whole group, or just being able to all get around a whiteboard or see a demo on screen.

When an issue is discussed in any meeting (whether face-to-face or teleconference), the relevant GitHub issue is updated with the minutes of the discussion. This means if you have an issue you want to keep track of, you can star it and see when it is updated. The full IRC minutes are also posted to the www-style mailing list.

Here is a selection of the things we discussed that I think will be of most interest to you.


CSS Scrollbars

The CSS Scrollbars specification seeks to give a standard way of styling the size and color of scrollbars. If you have Firefox Nightly, you can test it out. To see the examples below, use Firefox Nightly and enable the flags layout.css.scrollbar-width.enabled and layout.css.scrollbar-color.enabled by visiting about:config in Firefox Nightly.

The specification gives us two new properties: scrollbar-width and scrollbar-color. The scrollbar-width property can take a value of auto, thin, none, or length (such as 1em). It looks as if the length value may be removed from the specification. As you can imagine, it would be possible for a web developer to make a very unusable scrollbar by playing with the width, so it may be better to allow the browser to decide the exact width that makes sense but instead to either show thin or thick scrollbars. Firefox has not implemented the length option.

If you use auto as the value, then the browser will use the default scrollbars: thin will give you a thin scrollbar, and none will show no visible scrollbar (but the element will still be scrollable).
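For example, based on the current draft, a scrolling area with a thin scrollbar might be created like this (a minimal sketch; the class name is my own):

.scroll-area {
  width: 20em;
  height: 10em;
  overflow-y: scroll;
  /* Values: auto | thin | none (the length option may be dropped) */
  scrollbar-width: thin;
}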

A scrolling element with a thin scrollbar

In this example I have set scrollbar-width: thin.

In a browser with support for CSS Scrollbars, you can see this in action in the demo:

See the Pen CSS Scrollbars: scrollbar-width by Rachel Andrew (@rachelandrew) on CodePen.

The scrollbar-color property deals with — as you would expect — scrollbar colors. A scrollbar has two parts which you may wish to color independently:

thumb
The slider that moves up and down as you scroll.
track
The scrollbar background.

The values for the scrollbar-color property are auto, dark, light and <color> <color>.

Using auto as the keyword value will give you the default scrollbar colors for that browser. A value of dark will provide a dark scrollbar, either in the dark mode of that platform or a custom dark mode; light gives the light mode of the platform or a custom light mode.

To set your own colors, you add two colors as the value that are separated by a space. The first color will be used for the thumb and the second one for the track. You should take care that there is enough contrast between the colors, as otherwise the scrollbar may be difficult to use for some people.
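For example, to get a purple thumb on a white track, something like the following should work in Firefox Nightly with the flags enabled (again, the class name is my own):

.scroll-area {
  overflow-y: scroll;
  /* The first color styles the thumb, the second the track. */
  scrollbar-color: rebeccapurple #fff;
}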

A scrolling element with a purple and white scrollbar

In this example, I have set custom colors for the scrollbar elements.

In a browser with support for CSS Scrollbars, you can see this in the demo:

See the Pen CSS Scrollbars: scrollbar-color by Rachel Andrew (@rachelandrew) on CodePen.

Aspect Ratio Units

We’ve been using the padding hack in CSS to achieve aspect ratio boxes for some time. However, with the advent of Grid Layout and better ways of sizing content, having a real way to do aspect ratios in CSS has become a more pressing need.

There are two issues raised on GitHub which relate to this requirement:

Aspect Ratio Units Needed
Aspect Ratio.

There is now a draft spec in Level 4 of CSS Sizing, and the decision of the meeting was that this needed further discussion on GitHub before any decisions can be made. So, if you are interested in this, or have additional use cases, the CSS Working Group would be interested in your comments on those issues.

The :where() Functional Pseudo-Class

Last year, the CSSWG resolved to add a pseudo-class which acted like :matches() but with zero specificity, thus making it easy to override without needing to artificially inflate the specificity of later elements to override something in a default stylesheet.

The :matches() pseudo-class might be new to you as it is a Level 4 selector; however, it allows you to specify a group of selectors to apply some CSS to. For example, you could write:

.foo a:hover,
p a:hover {
color: green;
}

Or, with :matches():

:matches(.foo, p) a:hover {
color: green;
}

If you have ever had a big stack of selectors just in order to set the same couple of rules, you will see how useful this will be. The following CodePen uses the prefixed names -webkit-any and -moz-any to demonstrate the :matches() functionality. You can also read more about :matches() on MDN.

See the Pen :matches() and prefixed versions by Rachel Andrew (@rachelandrew) on CodePen.

Where we often do this kind of stacking of selectors, and thus where :matches() will be most useful is in some kind of initial, default stylesheet. However, we then need to be careful when overwriting those defaults that any overwriting is done in a way that will ensure it is more specific than the defaults. It is for this reason that a zero specificity version was proposed.

The issue that was discussed in the meeting was in regard to naming this pseudo-class. You can see the final resolution here, and if you wonder why various names were ruled out, take a look at the full thread. Naming things in CSS is very hard, because we are all going to have to live with it forever! After a lot of debate, the group voted and decided to call this selector :where().
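To see why the zero specificity matters, here is a sketch of how :where() should behave once implemented, assuming it accepts a selector list just as :matches() does:

/* In a default stylesheet: the :where() part adds no specificity,
   so the whole selector counts as just a:hover. */
:where(.foo, p) a:hover {
  color: green;
}

/* Later, a plain a:hover is enough to override the default,
   with no need to artificially inflate specificity. */
a:hover {
  color: hotpink;
}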

Since the meeting, and while I was writing up this post, a suggestion has been raised to rename :matches() to :is(). Take a look at the issue and comment if you have any strong feelings either way!

Logical Shorthands For Margins And Padding

On the subject of naming things, I’ve written about Logical Properties and Values here on Smashing Magazine in the past; take a look at “Understanding Logical Properties and Values”. These properties and values provide flow-relative mappings. This means that if you are using a writing mode other than the horizontal top-to-bottom writing mode used by English, things like margins and padding, width and height follow the text direction and are not linked to the physical screen dimensions.

For example, for physical margins we have:

margin-top
margin-right
margin-bottom
margin-left

The logical mappings for these (assuming horizontal-tb) are:

margin-block-start
margin-inline-end
margin-block-end
margin-inline-start

We can have two-value shorthands. For example, to set both margin-block-start and margin-block-end, we can use the shorthand margin-block: 20px 1em. The first value represents the start edge in the block dimension, the second value the end edge in the block dimension.
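In other words, assuming a horizontal-tb writing mode, the following two rules are equivalent:

/* Longhand logical properties */
.box {
  margin-block-start: 20px;
  margin-block-end: 1em;
}

/* Two-value shorthand: start edge first, then end edge */
.box {
  margin-block: 20px 1em;
}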

We hit a problem, however, when we come to the four-value shorthand margin. That property name is used for physical margins — how do we denote the logical four-value version? Various things have been suggested, including a switch at the top of the file:

@mode “logical”;

Or, to use a block that looks a little like a media query:

@mode (flow-mode: relative) {

}

Then various suggestions for keyword modifiers, using some punctuation character, or creating a brand new property name:

margin: relative 1em 2em 3em 4em;
margin: 1em 2em 3em 4em !relative;
margin-relative: 1em 2em 3em 4em;
~margin: 1em 2em 3em 4em;

You can read the issue to see the various things that are being considered. Issues discussed were that while the logical version may well end up being generally the default, sometimes you will want things to relate to the screen geometry; we need to be able to have both options in one stylesheet. Having a @mode setting at the top of the CSS could be confusing; it would fail if someone were to copy and paste a chunk of the stylesheet.

My preference is to have some sort of keyword value. That way, if you look at the rule, you can see exactly which mode is being used, even if it does seem slightly inelegant. It is the sort of thing that a preprocessor could deal with for you; if you did indeed want all of your properties and values to use the logical versions.

We didn’t manage to resolve on the issue, so if you do have thoughts on which of these might be best, or can see problems with them that we haven’t described, please comment on the issue on GitHub.

Web Platform Tests Discussion

At the CSS Working Group meeting, and then during the unconference-style Technical Plenary Day, I was involved in discussing how to get more people involved in writing tests for CSS specifications. The Web Platform Tests project aims to provide tests for all of the web platform. These tests then help browser vendors check whether their browser conforms to the spec. In the CSS Working Group, the aim is that any normative change to a specification which has reached Candidate Recommendation (CR) status should be accompanied by a test. This makes sense, as once a spec is in CR, we are asking browsers to implement that spec and provide feedback. They need to know if anything in the spec changes so they can update their code.

The problem is that we have very few people writing specs, so for spec writers to have to write all the tests will slow the progress of CSS down. We would love to see other people writing tests, as it is a way to contribute to the web platform and to gain deep knowledge of how specifications work. So we met to think about how we could encourage people to participate in the effort. I’ve written on this subject in the past; if the idea of writing tests for the platform interests you, take a look at my 24 Ways article on “Testing the Web Platform”.
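If you are wondering what such a test looks like, many CSS tests in Web Platform Tests are reftests: an HTML page using the feature under test, which must render identically to a reference page that achieves the same result by other means. A minimal sketch, with an illustrative file name and title:

<!DOCTYPE html>
<title>CSS Logical Properties Test: margin-block shorthand</title>
<link rel="help" href="https://drafts.csswg.org/css-logical/">
<link rel="match" href="margin-block-ref.html">
<style>
  div { margin-block: 20px 1em; }
</style>
<div>This box must have the same margins as the one in the reference file.</div>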

On With The Work!

TPAC has added to my personal to-do list considerably. However, I’ve been able to pick up tips about specification editing, test writing, and to come up with a plan to get the Multi-Column Layout specification — of which I’m the co-editor — back to CR status. As someone who is not a fan of meetings, I’ve come to see how valuable these face-to-face meetings are for the web platform, giving those of us contributing to it a chance to share the knowledge we individually are developing. I feel it is important though to then take that knowledge and share it outside of the group in order to help more people get involved with developing as well as using the platform.

If you are interested in how the CSS Working Group functions, and how new CSS is invented and ends up in browsers, check out my 2017 CSSConf.eu presentation “Where Does CSS Come From?” and the information from fantasai in her posts “An Inside View of the CSS Working Group at W3C”.


Add a Powerful LMS to WordPress with Masterstudy Theme

Original Source: http://feedproxy.google.com/~r/1stwebdesigner/~3/_PfuxNGbso4/

Online education is incredibly useful and convenient. What’s more, it’s not just for schools. Both public and private organizations use online education to train employees and help members stay in the know. It transcends industry and can be utilized in any number of ways.

The great news is that adding this capability to your WordPress website is as easy as installing Masterstudy. It’s the theme that will turn your site into an LMS (Learning Management System). You can build, customize and manage online courses with ease.

Want to learn more? Let’s take a look at what makes this LMS so powerful.

Masterstudy Offline Course Demo Homepage

A Turnkey Solution for Online Education

Masterstudy is a WordPress theme built as a result of extensive research in online education. Every aspect has been carefully thought out, meaning that you’ll find all the features you need to run a full-fledged educational program.

Super-Fast, Thanks to Vue.js

The integrated Masterstudy LMS plugin offers several front and back-end features that are powered by Vue.js. The result is a UI that loads at blazing-fast speeds. Your students will spend more time learning and less time waiting for content to load.

Flexible Courses

The free Masterstudy LMS plugin is the perfect companion for the Masterstudy theme. It gives you the power to create courses that match your specific needs. Build Text, Video and Slideshow lesson types. Whatever type of content you’re looking to present, Masterstudy has you covered.

Masterstudy Video Lesson

Powerful Online Quizzes

Use the built-in quiz capabilities to help students reinforce what they’ve learned. Quizzes feature the ability to use an online timer, results reporting and optional retakes. Certificates can be awarded based on the criteria you set.

Built-In eCommerce

Masterstudy includes built-in support for PayPal and Stripe payment gateways. This provides you with the flexibility to offer courses as one-time payments or recurring subscriptions. Plus, support for Paid Memberships Pro offers you another way to sell online. Looking to sell offline courses? This capability is supported with the use of WooCommerce.

Encourage Communication

Students and instructors can easily stay in touch. Use the real-time question and answer feature during lessons to ensure that everyone is on the same page. And, the private messaging system facilitates easy communication between users, anytime.

Masterstudy Course Page

A Top-Quality WordPress Theme

With Masterstudy, you get a WordPress theme that is built to the highest standard. It’s been optimized for speed and will look pixel-perfect across all screens and devices. StylemixThemes, an Envato Power Elite Author, has gone to great lengths to ensure quality and ease of use.

Fully Customizable

Masterstudy empowers you with plenty of options to customize your site. The theme settings panel, powered by Redux framework, lets you tweak colors, fonts and more. Plus, you can choose from several header layouts for just the right look. The best part? You don’t need to touch any code!

Top Plugins Included

In addition to Masterstudy LMS Pro, you’ll enjoy free access to Visual Composer and Revolution Slider. They’ll help your site both look and function beautifully.

1-Click Demo Import

Want to get started quickly? Use Masterstudy’s 1-click demo import to start building immediately. There are currently six gorgeous demo layouts, with more in development.

24/7 Support

Don’t wait to get your questions answered. Masterstudy features extensive documentation and video tutorials. Or, take advantage of live chat or ticket-based support that is available 24/7.

Masterstudy Course Instructor Profile

Use Masterstudy to Open Your Own Online Education Hub

When it comes to online education, Masterstudy is the complete package. Create compelling courses and sell them online. The entire process is seamless and easy to customize.

You’ll also gain peace of mind in knowing that help is always just a click away. And, with free lifetime updates, you will always have the most stable and secure code, along with amazing new features.

Get started with Masterstudy today and bring the full LMS experience to your WordPress website.


Headless WordPress: The Ups And Downs Of Creating A Decoupled WordPress

Original Source: https://www.smashingmagazine.com/2018/10/headless-wordpress-decoupled/

Denis Žoljom

2018-10-26

WordPress has come a long way from its start as a simple blog-writing tool. Fifteen years later, it has become the number one CMS choice for developers and non-developers alike. WordPress now powers roughly 30% of the top 10 million sites on the web.

Ever since the REST API was bundled into WordPress core, developers have been able to experiment with it and use it in a decoupled way, i.e. writing the front-end part using JavaScript frameworks or libraries. At Infinum, we were (and still are) using WordPress in a ‘classic’ way: PHP for the frontend as well as the backend. After a while, we wanted to give the decoupled approach a go. In this article, I’ll share an overview of what it was that we wanted to achieve and what we encountered while trying to implement our goals.

There are several types of projects that can benefit from this approach. For example, simple presentational sites or sites that use WordPress as a backend are the main candidates for the decoupled approach.

In recent years, the industry thankfully started paying more attention to performance. However, being an easy-to-use inclusive and versatile piece of software, WordPress comes with a plethora of options that are not necessarily utilized in each and every project. As a result, website performance can suffer.

Recommended reading: How To Use Heatmaps To Track Clicks On Your WordPress Website

If long website response times keep you up at night, this is a how-to for you. I will cover the basics of creating a decoupled WordPress and some lessons learned, including:

The meaning of a “decoupled WordPress”
Working with the default WordPress REST API
Improving performance with the decoupled JSON approach
Security concerns


So, What Exactly Is A Decoupled WordPress?

When it comes down to how WordPress is programmed, one thing is certain: it doesn’t follow the Model-View-Controller (MVC) design pattern that many developers are familiar with. Because of its history, and because it is sort of a fork of an old blogging platform called “b2” (more details here), it’s largely written in a procedural way (using function-based code). WordPress core developers used a system of hooks which allowed other developers to modify or extend certain functionalities.

It’s an all-in-one system that is equipped with a working admin interface; it manages database connection, and has a bunch of useful APIs exposed that handle user authentication, routing, and more.

But thanks to the REST API, you can separate the WordPress backend as a sort of model and controller bundled together that handles data manipulation and database interaction, and use the REST API controller to interact with a separate view layer using various API endpoints. In addition to the MVC separation, we can (for security reasons or speed improvements) place the JS app on a separate server, like in the schema below:

Image depicting decoupled WordPress diagram with PHP and JS part separated

Decoupled WordPress diagram.

Advantages Of Using The Decoupled Approach

One reason why you may want to use this approach is to ensure a separation of concerns. The frontend and the backend interact via endpoints; each can be on its own separate server, which can be optimized specifically for each respective task, i.e. separately running a PHP app and a Node.js app.

By separating your frontend from the backend, it’s easier to redesign it in the future, without changing the CMS. Also, front-end developers only need to care about what to do with the data the backend provides them. This lets them get creative and use modern libraries like ReactJS, Vue or Angular to deliver highly dynamic web apps. For example, it’s easier to build a progressive web app when using the aforementioned libraries.

Another advantage is reflected in the website security. Exploiting the website through the backend becomes more difficult since it’s largely hidden from the public.

Recommended reading: WordPress Security As A Process

Shortcomings Of Using The Decoupled Approach

First, having a decoupled WordPress means maintaining two separate instances:

WordPress for the backend;
A separate front-end app, including timely security updates.

Second, some of the front-end libraries do have a steeper learning curve. It will either take a lot of time to learn a new language (if you are only accustomed to HTML and CSS for templating), or will require bringing additional JavaScript experts to the project.

Third, by separating the frontend, you are losing the power of the WYSIWYG editor, and the ‘Live Preview’ button in WordPress doesn’t work either.

Working With WordPress REST API

Before we delve deeper in the code, a couple more things about WordPress REST API. The full power of the REST API in WordPress came with version 4.7 on December 6th, 2016.

What WordPress REST API allows you to do is to interact with your WordPress installation remotely by sending and receiving JSON objects.

Setting Up A Project

Since it comes bundled with the latest WordPress installation, we will be working on the Twenty Seventeen theme. I’m working on Varying Vagrant Vagrants, and have set up a test site with the URL http://dev.wordpress.test/. This URL will be used throughout the article. We’ll also import posts from the wordpress.org Theme Review Team’s repository so that we have some test data to work with. But first, we will get familiar with the default endpoints, and then we’ll create our own custom endpoint.

Access The Default REST Endpoint

As already mentioned, WordPress comes with several built-in endpoints that you can examine by going to the /wp-json/ route:

http://dev.wordpress.test/wp-json/

Either by putting this URL directly in your browser, or adding it in the Postman app, you’ll get a JSON response from the WordPress REST API that looks something like this:

{
  "name": "Test dev site",
  "description": "Just another WordPress site",
  "url": "http://dev.wordpress.test",
  "home": "http://dev.wordpress.test",
  "gmt_offset": "0",
  "timezone_string": "",
  "namespaces": [
    "oembed/1.0",
    "wp/v2"
  ],
  "authentication": [],
  "routes": {
    "/": {
      "namespace": "",
      "methods": [
        "GET"
      ],
      "endpoints": [
        {
          "methods": [
            "GET"
          ],
          "args": {
            "context": {
              "required": false,
              "default": "view"
            }
          }
        }
      ],
      "_links": {
        "self": "http://dev.wordpress.test/wp-json/"
      }
    },
    "/oembed/1.0": {
      "namespace": "oembed/1.0",
      "methods": [
        "GET"
      ],
      "endpoints": [
        {
          "methods": [
            "GET"
          ],
          "args": {
            "namespace": {
              "required": false,
              "default": "oembed/1.0"
            },
            "context": {
              "required": false,
              "default": "view"
            }
          }
        }
      ],
      "_links": {
        "self": "http://dev.wordpress.test/wp-json/oembed/1.0"
      }
    },

    "wp/v2": {

So in order to get all of the posts on our site using REST, we need to go to http://dev.wordpress.test/wp-json/wp/v2/posts. Notice that wp/v2/ marks the reserved core endpoints like posts, pages, media, taxonomies, categories, and so on.
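The core collection endpoints also accept query parameters, so you can, for example, paginate the results and fetch the second page of five posts at a time:

http://dev.wordpress.test/wp-json/wp/v2/posts?per_page=5&page=2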

So, how do we add a custom endpoint?

Create A Custom REST Endpoint

Let’s say we want to add a new endpoint or an additional field to an existing endpoint. There are several ways we can do that. The first can happen automatically when creating a custom post type. For instance, say we want to create a documentation endpoint. Let’s create a small test plugin. Create a test-documentation folder in the wp-content/plugins folder, and add a documentation.php file that looks like this:

<?php
/**
 * Test plugin
 *
 * @since 1.0.0
 * @package test_plugin
 *
 * @wordpress-plugin
 * Plugin Name: Test Documentation Plugin
 * Plugin URI:
 * Description: The test plugin that adds rest functionality
 * Version: 1.0.0
 * Author: Infinum
 * Author URI: https://infinum.co/
 * License: GPL-2.0+
 * License URI: http://www.gnu.org/licenses/gpl-2.0.txt
 * Text Domain: test-plugin
 */

namespace Test_Plugin;

// If this file is called directly, abort.
if ( ! defined( 'WPINC' ) ) {
	die;
}

/**
 * Class that holds all the necessary functionality for the
 * documentation custom post type
 *
 * @since 1.0.0
 */
class Documentation {
	/**
	 * The plugin name slug
	 *
	 * @var string
	 *
	 * @since 1.0.0
	 */
	const PLUGIN_NAME = 'documentation-plugin';

	/**
	 * The custom post type slug
	 *
	 * @var string
	 *
	 * @since 1.0.0
	 */
	const POST_TYPE_SLUG = 'documentation';

	/**
	 * The custom taxonomy type slug
	 *
	 * @var string
	 *
	 * @since 1.0.0
	 */
	const TAXONOMY_SLUG = 'documentation-category';

	/**
	 * Register custom post type
	 *
	 * @since 1.0.0
	 */
	public function register_post_type() {
		$args = array(
			'label'              => esc_html__( 'Documentation', 'test-plugin' ),
			'public'             => true,
			'menu_position'      => 47,
			'menu_icon'          => 'dashicons-book',
			'supports'           => array( 'title', 'editor', 'revisions', 'thumbnail' ),
			'has_archive'        => false,
			'show_in_rest'       => true,
			'publicly_queryable' => false,
		);

		register_post_type( self::POST_TYPE_SLUG, $args );
	}

	/**
	 * Register custom tag taxonomy
	 *
	 * @since 1.0.0
	 */
	public function register_taxonomy() {
		$args = array(
			'hierarchical'          => false,
			'label'                 => esc_html__( 'Documentation tags', 'test-plugin' ),
			'show_ui'               => true,
			'show_admin_column'     => true,
			'update_count_callback' => '_update_post_term_count',
			'show_in_rest'          => true,
			'query_var'             => true,
		);

		register_taxonomy( self::TAXONOMY_SLUG, [ self::POST_TYPE_SLUG ], $args );
	}
}

$documentation = new Documentation();

add_action( 'init', [ $documentation, 'register_post_type' ] );
add_action( 'init', [ $documentation, 'register_taxonomy' ] );

By registering the new post type and taxonomy, and setting the show_in_rest argument to true, WordPress automatically created REST routes in the /wp/v2/ namespace. You now have the http://dev.wordpress.test/wp-json/wp/v2/documentation and http://dev.wordpress.test/wp-json/wp/v2/documentation-category endpoints available. If we add a post to our newly created documentation custom post type and then go to the http://dev.wordpress.test/wp-json/wp/v2/documentation endpoint, we’ll get a response that looks like this:

[
  {
    "id": 4,
    "date": "2018-06-11T19:48:51",
    "date_gmt": "2018-06-11T19:48:51",
    "guid": {
      "rendered": "http://dev.wordpress.test/?post_type=documentation&p=4"
    },
    "modified": "2018-06-11T19:48:51",
    "modified_gmt": "2018-06-11T19:48:51",
    "slug": "test-documentation",
    "status": "publish",
    "type": "documentation",
    "link": "http://dev.wordpress.test/documentation/test-documentation/",
    "title": {
      "rendered": "Test documentation"
    },
    "content": {
      "rendered": "<p>This is some documentation content</p>\n",
      "protected": false
    },
    "featured_media": 0,
    "template": "",
    "documentation-category": [
      2
    ],
    "_links": {
      "self": [
        {
          "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation/4"
        }
      ],
      "collection": [
        {
          "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation"
        }
      ],
      "about": [
        {
          "href": "http://dev.wordpress.test/wp-json/wp/v2/types/documentation"
        }
      ],
      "version-history": [
        {
          "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation/4/revisions"
        }
      ],
      "wp:attachment": [
        {
          "href": "http://dev.wordpress.test/wp-json/wp/v2/media?parent=4"
        }
      ],
      "wp:term": [
        {
          "taxonomy": "documentation-category",
          "embeddable": true,
          "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation-category?post=4"
        }
      ],
      "curies": [
        {
          "name": "wp",
          "href": "https://api.w.org/{rel}",
          "templated": true
        }
      ]
    }
  }
]

This is a great starting point for our single-page application. Another way we can add a custom endpoint is by hooking onto the rest_api_init hook and creating an endpoint ourselves. Let’s add a custom-documentation route that is a bit different from the one we registered. Still working in the same plugin, we can add:

/**
 * Create a custom endpoint
 *
 * @since 1.0.0
 */
public function create_custom_documentation_endpoint() {
	register_rest_route(
		self::PLUGIN_NAME . '/v1', '/custom-documentation',
		array(
			'methods'  => 'GET',
			'callback' => [ $this, 'get_custom_documentation' ],
		)
	);
}

/**
 * Create a callback for the custom documentation endpoint
 *
 * @return string JSON that indicates success/failure of the update,
 *                or JSON that indicates an error occurred.
 * @since 1.0.0
 */
public function get_custom_documentation() {
	/* Some permission checks can be added here. */

	// Return only documentation name and tag name.
	$doc_args = array(
		'post_type'   => self::POST_TYPE_SLUG,
		'post_status' => 'publish',
		'perm'        => 'readable',
	);

	// Note the leading backslash: we are inside the Test_Plugin namespace.
	$query = new \WP_Query( $doc_args );

	$response = [];
	$counter  = 0;

	// The Loop.
	if ( $query->have_posts() ) {
		while ( $query->have_posts() ) {
			$query->the_post();

			$post_id   = get_the_ID();
			$post_tags = get_the_terms( $post_id, self::TAXONOMY_SLUG );

			$response[ $counter ]['title'] = get_the_title();

			foreach ( $post_tags as $tags_key => $tags_value ) {
				$response[ $counter ]['tags'][] = $tags_value->name;
			}
			$counter++;
		}
	} else {
		$response = esc_html__( 'There are no posts.', 'documentation-plugin' );
	}
	/* Restore original Post Data */
	wp_reset_postdata();

	return rest_ensure_response( $response );
}

And hook the create_custom_documentation_endpoint() method to the rest_api_init hook, like so:

add_action( 'rest_api_init', [ $documentation, 'create_custom_documentation_endpoint' ] );

This will add a custom route at http://dev.wordpress.test/wp-json/documentation-plugin/v1/custom-documentation, with the callback returning the response for that route:

[{
  "title": "Another test documentation",
  "tags": ["Another tag"]
}, {
  "title": "Test documentation",
  "tags": ["REST API", "test tag"]
}]

There are a lot of other things you can do with the REST API (you can find more details in the REST API handbook).

Work Around Long Response Times When Using The Default REST API

For anyone who has tried to build a decoupled WordPress site, this is not a new thing — the REST API is slow.

My team and I first encountered the strangely lagging WordPress REST API on a client site (not decoupled), where we used the custom endpoints to get the list of locations on a Google map, alongside other meta information created using the Advanced Custom Fields Pro plugin. It turned out that the time to first byte (TTFB) — which is used as an indication of the responsiveness of a web server or other network resource — took more than 3 seconds.

After a bit of investigating, we realized the default REST API calls were actually really slow, especially when we “burdened” the site with additional plugins. So, we did a small test. We installed a couple of popular plugins and encountered some interesting results. The Postman app gave a load time of 1.97s for 41.9KB of response size. Chrome’s load time was 1.25s (TTFB was 1.25s, content was downloaded in 3.96ms). Just to retrieve a simple list of posts. No taxonomy, no user data, no additional meta fields.

Why did this happen?

It turns out that accessing REST API on the default WordPress will load the entire WordPress core to serve the endpoints, even though it’s not used. Also, the more plugins you add, the worse things get. The default REST controller WP_REST_Controller is a really big class that does a lot more than necessary when building a simple web page. It handles routes registering, permission checks, creating and deleting items, and so on.

There are two common workarounds for this issue:

Intercept the loading of the plugins, and prevent loading them all when you need to serve a simple REST response (sketched below);
Load only the bare minimum of WordPress and store the data in a transient, from which we then fetch the data using a custom page.
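As a rough sketch of the first workaround, a must-use plugin (which loads before regular plugins) can short-circuit the list of active plugins for a given request. The /simple-json/ route check here is a hypothetical example:

<?php
// mu-plugins/skip-plugins.php
add_filter( 'option_active_plugins', function( $plugins ) {
	// Serve our hypothetical /simple-json/ route with no plugins loaded.
	if ( isset( $_SERVER['REQUEST_URI'] ) && false !== strpos( $_SERVER['REQUEST_URI'], '/simple-json/' ) ) {
		return array();
	}
	return $plugins;
} );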

Improving Performance With The Decoupled JSON Approach

When you are working with simple presentation sites, you don’t need all the functionality the REST API offers. Of course, this is where good planning is crucial. You really don’t want to build your site without the REST API, only to decide a year later that you’d like to connect it to a mobile app that needs the REST API functionality after all. Do you?

For that reason, we utilized two WordPress features that can help you out when serving simple JSON data:

The Transients API for caching;
Loading the minimum necessary WordPress by using the SHORTINIT constant.

Creating A Simple Decoupled Pages Endpoint

Let’s create a small plugin that will demonstrate the effect that we’re talking about. First, add a wp-config-simple.php file to your json-transient plugin folder that looks like this:

<?php
/**
 * Create simple wp configuration for the routes
 *
 * @since 1.0.0
 * @package json-transient
 */

define( 'SHORTINIT', true );
$parse_uri = explode( 'wp-content', $_SERVER['SCRIPT_FILENAME'] );
require_once filter_var( $parse_uri[0] . 'wp-load.php', FILTER_SANITIZE_STRING );

The define( 'SHORTINIT', true ); call will prevent the majority of WordPress core files from being loaded, as can be seen in the wp-settings.php file.

We still may need some of the WordPress functionality, so we can require the file (like wp-load.php) manually. Since wp-load.php sits in the root of our WordPress installation, we will fetch it by getting the path of our file using $_SERVER['SCRIPT_FILENAME'], and then exploding that string on the string wp-content. This will return an array with two values:

The root of our installation;
The rest of the file path (which is of no interest to us).

Keep in mind that we’re using the default installation of WordPress, and not a modified one, like for example in the Bedrock boilerplate, which splits the WordPress in a different file organization.

Lastly, we require the wp-load.php file, with a little bit of sanitization, for security.

In our init.php file, we’ll add the following:

<?php
/**
 * Test plugin
 *
 * @since 1.0.0
 * @package json-transient
 *
 * @wordpress-plugin
 * Plugin Name: Json Transient
 * Plugin URI:
 * Description: Proof of concept for caching api like calls
 * Version: 1.0.0
 * Author: Infinum
 * Author URI: https://infinum.co/
 * License: GPL-2.0+
 * License URI: http://www.gnu.org/licenses/gpl-2.0.txt
 * Text Domain: json-transient
 */

namespace Json_Transient;

// If this file is called directly, abort.
if ( ! defined( 'WPINC' ) ) {
	die;
}

class Init {
	/**
	 * Get the array of allowed types to do operations on.
	 *
	 * @return array
	 *
	 * @since 1.0.0
	 */
	public function get_allowed_post_types() {
		return array( 'post', 'page' );
	}

	/**
	 * Check if post type is allowed to be saved in transient.
	 *
	 * @param string $post_type Get post type.
	 * @return boolean
	 *
	 * @since 1.0.0
	 */
	public function is_post_type_allowed_to_save( $post_type = null ) {
		if ( ! $post_type ) {
			return false;
		}

		$allowed_types = $this->get_allowed_post_types();

		if ( in_array( $post_type, $allowed_types, true ) ) {
			return true;
		}

		return false;
	}

	/**
	 * Get page cache name for transient by post slug and type.
	 *
	 * @param string $post_slug Page slug to save.
	 * @param string $post_type Page type to save.
	 * @return string
	 *
	 * @since 1.0.0
	 */
	public function get_page_cache_name_by_slug( $post_slug = null, $post_type = null ) {
		if ( ! $post_slug || ! $post_type ) {
			return false;
		}

		$post_slug = str_replace( '__trashed', '', $post_slug );

		return 'jt_data_' . $post_type . '_' . $post_slug;
	}

	/**
	 * Get full post data by post slug and type.
	 *
	 * @param string $post_slug Page slug to do query by.
	 * @param string $post_type Page type to do query by.
	 * @return array
	 *
	 * @since 1.0.0
	 */
	public function get_page_data_by_slug( $post_slug = null, $post_type = null ) {
		if ( ! $post_slug || ! $post_type ) {
			return false;
		}

		$page_output = '';

		$args = array(
			'name'           => $post_slug,
			'post_type'      => $post_type,
			'posts_per_page' => 1,
			'no_found_rows'  => true,
		);

		// Note the leading backslash: we are inside the Json_Transient namespace.
		$the_query = new \WP_Query( $args );

		if ( $the_query->have_posts() ) {
			while ( $the_query->have_posts() ) {
				$the_query->the_post();
				$page_output = $the_query->post;
			}
			wp_reset_postdata();
		}
		return $page_output;
	}

	/**
	 * Return page in JSON format.
	 *
	 * @param string $post_slug Page slug.
	 * @param string $post_type Page type.
	 * @return json
	 *
	 * @since 1.0.0
	 */
	public function get_json_page( $post_slug = null, $post_type = null ) {
		if ( ! $post_slug || ! $post_type ) {
			return false;
		}

		return wp_json_encode( $this->get_page_data_by_slug( $post_slug, $post_type ) );
	}

	/**
	 * Update page transient for caching on the save_post action hook.
	 *
	 * @param int $post_id Saved post ID provided by the action hook.
	 *
	 * @since 1.0.0
	 */
	public function update_page_transient( $post_id ) {

		$post_status = get_post_status( $post_id );
		$post        = get_post( $post_id );
		$post_slug   = $post->post_name;
		$post_type   = $post->post_type;
		$cache_name  = $this->get_page_cache_name_by_slug( $post_slug, $post_type );

		if ( ! $cache_name ) {
			return false;
		}

		if ( $post_status === 'auto-draft' || $post_status === 'inherit' ) {
			return false;
		} else if ( $post_status === 'trash' ) {
			delete_transient( $cache_name );
		} else {
			if ( $this->is_post_type_allowed_to_save( $post_type ) ) {
				$cache = $this->get_json_page( $post_slug, $post_type );
				set_transient( $cache_name, $cache, 0 );
			}
		}
	}
}

$init = new Init();

add_action( 'save_post', [ $init, 'update_page_transient' ] );

The helper methods in the above code will enable us to do some caching:

get_allowed_post_types()
This method returns the post types that we want to expose through our custom ‘endpoint’. In the plugin we actually built, this method is filterable, so you can add additional post types with a simple filter.
is_post_type_allowed_to_save()
This method simply checks to see if the post type we’re trying to fetch the data from is in the allowed array specified by the previous method.
get_page_cache_name_by_slug()
This method returns the name of the transient that the data will be fetched from.
get_page_data_by_slug()
This method performs the WP_Query on the post via its slug and post type, and returns the contents of the post object that we’ll convert to JSON using the get_json_page() method.
update_page_transient()
This method runs on the save_post hook and overwrites the transient in the database with the JSON data of our post. It’s the key caching method of the plugin.

Let’s explain transients in more depth.

Transients API

The Transients API is used to store data in the options table of your WordPress database for a specific period of time. It’s a persistent object cache, meaning that you are storing some object, for example, results of big and slow queries or full pages, that can be persisted across page loads. It is similar to the regular WordPress Object Cache, but unlike WP_Cache, transients will persist data across page loads, whereas WP_Cache (which stores the data in memory) will only hold the data for the duration of a request.

It’s a key-value store, meaning that we can easily and quickly fetch the desired data, similar to what in-memory caching systems like Memcached or Redis do. The difference is that you’d usually need to install those separately on the server (which can be an issue on shared servers), whereas transients are built in with WordPress.

As noted on its Codex page, transients are inherently sped up by caching plugins, since those can store transients in memory instead of in the database. The general rule is that you shouldn’t assume that a transient is always present in the database — which is why it’s good practice to check for its existence before fetching it:

$transient_name = get_transient( 'transient_name' );
if ( $transient_name === false ) {
	set_transient( 'transient_name', $transient_data, $transient_expiry );
}

You can use transients without an expiration time (as we are doing), and that’s why we implemented a sort of ‘cache-busting’ on post save. In addition to all the great functionality they provide, a single transient can hold up to 4GB of data, but we don’t recommend storing anything nearly that big in a single database field.

Recommended reading: Be Watchful: PHP And WordPress Functions That Can Make Your Site Insecure

Final Endpoint: Testing And Verification

The last piece of the puzzle that we need is an ‘endpoint’. I’m using the term endpoint here, even though it’s not an endpoint, since we are directly calling a specific file to fetch our results. So we can create a test.php file that looks, in essence, like this (the top of the file and the query string handling here are a reconstruction of the general approach; adjust them to your setup):

<?php
// Load the minimal WordPress configuration and the plugin class.
require_once 'wp-config-simple.php';
require_once 'init.php';

$init = new Json_Transient\Init();

// Read and sanitize the slug and type from the query string.
$post_slug = isset( $_GET['slug'] ) ? filter_var( $_GET['slug'], FILTER_SANITIZE_STRING ) : '';
$post_type = isset( $_GET['type'] ) ? filter_var( $_GET['type'], FILTER_SANITIZE_STRING ) : '';

// Return error on missing parameters.
if ( ! $post_slug || ! $post_type ) {
	wp_send_json( 'Error, page slug or type is missing!' );
}

$cache = get_transient( $init->get_page_cache_name_by_slug( $post_slug, $post_type ) );

// Return error on false.
if ( $cache === false ) {
	wp_send_json( 'Error, the page does not exist or it is not cached correctly. Please try rebuilding cache and try again!' );
}

// Decode json for output.
wp_send_json( json_decode( $cache ) );

If we go to http://dev.wordpress.test/wp-content/plugins/json-transient/test.php, we’ll see this message:

Error, page slug or type is missing!

So, we’ll need to specify the post type and post slug. When we now go to http://dev.wordpress.test/wp-content/plugins/json-transient/test.php?slug=hello-world&type=post we’ll see:

Error, the page does not exist or it is not cached correctly. Please try rebuilding cache and try again!

Oh, wait! We need to re-save our pages and posts first. When you’re just starting out, this is easy. But if you already have 100+ pages or posts, it can be a challenging task. This is why we implemented a way to clear the transients in the Decoupled JSON Content plugin, and rebuild them in a batch.

But go ahead and re-save the Hello World post and then open the link again. What you should now have is something that looks like this:

{
  "ID": 1,
  "post_author": "1",
  "post_date": "2018-06-26 18:28:57",
  "post_date_gmt": "2018-06-26 18:28:57",
  "post_content": "Welcome to WordPress. This is your first post. Edit or delete it, then start writing!",
  "post_title": "Hello world!",
  "post_excerpt": "",
  "post_status": "publish",
  "comment_status": "open",
  "ping_status": "open",
  "post_password": "",
  "post_name": "hello-world",
  "to_ping": "",
  "pinged": "",
  "post_modified": "2018-06-30 08:34:52",
  "post_modified_gmt": "2018-06-30 08:34:52",
  "post_content_filtered": "",
  "post_parent": 0,
  "guid": "http://dev.wordpress.test/?p=1",
  "menu_order": 0,
  "post_type": "post",
  "post_mime_type": "",
  "comment_count": "1",
  "filter": "raw"
}

And that’s it. The plugin we made has some more extra functionality that you can use, but in a nutshell, this is how you can fetch JSON data from your WordPress installation in a way that is much faster than using the REST API.

Before And After: Improved Response Time

We conducted testing in Chrome, where we could see the total response time and the TTFB separately. We tested response times ten times in a row: first without plugins and then with the plugins added. Also, we tested the response for a list of posts and for a single post.

The results of the test are illustrated in the tables below:

Comparison graph depicting response times of using the WordPress REST API vs. using the decoupled approach without added plugins. The decoupled approach is 2 to 3 times faster.

Comparison graph depicting response times of using the WordPress REST API vs. using the decoupled approach with added plugins. The decoupled approach is up to 8 times faster.

As you can see, the difference is drastic.

Security Concerns

There are some caveats that you’ll need to take a good look at. First of all, we are manually loading WordPress core files, which in the WordPress world is a big no-no. Why? Well, besides the fact that manually fetching core files can be tricky (especially if you’re using nonstandard installations such as Bedrock), it could pose some security concerns.

If you decide to use the method described in this article, be sure you know how to fortify your server security.

First, add HTTP headers like these in the test.php file:

header( 'Access-Control-Allow-Origin: your-front-end-app.url' );
header( 'Content-Type: application/json' );

The first header is a CORS measure that ensures only your front-end app is allowed to fetch the contents when requesting the specified file.

Second, disable directory traversal of your app. You can do this by modifying nginx settings, or add Options -Indexes to your .htaccess file if you’re on an Apache server.

Adding a token check to the response is also a good measure that can prevent unwanted access. We are actually working on a way to modify our Decoupled JSON plugin so that we can include these security measures by default.

A check for an Authorization header sent by the frontend app could look like this:

if ( ! isset( $_SERVER['HTTP_AUTHORIZATION'] ) ) {
	return;
}

$auth_header = $_SERVER['HTTP_AUTHORIZATION'];
Then you can check if the specific token (a secret that is only shared by the front- and back-end apps) is provided and correct.
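For example, assuming you define the shared secret in a constant (MY_JT_SECRET here is a hypothetical name) and the front-end app sends it as a bearer token, the check could look like this:

// MY_JT_SECRET is a hypothetical constant defined in wp-config.php
// and shared only with the front-end app.
$token = str_replace( 'Bearer ', '', $auth_header );

if ( ! hash_equals( MY_JT_SECRET, $token ) ) {
	wp_send_json( 'Unauthorized request!', 401 );
}

Using hash_equals() instead of a plain comparison also protects the check against timing attacks.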

Conclusion

The REST API is great because it can be used to create fully-fledged apps — creating, retrieving, updating and deleting data. The downside of using it is its speed.

Obviously, creating an app is different than creating a classic website. You probably won’t need all the plugins we installed. But if you just need the data for presentational purposes, caching data and serving it in a custom file seems like the perfect solution at the moment, when working with decoupled sites.

You may be thinking that creating a custom plugin to speed up the website response time is overkill, but we live in a world in which every second counts. Everyone knows that if a website is slow, users will abandon it. There are many studies that demonstrate the connection between website performance and conversion rates. And if you still need convincing, Google penalizes slow websites.

The method explained in this article solves the speed issue that the WordPress REST API encounters and will give you an extra boost when working on a decoupled WordPress project. As we are on our never-ending quest to squeeze out that last millisecond out of every request and response, we plan to optimize the plugin even more. In the meantime, please share your ideas on speeding up decoupled WordPress!
