6 Instagram Marketing Hacks to Grow Your Business

Original Source: https://www.hongkiat.com/blog/instagram-business-marketing-hacks/

(Guest writer: Jigar Agrawal) Many businesses find it challenging to keep up with fast-changing algorithms on the popular social media site, Instagram. Instagram is nonetheless an incredible…

Visit hongkiat.com for full content.

How to Handle These 9 Client Types like a Pro

Original Source: https://www.hongkiat.com/blog/types-of-clients/

The freelancer-to-client relationship is a tricky thing to deal with. Your ability to work with the various types of clients can make or break your freelancing career. To help you deal with this…

Visit hongkiat.com for full content.

Fighting Your Corner: Assertive SEO in 2021+

Original Source: https://www.webdesignerdepot.com/2021/10/fighting-your-corner-assertive-seo-in-2021/

The web industry is beset by competing ideals and goals, so the simplicity of numbers appeals to us: one is more than zero, two is more than one.

When it comes to any metric, there is an understandable temptation to focus on volume. In some cases, absolute metrics make more sense than others. If your goal is to make money, then $1 is marginally better than $0, and $2 is marginally better than $1.

However, even in ecommerce, some conversions are worth more than others; high-value items or items that open up repeat sales are inherently more valuable in the long term.

SEO (Search Engine Optimization) has traditionally been built around a high-traffic numbers game: if enough people visit your site, then sooner or later, someone will convert. But it is far more effective to attract the right type of visitor, the high-value user that will become a customer or even a brand advocate.

The best content does not guarantee success on Google, and neither does good UX or even Core Web Vitals. Content is no longer king. What works is brand recognition.

SERPs Look Different in 2021+

Traditional SEO strategies would have you pack content with keywords. Use the right keywords, have more keywords than your competitor, and you’ll rank higher. SERPs (Search Engine Results Pages) used to be a league table for keywords.

Unfortunately, it’s simply not that easy any longer, in part because Google has lost its self-confidence.

Even before the recent introduction of dark mode for Google Search, its SERPs had started to look very different. [We tend to focus on Google in these articles because Google is by far the biggest search engine, and whatever direction Google moves in, the industry follows — except for FLoC, that’s going down like a lead balloon on Jupiter.]

Google’s meteoric success has been due to its all-powerful algorithm. Anything you publish online is scrutinized, categorized, and archived by the all-seeing, all-knowing algorithm. We create quality content to appeal to the algorithm. We trust in its fairness, its wisdom…

…all of us except Google, who have seen behind the curtain and found that the great and powerful algorithm may as well be an old man pulling levers and tugging on ropes.

Content Is President

Google has never been coy about the inadequacies of the algorithm. Backlinks have been one of the most significant ranking factors of the algorithm for years because a backlink is a human confirmation of quality. A backlink validates the algorithm’s hypothesis that content is worth linking to.

One hundred words or so of keyword-dense text requires less processing and has fewer outliers, and so is relatively simple for an algorithm to assess. And yet content of this kind performs poorly on Google.

The reason is simple: human beings don’t want thin content. We want rich, high-quality content. Thin content is unlikely to be validated by a human.

The key to ranking well is to create content to which many people want to link. Not only does this drive traffic, but it validates the page for Google’s algorithm.

There Can Be Only One

One of the key motivating factors in the recent changes to search has been the evolution of technology.

Siri, Bixby, and all manner of cyber-butler are queueing up to answer your question with a single, authoritative statement. Suddenly, top-ten on Google is a lot less desirable because it’s only the top answer that is returned.

Google, and other search engines, cannot afford to rely on the all-seeing, all-knowing algorithm because the all-powerful algorithm is just an educated guess. It’s a very good educated guess, but it’s an educated guess nonetheless.

Until now, an educated guess was sufficient because if the top result were incorrect, something in the top ten would work. But when it’s a single returned result, what search engines need is certainty.

The Single Source of Truth

As part of the push towards a single, correct answer, Google introduced knowledge panels. These are panels within search results that present Google’s top answer to any given question.

Go ahead and search for “Black Widow” and you’ll see a knowledge panel at the top of the results hierarchy. Many searchers will never get beyond this.

Knowledge panels are controversial because Google is deferring to a third-party authority on the subject — in the case of Black Widow, Google is deferring to Marvel Studios. If someone at Marvel decided to redefine Black Widow from action-adventure to romantic comedy, Google would respect that [bizarre] decision and update the knowledge panel accordingly.

Whether we approve of the move towards single results, knowledge panels, and whatever else develops in the next few years is a moot point. It’s happening. Most of us don’t have the pull of Marvel Studios. So the question is, how do we adapt to this future and become the authority within our niche?

Making Use of sameAs

One of the most significant developments in recent years has been structured data. Structured data is essentially metadata that tells search engines how to interpret content.

Using structured data, you can specify whether content refers to a product, a person, an organization, or many other possible categories. Structured data allows a search engine to understand the difference between Tom Ford, the designer, Tom Ford, the corporation, and Tom Ford, the perfume.

Most structured data extends the generic “thing”. And thing contains a valuable property: sameAs.

sameAs is used to provide a reference to other channels for the same “thing”. In the case of an organization, that means your Facebook page, your Twitter profile, your YouTube channel, and anything else you can think of.

Implementing sameAs provides corroboration of your brand presence. In effect, it’s backlinking to yourself and providing the type of third-party validation Google needs to promote you up the rankings.
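
As a rough illustration, here is what sameAs might look like as JSON-LD in an organization’s page head. The organization name and profile URLs below are placeholders, not a prescription:

<!-- Hypothetical example: swap in your own organization name and profiles -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Studio",
  "url": "https://www.example.com",
  "sameAs": [
    "https://www.facebook.com/examplestudio",
    "https://twitter.com/examplestudio",
    "https://www.youtube.com/c/examplestudio"
  ]
}
</script>

Schema.org’s own definition of sameAs also suggests reference pages such as a Wikipedia article or Wikidata entry, which serve the same corroborating purpose.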

Be a Brand

Google prefers established brands because people are more likely to trust them, and therefore consider the result that Google returned as high-quality. They will, in turn, come back to Google the next time they need something, and Google’s business model survives another day.

Google’s results are skewed towards brands, so the best strategy is to act like a brand.

Brands tend to be highly localized entities that dominate a small sector. There’s no benefit to spreading out keywords in the hope of catching a lot of traffic. Instead, identify the area that you are an expert in, then focus your content there.

Develop a presence on social media, but don’t sign up for every service available unless you have the time to maintain them properly; a suspended or lapsed account doesn’t corroborate your value.

Big Fish, Flexible Pond

There’s a pop-psychology question that asks whether you would prefer to be a big fish in a small pond or a small fish in a big pond. The direction that search is moving the correct answer is “Big fish, small pond”.

The problem with metaphors is that they carry irrelevant limitations with them. In that question, we assume that there is the choice of two ponds, both a fixed size. There is no reason the pond cannot be flexible and grow with you as you increase in size.

What matters from an SEO point of view is that you dominate your niche. You must become the single source of truth, the number one search result. If you find that you aren’t number one, then instead of competing for that top spot, reduce your niche until you are the number one authority in your niche.

Be the single source of truth that Google defers to, and the all-powerful algorithm can clamber into its balloon and float away.

The post Fighting Your Corner: Assertive SEO in 2021+ first appeared on Webdesigner Depot.

Building The SSG I’ve Always Wanted: An 11ty, Vite And JAM Sandwich

Original Source: https://smashingmagazine.com/2021/10/building-ssg-11ty-vite-jam-sandwich/

I don’t know about you, but I’ve been overwhelmed by all the web development tools we have these days. Whether you like Markdown, plain HTML, React, Vue, Svelte, Pug templates, Handlebars, Vibranium — you can probably mix it up with some CMS data and get a nice static site cocktail.

I’m not going to tell you which UI development tools to reach for because they’re all great — depending on the needs of your project. This post is about finding the perfect static site generator for any occasion; something that lets us use JS-less templates like markdown to start, and bring in “islands” of component-driven interactivity as needed.

I’m distilling a year’s worth of learnings into a single post here. Not only are we gonna talk code (aka duct-taping 11ty and Vite together), but we’re also going to explore why this approach is so universal to Jamstackian problems. We’ll touch on:

Two approaches to static site generation, and why we should bridge the gap;
Where templating languages like Pug and Nunjucks still prove useful;
When component frameworks like React or Svelte should come into play;
How the new, hot-reloading world of Vite helps us bring JS interactivity to our HTML with almost zero configs;
How this complements 11ty’s data cascade, bringing CMS data to any component framework or HTML template you could want.

So without further ado, here’s my tale of terrible build scripts, bundler breakthroughs, and spaghetti-code-duct-tape that (eventually) gave me the SSG I always wanted: an 11ty, Vite and Jam sandwich called Slinkity!

A Great Divide In Static Site Generation

Before diving in, I want to discuss what I’ll call two “camps” in static site generation.

In the first camp, we have the “simple” static site generator. These tools don’t bring JavaScript bundles, single-page apps, and any other buzzwords we’ve come to expect. They just nail the Jamstack fundamentals: pull in data from whichever JSON blob of CMS you prefer, and slide that data into plain HTML templates + CSS. Tools like Jekyll, Hugo, and 11ty dominate this camp, letting you turn a directory of markdown and liquid files into a fully-functional website. Key benefits:

Shallow learning curve
If you know HTML, you’re good to go!
Fast build times
We’re not processing anything complex, so each route builds in a snap.
Instant time to interactive
There’s no (or very little) JavaScript to parse on the client.

Now in the second camp, we have the “dynamic” static site generator. These introduce component frameworks like React, Vue, and Svelte to bring interactivity to your Jamstack. These fulfill the same core promise of combining CMS data with your site’s routes at build time. Key benefits:

Built for interactivity
Need an animated image carousel? Multi-step form? Just add a componentized nugget of HTML, CSS, and JS.
State management
Something like React Context or Svelte stores allows seamless data sharing between routes. For instance, the cart on your e-commerce site.

There are distinct pros to either approach. But what if you choose an SSG from the first camp like Jekyll, only to realize six months into your project that you need some component-y interactivity? Or you choose something like NextJS for those powerful components, only to struggle with the learning curve of React, or needless KB of JavaScript on a static blog post?

Few projects squarely fit into one camp or the other in my opinion. They exist on a spectrum, constantly favoring new feature sets as a project’s needs evolve. So how do we find a solution that lets us start with the simple tools of the first camp, and gradually add features from the second when we need them?

Well, let’s walk through my learning journey for a bit.

Note: If you’re already sold on static templating with 11ty to build your static sites, feel free to hop down to the juicy code walkthrough.

Going From Components To Templates And Web APIs

Back in January 2020, I set out to do what just about every web developer does each year: rebuild my personal site. But this time was gonna be different. I challenged myself to build a site with my hands tied behind my back, no frameworks or build pipelines allowed!

This was no simple task as a React devotee. But with my head held high, I set out to build my own build pipeline from absolute ground zero. There’s a lot of poorly-written code I could share from v1 of my personal site… but I’ll let you click this README if you’re so brave. Instead, I want to focus on the higher-level takeaways I learned starving myself of my JS guilty pleasures.

Templates Go A Lot Further Than You Might Think

I came at this project a recovering JavaScript junkie. There are a few static-site-related needs I loved using component-based frameworks to fill:

We want to break down my site into reusable UI components that can accept JS objects as parameters (aka “props”).
We need to fetch some information at build time to slap into a production site.
We need to generate a bunch of URL routes from either a directory of files or a fat JSON object of content.

List taken from this post on my personal blog.

But you may have noticed… none of these really need clientside JavaScript. Component frameworks like React are mainly built to handle state management concerns, like the Facebook web app that inspired React in the first place. If you’re just breaking down your site into bite-sized components or design system elements, templates like Pug work pretty well too!

Take this navigation bar for instance. In Pug, we can define a “mixin” that receives data as props:

// nav-mixins.pug
mixin NavBar(links)
  // pug's version of a for loop
  each link in links
    a(href=link.href) #{link.text}

Then, we can apply that mixin anywhere on our site.

// index.pug
// kinda like an ESM "import"
include nav-mixins.pug
html
  body
    +NavBar(navLinksPassedByJS)
    main
      h1 Welcome to my pug playground

If we “render” this file with some data, we’ll get a beautiful index.html to serve up to our users.

const html = pug.render('/index.pug', { navLinksPassedByJS: [
  { href: '/', text: 'Home' },
  { href: '/adopt', text: 'Adopt a Pug' }
] })
// use the NodeJS filesystem helpers to write a file to our build
await writeFile('build/index.html', html)

Sure, this doesn’t give niceties like scoped CSS for your mixins, or stateful JavaScript where you want it. But it has some very powerful benefits over something like React:

We don’t need fancy bundlers we don’t understand.
We just wrote that pug.render call by hand, and we already have the first route of a site ready-to-deploy.
We don’t ship any JavaScript to the end-user.
Using React often means sending a big ole runtime for people’s browsers to run. By calling a function like pug.render at build time, we keep all the JS on our side while sending a clean .html file at the end.

This is why I think templates are a great “base” for static sites. Still, being able to reach for component frameworks where we really benefit from them would be nice. More on that later.

You Don’t Need A Framework To Build Single Page Apps

While I was at it, I also wanted some sexy page transitions on my site. But how do we pull off something like this without a framework?

Crossfade with vertical wipe transition.

Well, we can’t do this if every page is its own .html file. The whole browser refreshes when we jump from one HTML file to the other, so we can’t have that nice cross-fade effect (since we’d briefly show both pages on top of each other).

We need a way to “fetch” the HTML and CSS for wherever we’re navigating to, and animate it into view using JavaScript. This sounds like a job for single-page apps!
I used a simple browser API medley for this:

Intercept all your link clicks using an event listener.
fetch API: Fetch all the resources for whatever page you want to visit, and grab the bit I want to animate into view: the content outside the navbar (which I want to remain stationary during the animation).
web animations API: Animate the new content into view as a keyframe.
history API: Change the route displaying in your browser’s URL bar using window.history.pushState({}, ‘new-route’). Otherwise, it looks like you never left the previous page!
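
To make that medley concrete, here is a minimal, framework-free sketch of the idea. It assumes every page wraps its swappable content in a hypothetical <main id="content"> element, and it skips details like error handling and popstate (back button) support:

// transitions.js: a rough sketch, not production code
document.addEventListener('click', async (event) => {
  const link = event.target.closest('a');
  // 1. intercept same-origin link clicks only
  if (!link || link.origin !== location.origin) return;
  event.preventDefault();

  // 2. fetch the destination page and parse its HTML
  const response = await fetch(link.href);
  const nextDoc = new DOMParser().parseFromString(await response.text(), 'text/html');
  const nextContent = nextDoc.querySelector('#content');

  // 3. animate the old content out, swap it, animate the new content in
  const currentContent = document.querySelector('#content');
  await currentContent.animate([{ opacity: 1 }, { opacity: 0 }], { duration: 200 }).finished;
  currentContent.replaceWith(nextContent);
  nextContent.animate([{ opacity: 0 }, { opacity: 1 }], { duration: 200 });

  // 4. update the URL bar so it doesn't look like we never left
  window.history.pushState({}, '', link.href);
});

A real version also wants a popstate listener so the back button replays the same swap, but the core trick really is just those four browser APIs working together.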

For clarity, here’s a visual illustration of that single page app concept using a simple find-and-replace:

Step-by-step clientside routing process: 1. Medium rare hamburger is returned, 2. We request a well done burger using the fetch API, 3. We massage the response, 4. We pluck out the ‘patty’ element and apply it to our current page.

Source article

You can visit the source code from my personal site as well!

Sure, some pairing of React et al and your animation library of choice can do this. But for a use case as simple as a fade transition… web APIs are pretty dang powerful on their own. And if you want more robust page transitions on static templates like Pug or plain HTML, libraries like Swup will serve you well.

What 11ty Brought To The Table

I was feeling pretty good about my little SSG at this point. Sure it couldn’t fetch any CMS data at build-time, and didn’t support different layouts by page or by directory, and didn’t optimize my images, and didn’t have incremental builds.

Okay, I might need some help.

Given all my learnings from v1, I thought I earned my right to drop the “no third-party build pipelines” rule and reach for existing tools. Turns out, 11ty has a treasure trove of features I need!

Data fetching at buildtime using .11tydata.js files;
Global data available to all my templates from a _data folder;
Hot reloading during development using browsersync;
Support for fancy HTML transforms;
…and countless other goodies.

If you’ve tried out bare-bones SSGs like Jekyll or Hugo, you should have a pretty good idea of how 11ty works. Only difference? 11ty uses JavaScript through-and-through.

11ty supports basically every template library out there, so it was happy to render all my Pug pages to .html routes. Its layout chaining option helped with my faux-single-page-app setup too. I just needed a single script for all my routes, and a “global” layout to import that script:

// _includes/base-layout.html
<html>
  <body>
    <!-- load every page's content between some body tags -->
    {{ content }}
    <!-- and apply the script tag just below this -->
    <script src="main.js"></script>
  </body>
</html>

// random-blog-post.pug
---
layout: base-layout
---

article
  h2 Welcome to my blog
  p Have you heard the story of Darth Plagueis the Wise?

As long as that main.js does all that link intercepting we explored, we have page transitions!

Oh, And The Data Cascade

So 11ty helped clean up all my spaghetti code from v1. But it brought another important piece: a clean API to load data into my layouts. This is the bread and butter of the Jamstack approach. Instead of fetching data in the browser with JavaScript + DOM manipulation, you can:

Fetch data at build-time using Node. This could be a call to some external API, a local JSON or YAML import, or even the content of other routes on your site (imagine updating a table-of-contents whenever new routes are added).
Slot that data into your routes. Recall that .render function we wrote earlier:

const html = pug.render('/index.pug', { navLinksPassedByJS: [
  { href: '/', text: 'Home' },
  { href: '/adopt', text: 'Adopt a Pug' }
] })

…but instead of calling pug.render with our data every time, we let 11ty do this behind-the-scenes.
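
A global data file, for example, is just a JavaScript (or JSON/YAML) file dropped into the _data directory. Here's a rough sketch with hypothetical file and property names:

// _data/projects.js (a hypothetical global data file)
// 11ty runs this once at build time; whatever the function resolves to
// becomes available to every template as `projects`.
module.exports = async function () {
  // This is where a call to a CMS or any external API could go.
  // To keep the sketch dependency-free, we just return a static array.
  return [
    { title: 'Example Project', href: 'https://example.com' },
    { title: 'Another Example', href: 'https://example.org' },
  ];
};

The file name becomes the variable name, which is exactly how the works.yaml file below surfaces as works in the templates.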

Sure, I didn’t have a lot of data for my personal site. But it felt great to whip up a .yaml file for all my personal projects:

# _data/works.yaml
- title: Bits of Good Homepage
  hash: bog-homepage
  links:
    - href: https://bitsofgood.org
      text: Explore the live site
    - href: https://github.com/GTBitsOfGood/bog-web
      text: Scour the Svelt-ified codebase
  timeframe: May 2019 – present
  tags:
    - JAMstack
    - SvelteJS
- title: Dolphin Audio Visualizer

And access that data across any template:

// home.pug
.project-carousel
  each work in works
    h3 #{work.title}
    p #{work.timeframe}
    each tag in work.tags
      .tag #{tag}

Coming from the world of “clientside rendering” with create-react-app, this was a pretty big revelation. No more sending API keys or big JSON blobs to the browser.

I also added some goodies for JavaScript fetching and animation improvements over version 1 of my site. If you’re curious, here’s where my README stood at this point.

I Was Happy At This Point But Something Was Missing

I went surprisingly far by abandoning JS-based components and embracing templates (with animated page transitions to boot). But I know this won’t satisfy my needs forever. Remember that great divide I kicked us off with? Well, there’s clearly still that ravine between my build setup (firmly in camp #1) and the haven of JS-ified interactivity (the Next, SvelteKit, and more of camp #2). Say I want to add:

a pop-up modal with an open/close toggle,
a component-based design system like Material UI, complete with scoped styling,
a complex multi-step form, maybe driven by a state machine.

If you’re a plain-JS-purist, you probably have framework-less answers to all those use cases. But there’s a reason jQuery isn’t the norm anymore! There’s something appealing about creating discrete, easy-to-read components of HTML, scoped styles, and pieces of JavaScript “state” variables. React, Vue, Svelte, etc. offer so many niceties for debugging and testing that straight DOM manipulation can’t quite match.

So here’s my million dollar question: can we use straight HTML templates to start, and gradually add React / Vue / Svelte components where we want them?

The answer… is yes. Let’s try it.

11ty + Vite: A Match Made In Heaven ❤️

Here’s the dream that I’m imagining here. Wherever I want to insert something interactive, I want to leave a little flag in my template to “put X React component here.” This could be the shortcode syntax that 11ty supports:

# Super interesting programming tutorial

Writing paragraphs has been fun, but that’s no way to learn. Time for an interactive code example!

{% react './components/FancyLiveDemo.jsx' %}

But remember, the one piece 11ty (purposely) avoids: a way to bundle all your JavaScript. Coming from the OG guild of bundling, your brain probably jumps to building Webpack, Rollup, or Babel processes here. Build a big ole entry point file, and output some beautiful optimized code, right?

Well yes, but this can get pretty involved. If we’re using React components, for instance, we’ll probably need some loaders for JSX, a fancy Babel process to transform everything, an interpreter for SASS and CSS module imports, something to help with live reloading, and so on.

If only there were a tool that could just see our .jsx files and know exactly what to do with them.

Enter: Vite

Vite’s been the talk of the town as of late. It’s meant to be the all-in-one tool for building just about anything in JavaScript. Here’s an example for you to try at home. Let’s make an empty directory somewhere on our machine and install some dependencies:

npm init -y # Make a new package.json with defaults set
npm i vite react react-dom # Grab Vite + some dependencies to use React

Now, we can make an index.html file to serve as our app’s “entry point.” We’ll keep it pretty simple:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
</head>
<body>
  <h1>Hello Vite! (wait is it pronounced "veet" or "vight"…)</h1>
  <div id="root"></div>
</body>
</html>

The only interesting bit is that div id=”root” in the middle. This will be the root of our React component in a moment!

If you want, you can fire up the Vite server to see our plain HTML file in your browser. Just run vite (or npx vite if the command didn’t get configured in your terminal), and you’ll see this helpful output:

vite vX.X.X dev server running at:

> Local: http://localhost:3000/
> Network: use `--host` to expose

ready in Xms.

Much like Browsersync or other popular dev servers, the name of each .html file corresponds to a route on our server. So if we renamed index.html to about.html, we would visit http://localhost:3000/about/ (yes, you’ll need a trailing slash!)

Now let’s do something interesting. Alongside that index.html file, add a basic React component of some sort. We’ll use React’s useState here to demonstrate interactivity:

// TimesWeMispronouncedVite.jsx
import React from 'react'

export default function TimesWeMispronouncedVite() {
  const [count, setCount] = React.useState(0)
  return (
    <div>
      <p>I've said Vite wrong {count} times today</p>
      <button onClick={() => setCount(count + 1)}>Add one</button>
    </div>
  )
}

Now, let’s load that component onto our page. This is all we have to add to our index.html:

<!DOCTYPE html>

<body>
  <h1>Hello Vite! (wait is it pronounced "veet" or "vight"…)</h1>
  <div id="root"></div>
  <!-- Don't forget type="module"! This lets us use ES import syntax in the browser -->
  <script type="module">
    // path to our component. Note we still use .jsx here!
    import Component from './TimesWeMispronouncedVite.jsx';
    import React from 'react';
    import ReactDOM from 'react-dom';
    const componentRoot = document.getElementById('root');
    ReactDOM.render(React.createElement(Component), componentRoot);
  </script>
</body>
</html>

Yep, that’s it. No need to transform our .jsx file to a browser-ready .js file ourselves! Wherever Vite sees a .jsx import, it’ll auto-convert that file to something browsers can understand. There isn’t even a dist or build folder when working in development; Vite processes everything on the fly — complete with hot module reloading every time we save our changes.

Okay, so we have an incredibly capable build tool. How can we bring this to our 11ty templates?

Running Vite Alongside 11ty

Before we jump into the good stuff, let’s discuss running 11ty and Vite side-by-side. Go ahead and install 11ty as a dev dependency into the same project directory from last section:

npm i -D @11ty/eleventy # yes, it really is 11ty twice

Now let’s do a little pre-flight check to see if 11ty’s working. To avoid any confusion, I’d suggest you:

Delete that index.html file from earlier;
Move that TimesWeMispronouncedVite.jsx inside a new directory. Say, components/;
Create a src folder for our website to live in;
Add a template to that src directory for 11ty to process.

For example, a blog-post.md file with the following contents:

# Hello world! It’s markdown here

Your project structure should look something like this:

src/
  blog-post.md
components/
  TimesWeMispronouncedVite.jsx

Now, run 11ty from your terminal like so:

npx eleventy --input=src

If all goes well, you should see a build output like this:

_site/
  blog-post/
    index.html

Where _site is our default output directory, and blog-post/index.html is our markdown file beautifully converted for browsing.

Normally, we’d run npx eleventy –serve to spin up a dev server and visit that /blog-post page. But we’re using Vite for our dev server now! The goal here is to:

Have eleventy build our markdown, Pug, nunjucks, and more to the _site directory.
Point Vite at that same _site directory so it can process the React components, fancy style imports, and other things that 11ty didn’t pick up.

So it’s a two-step build process, with 11ty handing off to Vite. Here’s the CLI command you’ll need to start 11ty and Vite in “watch” mode simultaneously:

(npx eleventy --input=src --watch) & npx vite _site

You can also run these commands in two separate terminals for easier debugging.
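
If typing that one-liner gets old, one convenience (purely optional, and the script names here are arbitrary) is to wrap the same two commands in npm scripts:

// package.json (scripts only)
{
  "scripts": {
    "dev:11ty": "eleventy --input=src --watch",
    "dev:vite": "vite _site",
    "dev": "npm run dev:11ty & npm run dev:vite"
  }
}

Then npm run dev starts both watchers, just like the single command above does.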

With any luck, you should be able to visit http://localhost:3000/blog-post/ (again, don’t forget the trailing slash!) to see that processed Markdown file.

Partial Hydration With Shortcodes

Let’s do a brief rundown on shortcodes. Time to revisit that syntax from earlier:

{% react '/components/TimesWeMispronouncedVite.jsx' %}

For those unfamiliar with shortcodes: they’re about the same as a function call, where the function returns a string of HTML to slide into your page. The “anatomy” of our shortcode is:

{% … %}
Wrapper denoting the start and end of the shortcode.
react
The name of our shortcode function we’ll configure in a moment.
'/components/TimesWeMispronouncedVite.jsx'
The first (and only) argument to our shortcode function. You can have as many arguments as you’d like.

Let’s wire up our first shortcode! Add a .eleventy.js file to the base of your project, and add this config entry for our react shortcode:

// .eleventy.js, at the base of the project
module.exports = function(eleventyConfig) {
eleventyConfig.addShortcode(‘react’, function(componentPath) {
// return any valid HTML to insert
return `<div id=”root”>This is where we’ll import ${componentPath}</div>`
})

return {
dir: {
// so we don’t have to write `–input=src` in our terminal every time!
input: ‘src’,
}
}
}

Now, let’s spice up our blog-post.md with our new shortcode. Paste this content into our markdown file:

# Super interesting programming tutorial

Writing paragraphs has been fun, but that’s no way to learn. Time for an interactive code example!

{% react '/components/TimesWeMispronouncedVite.jsx' %}

And if you run a quick npx eleventy, you should see this output in your _site directory under /blog-post/index.html:

<h1>Super interesting programming tutorial</h1>

<p>Writing paragraphs has been fun, but that’s no way to learn. Time for an interactive code example!</p>

<div id="root">This is where we'll import /components/TimesWeMispronouncedVite.jsx</div>

Writing Our Component Shortcode

Now let’s do something useful with that shortcode. Remember that script tag we wrote while trying out Vite? Well, we can do the same thing in our shortcode! This time we’ll use the componentPath argument to generate the import, but keep the rest pretty much the same:

// .eleventy.js
module.exports = function(eleventyConfig) {
  let idCounter = 0;
  // copy all our /components to the output directory
  // so Vite can find them. Very important step!
  eleventyConfig.addPassthroughCopy('components')

  eleventyConfig.addShortcode('react', function (componentPath) {
    // we'll use idCounter to generate unique IDs for each "root" div
    // this lets us use multiple components / shortcodes on the same page
    idCounter += 1;
    const componentRootId = `component-root-${idCounter}`
    return `
      <div id="${componentRootId}"></div>
      <script type="module">
        // use JSON.stringify to
        // 1) wrap our componentPath in quotes
        // 2) strip any invalid characters. Probably a non-issue, but good to be cautious!
        import Component from ${JSON.stringify(componentPath)};
        import React from 'react';
        import ReactDOM from 'react-dom';
        const componentRoot = document.getElementById('${componentRootId}');
        ReactDOM.render(React.createElement(Component), componentRoot);
      </script>
    `
  })

  eleventyConfig.on('beforeBuild', function () {
    // reset the counter for each new build
    // otherwise, it'll count up higher and higher on every live reload
    idCounter = 0;
  })

  return {
    dir: {
      input: 'src',
    }
  }
}

Now, a call to our shortcode (ex. {% react '/components/TimesWeMispronouncedVite.jsx' %}) should output something like this:

<div id="component-root-1"></div>
<script type="module">
  import Component from "/components/TimesWeMispronouncedVite.jsx";
  import React from 'react';
  import ReactDOM from 'react-dom';
  const componentRoot = document.getElementById('component-root-1');
  ReactDOM.render(React.createElement(Component), componentRoot);
</script>

Visiting our dev server using (npx eleventy --watch) & npx vite _site, we should find a beautifully clickable counter element. ✨

Buzzword Alert — Partial Hydration And Islands Architecture

We just demonstrated “islands architecture” in its simplest form. This is the idea that our interactive component trees don’t have to consume the entire website. Instead, we can spin up mini-trees, or “islands,” throughout our app depending on where we actually need that interactivity. Have a basic landing page of links without any state to manage? Great! No need for interactive components. But do you have a multi-step form that could benefit from X React library? No problem. Use techniques like that react shortcode to spin up a Form.jsx island.

This goes hand-in-hand with the idea of “partial hydration.” You’ve likely heard the term “hydration” if you work with component-y SSGs like NextJS or Gatsby. In short, it’s a way to:

Render your components to static HTML first.
This gives the user something to view when they initially visit your website.
“Hydrate” this HTML with interactivity.
This is where we hook up our state hooks and renderers to, well, make button clicks actually trigger something.

This 1-2 punch makes JS-driven frameworks viable for static sites. As long as the user has something to view before your JavaScript is done parsing, you’ll get a decent score on those lighthouse metrics.

Well, until you don’t. It can be expensive to “hydrate” an entire website since you’ll need a JavaScript bundle ready to process every last DOM element. But our scrappy shortcode technique doesn’t cover the entire page! Instead, we “partially” hydrate the content that’s there, inserting components only where necessary.

Don’t Worry, There’s A Plugin For All This — Slinkity

Let’s recap what we discovered here:

Vite is an incredibly capable bundler that can process most file types (jsx, vue, and svelte to name a few) without extra config.
Shortcodes are an easy way to insert chunks of HTML into our templates, component-style.
We can use shortcodes to render dynamic, interactive JS bundles wherever we want using partial hydration.

So what about optimized production builds? Properly loading scoped styles? Heck, using .jsx to create entire pages? Well, I’ve bundled all of this (and a whole lot more!) into a project called Slinkity. I’m excited to see the warm community reception to the project, and I’d love for you, dear reader, to give it a spin yourself!

Try the quick start guide

Astro’s Pretty Great Too

Readers with their eyes on cutting-edge tech probably thought about Astro at least once by now. And I can’t blame you! It’s built with a pretty similar goal in mind: start with plain HTML, and insert stateful components wherever you need them. Heck, they’ll even let you start writing React components inside Vue or Svelte components inside HTML template files! It’s like MDX Xtreme edition.

There’s one pretty major cost to their approach though: you need to rewrite your app from scratch. This means a new template format based on JSX (which you might not be comfortable with), a whole new data pipeline that’s missing a couple of niceties right now, and general bugginess as they work out the kinks.

But spinning up an 11ty + Vite cocktail with a tool like Slinkity? Well, if you already have an 11ty site, Vite should bolt into place without any rewrites, and shortcodes should cover many of the same use cases as .astro files. I’ll admit it’s far from perfect right now. But hey, it’s been useful so far, and I think it’s a pretty strong alternative if you want to avoid site-wide rewrites!

Wrapping Up

This Slinkity experiment has served my needs pretty well so far (and a few of y’all’s too!). Feel free to use whatever stack works for your JAM. I’m just excited to share the results of my year of build tool debauchery, and I’m so pumped to see how we can bridge the great Jamstack divide.

Further Reading

Want to dive deeper into partial hydration, or ESM, or SSGs in general? Check these out:

Islands Architecture
This blog post from Jason Miller really kicked off a discussion of “islands” and “partial hydration” in web development. It’s chock-full of useful diagrams and the philosophy behind the idea.
Simplify your static with a custom-made static site generator
Another SmashingMag article that walks you through crafting Node-based website builders from scratch. It was a huge inspiration to me!
How ES Modules have redefined web development
A personal post on how ES Modules have changed the web development game. This dives a little further into the “then and now” of import syntax on the web.
An introduction to web components
An excellent walkthrough on what web components are, how the shadow DOM works, and where web components prove useful. Used this guide to apply custom components to my own framework!

Foundation: the VFX secrets of Apple's epic sci-fi series

Original Source: http://feedproxy.google.com/~r/CreativeBloq/~3/3cvxnVejcrE/foundation

Discover how Foundation was brought to the screen with the help of stunning visual effects.

Smashing Podcast Episode 42 With Jeff Smith: What Is DevOps?

Original Source: https://smashingmagazine.com/2021/10/smashing-podcast-episode-42/

In this episode, we’re talking about DevOps. What is it, and is it a string to add to your web development bow? Drew McLellan talks to expert Jeff Smith to find out.

Show Notes

Jeff on Twitter
Jeff’s book Operations Anti-Patterns, DevOps Solutions
Attainable DevOps

Weekly Update

Bridging The Gap Between Designers And Developers written by Matthew Talebi
Useful React APIs For Building Flexible Components With TypeScript written by Gaurav Khanna
Smart CSS Solutions For Common UI Challenges written by Cosima Mielke
Tips And Tricks For Evaluating UX/UI Designers written by Nataliya Sambir
Solving CLS Issues In A Next.js-Powered E-Commerce Website written by Arijit Mondal

Transcript

Drew McLellan: He’s a DevOps practitioner that focuses on attainable levels of DevOps implementations, regardless of where you are in your journey. He’s director of production operations at digital advertising platform Centro, as well as being a public speaker, sharing his DevOps knowledge with audiences all around the globe. He’s the author of the book, Operations Anti-Patterns, DevOps Solutions for Manning Publishing, which shows how to implement DevOps techniques in the kind of imperfect environments most developers work in. So we know he’s an expert in DevOps, but did you know George Clooney regards him as the best paper airplane maker of a generation? My Smashing friends, please welcome Jeff Smith. Hi Jeff. How are you?

Jeff Smith: I’m smashing, Drew, how you doing?

Drew: I’m good. Thank you. That’s good to hear. So I wanted to talk to you today about the subject of DevOps, which is one of your main key areas. Many of our listeners will be involved in web and app development, but maybe only have a loose familiarity with what happens on the operations side of things. I know those of us who might work in larger companies will have whole teams of colleagues who are doing ops. We’re just thankful that whatever it is they do, they’re doing it well. But we hear DevOps mentioned more and more, and it feels like one of those things that as developers, we should really understand. So Jeff, what is DevOps?

Jeff: So if you ask 20 people what DevOps is, you might get 20 different answers. So I will give you my take on it, all right, and know that if you’re at a conference and you mention this, you could get into a fist fight with someone. But for me, DevOps is really about that relationship between, and we focus on dev and ops, but really that inter team relationship and how we go about structuring our work and more importantly, structuring our goals and incentives to make sure that they’re aligned so that we are working towards a common goal. And a lot of the core ideas and concepts from DevOps come from the old world where dev and ops were always adversarial, where there was this constant conflict. And when you think about it, it’s because of the way those two teams are incentivized. One team is incentivized to push changes. Another team is incentivized to keep stability, which means fewer changes.

Jeff: When you do that, you create this inherent conflict and everything spills out from there. So DevOps is really about aligning those teams and goals so that we are working towards a common strategy, but then also adopting practices from both sides, so that dev understands more about ops and ops understands more about dev, as a way to gain and share empathy with each other so that we understand the perspective of where the other person is coming from.

Jeff: But then also to enhance our work. Because again, if I understand your perspective and take that into account in my work, it’s going to be a lot more beneficial for each of us. And there’s a lot that ops can learn from developers in terms of automation and how we go about approaching things so that they’re easily reproducible. So it’s this blending and skills. And what you’re seeing now is that this applies to different group combinations, so you’re hearing things like DevSecOps, DevSecFinOps, DevSecFinHROps. It’s just going to keep growing and growing and growing. So it’s really a lesson that we can stamp out across the organization.

Drew: So it’s taking some of the concepts that we understand as developers and spreading our ideas further into the organization, and at the same time learning what we can from the operations to try and move everyone forward.

Jeff: Absolutely, yes. And another aspect of ops, and you had mentioned it a little bit in the intro, is we think it’s just for these larger organizations with dedicated ops teams and things like that, but one thing to think about is ops is happening in your organization, regardless of the size. It’s just a matter of it’s you doing it, or if there’s a separate team doing it, but somehow you’re deploying code. Somehow you’ve got a server out there running somewhere. So ops exist somewhere in your organization, regardless of the size. The question is, who is doing it? And if it’s a single person or a single group then DevOps might even be even more particularly salient for you, as you need to understand the types of things that ops does.

Drew: As professional developers, how important do you think it is for us to have a good grasp of what DevOps is and what it means to implement?

Jeff: I think it’s super important, especially at this phase of the DevOps journey. And the reason I think it’s important is that one, I think we’re always more efficient, again, when we understand what our counterparts are doing. But the other thing is to be able to take operational concerns into account during your design development and implementation of any technology. So one thing that I’ve learned in my career is that even though I thought developers were masters of the universe and understood everything that had to do with computers, turns out that’s not actually the case. Turns out there’s a lot of things that they outsource to ops in terms of understanding, and sometimes that results in particular design choices or implementation choices that may not be optimal for a production deployment.

Jeff: They might be fine in development and testing and things like that, but once you get to production, it’s a little bit of a different ballgame. So not to say that they need to own that entire set of expertise, but they at least need to know enough to know what they don’t know. So they know when to engage ops early, because that’s a common pattern that we see is development makes a choice. I won’t even say make a choice because they’re not even cognizant that it’s a choice, but there’s something that happens that leads to a suboptimal decision for ops and development was completely unaware. So just having a bit more knowledge about ops, even if it’s just enough to say, maybe we should bring ops in on this to get their perspective before we go moving forward. That could save a lot of time and energy and stability, obviously, as it relates to whatever products you’re releasing.

Drew: I see so many parallels with the way that you’re talking about the relationship between dev and ops as we have between design and dev, where you’ve got designers working on maybe how an interface works and looks and having a good understanding of how that’s actually going to be built in the development role, and bringing developers in to consult can really improve the overall solution just by having that clear communication and an understanding of what each other does. Seems like it’s that same principle played out with DevOps, which is really, really good to hear.

Drew: When I think of the things I hear about DevOps, I hear terms like Kubernetes, Docker, Jenkins, CircleCI. I’ve been hearing about Kubernetes for years. I still don’t have any idea what it is, but from what you’re saying, it seems that DevOps isn’t just about … We’re not just talking about tools here, are we? But more about processes and ways of communicating on workflows, is that right?

Jeff: Absolutely. So my mantra for the last 20 years has always been people process tools. You get people to buy into the vision. From there, you define whatever your process is going to look like to achieve that vision. And then you bring on tools that are going to model whatever your process is. So I always put tools at the tail end of the DevOps conversation, mainly because if you don’t have that buy-in, then it doesn’t matter. I could come up with the greatest continuous deployment pipeline ever, but if people aren’t bought into the idea of shipping every change straight to production, it doesn’t matter, right? What good is the tool? So those tools are definitely part of the conversation, only because they’re a standardized way to meet some common goals that we’ve defined.

Jeff: But you’ve got to make sure that those goals that are being defined make sense for your organization. Maybe continuous deployment doesn’t make sense for you. Maybe you don’t want to ship every single change the minute it comes out. And there are plenty of companies and organizations and reasons why you wouldn’t want to do that. So maybe something like a continuous deployment pipeline doesn’t make sense for you. So while the tools are important, it’s more important to focus on what it is that’s going to deliver value for your organization, and then model and implement the tools that are necessary to achieve that.

Jeff: But don’t go online and find out what everyone’s doing and be like, oh, well, if we’re going to do DevOps, we got to switch to Docker and Kubernetes because that’s the tool chain. No, that’s not it. You may not need those things. Not everyone is Google. Not everyone is Netflix. Stop reading posts from Netflix and Google. Please just stop reading them. Because it gets people all excited and they’re like, well this is what we got to do. And it’s like, well, they’re solving very different problems than the problems that you have.

Drew: So if say I’m starting a new project, maybe I’m a startup business, creating software as a service product. I’ve got three developers, I’ve got an empty Git repo and I’ve got dreams of IPOs. To be all in on a DevOps approach to building this product, what are the names of the building blocks that I should have in place in terms of people and processes and where do I start?

Jeff: So in your specific example, the first place I would start with is punting on most of it as much as possible and using something like Heroku or something to that effect. Because you get so excited about all this AWS stuff, Docker stuff, and in reality, it’s so hard just to build a successful product. The idea that you are focusing on the DevOps portion of it is like, well I would say outsource as much of that stuff as possible until it actually becomes a pain point. But if you’re at that point where you’re saying okay, we’re ready to take this stuff in house and we’re ready to take it to the next level. I would say the first place to start is, where are your pain points? what are the things that are causing you problems?

Jeff: So for some people it’s as simple as automated testing. The idea that hey, we need to run tests every time someone makes a commit, because sometimes we’re shipping stuff that’s getting caught by unit tests that we’ve already written. So then maybe you start with continuous integration. Maybe your deployments are taking hours to complete and they’re very manual, then that’s where you focus and you say like, okay, what automation do we need to be able to make this a one button click affair? But I hate to prescribe a general, this is where you start, just because your particular situation and your particular pain points are going to be different. And the thing is, if it’s a pain point, it should be shouting at you. It should be absolutely shouting at you.

Jeff: It should be one of those things where someone says, oh, what sucks in your organization? And it should be like, oh, I know exactly what that is. So when you approach it from that perspective, I think the next steps become pretty apparent to you in terms of what in the DevOps toolbox you need to unpack and start working with. And then it becomes these minimal incremental changes that just keep coming and you notice that as you get new capabilities, your appetite for substandard stuff becomes very small. So you go from like, oh yeah, deploys take three hours and that’s okay. You put some effort into it and next thing you know, in three weeks, you’re like, man, I cannot believe the deployment is still taking 30 minutes. How do we get this down from 30 minutes? Your appetite becomes insatiable for improvement. So things just sort of spill out from there.

Drew: I’ve been reading your recent book and that highlights what you call the four pillars of DevOps. And none of them is tools, as mentioned, but there are these four main areas of focus, if you like, for DevOps. I noticed that the first one of those is culture, I was quite surprised by that, firstly, because I was expecting you to be talking about tools more and we now understand why, but when it comes to culture, it just seems like a strange thing to have at the beginning. There’s a foundation for a technical approach. How does the culture affect how successful DevOps implementation can be within an organization?

Jeff: Culture is really the bedrock of everything when you think about it. And it’s important because culture, and we get into this a little bit deeper in the book, but culture really sets the stage for norms within the organization. Right. You’ve probably been at a company where, if you submitted a PR with no automated testing, that’s not a big thing. People accept it and move on.

Jeff: But then there’s other orgs where that is a cardinal sin. Right. Where if you’ve done that, it’s like, “Whoa, are you insane? What are you doing? There’s no test cases here.” Right. That’s culture though. That is culture that is enforcing that norm to say like, “This is just not what we do.”

Jeff: Anyone can write a document that says we will have automated test cases, but the culture of the organization is what enforces that mechanism amongst the people. That’s just one small example of why culture is so important. If you have an organization where the culture is a culture of fear, a culture of retribution. It’s like if you make a mistake, right, that is sacrilege. Right. That is tantamount to treason. Right.

Jeff: You create behaviors in that organization that are adverse to anything that could be risky or potentially fail. And that ends up leaving a lot of opportunity on the table. Whereas if you create a culture that embraces learning from failure, embraces this idea of psychological safety, where people can experiment. And if they’re wrong, they can figure out how to fail safely and try again. You get a culture of experimentation. You get an organization where people are open to new ideas.

Jeff: I think we’ve all been at those companies where it’s like, “Well, this is just the way it’s done. And no one changes that.” Right. You don’t want that because the world is constantly changing. That’s why we put culture front and center, because a lot of the behaviors within an organization exist because of the culture that exists.

Jeff: And the thing is, cultural actors can be for good or ill. Right. What’s ironic, and we talk about this in the book too, is it doesn’t take as many people as you think to change the organizational culture. Right. Because most people, there’s detractors, and then there’s supporters, and then there’s fence sitters when it comes to any sort of change. And most people are fence sitters. Right. It only takes a handful of supporters to really tip the scales. But in the same sense, it really only takes a handful of detractors to tip the scales either.

Jeff: It’s like, it doesn’t take much to change the culture for the better. And if you put that energy into it, even without being a senior leader, you can really influence the culture of your team, which then ends up influencing the culture of your department, which then ends up influencing the culture of the organization.

Jeff: You can make these cultural changes as an individual contributor, just by espousing these ideas and these behaviors loudly and saying, “These are the benefits that we’re getting out of this.” That’s why I think culture has to be front and fore because you got to get everyone bought into this idea and they have to understand that, as an organization, it’s going to be worthwhile and support it.

Drew: Yeah. It’s got to be a way of life, I guess.

Jeff: Exactly.

Drew: Yeah. I’m really interested in the area of automation because through my career, I’ve never seen some automation that’s been put in place that hasn’t been of benefit. Right. I mean, apart from the odd thing maybe where something’s automated and it goes wrong. Generally, when you take the time to sit down and automate something you’ve been doing manually, it always saves you time and it saves you headspace, and it’s just a weight off your shoulders.

Drew: In taking a DevOps approach, what sort of things would you look to automate within your workflows? And what gains would you expect to see from that over completing things manually?

Jeff: When it comes to automation, to your point, very seldom is there a time where automation hasn’t made life better. Right. The rub that people encounter is finding the time to build that automation. Right. And usually, at my current job, for us it’s actually the point of the request. Right. Because at some point you have to say, “I’m going to stop doing this manually and I’m going to automate it.”

Jeff: And it may have to be the time you get a request where you say, “You know what? This is going to take two weeks. I know we normally turn it around in a couple of hours, but it’s going to take two weeks because this is the request that gets automated.” In terms of identifying what you automate. At Centro, I used a process where, basically, I would sample all of the different types of requests that came in over a four week period, let’s say. And I would categorize them as planned work, unplanned work, value add work, toil work. Toil being work that’s not really useful, but for some reason, my organization has to do it.

Jeff: And then identifying those things that are like, “Okay, what is the low hanging fruit that we can just get rid of if we were to automate this? What can we do to just simplify this?” And some of the criteria was the risk of the process. Right. Automated database failovers are a little scary because you don’t do them that often. And infrastructure changes. Right. We say, “How often are we doing this thing?” If we’re doing it once a year, it may not be worth automating because there’s very little value in it. But if it’s one of those things that we’re getting two, three times a month, okay, let’s take a look at that. All right.

Jeff: Now, what are the things that we can do to speed this up? And the thing is, when we talk about automation, we instantly jumped to, “I’m going to click a button and this thing’s just going to be magically done.” Right. But there are so many different steps that you can do in automation if you feel queasy. Right. For example, let’s say you’ve got 10 steps with 10 different CLI commands that you would normally run. Your first step of automation could be as simple as, run that command, or at least show that command. Right. Say, “Hey, this is what I’m going to execute. Do you think it’s okay?” “Yes.” “Okay. This is the result I got. Is it okay for me to proceed?” “Yes.” “Okay. This is the result I got.” Right.

Jeff: That way you’ve still got a bit of control. You feel comfortable. And then after 20 executions, you realize you’re just hitting yes, yes, yes, yes, yes, yes. You say, “All right. Let’s chain all these things together and just make it all one.” It’s not like you’ve got to jump into the deep end of click-it-and-forget-it right off the rip. You can step into this until you feel comfortable.
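As a rough illustration of that stepped approach (not anything from the episode itself), a confirm-before-each-command wrapper in Python might look something like this; the commands listed are placeholders for whatever your runbook actually does:

    import subprocess

    # Placeholder commands -- substitute the steps from your own runbook.
    COMMANDS = [
        "df -h /var/lib/app",
        "systemctl status app",
        "systemctl restart app",
    ]

    def run_with_confirmation(commands):
        for cmd in commands:
            answer = input(f"About to run: {cmd}. Proceed? [y/N] ").strip().lower()
            if answer != "y":
                print("Stopping here; nothing further was executed.")
                return
            result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            print(result.stdout or result.stderr)
            if result.returncode != 0:
                print(f"Command exited with code {result.returncode}; stopping.")
                return

    if __name__ == "__main__":
        run_with_confirmation(COMMANDS)

Once the prompts become a reflexive “yes, yes, yes,” the input() calls can be dropped and the same list runs end to end.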

Jeff: Those are the types of things that we did as part of our automation effort: simply, how do we speed up the turnaround time of this and reduce the level of effort on our part? It may not be 100% automated on day one, but the goal is always to get to 100%. We’ll start with small chunks, automating the parts of it that we feel comfortable with. Yes, we feel super confident that this part is going to work. This part we’re a little dicey on, so maybe we’ll just get some human verification before we proceed.

Jeff: The other thing that we looked at when we talk about automation is: what value are we adding to a particular process? And this is particularly salient for ops, because a lot of times ops serves as the middleman for a process, and their involvement is nothing more than some access thing. Right. It’s like, well, ops has to do it because ops is the only person that has access.

Jeff: Well, it’s like, well, how do we outsource that access so that people can do it? Because the reality is, it’s not that we’re worried about developers having production access. Right. We’re worried about developers having unfettered production access. And that’s really a safety thing. Right. It’s like if my toolbox has only sharp knives, I’m going to be very careful about who I give that out to. But if I can mix up the toolbox with a spoon and a hammer so that people can choose the right tool for the job, then it’s a lot easier to loan that out.

Jeff: For example, we had a process where people needed to run ad hoc Ruby scripts in production, for whatever reason. Right. Need to clean up data, need to correct some bad record, whatever. And that would always come through my team. And it’s like, well, we’re not adding any value to this because I can’t approve this ticket. Right. I have no idea. You wrote the software, so what good is it me sitting over your shoulder and going, “Well, I think that’s safe”? Right. I didn’t add any value to typing it in because I’m just typing exactly what you told me to type. Right.

Jeff: And at the end of it, worst case, I’m really just a roadblock for you, because you’re submitting a ticket and then you’re waiting for me to get back from lunch. I’m back from lunch, but I’ve got these other things to work on. So we said, “How do we automate this so that we can put this in the hands of developers while at the same time addressing any of these audit concerns that we might have?”

Jeff: We put it in a JIRA workflow, where we had a bot that would automate executing commands that were specified in the JIRA ticket. And then we could specify in the JIRA ticket that it required approval from one of several senior engineers. Right.

Jeff: It makes more sense that an engineer is approving another engineer’s work because they have the context. Right. They don’t have to sit around waiting for ops. The audit piece is answered because we’ve got a clear workflow that’s been defined in JIRA that is being documented as someone approves, as someone requested. And we have automation that is pulling that command and executing that command verbatim in the terminal. Right.

Jeff: You don’t have to worry about me mistyping it. You don’t have to worry about me grabbing the wrong ticket. That improved the turnaround time for those tickets something like tenfold. Right. Developers are unblocked. My team’s not tied up doing this. And all it really took was a one- or two-week investment to actually develop the automation and the permissioning necessary to give them access to it.
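For flavor, a minimal sketch of that kind of approval-gated runner, using the Python jira client; the server, credentials, project key, and status name here are illustrative assumptions, and a real setup would also carry the permissioning and audit controls Jeff mentions:

    import subprocess
    from jira import JIRA  # pip install jira

    # All names below (server, account, project, status) are made up for illustration.
    jira = JIRA(server="https://example.atlassian.net",
                basic_auth=("automation-bot@example.com", "api-token"))

    # Only tickets that a senior engineer has already moved to "Approved" are picked up.
    for issue in jira.search_issues('project = OPS AND status = "Approved"'):
        command = issue.fields.description  # or a dedicated field that holds the command
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        jira.add_comment(issue, "Ran verbatim:\n{}\n\nOutput:\n{}".format(
            command, result.stdout or result.stderr))
        jira.transition_issue(issue, "Done")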

Jeff: Now we’re completely removed from that. And development is actually able to outsource some of that functionality to lower parts of the organization. They’ve pushed it to customer care. It’s like now when customer care knows that this record needs to be updated for whatever, they don’t need development. They can submit their standard script that we’ve approved for this functionality. And they can run it through the exact same workflow that development does. It’s really a boon all around.

Jeff: And then it allows us to push work lower and lower throughout the organization. Because as we do that, the work becomes cheaper and cheaper because I could have a fancy, expensive developer running this. Right. Or I can have a customer care person who’s working directly with the customer, run it themselves while they’re on the phone with a customer correcting an issue.

Jeff: Automation, I think, is key to any organization. And the final point I’ll make on that is, it also allows you to export expertise. Right. Now, I may be the only person that knows how to do this if it means running a bunch of commands on the command line. But if I put this in automation, I can give that to anyone. And people know what the end result is, but they don’t need to know all the intermediate steps. I have increased my value tenfold by pushing it out to the organization and taking my expertise and codifying it into something that’s exportable.

Drew: You talked about automating tasks that are occurring frequently. Is there an argument for also automating tasks that happen so infrequently that it takes a developer quite a long time to get back up to speed with how it should work? Because everybody’s forgotten. It’s been so long. It’s been a year, maybe nobody has done it before. Is there an argument for automating those sorts of things too?

Jeff: That’s a tough balancing act. Right. And I always say take it on a case-by-case basis. And the reason I say that is, one of the mantras in DevOps is: if something is painful, do it more often. Right. Because the more often you do it, the more muscle memory it becomes, and you get to work out and iron out those kinks.

Jeff: The issue that we see with automating very infrequent tasks is that the landscape of the environment tends to change in between executions of that automation. Right. What ends up happening is your code makes particular assumptions about the environment and those assumptions are no longer valid. So the automation ends up breaking anyways.

Drew: And then you’ve got two problems.

Jeff: Right. Right. Exactly. Exactly. And you’re like, “Did I type it wrong? Or is this… no, this thing is actually broken.”

Jeff: So when it comes to automating infrequent tasks, we really take it on a case-by-case basis to understand, well, what’s the risk if this doesn’t work, right? If we get it wrong, are we in a bad state, or is it just that we haven’t finished this task? If you can make sure that it would fail gracefully and not have a negative impact, then it’s worth giving it a shot and automating it. Because at the very least, then you have a framework for understanding what should be going on, because someone’s going to be able to read the code and understand, all right, this is what we were doing. And I don’t understand why this doesn’t work anymore, but I have a clear understanding of what was supposed to happen, at least as it was designed when this was written.

Jeff: But if you’re ever in a situation where failure could lead to data changes or anything like that, I usually err on the side of caution and keep it manual, only because if I find some Confluence document that’s three years old that says “run this script,” I tend to have a hundred percent confidence in that script and I execute it. Right. Whereas if it’s a series of manual steps that was documented four years ago, I’m going to be like, I need to do some verification here. Right? Let me step through this a little bit and talk to a few people. And sometimes when we design processes, it’s worthwhile to force that thought process, right? You have to think about the human component and how they’re going to behave. And sometimes it’s worth making the process a little more cumbersome to force people to think: should I be doing this now?

Drew: Are there other ways of identifying what should be automated through sort of monitoring your systems and measuring things? I mean, I think about DevOps and I think about dashboards as one of the things, nice graphs. And I’m sure there’s a lot more to those dashboards than just looking pretty, but it’s always nice to have pretty looking dashboards. Are there ways of measuring what a system’s up to, to help you to make those sorts of decisions?

Jeff: Absolutely. And that sort of segues into the metrics portion of CAMS, right: what are the things that we are tracking in our systems to know that they are operating efficiently? And one of the common pitfalls of metrics is that we look for errors instead of verifying success. And those are two very different practices, right? Something could flow through the system and not necessarily error out, but not necessarily go through the entire process the way it should. So if we drop a message on a message queue, there should be a corresponding metric that says, “And this message was retrieved and processed,” right? If not, you’re quickly going to have an imbalance and the system doesn’t work the way it should. I think we can use metrics as a way to also understand different things that should be automated as we get into those bad states.
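A small sketch of that idea in Python: emit a counter on both sides of the queue so that success is recorded explicitly, and watch the gap rather than waiting for an error. The statsd client and metric names are illustrative assumptions:

    import queue
    from statsd import StatsClient  # pip install statsd

    stats = StatsClient(host="localhost", port=8125)  # metric names below are illustrative
    work = queue.Queue()

    def handle(message):
        print("processed", message)  # placeholder business logic

    def enqueue(message):
        work.put(message)
        stats.incr("orders.enqueued")

    def process_one():
        message = work.get()
        handle(message)
        stats.incr("orders.processed")  # verify success, not just the absence of errors

    if __name__ == "__main__":
        enqueue({"order_id": 42})
        process_one()

The monitoring side then compares the rate of orders.enqueued against orders.processed; a growing gap means messages are going in but not making it all the way through.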

Jeff: Right? Because a lot of times it’s a very simple step that needs to be taken to clean things up, right? For people that have been in ops for a while, right, the disk space alert, everyone knows about that. Oh, the disk is filled up. Oh, we forgot it’s month end and billing ran, and billing always fills up the logs. And then /var/log is consuming all the disk space, so we need to run a log rotate. Right? You could get woken up at three in the morning for that, if that’s sort of your preference. But if we sort of know that that’s the behavior, our metrics should be able to give us a clue to that. And we can simply automate the log rotate command, right? Oh, we’ve reached this threshold, execute the log rotate command. Let’s see if the alert clears. If it does, continue on with life. If it doesn’t, then maybe we wake someone up, right.
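A rough sketch of that self-healing step, assuming logrotate is already configured; page_on_call() is a stand-in for whatever paging integration you actually use:

    import shutil
    import subprocess

    THRESHOLD = 0.90  # 90% of the volume used

    def disk_usage_fraction(path="/var/log"):
        usage = shutil.disk_usage(path)
        return usage.used / usage.total

    def page_on_call(message):
        print("PAGE:", message)  # stand-in for PagerDuty, Opsgenie, etc.

    if disk_usage_fraction() > THRESHOLD:
        # Try the known fix first: force a log rotation.
        subprocess.run(["logrotate", "--force", "/etc/logrotate.conf"], check=False)
        # Only wake a human if the alert doesn't clear.
        if disk_usage_fraction() > THRESHOLD:
            page_on_call("/var/log still above 90% after forced log rotation")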

Jeff: You’re seeing this a lot more with infrastructure automation as well, right, where it’s like, “Hey, our requests per second are reaching our theoretical maximum. Maybe we need to scale the cluster. Maybe we need to add three or four nodes to the load balancer pool.” And we can do that without necessarily requiring someone to intervene. We can just look at those metrics, take that action, and then contract that infrastructure once it goes below a particular threshold. But you’ve got to have those metrics and you’ve got to have those hooks into your monitoring environment to be able to do that. And that’s where the entire metrics portion of the conversation comes in.
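And a similar sketch for the scaling case; current_rps(), current_node_count(), and scale_node_pool() are hypothetical stand-ins for your metrics backend and infrastructure API, and the thresholds are illustrative:

    MAX_RPS_PER_NODE = 1500   # known per-node ceiling, measured beforehand
    SCALE_UP_AT = 0.80        # expand at 80% of theoretical capacity
    SCALE_DOWN_AT = 0.40      # contract again when traffic drops off

    def current_rps():
        raise NotImplementedError("query your metrics backend here")

    def current_node_count():
        raise NotImplementedError("ask your infrastructure API here")

    def scale_node_pool(count):
        raise NotImplementedError("call your infrastructure or load balancer API here")

    def reconcile():
        nodes = current_node_count()
        utilization = current_rps() / (nodes * MAX_RPS_PER_NODE)
        if utilization > SCALE_UP_AT:
            scale_node_pool(nodes + 2)
        elif utilization < SCALE_DOWN_AT and nodes > 2:
            scale_node_pool(nodes - 1)

Run on a schedule, a loop like this expands the pool as requests per second approach the ceiling and contracts it once the metric falls back below the lower threshold.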

Jeff: Plus it’s also good to be able to share that information with other people because once you have data, you can start talking about things in a shared reality, right, because busy is a generic term, but 5,200 requests per second is something much more concrete that we can all reason about. And I think so often when we’re having conversations about capacity or anything, we use these hand-wavy terms, when instead we could be looking at a dashboard and giving very specific values and making sure that everyone has access to those dashboards, that they’re not hidden behind some ops wall that only we have access to for some unknown reason.

Drew: So while monitoring and using metrics as a decision-making tool for the business is one aspect of it, it sounds like the primary aspect is having the system monitor itself, perhaps, and respond with some of these automations as the system as a whole gives itself feedback on what’s happening.

Jeff: Absolutely. Feedback loops are a key part of any real system design, right, and understanding the state of the system at any one time. While it’s easy in the world where everything is working fine, the minute something goes bad, those sorts of dashboards and metrics are invaluable to have, and you’ll quickly be able to identify things that you have not instrumented appropriately. Right. One of the things that we always talk about in incident management is: what questions did you have for the system that couldn’t be answered? You’re like, “Oh man, if we only knew how many queries per second were going on right now.” Right.

Jeff: Well, okay. How do we get that for next time? How do we make sure that that’s radiated somewhere? And a lot of times it’s hard when you’re thinking green field to sit down and think of all the data that you might want at any one time. But when you have an incident, it becomes readily apparent what data you wish you had. So it’s important to sort of leverage those incidents and failures and get a better understanding of information that’s missing so that you can improve your incident management process and your metrics and dashboarding.

Drew: One of the problems we sometimes face in development is that individual team members hold a lot of knowledge about how a system works, and if they leave the company or if they’re out sick or on vacation, that knowledge isn’t accessible to the rest of the team. It seems like the sort of DevOps approach to things is good at capturing a lot of that operational knowledge and building it into systems, so that sort of scenario where an individual has got all the information in their head doesn’t happen so much. Is that a fair assessment?

Jeff: It is. Although I think, as an industry, we might have overstated its efficacy. And the only reason I say that is that our systems are getting so complicated, right? Gone are the days where someone has the entire system in their head and can understand it from beginning to end. Typically, there are two insidious parts of it. One, people typically focus on one specific area, so someone doesn’t have the whole picture. But what’s even more insidious is that we think we understand how the system works. Right. And it’s not until an incident happens that the mental model that we have of the system and the reality of the system come into conflict. And we realize that there’s a divergence, right? So I think it’s important that we continuously share knowledge in whatever form is efficient for folks, whether it be lunch and learns, documentation, presentations, anything like that, to sort of share and radiate that knowledge.

Jeff: But we also have to prepare for, and define, a reality where people may not completely understand how the system works. Right. And the reason I think it’s important that we acknowledge that is because you can make a lot of bad decisions thinking you know how the system behaves and being 100% wrong. Right. So having the wherewithal to understand, okay, we think this is how the system works, we should take an extra second to verify that somehow. Right. I think that’s super important in these complicated environments, in these sprawling, complex microservice environments. Whereas it’s easy to be cavalier if you think, oh yeah, this is definitely how it works, and I’m going to go ahead and shut the service down because everything’s going to be fine. And then everything topples over. So just being aware of the idea that, you know what, we may not know a hundred percent how this thing works.

Jeff: So let’s take that into account with every decision that we make. I think that’s key. And I think it’s important for management to understand the reality of that as well because for management, it’s easy for us to sit down and say, “Why didn’t we know exactly how this thing was going to fail?” And it’s like, because it’s complicated, right, because there’s 500 touch points, right, where these things are interacting. And if you change one of them, it changes the entire communication pattern. So it’s hard and it’s not getting any easier because we’re getting excited about things like microservices. We’re getting excited about things like Kubernetes. We’re giving people more autonomy and these are just creating more and more complicated interfaces into these systems that we’re managing. And it’s becoming harder and harder for anyone to truly understand them in their entirety.

Drew: We’ve talked a lot about a professional context, big organizations and small organizations too. But I know many of us work on smaller side projects or maybe we volunteer on projects and maybe you’re helping out someone in the community or a church or those sorts of things. Can a DevOps approach benefit those smaller projects or is it just really best left to big organizations to implement?

Jeff: I think DevOps can absolutely benefit those smaller projects. And specifically, because I think some of the benefits that we’ve talked about get amplified in those smaller projects. Right? So exporting of expertise with automation is a big one, right? Take your church example, I think that’s a great one, right? If I can build a bunch of automated test suites to verify that a change to some HTML doesn’t break the entire website, right, I can export that expertise so that I can give it to a content creator who has no technical knowledge whatsoever. Right. They’re a theologian or whatever, and they just want to update a new Bible verse or something, right. But I can export that expertise so that they know: when I make this content change, I’m supposed to hit this build button.

Jeff: And if it’s green, then I’m okay. And if it’s red, then I know I screwed something up. Right. So you could be doing any manner of testing in there that is extremely complicated. Right. It might even be something as simple as like, hey, there’s a new version of this plugin. And when you deploy, it’s going to break this thing. Right. So it has nothing to do with the content, but it’s at least a red mark for this content creator to say “Oh, something bad happened. I shouldn’t continue. Right. Let me get Drew on the phone and see what’s going on.” Right. And Drew can say, “Oh right. This plugin is upgraded, but it’s not compatible with our current version of WordPress or whatever.” Right. So that’s the sort of value that we can add with some of these DevOps practices, even in a small context, I would say specifically around automation and specifically around some of the cultural aspects too.
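As a toy example of the kind of check that could sit behind that build button, here is a small Python script that fails loudly if required elements disappear from the page; the file name and the element IDs are made up for illustration:

    import sys
    from html.parser import HTMLParser

    REQUIRED_IDS = {"main-nav", "service-times", "contact-form"}  # illustrative

    class IdCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.ids = set()

        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                if name == "id":
                    self.ids.add(value)

    def check(path="index.html"):
        collector = IdCollector()
        with open(path, encoding="utf-8") as f:
            collector.feed(f.read())
        missing = REQUIRED_IDS - collector.ids
        if missing:
            print("RED: missing elements:", ", ".join(sorted(missing)))
            return 1
        print("GREEN: all required elements present")
        return 0

    if __name__ == "__main__":
        sys.exit(check())

A content editor never has to read it; they only see the green or red result after their change.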

Jeff: Right? So I’ve been impressed with the number of organizations that are not technical that are using git to make changes to everything. Right. And they don’t really know what they’re doing. They just know, well, this is what we do. This is the culture. And I add this really detailed commit message here. And then I push it. They are no better than us developers. They know three git commands, but it’s the ones they use over and over and over again. But it’s been embedded culturally and that’s how things are done. So everyone sort of rallies around that.

Jeff: And the people who are technical can take that pattern and leverage it into more beneficial things that might even be behind the scenes that they don’t necessarily see. So I think there’s some value, definitely. It’s a matter of how deep you want to go, even with the operations piece, right? Like being able to recreate a WordPress environment locally very easily, with something like Docker. They may not understand the technology or anything, but if they run docker-compose up or whatever, and suddenly they’re working on their local environment, that’s hugely beneficial for them and they don’t really need to understand all the stuff behind it. In that case, it’s worthwhile, because again, you’re exporting that expertise.

Drew: We mentioned right at the beginning, sort of putting off as much DevOps as possible. You mentioned using tools like Heroku. And I guess that sort of approach would really apply here in getting started with a small project. What sorts of things can platforms like Heroku offer? I mean, obviously, I know you’re not a Heroku expert or representative or anything, but those sorts of platforms, what sort of tools are they offering that would help in this context?

Jeff: So for one, they’re basically taking that operational context for you and they’re really boiling it down into a handful of knobs and levers, right? So I think what it offers is, one, it offers a very clear set of what we call the yellow brick road path, where it’s like, “If you go this route, all of this stuff is going to be handled for you and it’s going to make your life easier. If you want to go another route, you can, but then you’ve got to solve for all this stuff yourself.” So following the yellow brick road route helps because, one, they’re probably identifying a bunch of things that you hadn’t even thought of. So if you’re using their database container or technology, guess what? You’re going to get a bunch of their metrics for free. You’re going to get a lot of their alerting for free. You didn’t do anything. You didn’t think anything. It’s just when you need it, it’s there. And it’s like, “Oh wow, that’s super helpful.”

Jeff: Two, when it comes to performance sizing and flexibility, this becomes very easy to manage, because the goal is, you’re a startup that’s going to become wildly successful. You’re going to have hockey stick growth. And the last thing you really want to be doing is figuring out how to optimize your code for performance while at the same time delivering new features. So maybe you spend your way out of it. You say, “Well, we’re going to go up to the next tier. I could optimize my query code, but it’s much more efficient for me to be spending time building this next feature that’s going to bring in this new batch of users, so let’s just go up to the next tier,” and you click a button and you move on.

Jeff: So being able to sort of spend your way out of certain problems, I think it’s hugely beneficial because tech debt gets a bad rap, but tech debt is no different than any debt. It’s the trade off of acquiring something now and dealing with the pain later. And that’s a strategic decision that you have to make in every organization. So unchecked tech debt is bad, right? But tech debt generally, I think, is a business choice and Heroku and platforms like that enable you to make that choice when it comes to infrastructure and performance.

Drew: You’ve written a book, Operations Anti-Patterns, DevOps Solutions, for Manning. I can tell it’s packed with years of hard-earned experience. The knowledge sort of just leaps out from the page. And I can tell it’s been a real labor of love. It’s packed full of information. Who’s your sort of intended audience for that book? Is it mostly those who are already working in DevOps, or has it got a broader-

Jeff: It’s got a broader… So one of the motivations for the book was that there were plenty of books for people that were already doing DevOps. You know what I mean? So we were kind of talking to ourselves and high-fiving each other, like, “Yeah, we’re so advanced. Awesome.” But what I really wanted to write the book for were people that were sort of stuck in these organizations. I don’t want to use the term stuck, that’s unfair, but are in these organizations that maybe aren’t adopting DevOps practices or aren’t at the forefront of technology, or aren’t necessarily cavalier about blowing up the way they do work today and changing things.

Jeff: I wanted to write it to them, mainly individual contributors and middle managers, to say, “You don’t need to be a CTO to be able to make these sorts of incremental changes, and you don’t have to have this wholesale revolution to be able to gain some of the benefits of DevOps.” So it was really sort of a love letter to them to say, “Hey, you can do this in pieces. You can do this yourself. And there are all of these things that you may not think are related to DevOps because you’re thinking of it as tools and Kubernetes.” Not every organization… If you work for, say, New York State, the state government, you’re not going to just come in and implement Kubernetes overnight. Right? But you can implement how teams talk to each other, how they work together, how we understand each other’s problems, and how we can address those problems through automation. Those are things that are within your sphere of influence that can improve your day-to-day life.

Jeff: So it was really a letter to those folks, but I think there’s enough data in there and enough information for people that are in a DevOps organization to glean from and say, “Hey, this is still useful for us.” And a lot of people, I think, quickly identify by reading the book that they’re not in a DevOps organization, they’ve just had a job title change. And that happens quite a bit. So they say, “Hey, we’re DevOps engineers now, but we’re not doing these sorts of practices that are talked about in this book, and how do we get there?”

Drew: So it sounds like your book is one of them, but are there other resources that people looking to get started with DevOps could turn to? Are there good places to learn this stuff?

Jeff: Yeah. I think DevOps For Dummies by Emily Freeman is a great place to start. It really does a great job of sort of laying out some of the core concepts and ideas, and what it is we’re striving for. So that would be a good place to start, just to get a lay of the land. I think The Phoenix Project by Gene Kim is obviously another great resource. That one sort of sets the stage for the types of issues that not being in a DevOps environment can create, and it does a great job of highlighting the patterns and personalities that we see in all types of organizations over and over again. And if you read that book, I think you’re going to end up screaming at the pages saying, “Yes, yes. This. This.” So, that’s another great place.

Jeff: And then from there, diving into The DevOps Handbook. I’m going to kick myself for saying this, but the Google SRE handbook was another great place to look. Understand that you’re not Google, so don’t feel like you’ve got to implement everything, but I think a lot of their ideas and strategies are sound for any organization, and they’re great places where you can take things and say, “Okay, we’re going to make our operations environment a little more efficient.” And that’s, I think, going to be particularly salient for developers that are playing an ops role, because it does focus on a lot of the programmatic approach to solving some of these problems.

Drew: So, I’ve been learning all about DevOps. What have you been learning about lately, Jeff?

Jeff: Kubernetes, man. Yeah. Kubernetes has been a real sort of source of reading and knowledge for us. So we’re trying to implement that at Centro currently, as a means to sort of further empower developers. We want to take things a step further from where we’re at. We’ve got a lot of automation in place, but right now, when it comes to onboarding a new service, my team is still fairly heavily involved with that, depending on the nature of the service. And we don’t want to be in that line of work. We want developers to be able to take an idea from concept to code to deployment, and do that where the operational expertise is codified within the system. So, as you move through the system, the system is guiding you. So we think Kubernetes is a tool that will help us do that.

Jeff: It’s just incredibly complicated. And it’s a big piece to bite off. So figuring out: what do deployments look like? How do we leverage these operators inside Kubernetes? What does CI/CD look like in this new world? So there’s been a lot of reading, but in this field, you’re constantly learning, right? It doesn’t matter how long you’ve been in it, how long you’ve been doing it, you’re an idiot in some aspect of this field somewhere. So, it’s just something you kind of adapt to.

Drew: Well, hats off. As I say, even after all these years, although I sort of understand where it sits in the stack, I still really don’t have a clue what Kubernetes is doing.

Jeff: I feel similar sometimes. It feels like it’s doing a little bit of everything, right? It is the DNS of the 21st century.

Drew: If you, the listener, would like to hear more from Jeff, you can find him on Twitter, where he’s @darkandnerdy, and find his book and links to past presentations and blog posts at his site, attainabledevops.com. Thanks for joining us today, Jeff. Did you have any parting words?

Jeff: Just keep learning, just get out there, keep learning and talk to your fellow peers. Talk, talk, talk. The more you can talk to the people that you work with, the better understanding, the better empathy you’ll generate for them, and if there’s someone in particular in the organization you hate, make sure you talk to them first.

How to Control Windows Only With Keyboard

Original Source: https://www.hongkiat.com/blog/controlling-windows-with-shortcuts/

No need to worry if you have lost access to your PC mouse; you can still control your PC with just the keyboard. Your PC keyboard offers all the keys and shortcuts to perform almost all of the…

Visit hongkiat.com for full content.

The 20 best business card designs

Original Source: http://feedproxy.google.com/~r/CreativeBloq/~3/8h0FQAOCAe4/business-card-designs-5132829

Get creative with business card design to stand out from the crowd.

18 Creative Custom Cursors

Original Source: http://feedproxy.google.com/~r/1stwebdesigner/~3/H7NlEJUyiho/

A cursor/pointer is a position indicator that helps the user enter text, numbers, or symbols. The default cursor is a symbol that is easily recognized by tons of people around the world. Without the cursor, user interaction would not be as easy as it is now. Cursors have saved many people the trouble of memorizing keyboard shortcuts required to navigate a page.

Creative custom cursors are basically unique customized pointers. Throughout the years, the cursor has been modified to assume different shapes and characters. These customized pointers can boost a site’s interaction and traffic. Many websites have adopted custom cursors because they help them stand out and attract more customers.


Benefits of Creative Custom Cursors

Though they’re not going to make a massive difference in how your website is received by visitors, custom cursors can make an impact:

They help maintain the theme of the website.
They can attract more customers.
They enhance the website’s aesthetics.
They are easy to make.

How to Choose a Custom Cursor

Here are some factors to consider when looking for a custom cursor for your next website project.

Suitability

Getting a custom cursor that suits your website can improve user interaction. If your site targets young users, having a quirky cursor can enhance engagement with your website. If your target market is older, however, a custom cursor might not get you the same results.

Formal websites should use default cursors and stay away from custom ones. This helps to maintain the site’s formal tone.

Functionality

Some custom cursors don’t work well with older browsers. If a user opens a website using an old browser that doesn’t support custom cursors, the pointer will assume its default design.

This means features that rely on the custom cursor will not be as effective when the default cursor is shown, which in turn affects the user experience. This is something to consider carefully.

Speed

Your site’s loading speed is an important factor if you want to rank well on Google and attract more visitors. Minor site upgrades such as a custom cursor will not typically affect your site’s speed.

18 Examples of Creative Custom Cursors

And now, the part you’ve been waiting for: on to the list of creative and eye-catching custom cursors worthy of your consideration.

1. Custom Cursor by Simon Busborg

See the Pen
custom cursor by Simon Busborg (@simonbusborg)
on CodePen.

2. Custom Cursor Navigation Effect by Mark Mead

See the Pen
Custom Cursor Navigation Effect by Mark Mead (@markmead)
on CodePen.

3. Custom Cursor Inverting Color by Uwe Chardon

See the Pen
custom cursor inverting color by Uwe Chardon (@uchardon)
on CodePen.

4. Custom Cursor by Ivan Di Stasio

See the Pen
Custom cursor by Ivan Di Stasio (@IvanDiStasio)
on CodePen.

5. Custom Cursor With Mix-Blend-Mode by Victor Hripko

See the Pen
Custom cursor with mix-blend-mode by Victor Hripko (@victorhripko)
on CodePen.

6. Custom Cursor Effect by Ivan Grozdic

See the Pen
Custom Cursor Effect by Ivan Grozdic (@ig_design)
on CodePen.

7. Custom Cursor by Tim Jackleus

See the Pen
Custom cursor by Tim Jackleus (@timjackleus)
on CodePen.

8. Custom Cursor With CSS Variables by Tobias Reich

See the Pen
Custom cursor with CSS variables by Tobias Reich (@electerious)
on CodePen.

9. Circle Cursors by Chris Heuberger

See the Pen
Circle Cursors by Chris Heuberger (@ChrisBup)
on CodePen.

10. Magnetic Hover Interaction by Sikriti Dakua

See the Pen
Magnetic Hover Interaction by Sikriti Dakua (@dev_loop)
on CodePen.

11. Interactive Custom Cursor by hb nguyen

See the Pen
Interactive Custom Cursor by hb nguyen (@hbthen3rd)
on CodePen.

12. Custom Cursor With GSAP TweenMax and CSS by Karlo Videk

See the Pen
Custom cursor with GSAP TweenMax and CSS by Karlo Videk (@karlovidek)
on CodePen.

13. Custom Cursor - Circle Follows the Mouse Pointer by Cojea Gabriel

See the Pen
Custom Cursor – Circle Follows The Mouse Pointer by Cojea Gabriel (@gabrielcojea)
on CodePen.

14. Creating Custom Cursors by designcourse

See the Pen
Creating Custom Cursors by designcourse (@designcourse)
on CodePen.

15. Circle Cursor With Blend Mode by Clement Girault

See the Pen
Circle cursor with blend mode by Clement Girault (@clementGir)
on CodePen.

16. Custom Dot Cursor by Kyle Brumm

See the Pen
Custom Dot Cursor by Kyle Brumm (@kjbrum)
on CodePen.

17. Custom Cursor Using Data-Uri by Sten Hougaard

See the Pen
Custom cursors using data-uri by Sten Hougaard (@netsi1964)
on CodePen.

18. Mutant Cursor by Rafael González

See the Pen
Mutant Cursor by Rafael González (@rgg)
on CodePen.

Conclusion

A unique custom cursor is a great way to make sure that users don’t — if you’ll pardon the pun — lose the point. Websites that use creative custom cursors that fit their aesthetic or theme create a more branded look, which can help to increase engagement and traffic.

If you’re looking for the best custom cursor for your website, we hope this article will help to that end. Good luck to you!