How To Build An Amazon Product Scraper With Node.js

Original Source: https://smashingmagazine.com/2021/10/building-amazon-product-scraper-nodejs/

Have you ever been in a position where you need to intimately know the market for a particular product? Maybe you’re launching some software and need to know how to price it. Or perhaps you already have your own product on the market and want to see which features to add for a competitive advantage. Or maybe you just want to buy something for yourself and want to make sure you get the best bang for your buck.

All these situations have one thing in common: you need accurate data to make the correct decision. Actually, there’s another thing they share. All scenarios can benefit from the use of a web scraper.

Web scraping is the practice of extracting large amounts of web data through the use of software. So, in essence, it’s a way to automate the tedious process of hitting ‘copy’ and then ‘paste’ 200 times. Of course, a bot can do that in the time it took you to read this sentence, so it’s not only less boring but a lot faster, too.

But the burning question is: why would someone want to scrape Amazon pages?

You’re about to find out! But first of all, I’d like to make something clear right now — while the act of scraping publicly available data is legal, Amazon has some measures to prevent it on their pages. As such, I urge you always to be mindful of the website while scraping, take care not to damage it, and follow ethical guidelines.

Recommended Reading: “The Guide To Ethical Scraping Of Dynamic Websites With Node.js And Puppeteer” by Andreas Altheimer

Why You Should Extract Amazon Product Data

As Amazon is the largest online retailer on the planet, it’s safe to say that if you want to buy something, you can probably get it there. So, it goes without saying just how big of a data treasure trove the website is.

When scraping the web, your primary question should be what to do with all that data. While there are many individual reasons, it boils down to two prominent use cases: optimizing your products and finding the best deals.

Let’s start with the first scenario. Unless you’ve designed a truly innovative new product, the chances are that you can already find something at least similar on Amazon. Scraping those product pages can net you invaluable data such as:

The competitors’ pricing strategy
So that you can adjust your prices to be competitive and understand how others handle promotional deals;
Customer opinions
To see what your future client base cares about most and how to improve their experience;
Most common features
To see what your competition offers to know which functionalities are crucial and which can be left for later.

In essence, Amazon has everything you need for a deep market and product analysis. You’ll be better prepared to design, launch, and expand your product lineup with that data.

The second scenario can apply to both businesses and regular people. The idea is pretty similar to what I mentioned earlier. You can scrape the prices, features, and reviews of all the products you could choose from, and so, you’ll be able to pick the one that offers the most benefits for the lowest price. After all, who doesn’t like a good deal?

Not all products deserve this level of attention to detail, but it can make a massive difference with expensive purchases. Unfortunately, while the benefits are clear, many difficulties go along with scraping Amazon.

The Challenges Of Scraping Amazon Product Data

Not all websites are the same. As a rule of thumb, the more complex and widespread a website is, the harder it is to scrape it. Remember when I said that Amazon was the most prominent e-commerce site? Well, that makes it both extremely popular and reasonably complex.

First off, Amazon knows how scraping bots act, so the website has countermeasures in place. Namely, if the scraper follows a predictable pattern, sending requests at fixed intervals, faster than a human could or with almost identical parameters, Amazon will notice and block the IP. Proxies can solve this problem, but I didn’t need them since we won’t be scraping too many pages in the example.

Next, Amazon deliberately uses varying page structures for their products. That is to say, if you inspect the pages for different products, there’s a good chance that you’ll find significant differences in their structure and attributes. The reason behind this is quite simple. You need to adapt your scraper’s code for a specific system, and if you use the same script on a new kind of page, you’d have to rewrite parts of it. So, they’re essentially making you work more for the data.

Lastly, Amazon is a vast website. If you want to gather large amounts of data, running the scraping software on your computer might turn out to take way too much time for your needs. This problem is further compounded by the fact that going too fast will get your scraper blocked. So, if you want loads of data quickly, you’ll need a truly powerful scraper.

Well, that’s enough talk about problems, let’s focus on solutions!

How To Build A Web Scraper For Amazon

To keep things simple, we’ll take a step-by-step approach to writing the code. Feel free to work in parallel with the guide.

Look for the data we need

So, here’s a scenario: I’m moving in a few months to a new place, and I’ll need a couple of new shelves to hold books and magazines. I want to know all my options and get as good of a deal as I can. So, let’s go to the Amazon market, search for “shelves”, and see what we get.

The URL for this search and the page we’ll be scraping is here.

Ok, let’s take stock of what we have here. Just by glancing at the page, we can get a good picture of:

how the shelves look;
what the package includes;
how customers rate them;
their price;
the link to the product;
a suggestion for a cheaper alternative for some of the items.

That’s more than we could ask for!

Get the required tools

Let’s ensure we have all the following tools installed and configured before continuing to the next step.

Chrome
We can download it from here.
VSCode
Follow the instructions on this page to install it on your specific device.
Node.js
Before we start using Axios or Cheerio, we need to install Node.js and the Node Package Manager (NPM). The easiest way to install both is to get one of the installers from the official Node.js website and run it.

Now, let’s create a new NPM project. Create a new folder for the project and run the following command:

npm init -y

To create the web scraper, we need to install a couple of dependencies in our project:

Cheerio
An open-source library that helps us extract useful information by parsing markup and providing an API for manipulating the resulting data. Cheerio allows us to select tags of an HTML document by using selectors: $("div"). This specific selector helps us pick all <div> elements on a page. To install Cheerio, please run the following command in the project’s folder:

npm install cheerio

Axios
A JavaScript library used to make HTTP requests from Node.js.

npm install axios

Inspect the page source

In the following steps, we will learn more about how the information is organized on the page. The idea is to get a better understanding of what we can scrape from our source.

The developer tools help us interactively explore the website’s Document Object Model (DOM). We will use the developer tools in Chrome, but you can use any web browser you’re comfortable with.

Let’s open it by right-clicking anywhere on the page and selecting the “Inspect” option:

This will open up a new window containing the source code of the page. As we have said before, we are looking to scrape every shelf’s information.

As we can see from the screenshot above, the containers that hold all the data have the following classes:

sg-col-4-of-12 s-result-item s-asin sg-col-4-of-16 sg-col sg-col-4-of-20

In the next step, we will use Cheerio to select all the elements containing the data we need.
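The space-separated class list above translates into a single CSS selector by prefixing the tag name and joining the classes with dots. A quick sketch of that transformation (the class string is the one from the screenshot):

```javascript
// Turn the space-separated class list from the DOM inspector into a
// CSS selector that Cheerio can use.
const classes = 'sg-col-4-of-12 s-result-item s-asin sg-col-4-of-16 sg-col sg-col-4-of-20';
const selector = 'div.' + classes.split(' ').join('.');

console.log(selector);
// div.sg-col-4-of-12.s-result-item.s-asin.sg-col-4-of-16.sg-col.sg-col-4-of-20
```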

Fetch the data

After we installed all the dependencies presented above, let’s create a new index.js file and type the following lines of code:

const axios = require("axios");
const cheerio = require("cheerio");

const fetchShelves = async () => {
  try {
    const response = await axios.get('https://www.amazon.com/s?crid=36QNR0DBY6M7J&k=shelves&ref=glow_cls&refresh=1&sprefix=s%2Caps%2C309');

    const html = response.data;

    const $ = cheerio.load(html);

    const shelves = [];

    $('div.sg-col-4-of-12.s-result-item.s-asin.sg-col-4-of-16.sg-col.sg-col-4-of-20').each((_idx, el) => {
      const shelf = $(el);
      const title = shelf.find('span.a-size-base-plus.a-color-base.a-text-normal').text();

      shelves.push(title);
    });

    return shelves;
  } catch (error) {
    throw error;
  }
};

fetchShelves().then((shelves) => console.log(shelves));

As we can see, we import the dependencies we need on the first two lines, and then we create a fetchShelves() function that, using Cheerio, gets all the elements containing our products’ information from the page.

It iterates over each of them, pushing the titles into the shelves array to produce a cleanly formatted result.

The fetchShelves() function will only return the product’s title at the moment, so let’s get the rest of the information we need. Please add the following lines of code after the line where we defined the variable title.

const image = shelf.find('img.s-image').attr('src');

const link = shelf.find('a.a-link-normal.a-text-normal').attr('href');

const reviews = shelf.find('div.a-section.a-spacing-none.a-spacing-top-micro > div.a-row.a-size-small').children('span').last().attr('aria-label');

const stars = shelf.find('div.a-section.a-spacing-none.a-spacing-top-micro > div > span').attr('aria-label');

const price = shelf.find('span.a-price > span.a-offscreen').text();

let element = {
  title,
  image,
  link: `https://amazon.com${link}`,
  price,
};

if (reviews) {
  element.reviews = reviews;
}

if (stars) {
  element.stars = stars;
}

And replace shelves.push(title) with shelves.push(element).

We are now selecting all the information we need and adding it to a new object called element. Every element is then pushed to the shelves array to get a list of objects containing just the data we are looking for.

This is how a shelf object should look before it is added to our list:

{
  title: 'SUPERJARE Wall Mounted Shelves, Set of 2, Display Ledge, Storage Rack for Room/Kitchen/Office – White',
  image: 'https://m.media-amazon.com/images/I/61fTtaQNPnL._AC_UL320_.jpg',
  link: 'https://amazon.com/gp/slredirect/picassoRedirect.html/ref=pa_sp_btf_aps_sr_pg1_1?ie=UTF8&adId=A03078372WABZ8V6NFP9L&url=%2FSUPERJARE-Mounted-Floating-Shelves-Display%2Fdp%2FB07H4NRT36%2Fref%3Dsr_1_59_sspa%3Fcrid%3D36QNR0DBY6M7J%26dchild%3D1%26keywords%3Dshelves%26qid%3D1627970918%26refresh%3D1%26sprefix%3Ds%252Caps%252C309%26sr%3D8-59-spons%26psc%3D1&qualifier=1627970918&id=3373422987100422&widgetName=sp_btf',
  price: '$32.99',
  reviews: '6,171',
  stars: '4.7 out of 5 stars'
}
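The reviews, stars, and price fields come back as human-readable strings. If you plan to sort or filter the results, it can help to convert them to numbers. These small helpers are my own addition, not part of the original script:

```javascript
// Hypothetical helpers (not in the tutorial's script) that turn Amazon's
// display strings into numbers for easier sorting and filtering.
const parseReviews = (reviews) => parseInt(reviews.replace(/,/g, ''), 10); // '6,171' -> 6171
const parseStars = (stars) => parseFloat(stars);                           // '4.7 out of 5 stars' -> 4.7
const parsePrice = (price) => parseFloat(price.replace(/[^0-9.]/g, ''));   // '$32.99' -> 32.99

console.log(parseReviews('6,171'), parseStars('4.7 out of 5 stars'), parsePrice('$32.99'));
```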

Format the data

Now that we have managed to fetch the data we need, it’s a good idea to save it as a .csv file to improve readability. After gathering all the data, we will use the fs module provided by Node.js and save a new file called saved-shelves.csv to the project’s folder. Import the fs module at the top of the file and copy or write along the following lines of code:

const fs = require('fs'); // at the top of the file

let csvContent = shelves.map(element => {
  return Object.values(element).map(item => `"${item}"`).join(',');
}).join('\n');

fs.writeFile('saved-shelves.csv', 'Title, Image, Link, Price, Reviews, Stars' + '\n' + csvContent, 'utf8', function (err) {
  if (err) {
    console.log('Some error occurred - file either not saved or corrupted.');
  } else {
    console.log('File has been saved!');
  }
});

As we can see, on the first few lines, we format the data we have previously gathered by joining all the values of a shelf object with commas. Then, using the fs module, we create a file called saved-shelves.csv, add a new row that contains the column headers, add the data we have just formatted, and create a callback function that handles errors.
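One caveat: wrapping each value in double quotes keeps commas inside values (like the product titles) from breaking the columns, but a value that itself contains a double quote will still corrupt the row. The CSV convention is to double embedded quotes; a small hypothetical escape helper:

```javascript
// More robust CSV quoting than plain `"${item}"`: double any embedded
// double quotes, per the common CSV convention.
const toCsvValue = (item) => `"${String(item).replace(/"/g, '""')}"`;

const row = ['12" Shelf, White', '$32.99'].map(toCsvValue).join(',');
console.log(row);
// "12"" Shelf, White","$32.99"
```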

The result should look something like this:

Bonus Tips!
Scraping Single Page Applications

Dynamic content is becoming the standard nowadays, as websites are more complex than ever before. To provide the best user experience possible, developers adopt different loading mechanisms for dynamic content, making our job a little more complicated. The answer is a headless browser: essentially a browser lacking a graphical user interface. Luckily, there is ✨Puppeteer✨ — the magical Node library that provides a high-level API to control a headless Chrome instance over the DevTools Protocol. It offers the same functionality as a regular browser, but it must be controlled programmatically by typing a couple of lines of code. Let’s see how that works.

In the previously created project, install the Puppeteer library by running npm install puppeteer, create a new puppeteer.js file, and copy or write along the following lines of code:

const puppeteer = require('puppeteer');

(async () => {
  try {
    const chrome = await puppeteer.launch();
    const page = await chrome.newPage();
    await page.goto('https://www.reddit.com/r/Kanye/hot/');
    await page.waitForSelector('.rpBJOHq2PR60pnwJlUyP0', { timeout: 2000 });

    const body = await page.evaluate(() => {
      return document.querySelector('body').innerHTML;
    });

    console.log(body);

    await chrome.close();
  } catch (error) {
    console.log(error);
  }
})();

In the example above, we create a Chrome instance and open a new browser page, pointing it at the subreddit we want to scrape. In the following line, we tell the headless browser to wait until the element with the class rpBJOHq2PR60pnwJlUyP0 appears on the page. We have also specified how long waitForSelector should wait for that element to appear (2,000 milliseconds).

Using the evaluate method on the page variable, we instruct Puppeteer to execute the JavaScript snippet within the page’s context just after the element has finally loaded. This allows us to access the page’s HTML content and return the page’s body as the output. We then close the Chrome instance by calling the close method on the chrome variable. The result should consist of all the dynamically generated HTML code. This is how Puppeteer can help us load dynamic HTML content.

If you don’t feel comfortable using Puppeteer, note that there are a couple of alternatives out there, like NightwatchJS, NightmareJS, or CasperJS. They are slightly different, but in the end, the process is pretty similar.

Setting user-agent Headers

user-agent is a request header that tells the website you are visiting about yourself, namely your browser and OS. This is used to optimize the content for your setup, but websites also use it to identify bots sending tons of requests — even if they rotate IPs.

Here’s what a user-agent header looks like:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36

In the interest of not being detected and blocked, you should regularly change this header. Take extra care not to send an empty or outdated header, since this should never happen for a run-of-the-mill user, and you’ll stand out.
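One simple way to vary the header is to pick from a small pool of user-agent strings on each request. A minimal sketch — the pool below and the pickUserAgent helper are illustrative, not a recommended production list:

```javascript
// Hypothetical user-agent rotation: pick a random UA string per request.
const userAgents = [
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36',
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15',
];

const pickUserAgent = () => userAgents[Math.floor(Math.random() * userAgents.length)];

// With Axios, the header goes into the request config:
// await axios.get(url, { headers: { 'User-Agent': pickUserAgent() } });
```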

Rate Limiting

Web scrapers can gather content extremely fast, but you should avoid going at top speed. There are two reasons for this:

Too many requests in short order can slow down the website’s server or even bring it down, causing trouble for the owner and other visitors. It can essentially become a DoS attack.
Without rotating proxies, it’s akin to loudly announcing that you’re using a bot since no human would send hundreds or thousands of requests per second.

The solution is to introduce a delay between your requests, a practice called “rate limiting”. (It’s pretty simple to implement, too!)

In the Puppeteer example provided above, before creating the body variable, we can use the waitForTimeout method provided by Puppeteer to wait a couple of seconds before making another request:

await page.waitForTimeout(3000);

Here, the argument (3000) is the number of milliseconds you want to wait.

Also, if we want to do the same thing for the Axios example, we can create a promise that calls the setTimeout() method, in order to help us wait for our desired number of milliseconds:

fetchShelves().then(result => new Promise(resolve => setTimeout(() => resolve(result), 3000)))

In this way, you can avoid putting too much pressure on the targeted server and also bring a more human approach to web scraping.
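If you scrape several pages in a row, a reusable sleep helper keeps the delay logic in one place. This sketch is my own addition; the fetch call inside the loop is a placeholder standing in for whatever per-page scraping function you use:

```javascript
// A reusable sleep helper based on setTimeout wrapped in a promise.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Fetch a list of URLs one at a time, pausing between requests.
const scrapeAll = async (urls, delayMs) => {
  const results = [];
  for (const url of urls) {
    // results.push(await fetchPage(url)); // your per-page scraping call
    results.push(url); // placeholder so the sketch runs standalone
    await sleep(delayMs); // rate limit: wait before the next request
  }
  return results;
};
```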

Closing Thoughts

And there you have it, a step-by-step guide to creating your own web scraper for Amazon product data! But remember, this was just one situation. If you’d like to scrape a different website, you’ll have to make a few tweaks to get any meaningful results.

Related Reading

If you’d still like to see more web scraping in action, here is some useful reading material for you:

“The Ultimate Guide to Web Scraping with JavaScript and Node.Js,” Robert Sfichi
“Advanced Node.JS Web Scraping with Puppeteer,” Gabriel Cioci
“Python Web Scraping: The Ultimate Guide to Building Your Scraper,” Raluca Penciuc

Creating the Effect of Transparent Glass and Plastic in Three.js

Original Source: http://feedproxy.google.com/~r/tympanus/~3/HMKYObAHg48/

In a recent release of Three.js (r129 and beyond) some fabulous new features to MeshPhysicalMaterial were merged. The new features allow us to create convincing transparent, glass-like and plastic-like materials that refract and diffuse the content behind them, and are as easy-to-use as adding a couple of material properties!

About this article

This article explores some advanced properties of materials. While the results are very technically impressive, the new features that enable them are simple to use! Some experience with three and an intermediate understanding of the concept of “materials” in 3D graphics is ideal. Code examples are written for brevity, so it’s best to dive into the sandbox code (provided with each screenshot) if you’re interested in the gritty implementation details.

The physics of optics, light, reflection and refraction are not discussed in-depth here. This article approaches these effects through an aesthetic lens: aiming for convincing and visually pleasing results, even if they are not scientifically accurate.

Rather than introducing new concepts, this is primarily a walkthrough of features that exist within three and its MeshPhysicalMaterial class. I’d like to gush and shower praise upon the contributors and maintainers of three. It continues to be a core pillar of 3D in the browser. It has a vibrant community and extremely talented contributors who continue to push the boundaries of what’s possible on a humble web page.

Prior Art

Creating transparent materials, especially with texture and diffusion, has for a long time required deep technical expertise and creative problem solving. Some projects have achieved an impressive and convincing effect in WebGL through bespoke techniques:

A screenshot from Make Me Pulse’s 2018 Wishes, showcasing frosted glass materials in WebGL

The Jam3 FWA 100 project, showcasing glass-like orbs

Jesper Vos published an incredible tutorial here on Codrops: Real-time Multiside Refraction in Three Steps, which includes some great insights into the science and simulation of refraction.

In addition, these excellent technical examples provided the inspiration for writing this article, and further exploring what’s possible with these new features.

Three.js

three is an open-source JavaScript library for rendering 3D graphics in the browser. It provides a friendly API and abstractions that make working with WebGL more palatable and expressive. three has been around since 2010, is extremely well battle-tested, and is the de-facto standard for rendering 3D content on the internet. See the list of case studies on the home page, docs, examples, or source.

MeshPhysicalMaterial

MeshPhysicalMaterial is a relatively recent Physically-Based Rendering (PBR) built-in material for three. It’s an evolution and extension of the already impressive MeshStandardMaterial, providing additional features to pump the photo-realism.

This visual fidelity comes at a cost. From the docs: “As a result of these complex shading features, MeshPhysicalMaterial has a higher performance cost, per pixel, than other Three.js materials. Most effects are disabled by default, and add cost as they are enabled.”

Beyond the properties offered in MeshStandardMaterial, it introduces some new ones:

Transmission

transmission is the key to transparent glass-like and plastic-like effects. Traditionally when we adjust the opacity of an element to make it transparent, its visual presence is diluted as a whole. The object appears ghostly, uniformly transparent, and not realistic as a see-through object. In the real-world, transparent objects reflect light and show glare. They have a physical presence even though they may be perfectly clear.

Reflectivity properties

MeshPhysicalMaterial includes some properties that estimate refraction through the transmissible object: thickness, ior (Index-of-refraction) and reflectivity. We’ll mostly ignore ior and reflectivity (which changes ior too, but is mapped to a 0-1 range) as the defaults work great!

thickness is the magic here, as we’ll see shortly.

Clearcoat

Like a layer of lacquer, clearcoat provides an additional thin reflective layer on the surface of objects. Previously this would require a second version of the object, with a separate material, and with different parameters.

Other

There are some other additional properties on MeshPhysicalMaterial like sheen and attenuationTint which I won’t be touching on in this article.

We can expect to see more and more features added to this material in future releases.

First steps

First things first, let’s create a scene and pop something in it! We’ll start with an Icosahedron because hey, they just look cool.

I’m skipping the basic scene setup stuff here, I recommend diving into the sandbox source or three docs if you’re unfamiliar with this.

const geometry = new THREE.IcosahedronGeometry(1, 0);
const material = new THREE.MeshNormalMaterial();
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);

An Icosahedron with normal shading in a blank scene
View on CodeSandbox

Looks like an Icosahedron! Let’s apply our MeshPhysicalMaterial:

const material = new THREE.MeshPhysicalMaterial({
  metalness: 0,
  roughness: 0
});

The options metalness and roughness are the two primary handles with PBR materials (they are on MeshStandardMaterial too). They can be used to set the stage for how our material responds to lighting and environment. Having both set at zero describes something like “A non-metallic object with a highly polished surface”.


View on CodeSandbox

Doesn’t look like much! Physically-based materials need light to reflect, so let’s add some light:

const light = new THREE.DirectionalLight(0xfff0dd, 1);
light.position.set(0, 5, 10);
scene.add(light);


View on CodeSandbox

Cool, there it is again… Now let’s make it transparent!

Call the Glazier

The transmission option is responsible for applying our transparency. It makes the “fill” or “body” of the object transparent, while leaving all lighting and reflections on the surface in-tact.

Note that we’re not using the opacity option, which applies a uniform transparency to the material as a whole. We also don’t need to include the transparent option on the material for it to appear transparent through transmission.

const material = new THREE.MeshPhysicalMaterial({
  roughness: 0,
  transmission: 1, // Add transparency
});


View on CodeSandbox

I think that’s transparent, we can see the background colour through it. Let’s pop something else behind it to be sure. We’ll add a textured plane as our “backdrop”:

const bgTexture = new THREE.TextureLoader().load("src/texture.jpg");
const bgGeometry = new THREE.PlaneGeometry(5, 5);
const bgMaterial = new THREE.MeshBasicMaterial({ map: bgTexture });
const bgMesh = new THREE.Mesh(bgGeometry, bgMaterial);
bgMesh.position.set(0, 0, -1);
scene.add(bgMesh);


View on CodeSandbox

It’s transparent! It’s lacking something though. There’s nothing but a tiny flicker of movement on the corners of our geometry; as if our material is made from the most delicate and fragile of super-thin, super-clear glass.

Now here’s the magic part!

const material = new THREE.MeshPhysicalMaterial({
  roughness: 0,
  transmission: 1,
  thickness: 0.5, // Add refraction!
});


View on CodeSandbox

By adding a single option: thickness to our material, we’ve now been given the gift of refraction through our object! Our background plane, which is a completely separate object, simply sitting behind our Icosahedron in the scene, now gets refracted.

This is incredible! Previous methods of achieving this required much more work and intense technical understanding. This has immediately democratised refractive materials in WebGL.

The effect is especially impressive when viewed in motion, and from an angle:

Swooping around our glass object to see it refracting the rest of the scene

Have a play by dragging around in this sandbox:

Diverse objects

While the sharp facets of our Icosahedron show a nice “cut-gem” style of refraction, we rarely see such precisely cut glass objects at any size other than tiny. This effect is greatly enhanced when geometries with smoother edges are used.

Let’s increase the detail level of our Icosahedron to form a sphere:

const geometry = new THREE.IcosahedronGeometry(1, 15);


View on CodeSandbox

This shows some optical distortion in addition to the refraction based on the shape of the geometry!

Hot tip: with all of the PolyhedronGeometry types in three, any detail level above zero is rendered as a sphere, rather than a faceted polyhedron as far as transmission is concerned.

You may notice that the distorted content is a little pixelated, this is due to the material upscaling what’s transmitted through it to perform the distortion. We can mitigate this a bit with some other effects which we’ll cover later.

Let’s explore adding some texture to our glass material:

const material = new THREE.MeshPhysicalMaterial({
  roughness: 0.7,
  transmission: 1,
  thickness: 1
});

The roughness option on our transmissible material provides us with a “frosting” level, making light that passes through the material more diffuse.


View in CodeSandbox

This becomes immediately recognisable as a frosted glass object, with a fine powdery texture.

Notes on roughness:

The middle of the roughness range can display some quite noticeably pixelated transmitted content (at the time of writing). In my experience, the best results are found in the low (0–0.15) and higher (0.65+) ends of the range. This can also be quite successfully mitigated with some of the things we’ll add shortly.

The distance of the transmissible object from the camera affects how roughness is rendered. It’s best to tweak the roughness parameter once you’ve established your scene.

Hot tip: Using a small amount of roughness (0.05 – 0.15) can help soften aliasing on the transmitted content at the cost of a bit of sharpness.

For the rest of our examples we’ll include two additional geometries for reference: a RoundedBoxGeometry and a 3D model of a dragon (loaded as a GLTF, but only used for the geometry):


View on CodeSandbox

Through the lens

While the transmission effect is already appealing, there’s so much more we can do to make this appear truer-to-life.

The next thing we’ll do is add an environment map. It’s recommended that you always include an envMap when using MeshPhysicalMaterial, as per the docs: For best results, always specify an environment map when using this material.

Highly reflective objects show reflections, and glare, and glimpses of their surrounding environment reflected off their surface. It’s unusual for a shiny object to be perfectly unreflective; as they have been in our examples so far.

We’ll use a high quality High Dynamic Range Image (HDRI) environment map. I’ve chosen this one for its bright fluorescent overhead lighting:

const hdrEquirect = new THREE.RGBELoader().load(
  "src/empty_warehouse_01_2k.hdr",
  () => {
    hdrEquirect.mapping = THREE.EquirectangularReflectionMapping;
  }
);
const material = new THREE.MeshPhysicalMaterial({
  // ...
  envMap: hdrEquirect
});


View on CodeSandbox

NICE! Now that looks more realistic. The objects glint and shimmer in our bright environment; much more like the lighting challenges faced by a photographer of shiny things.

This is where our rounded geometries really shine too. Their smoother curves and edges catch light differently, really amplifying the effect of a highly polished surface.

Hot tip: Adding an envMap texture of some sort helps to resolve some rendering artifacts of this material. This is why it’s always recommended to include one (beyond the fact that it looks great!).

If you adjust the roughness level upward, you’ll notice that the reflections are diffused by the rougher frosted texture of the surface; however, we may want an object that’s semi-transparent while still having a shiny surface.

The clearcoat options allow us to include an additional reflective layer on the surface of our object (think lacquered wood, powder coatings, or plastic films). In the case of our transparent objects, we can make them from semi-transparent glass or plastic which still has a polished and reflective surface.


View on CodeSandbox

Adjusting the clearcoatRoughness option adjusts how highly polished the surface is; visually spanning the range from highly-polished frosted glass through to semi-gloss and matte frosted plastics. This effect is pretty convincing! You can almost feel the tack and texture of these objects.
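Following the pattern of the earlier snippets, the clearcoat layer is enabled with a couple of extra material options. The exact values below are a plausible starting point of my own, not the sandbox’s settings:

```javascript
// Semi-transparent frosted body with a polished clearcoat surface.
const material = new THREE.MeshPhysicalMaterial({
  roughness: 0.7,          // frosted, diffuse transmission
  transmission: 1,
  thickness: 1,
  envMap: hdrEquirect,
  clearcoat: 1,            // strength of the clearcoat layer
  clearcoatRoughness: 0.1, // polish of the clearcoat surface
});
```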

So far we’ve been exploring objects with perfectly smooth surfaces. To really bring some texture to them, we can add a normal map:

const textureLoader = new THREE.TextureLoader();
const normalMapTexture = textureLoader.load("src/normal.jpg");
normalMapTexture.wrapS = THREE.RepeatWrapping;
normalMapTexture.wrapT = THREE.RepeatWrapping;
const material = new THREE.MeshPhysicalMaterial({
  // ...
  normalMap: normalMapTexture,
  clearcoatNormalMap: normalMapTexture,
});


View on CodeSandbox

The interplay between the normalMap and clearcoatNormalMap is interesting. By setting the normalMap we affect the transmission through the object, adding a textured frosting that refracts light differently. By setting the clearcoatNormalMap we affect the finish on the surface of the object.

Hot tip: The additional texture added by the normalMap greatly reduces the visible pixelation on the transmitted content, effectively solving this issue for us.

As a final touch, we’ll add a post-processing pass to apply bloom to our scene. Bloom adds that extra little bit of photographic appeal by simulating volumetric glare from the bright overhead lighting bathing our objects.

I’ll leave the details of implementing post-processing within three to the docs and examples. In this sandbox I’ve included the UnrealBloomPass.

Bloom always looks good.

There we have it! Convincingly transparent, textured and reflective 3D objects, rendered in real-time, and without much effort. This deserves to be celebrated, what an empowering experience it is working with MeshPhysicalMaterial.

Drippin ice

Just for fun, let’s crank the dial on this by rendering many of our transparent objects using three’s InstancedMesh.


View on CodeSandbox

OOOUF! YES.
Instances can’t be seen through each other, which is a general limitation of transmission on MeshPhysicalMaterial (current as of r133); but in my opinion the effect is still very cool.

Explore for yourself

Finally, here’s our dragon model with a bunch of material options enabled in the GUI:


View on CodeSandbox

Have a play, check out metalness, play around with color to explore colourful tinted glass, tweak the ior to change our glass into crystal!

Sign-off

I’ve really only scratched the surface of what can be achieved with MeshPhysicalMaterial. There are even more options available within this material: sheen, roughnessMap, transmissionMap, attenuationTint, and all sorts of other things provide inroads to many more effects. Dig deep into the docs and source if you’re interested!

This is an enabler, given the creative vision for a transparent object you can use these tools to work towards a convincing result. Transparent materials in three are here, you can start using them in your projects today.

Attributions

Environment map: Empty Warehouse 01 HDRI by Sergej Majboroda, via Poly Haven
3D model: Dragon GLB by Stanford University and Morgan McGuire’s Computer Graphics Archive, via KhronosGroup
Normal map: Packed Dirt normal by Dim, via opengameart.org
Sandboxes: Hosted on CodeSandbox and running in canvas-sketch by @MattDesl.

© 2021 Kelly Milligan

The post Creating the Effect of Transparent Glass and Plastic in Three.js appeared first on Codrops.

Botanical Glassworks in Blender 3D

Original Source: http://feedproxy.google.com/~r/abduzeedo/~3/dFu1ww92b6s/botanical-glassworks-blender-3d

Botanical Glassworks in Blender 3D
Botanical Glassworks in Blender 3D

abduzeedo 10.27.21

Calvin Sprague shared a 3D project for Botanical Glassworks created with Blender, Octane, Adobe Illustrator and Photoshop. The project is a selection of personal works experimenting with flat illustrated botanical shapes transformed into a three-dimensional space. As a beginner with 3D software, Calvin found the process quite challenging, but it gave him a new appreciation for 3D artists and how they work with cameras and lighting.


To see more work follow Calvin on

Behance
Instagram
Website


4 Ways to Create an Email Testing Strategy

Original Source: https://www.hongkiat.com/blog/email-testing-strategy/

(Guest writer: Téa Liarokapi) A great email message consists of a plethora of things that, upon testing, could or could not work out. Maybe you’ve got the perfect copy for your email marketing…

Visit hongkiat.com for full content.

How To Build A Real-Time Multi-User Game From Scratch

Original Source: https://smashingmagazine.com/2021/10/real-time-multi-user-game/

As the pandemic lingered, the suddenly-remote team I work with became increasingly foosball-deprived. I thought about how to play foosball in a remote setting, but it was clear that simply reconstructing the rules of foosball on a screen would not be a lot of fun.

What _is_ fun is to kick a ball using toy cars — a realization I made while playing with my 2-year-old kid. The same night, I set out to build the first prototype for a game that would become Autowuzzler.

The idea is simple: players steer virtual toy cars in a top-down arena that resembles a foosball table. The first team to score 10 goals wins.

Of course, the idea of using cars to play soccer is not unique, but two main ideas should set Autowuzzler apart: I wanted to reconstruct some of the look and feel of playing on a physical foosball table, and I wanted to make sure it is as easy as possible to invite friends or teammates to a quick casual game.

In this article, I’ll describe the process behind the creation of Autowuzzler, which tools and frameworks I chose, and share a few implementation details and lessons I learned.

First Working (Terrible) Prototype

The first prototype was built using the open-source game engine Phaser.js, mostly for the included physics engine and because I already had some experience with it. The game stage was embedded in a Next.js application, again because I already had a solid understanding of Next.js and wanted to focus mainly on the game.

As the game needs to support multiple players in real-time, I utilized Express as a WebSockets broker. Here is where it becomes tricky, though.

Since the physics calculations were done on the client in the Phaser game, I chose a simple but obviously flawed logic: the first connected client had the dubious privilege of doing the physics calculations for all game objects, sending the results to the Express server, which in turn broadcast the updated positions, angles and forces back to the other players’ clients. The other clients would then apply the changes to the game objects.

This led to the situation where the first player got to see the physics happening in real-time (it is happening locally in their browser, after all), while all the other players were lagging behind at least 30 milliseconds (the broadcast rate I chose), or — if the first player’s network connection was slow — considerably worse.

If this sounds like poor architecture to you — you’re absolutely right. However, I accepted this fact in favor of quickly getting something playable to figure out if the game is actually fun to play.

Validate The Idea, Dump The Prototype

As flawed as the implementation was, it was sufficiently playable to invite friends for a first test drive. Feedback was very positive, with the major concern being — not surprisingly — the real-time performance. Other inherent problems included the situation when the first player (remember, the one in charge of everything) left the game — who should take over? At this point there was only one game room, so anyone would join the same game. I was also a bit concerned by the bundle size the Phaser.js library introduced.

It was time to dump the prototype and start with a fresh setup and a clear goal.

Project Setup

Clearly, the “first client rules all” approach needed to be replaced with a solution in which the game state lives on the server. In my research, I came across Colyseus, which sounded like the perfect tool for the job.

For the other main building blocks of the game I chose:

Matter.js as a physics engine instead of Phaser.js because it runs in Node and Autowuzzler does not require a full game framework.
SvelteKit as an application framework instead of Next.js, because it just went into public beta at that time. (Besides: I love working with Svelte.)
Supabase.io for storing user-created game PINs.

Let’s look at those building blocks in more detail.

Synchronized, Centralized Game State With Colyseus

Colyseus is a multiplayer game framework based on Node.js and Express. At its core, it provides:

Synchronizing state across clients in an authoritative fashion;
Efficient real-time communication using WebSockets by sending changed data only;
Multi-room setups;
Client libraries for JavaScript, Unity, Defold Engine, Haxe, Cocos Creator, Construct3;
Lifecycle hooks, e.g. room is created, user joins, user leaves, and more;
Sending messages, either as broadcast messages to all users in the room, or to a single user;
A built-in monitoring panel and load test tool.

Note: The Colyseus docs make it easy to get started with a barebones Colyseus server by providing an npm init script and an examples repository.

Creating A Schema

The main entity of a Colyseus app is the game room, which holds the state for a single room instance and all its game objects. In the case of Autowuzzler, it’s a game session with:

two teams,
a finite amount of players,
one ball.

A schema needs to be defined for all properties of the game objects that should be synchronized across clients. For example, we want the ball to synchronize, and so we need to create a schema for the ball:

class Ball extends Schema {
  constructor() {
    super();
    this.x = 0;
    this.y = 0;
    this.angle = 0;
    this.velocityX = 0;
    this.velocityY = 0;
  }
}
defineTypes(Ball, {
  x: "number",
  y: "number",
  angle: "number",
  velocityX: "number",
  velocityY: "number",
});

In the example above, a new class that extends the schema class provided by Colyseus is created; in the constructor, all properties receive an initial value. The position and movement of the ball is described using the five properties: x, y, angle, velocityX, velocityY. Additionally, we need to specify the types of each property. This example uses JavaScript syntax, but you can also use the slightly more compact TypeScript syntax.

Property types can either be primitive types:

string
boolean
number (as well as more efficient integer and float types)

or complex types:

ArraySchema (similar to Array in JavaScript)
MapSchema (similar to Map in JavaScript)
SetSchema (similar to Set in JavaScript)
CollectionSchema (similar to ArraySchema, but without control over indexes)

The Ball class above has five properties of type number: its coordinates (x, y), its current angle and the velocity vector (velocityX, velocityY).

The schema for players is similar, but includes a few more properties to store the player’s name and team’s number, which need to be supplied when creating a Player instance:

class Player extends Schema {
  constructor(teamNumber) {
    super();
    this.name = "";
    this.x = 0;
    this.y = 0;
    this.angle = 0;
    this.velocityX = 0;
    this.velocityY = 0;
    this.angularVelocity = 0;
    this.teamNumber = teamNumber;
  }
}
defineTypes(Player, {
  name: "string",
  x: "number",
  y: "number",
  angle: "number",
  velocityX: "number",
  velocityY: "number",
  angularVelocity: "number",
  teamNumber: "number",
});

Finally, the schema for the Autowuzzler Room connects the previously defined classes: One room instance has multiple teams (stored in an ArraySchema). It also contains a single ball, therefore we create a new Ball instance in the RoomSchema’s constructor. Players are stored in a MapSchema for quick retrieval using their IDs.
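The article doesn’t show this room schema, but based on the description above it could be sketched like this (Team, RoomState and the score property are my assumptions; Ball and Player are the classes defined earlier):

```javascript
import { Schema, ArraySchema, MapSchema, defineTypes } from "@colyseus/schema";

class Team extends Schema {
  constructor(teamNumber) {
    super();
    this.teamNumber = teamNumber;
    this.score = 0;
  }
}
defineTypes(Team, { teamNumber: "number", score: "number" });

class RoomState extends Schema {
  constructor() {
    super();
    this.teams = new ArraySchema(new Team(0), new Team(1));
    this.ball = new Ball();         // single ball, created up front
    this.players = new MapSchema(); // keyed by session id for quick retrieval
  }
}
defineTypes(RoomState, {
  teams: [Team],
  ball: Ball,
  players: { map: Player },
});
```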

Now, with all the magic happening on the server, the client only handles the input and draws the state it receives from the server to the screen. With one exception:

Interpolation On The Client

Since we are re-using the same Matter.js physics world on the client, we can improve the experienced performance with a simple trick. Rather than only updating the position of a game object, we also synchronize the velocity of the object. This way, the object keeps on moving on its trajectory even if the next update from the server takes longer than usual. So rather than moving objects in discrete steps from position A to position B, we change their position and make them move in a certain direction.
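The idea boils down to dead reckoning: between server updates, the client keeps integrating position from the last known velocity. As a pure function it looks like this (an illustrative helper, not the article’s code; in Autowuzzler the client’s Matter.js world performs this integration once the synced velocity is set on a body):

```javascript
// Dead-reckoning helper: project a position forward along the last known
// velocity, so a late update doesn't freeze the object in place.
function extrapolate(position, velocity, elapsedMs) {
  return {
    x: position.x + velocity.x * elapsedMs,
    y: position.y + velocity.y * elapsedMs,
  };
}

// If the last update put the ball at (100, 50) moving 0.2 px/ms to the right,
// a delayed update 33ms later still leaves the ball in a plausible spot:
const predicted = extrapolate({ x: 100, y: 50 }, { x: 0.2, y: 0 }, 33);
// predicted ≈ { x: 106.6, y: 50 }
```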

Lifecycle

The Autowuzzler Room class is where the logic concerned with the different phases of a Colyseus room is handled. Colyseus provides several lifecycle methods:

onCreate: when a new room is created (usually when the first client connects);
onAuth: as an authorization hook to permit or deny entry to the room;
onJoin: when a client connects to the room;
onLeave: when a client disconnects from the room;
onDispose: when the room is discarded.
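Put together, the hooks give the room roughly this shape (a sketch, not the article’s exact code; PhysicsWorld and RoomState are stand-ins for Autowuzzler’s own classes):

```javascript
import { Room } from "colyseus";

class AutowuzzlerRoom extends Room {
  onCreate(options) {
    this.setState(new RoomState());            // synced game state schema
    this.world = new PhysicsWorld(this.state); // fresh physics world per room
    // main game loop: step the simulation ~60 times per second
    this.setSimulationInterval((deltaTime) => this.world.updateWorld(deltaTime));
  }

  onJoin(client, options) {
    this.world.createPlayer(client.sessionId); // add a car for the new client
  }

  onLeave(client) {
    this.world.removePlayer(client.sessionId);
  }

  onDispose() {
    this.world.destroy(); // room discarded: tear down the physics world
  }
}
```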

The Autowuzzler room creates a new instance of the physics world (see section “Physics In A Colyseus App”) as soon as it is created (onCreate) and adds a player to the world when a client connects (onJoin). It then updates the physics world 60 times a second (every 16.6 milliseconds) using the setSimulationInterval method (our main game loop):

// deltaTime is roughly 16.6 milliseconds
this.setSimulationInterval((deltaTime) => this.world.updateWorld(deltaTime));

The physics objects are independent of the Colyseus objects, which leaves us with two representations of the same game object (like the ball): an object in the physics world and a Colyseus object that can be synced.

As soon as the physical object changes, its updated properties need to be applied back to the Colyseus object. We can achieve that by listening to Matter.js’ afterUpdate event and setting the values from there:

Events.on(this.engine, "afterUpdate", () => {
  // apply the x position of the physics ball object back to the colyseus ball object
  this.state.ball.x = this.physicsWorld.ball.position.x;
  // … all other ball properties
  // loop over all physics players and apply their properties back to colyseus players objects
});

There’s one more copy of the objects we need to take care of: the game objects in the user-facing game.

Client-Side Application

Now that we have an application on the server that handles the synchronization of the game state for multiple rooms as well as physics calculations, let’s focus on building the website and the actual game interface. The Autowuzzler frontend has the following responsibilities:

enables users to create and share game PINs to access individual rooms;
sends the created game PINs to a Supabase database for persistence;
provides an optional “Join a game” page for players to enter the game PIN;
validates game PINs when a player joins a game;
hosts and renders the actual game on a shareable (i.e. unique) URL;
connects to the Colyseus server and handle state updates;
provides a landing (“marketing”) page.

For the implementation of those tasks, I chose SvelteKit over Next.js for the following reasons:

Why SvelteKit?

I have been wanting to develop another app using Svelte ever since I built neolightsout. When SvelteKit (the official application framework for Svelte) went into public beta, I decided to build Autowuzzler with it and accept any headaches that come with using a fresh beta — the joy of using Svelte clearly makes up for it.

These key features made me choose SvelteKit over Next.js for the actual implementation of the game frontend:

Svelte is a UI framework and a compiler and therefore ships minimal code without a client runtime;
Svelte has an expressive templating language and component system (personal preference);
Svelte includes global stores, transitions and animations out of the box, which means: no decision fatigue choosing a global state management toolkit and an animation library;
Svelte supports scoped CSS in single-file-components;
SvelteKit supports SSR, simple but flexible file-based routing and server-side routes for building an API;
SvelteKit allows for each page to run code on the server, e.g. to fetch data that is used to render the page;
Layouts shared across routes;
SvelteKit can be run in a serverless environment.

Creating And Storing Game PINs

Before a user can start playing the game, they first need to create a game PIN. By sharing the PIN with others, they can all access the same game room.

This is a great use case for SvelteKit’s server-side endpoints in conjunction with Svelte’s onMount function: the endpoint /api/createcode generates a game PIN, stores it in a Supabase.io database and returns the game PIN as a response. This response is fetched as soon as the page component of the “create” page is mounted.
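The client side of that flow might look like this in the page component’s script (a sketch; the endpoint path is from the article, while the response shape and variable names are my assumptions):

```javascript
// Inside the script block of the "create" page component (.svelte file).
import { onMount } from "svelte";

let gamePIN;

onMount(async () => {
  const response = await fetch("/api/createcode");
  const data = await response.json();
  gamePIN = data.gamePIN; // now the PIN can be displayed and shared
});
```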

Storing Game PINs With Supabase.io

Supabase.io is an open-source alternative to Firebase. Supabase makes it very easy to create a PostgreSQL database and access it either via one of its client libraries or via REST.

For the JavaScript client, we import the createClient function and execute it using the parameters supabase_url and supabase_key we received when creating the database. To store the game PIN that is created on each call to the createcode endpoint, all we need to do is to run this simple insert query:

import { createClient } from "@supabase/supabase-js";

const database = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_KEY
);

const { data, error } = await database
  .from("games")
  .insert([{ code: 123456 }]);

Note: The supabase_url and supabase_key are stored in a .env file. Due to Vite — the build tool at the heart of SvelteKit — it is required to prefix the environment variables with VITE_ to make them accessible in SvelteKit.
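Validating a game PIN when a player joins can use the same client in the other direction; a sketch (my assumption of how it could look, reusing the games table and code column from the insert query above):

```javascript
import { createClient } from "@supabase/supabase-js";

const database = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_KEY
);

// Look the PIN up; only PINs with a matching row are valid games.
async function isValidGamePIN(gamePIN) {
  const { data, error } = await database
    .from("games")
    .select("code")
    .eq("code", gamePIN);
  return !error && data.length > 0;
}
```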

Accessing The Game

I wanted to make joining an Autowuzzler game as easy as following a link. Therefore, every game room needed to have its own URL based on the previously created game PIN, e.g. https://autowuzzler.com/play/12345.

In SvelteKit, pages with dynamic route parameters are created by putting the dynamic parts of the route in square brackets when naming the page file: client/src/routes/play/[gamePIN].svelte. The value of the gamePIN parameter will then become available in the page component (see the SvelteKit docs for details). In the play route, we need to connect to the Colyseus server, instantiate the physics world to render to the screen, handle updates to game objects, listen to keyboard input and display other UI like the score, and so on.

Connecting To Colyseus And Updating State

The Colyseus client library enables us to connect a client to a Colyseus server. First, let’s create a new Colyseus.Client by pointing it to the Colyseus server (ws://localhost:2567 in development). Then join the room with the name we chose earlier (autowuzzler) and the gamePIN from the route parameter. The gamePIN parameter makes sure the user joins the correct room instance (see “match-making” above).

let client = new Colyseus.Client("ws://localhost:2567");
this.room = await client.joinOrCreate("autowuzzler", { gamePIN });

Since SvelteKit renders pages on the server initially, we need to make sure that this code only runs on the client after the page is done loading. Again, we use the onMount lifecycle function for that use case. (If you’re familiar with React, onMount is similar to the useEffect hook with an empty dependency array.)

onMount(async () => {
  let client = new Colyseus.Client("ws://localhost:2567");
  this.room = await client.joinOrCreate("autowuzzler", { gamePIN });
});

Now that we are connected to the Colyseus game server, we can start to listen to any changes to our game objects.

Here’s an example of how to listen to a player joining the room (onAdd) and receiving consecutive state updates to this player:

this.room.state.players.onAdd = (player, key) => {
  console.log(`Player has been added with sessionId: ${key}`);

  // add player entity to the game world
  this.world.createPlayer(key, player.teamNumber);

  // listen for changes to this player
  player.onChange = (changes) => {
    changes.forEach(({ field, value }) => {
      this.world.updatePlayer(key, field, value); // see below
    });
  };
};

In the updatePlayer method of the physics world, we update the properties one by one because Colyseus’ onChange delivers a set of all changed properties.

Note: This function only runs on the client version of the physics world, as game objects are only manipulated indirectly via the Colyseus server.

updatePlayer(sessionId, field, value) {
  // get the player physics object by its sessionId
  let player = this.world.players.get(sessionId);
  // exit if not found
  if (!player) return;
  // apply changes to the properties
  switch (field) {
    case "angle":
      Body.setAngle(player, value);
      break;
    case "x":
      Body.setPosition(player, { x: value, y: player.position.y });
      break;
    case "y":
      Body.setPosition(player, { x: player.position.x, y: value });
      break;
    // set velocityX, velocityY, angularVelocity …
  }
}

The same procedure applies to the other game objects (ball and teams): listen to their changes and apply the changed values to the client’s physics world.

So far, no objects are moving because we still need to listen to keyboard input and send it to the server. Instead of directly sending events on every keydown event, we maintain a map of currently pressed keys and send events to the Colyseus server in a 50ms loop. This way, we can support pressing multiple keys at the same time and mitigate the pause that happens after the first and consecutive keydown events when the key stays pressed:

let keys = {};
const keyDown = (e) => {
  keys[e.key] = true;
};
const keyUp = (e) => {
  keys[e.key] = false;
};
document.addEventListener("keydown", keyDown);
document.addEventListener("keyup", keyUp);

let loop = () => {
  if (keys["ArrowLeft"]) {
    this.room.send("move", { direction: "left" });
  } else if (keys["ArrowRight"]) {
    this.room.send("move", { direction: "right" });
  }
  if (keys["ArrowUp"]) {
    this.room.send("move", { direction: "up" });
  } else if (keys["ArrowDown"]) {
    this.room.send("move", { direction: "down" });
  }
  // next iteration
  requestAnimationFrame(() => {
    setTimeout(loop, 50);
  });
};
// start loop
setTimeout(loop, 50);

Now the cycle is complete: listen for keystrokes, send the corresponding commands to the Colyseus server to manipulate the physics world on the server. The Colyseus server then applies the new physical properties to all the game objects and propagates the data back to the client to update the user-facing instance of the game.
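On the server, the matching "move" handler is registered with Colyseus’ onMessage inside the room. The mapping from direction to force can live in a pure helper (a sketch; the force magnitude and the applyForceToPlayer method are my assumptions):

```javascript
// Pure helper mapping a direction command to a Matter.js-style force vector.
// The magnitude is an assumption, not a value from the article.
const FORCE = 0.005;

function directionToForce(direction) {
  const forces = {
    left:  { x: -FORCE, y: 0 },
    right: { x:  FORCE, y: 0 },
    up:    { x: 0, y: -FORCE },
    down:  { x: 0, y:  FORCE },
  };
  return forces[direction] ?? null;
}

// Registered inside the room's onCreate (applyForceToPlayer is a stand-in
// for however the physics world nudges a car):
// this.onMessage("move", (client, { direction }) => {
//   const force = directionToForce(direction);
//   if (force) this.world.applyForceToPlayer(client.sessionId, force);
// });
```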

Minor Nuisances

In retrospect, two things of the category nobody-told-me-but-someone-should-have come to mind:

A good understanding of how physics engines work is beneficial. I spent a considerable amount of time fine-tuning physics properties and constraints. Even though I built a small game with Phaser.js and Matter.js before, there was a lot of trial-and-error to get objects to move in the way I imagined them to.
Real-time is hard — especially in physics-based games. Minor delays considerably worsen the experience, and while synchronizing state across clients with Colyseus works great, it can’t remove computation and transmission delays.

Gotchas And Caveats With SvelteKit

Since I used SvelteKit when it was fresh out of the beta-oven, there were a few gotchas and caveats I would like to point out:

It took a while to figure out that environment variables need to be prefixed with VITE_ in order to use them in SvelteKit. This is now properly documented in the FAQ.
To use Supabase, I had to add Supabase to both the dependencies and devDependencies lists of package.json. I believe this is no longer the case.
SvelteKit’s load function runs both on the server and the client!
To enable full hot module replacement (including preserving state), you have to manually add a comment line <!-- @hmr:keep-all --> in your page components. See the FAQ for more details.

Many other frameworks would have been great fits as well, but I have no regrets about choosing SvelteKit for this project. It enabled me to work on the client application in a very efficient way — mostly because Svelte itself is very expressive and skips a lot of the boilerplate code, but also because Svelte has things like animations, transitions, scoped CSS and global stores baked in. SvelteKit provided all the building blocks I needed (SSR, routing, server routes) and although still in beta, it felt very stable and fast.

Deployment And Hosting

Initially, I hosted the Colyseus (Node) server on a Heroku instance and wasted a lot of time getting WebSockets and CORS working. As it turns out, the performance of a tiny (free) Heroku dyno is not sufficient for a real-time use case. I later migrated the Colyseus app to a small server at Linode. The client-side application is deployed by and hosted on Netlify via SvelteKit’s adapter-netlify. No surprises here: Netlify just worked great!

Conclusion

Starting out with a really simple prototype to validate the idea helped me a lot in figuring out whether the project was worth pursuing and where the technical challenges of the game lay. In the final implementation, Colyseus took care of all the heavy lifting of synchronizing state in real-time across multiple clients, distributed in multiple rooms. It’s impressive how quickly a real-time multi-user application can be built with Colyseus — once you figure out how to properly describe the schema. Colyseus’ built-in monitoring panel helps in troubleshooting any synchronization issues.

What complicated this setup was the physics layer of the game because it introduced an additional copy of each physics-related game object that needed to be maintained. Storing game PINs in Supabase.io from the SvelteKit app was very straightforward. In hindsight, I could have just used an SQLite database to store the game PINs, but trying out new things is half of the fun when building side projects.

Finally, using SvelteKit for building out the frontend of the game allowed me to move quickly — and with the occasional grin of joy on my face.

Now, go ahead and invite your friends to a round of Autowuzzler!

Further Reading on Smashing Magazine

“Get Started With React By Building A Whac-A-Mole Game,” Jhey Tompkins
“How To Build A Real-Time Multiplayer Virtual Reality Game,” Alvin Wan
“Writing A Multiplayer Text Adventure Engine In Node.js,” Fernando Doglio
“The Future Of Mobile Web Design: Video Game Design And Storytelling,” Suzanne Scacca
“How To Build An Endless Runner Game In Virtual Reality,” Alvin Wan

Tilda – The Website Builder That Disrupted The Way We Create Websites

Original Source: https://www.webdesignerdepot.com/2021/10/tilda-the-website-builder-that-disrupted-the-way-we-create-websites/

Tilda website builder combines everything we liked so much about construction toys when we were kids – you can experiment, test out, and build myriads of new creative ideas out of ready-to-use blocks. Tilda is the type of builder that allows you to own your creative process and create pretty much any website: landing page, business website, online store, online course with members area, blog, portfolio, or event promo page.

Founded seven years ago, Tilda is a website builder that completely revamped the way we create websites. Tilda was the first website builder to introduce block mechanics that allow users to create websites out of pre-designed pieces. This breakthrough technology allowed all users – not only designers – to create professional-looking websites. Just like with construction toys, you can drag-and-drop and mix-and-match blocks on Tilda to let your creativity flow and build a dazzling website at extraordinary speed.

When you ask designers why they love Tilda, they usually say it’s because the platform provides the ultimate balance between choosing from templates and being able to fully customize and create from scratch to bring any creative idea to life. Here’s what else they say:

Tilda has been a game-changer for us. It allows our team to quickly spin up new web pages, make edits, and ship new programs. We left WordPress for Tilda and after being with Tilda for 2 years, I don’t ever want to go back.

~ Andy Page, Executive Director, Forge.

I built my first website in 2001. Since then I’ve used countless platforms and website builders for customer websites and my own business. Tilda is the perfect combination of ease of use with powerful features at an unbeatable value.

~ Robby Fowler, Branding and Marketing Strategist, robbyf.com & The Brand ED Podcast.

Let’s dive deeper into core functionalities you can leverage on Tilda. 

#1 Cut Corners With 550+ Pre-Designed Blocks And 210+ Ready-Made Templates

The beauty of Tilda is that it provides 550+ blocks in the ever-growing Block Library designed by professional designers. Thus, you can quickly build a website out of pre-designed blocks that encompass virtually all elements you might need for your website: menu, about us page, features, contact, pricing, etc. 

Customizing each block is a breeze with Tilda: You can drag-and-drop images, edit text right in the layout, alter block height, background color, padding, select the style of buttons, use custom fonts, and assign ready-made animation effects to specific parts of it. Also, Tilda provides a built-in free image library with 600K+ images, so you can find images that are just right for you without leaving Tilda, add them to your website with just one click, and use them for free.

Finally, all blocks fit together so well that it’s almost impossible to create a bad design on Tilda – even if you are a stranger to website building.

For a quick take-off, you can use 210+ ready-made templates for different kinds of websites and projects: online stores, landing pages, webinar promo pages, multimedia articles, blogs, and more. Each template is a sample of modern web design and consists of blocks. It means that templates don’t limit your creativity: you can modify them to your liking by playing with settings, adding extra or removing existing blocks, and embedding images and text. 

Each of the templates and blocks covers over 90% of use cases you’ll ever require and is mobile-ready, meaning that your website will look great on desktop computers, tablets, and smartphones by default.

#2 Jazz Up Your Site With Zero Block: Professional Editor For Web Designers 

To better meet the demands of a creative brief and unleash your creativity, you can use Tilda’s secret weapon called Zero Block. It is a tool for creating uniquely designed blocks on Tilda.

You can control each element of the block, including text, image, button, or background, and decide on their position, size, and screen resolution on which they’ll appear. For example, you can work with layers to create depth with overlay and opacity techniques or set a transparency level on any element and shadow effects below them. Additionally, you can also insert HTML code to add more complex elements, such as calendars, paywall, comments, social media posts, and so much more.  

Finally, Zero Block allows you to fool around with basic and more advanced step-by-step animation for a more individual look. Here’re some animation examples that you can make on Tilda:

Animation on scroll (position of elements is changing on scroll).

Trigger animation (animation is triggered when pointing at or clicking on an object).

Infinite scrolling text.

#3 Import Designs From Figma To Tilda In Minutes

Creators love using Figma for prototyping, but when you have to transfer every element and rebuild your website design from scratch – that’s what’s killing the party. With Tilda, you can easily turn your static designs into an interactive website in no time. 

All it takes is to prepare your Figma design for import with a few easy steps, paste the Figma API token and your layout URL to Tilda, click import and let the magic happen. Once your design is imported, you can bring your project online just by clicking publish.

#4 Make Search Engines Love Your Website With Built-In SEO Optimization

Thanks to the consecutive positioning of blocks on the page, websites designed on Tilda are automatically indexed well by search engines. There is also a set of SEO parameters you can fine-tune right inside the platform to ensure that your web pages rank high even if you don’t have an SEO specialist in-house. These parameters include the title tag, description and keywords meta tags, reader-friendly URLs, H1, H2, and H3 header tags, alt text for images, and easily customizable social media snippets. 

As an additional value, Tilda provides an SEO Assistant that will show you what errors are affecting the indexing of your website and will help test the website for compliance with the search engines’ main recommendations.

#5 Turn Visitors Into Clients

Tilda gives you the power to set up data capture forms and integrate them with 20+ data capture services, such as Google Sheets, Trello, Notion, Salesforce, Monday.com, etc., to ensure seamless lead generation.

For more fun, Tilda developed its CRM to manage your leads better and keep your business organized right inside of a website builder. This is a very easy-to-use tool that automatically adds leads from forms and allows you to manually add leads you captured outside of the website. There is a kanban board that gives you an overall view of how leads are moving through your sales funnel and allows you to move leads between stages easily.

#6 Build A Powerful Online Store In One Day

Tilda provides a set of convenient features to create a remarkable online shopping experience. The platform gives you the power to sell online using ready-made templates or build an online store completely from scratch, add a shopping cart and connect a payment system of choice — Stripe, PayPal, 2Checkout, etc. — to accept online payments in any currency.

If you are looking to run a large ecommerce business, you should also consider Tilda. Thanks to the built-in Product Catalog, you can add up to 5000 items, import and export products in CSV files, easily manage stock, orders, and keep track of store stats.

And thanks to adaptive design, your store will look good across all devices, including tablets and smartphones. 

#7. Bring Your Project Online For Free

Tilda offers three subscription plans: Free, Personal ($10/month with annual subscription), and Business ($20/month with annual subscription). When you sign up for Tilda, you get a lifetime free account. It allows you to publish a website with a free subdomain and gives you access to a selection of blocks and a limited number of features that offer enough to create an impressive website. 

Personal and Business plans allow more advanced options, such as connecting custom domains, adding HTML code, receiving payments, and embedding data collection forms. The Business plan also allows users to export their website and create five websites (while Personal and Free plans allow one website per account).

To discover all features and templates on Tilda, activate a two-week free trial – no credit card required.


The post Tilda – The Website Builder That Disrupted The Way We Create Websites first appeared on Webdesigner Depot.

6 Instagram Marketing Hacks to Grow Your Business

Original Source: https://www.hongkiat.com/blog/instagram-business-marketing-hacks/

(Guest writer: Jigar Agrawal) Many businesses find it challenging to keep up with fast-changing algorithms on the popular social media site, Instagram. Instagram is nonetheless an incredible…

Visit hongkiat.com for full content.

How to Handle These 9 Client Types like a Pro

Original Source: https://www.hongkiat.com/blog/types-of-clients/

The freelancer-to-client relationship is a tricky thing to deal with. Your ability to work with the various types of clients can make or break your freelancing career. To help you deal with this…

Visit hongkiat.com for full content.

Fighting Your Corner: Assertive SEO in 2021+

Original Source: https://www.webdesignerdepot.com/2021/10/fighting-your-corner-assertive-seo-in-2021/

The web industry is beset by competing ideals and goals, so the simplicity of numbers appeals to us: one is more than zero, two is more than one.

When it comes to any metric, there is an understandable temptation to focus on volume. In some cases, absolute metrics make more sense than others. If your goal is to make money, then $1 is marginally better than $0, and $2 is marginally better than $1.

However, even in ecommerce, some conversions are worth more than others; high-value items or items that open up repeat sales are inherently more valuable in the long term.

SEO (Search Engine Optimization) has traditionally been built around a high-traffic numbers game: if enough people visit your site, then sooner or later, someone will convert. But it is far more effective to attract the right type of visitor, the high-value user that will become a customer or even a brand advocate.

The best content does not guarantee success on Google, and neither does good UX or even Core Web Vitals. Content is no longer king. What works is brand recognition.

SERPs Look Different in 2021+

Traditional SEO strategies would have you pack content with keywords. Use the right keywords, have more keywords than your competitor, and you’ll rank higher. SERPs (Search Engine Results Pages) used to be a league table for keywords.

Unfortunately, it’s simply not that easy any longer, in part because Google has lost its self-confidence.

Even before the recent introduction of dark mode for Google Search, its SERPs had started to look very different. [We tend to focus on Google in these articles because Google is by far the biggest search engine, and whatever direction Google moves in, the industry follows — except for FLoC, that’s going down like a lead balloon on Jupiter.]

Google’s meteoric success has been due to its all-powerful algorithm. Anything you publish online is scrutinized, categorized, and archived by the all-seeing, all-knowing algorithm. We create quality content to appeal to the algorithm. We trust in its fairness, its wisdom…

…all of us except Google, who has seen behind the curtain and found that the great and powerful algorithm may as well be an old man pulling levers and tugging on ropes.

Content Is President

Google has never been coy about the inadequacies of the algorithm. Backlinks have been one of the most significant ranking factors of the algorithm for years because a backlink is a human confirmation of quality. A backlink validates the algorithm’s hypothesis that content is worth linking to.

One hundred words or so of keyword-dense text requires less processing and has fewer outliers, and so is relatively simple for an algorithm to assess. And yet content of this kind performs poorly on Google.

The reason is simple: human beings don’t want thin content. We want rich, high-quality content. Thin content is unlikely to be validated by a human.

The key to ranking well is to create content to which many people want to link. Not only does this drive traffic, but it validates the page for Google’s algorithm.

There Can Be Only One

One of the key motivating factors in the recent changes to search has been the evolution of technology.

Siri, Bixby, and all manner of cyber-butler are queueing up to answer your question with a single, authoritative statement. Suddenly, top-ten on Google is a lot less desirable because it’s only the top answer that is returned.

Google, and other search engines, cannot afford to rely on the all-seeing, all-knowing algorithm because the all-powerful algorithm is just an educated guess. It’s a very good educated guess, but it’s an educated guess nonetheless.

Until now, an educated guess was sufficient because if the top result were incorrect, something in the top ten would work. But when it’s a single returned result, what search engines need is certainty.

The Single Source of Truth

As part of the push towards a single, correct answer, Google introduced knowledge panels. These are panels within search results that present Google’s top answer to any given question.

Go ahead and search for “Black Widow” and you’ll see a knowledge panel at the top of the results hierarchy. Many searchers will never get beyond this.

Knowledge panels are controversial because Google is deferring to a third authority on the subject — in the case of Black Widow, Google is deferring to Marvel Studios. If someone at Marvel decided to redefine Black Widow from action-adventure to romantic comedy, Google would respect that [bizarre] decision and update the knowledge panel accordingly.

Whether we approve of the move towards single results, knowledge panels, and whatever else develops in the next few years is a moot point. It’s happening. Most of us don’t have the pull of Marvel Studios. So the question is, how do we adapt to this future and become the authority within our niche?

Making Use of sameAs

One of the most significant developments in recent years has been structured data. Structured data is essentially metadata that tells search engines how to interpret content.

Using structured data, you can specify whether content refers to a product, a person, an organization, or many other possible categories. Structured data allows a search engine to understand the difference between Tom Ford, the designer, Tom Ford, the corporation, and Tom Ford, the perfume.

Most structured data extends the generic “thing”. And thing contains a valuable property: sameAs.

sameAs is used to provide a reference to other channels for the same “thing”. In the case of an organization, that means your Facebook page, your Twitter profile, your YouTube channel, and anything else you can think of.

Implementing sameAs provides corroboration of your brand presence. In effect, it’s backlinking to yourself and providing the type of third-party validation Google needs to promote you up the rankings.
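
As a concrete illustration, here is roughly what that markup can look like. This is a sketch only: the organization name and profile URLs are placeholders, and the JSON-LD object is built in JavaScript purely to show its shape. In practice, you would paste the resulting JSON into a `<script type="application/ld+json">` tag in your page head.

```javascript
// Hypothetical example: an Organization's JSON-LD with sameAs links.
// Every name and URL below is a placeholder, not a real profile.
const organizationSchema = {
  '@context': 'https://schema.org',
  '@type': 'Organization',
  name: 'Example Studio',
  url: 'https://example.com',
  // sameAs is simply an array of URLs for the same entity elsewhere.
  sameAs: [
    'https://www.facebook.com/examplestudio',
    'https://twitter.com/examplestudio',
    'https://www.youtube.com/c/examplestudio',
  ],
};

// Serialize it for embedding in the page head.
const jsonLd = JSON.stringify(organizationSchema, null, 2);
console.log(jsonLd);
```

The more consistently those profiles link back to your site, the stronger the corroboration.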

Be a Brand

Google prefers established brands because people are more likely to trust them, and therefore consider the result that Google returned as high-quality. They will, in turn, come back to Google the next time they need something, and Google’s business model survives another day.

Google’s results are skewed towards brands, so the best strategy is to act like a brand.

Brands tend to be highly localized entities that dominate a small sector. There’s no benefit to spreading out keywords in the hope of catching a lot of traffic. Instead, identify the area that you are an expert in, then focus your content there.

Develop a presence on social media, but don’t sign up for every service available unless you have the time to maintain them properly; a suspended or lapsed account doesn’t corroborate your value.

Big Fish, Flexible Pond

There’s a pop-psychology question that asks whether you would prefer to be a big fish in a small pond or a small fish in a big pond. Given the direction search is moving, the correct answer is “big fish, small pond”.

The problem with metaphors is that they carry irrelevant limitations with them. In that question, we assume that there is the choice of two ponds, both a fixed size. There is no reason the pond cannot be flexible and grow with you as you increase in size.

What matters from an SEO point of view is that you dominate your niche. You must become the single source of truth, the number one search result. If you find that you aren’t number one, then instead of competing for that top spot, reduce your niche until you are the number one authority in your niche.

Be the single source of truth that Google defers to, and the all-powerful algorithm can clamber into its balloon and float away.


The post Fighting Your Corner: Assertive SEO in 2021+ first appeared on Webdesigner Depot.

Building The SSG I’ve Always Wanted: An 11ty, Vite And JAM Sandwich

Original Source: https://smashingmagazine.com/2021/10/building-ssg-11ty-vite-jam-sandwich/

I don’t know about you, but I’ve been overwhelmed by all the web development tools we have these days. Whether you like Markdown, plain HTML, React, Vue, Svelte, Pug templates, Handlebars, Vibranium — you can probably mix it up with some CMS data and get a nice static site cocktail.

I’m not going to tell you which UI development tools to reach for because they’re all great — depending on the needs of your project. This post is about finding the perfect static site generator for any occasion; something that lets us use JS-less templates like markdown to start, and bring in “islands” of component-driven interactivity as needed.

I’m distilling a year’s worth of learnings into a single post here. Not only are we gonna talk code (aka duct-taping 11ty and Vite together), but we’re also going to explore why this approach is so universal to Jamstackian problems. We’ll touch on:

Two approaches to static site generation, and why we should bridge the gap;
Where templating languages like Pug and Nunjucks still prove useful;
When component frameworks like React or Svelte should come into play;
How the new, hot-reloading world of Vite helps us bring JS interactivity to our HTML with almost zero configs;
How this complements 11ty’s data cascade, bringing CMS data to any component framework or HTML template you could want.

So without further ado, here’s my tale of terrible build scripts, bundler breakthroughs, and spaghetti-code-duct-tape that (eventually) gave me the SSG I always wanted: an 11ty, Vite and Jam sandwich called Slinkity!

A Great Divide In Static Site Generation

Before diving in, I want to discuss what I’ll call two “camps” in static site generation.

In the first camp, we have the “simple” static site generator. These tools don’t bring JavaScript bundles, single-page apps, and any other buzzwords we’ve come to expect. They just nail the Jamstack fundamentals: pull in data from whichever JSON blob of CMS you prefer, and slide that data into plain HTML templates + CSS. Tools like Jekyll, Hugo, and 11ty dominate this camp, letting you turn a directory of markdown and liquid files into a fully-functional website. Key benefits:

Shallow learning curve
If you know HTML, you’re good to go!
Fast build times
We’re not processing anything complex, so each route builds in a snap.
Instant time to interactive
There’s no (or very little) JavaScript to parse on the client.

Now in the second camp, we have the “dynamic” static site generator. These introduce component frameworks like React, Vue, and Svelte to bring interactivity to your Jamstack. These fulfill the same core promise of combining CMS data with your site’s routes at build time. Key benefits:

Built for interactivity
Need an animated image carousel? Multi-step form? Just add a componentized nugget of HTML, CSS, and JS.
State management
Something like React Context or Svelte stores allows seamless data sharing between routes. For instance, the cart on your e-commerce site.

There are distinct pros to either approach. But what if you choose an SSG from the first camp like Jekyll, only to realize six months into your project that you need some component-y interactivity? Or you choose something like NextJS for those powerful components, only to struggle with the learning curve of React, or needless KB of JavaScript on a static blog post?

Few projects squarely fit into one camp or the other, in my opinion. They exist on a spectrum, constantly favoring new feature sets as a project’s needs evolve. So how do we find a solution that lets us start with the simple tools of the first camp, and gradually add features from the second when we need them?

Well, let’s walk through my learning journey for a bit.

Note: If you’re already sold on static templating with 11ty to build your static sites, feel free to hop down to the juicy code walkthrough.

Going From Components To Templates And Web APIs

Back in January 2020, I set out to do what just about every web developer does each year: rebuild my personal site. But this time was gonna be different. I challenged myself to build a site with my hands tied behind my back, no frameworks or build pipelines allowed!

This was no simple task as a React devotee. But with my head held high, I set out to build my own build pipeline from absolute ground zero. There’s a lot of poorly-written code I could share from v1 of my personal site… but I’ll let you click this README if you’re so brave. Instead, I want to focus on the higher-level takeaways I learned starving myself of my JS guilty pleasures.

Templates Go A Lot Further Than You Might Think

I came at this project a recovering JavaScript junky. There are a few static-site-related needs I loved using component-based frameworks to fill:

We want to break down my site into reusable UI components that can accept JS objects as parameters (aka “props”).
We need to fetch some information at build time to slap into a production site.
We need to generate a bunch of URL routes from either a directory of files or a fat JSON object of content.

List taken from this post on my personal blog.

But you may have noticed… none of these really need clientside JavaScript. Component frameworks like React are mainly built to handle state management concerns, like the Facebook web app inspiring React in the first place. If you’re just breaking down your site into bite-sized components or design system elements, templates like Pug work pretty well too!

Take this navigation bar for instance. In Pug, we can define a “mixin” that receives data as props:

// nav-mixins.pug
mixin NavBar(links)
  // pug's version of a for loop
  each link in links
    a(href=link.href) #{link.text}

Then, we can apply that mixin anywhere on our site.

// index.pug
// kinda like an ESM "import"
include nav-mixins.pug
html
  body
    +NavBar(navLinksPassedByJS)
    main
      h1 Welcome to my pug playground

If we “render” this file with some data, we’ll get a beautiful index.html to serve up to our users.

const html = pug.render('/index.pug', { navLinksPassedByJS: [
  { href: '/', text: 'Home' },
  { href: '/adopt', text: 'Adopt a Pug' }
] })
// use the NodeJS filesystem helpers to write a file to our build
await writeFile('build/index.html', html)

Sure, this doesn’t give niceties like scoped CSS for your mixins, or stateful JavaScript where you want it. But it has some very powerful benefits over something like React:

We don’t need fancy bundlers we don’t understand.
We just wrote that pug.render call by hand, and we already have the first route of a site ready-to-deploy.
We don’t ship any JavaScript to the end-user.
Using React often means sending a big ole runtime for people’s browsers to run. By calling a function like pug.render at build time, we keep all the JS on our side while sending a clean .html file at the end.

This is why I think templates are a great “base” for static sites. Still, being able to reach for component frameworks where we really benefit from them would be nice. More on that later.

You Don’t Need A Framework To Build Single Page Apps

While I was at it, I also wanted some sexy page transitions on my site. But how do we pull off something like this without a framework?

Crossfade with vertical wipe transition.

Well, we can’t do this if every page is its own .html file. The whole browser refreshes when we jump from one HTML file to the other, so we can’t have that nice cross-fade effect (since we’d briefly show both pages on top of each other).

We need a way to “fetch” the HTML and CSS for wherever we’re navigating to, and animate it into view using JavaScript. This sounds like a job for single-page apps!
I used a simple browser API medley for this:

Intercept all your link clicks using an event listener.
fetch API: Fetch all the resources for whatever page you want to visit, and grab the bit I want to animate into view: the content outside the navbar (which I want to remain stationary during the animation).
web animations API: Animate the new content into view as a keyframe.
history API: Change the route displayed in your browser’s URL bar using window.history.pushState({}, '', 'new-route'). Otherwise, it looks like you never left the previous page!
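
Stitched together, that medley might look something like the sketch below. To be clear, this is my own illustration, not the original site’s code: isInternalLink is a made-up helper, and the main selector and animation timing are assumptions.

```javascript
// Pure helper: does this href point at our own site? (easily unit-testable)
function isInternalLink(href, origin) {
  try {
    return new URL(href, origin).origin === origin;
  } catch {
    return false;
  }
}

// Browser-only wiring, guarded so the file also loads under Node.
if (typeof document !== 'undefined') {
  document.addEventListener('click', async (event) => {
    const link = event.target.closest && event.target.closest('a');
    if (!link || !isInternalLink(link.href, location.origin)) return;
    event.preventDefault();

    // 1. Fetch the destination page's HTML.
    const response = await fetch(link.href);
    const html = await response.text();

    // 2. Pluck out the content we want to swap in (everything outside the navbar).
    const nextDocument = new DOMParser().parseFromString(html, 'text/html');
    const current = document.querySelector('main');
    const incoming = nextDocument.querySelector('main');

    // 3. Animate the old content out with the Web Animations API...
    await current.animate([{ opacity: 1 }, { opacity: 0 }], { duration: 200 }).finished;
    current.replaceWith(incoming);

    // 4. ...and update the URL bar so it looks like a real navigation.
    window.history.pushState({}, '', link.href);
  });
}
```

The pure helper keeps the "should we intercept this click?" decision separate from the DOM work, which makes the interception logic trivial to test.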

For clarity, here’s a visual illustration of that single page app concept using a simple find-and-replace:

Step-by-step clientside routing process: 1. Medium rare hamburger is returned, 2. We request a well done burger using the fetch API, 3. We massage the response, 4. We pluck out the ‘patty’ element and apply it to our current page.

You can visit the source code from my personal site as well!

Sure, some pairing of React et al and your animation library of choice can do this. But for a use case as simple as a fade transition… web APIs are pretty dang powerful on their own. And if you want more robust page transitions on static templates like Pug or plain HTML, libraries like Swup will serve you well.

What 11ty Brought To The Table

I was feeling pretty good about my little SSG at this point. Sure it couldn’t fetch any CMS data at build-time, and didn’t support different layouts by page or by directory, and didn’t optimize my images, and didn’t have incremental builds.

Okay, I might need some help.

Given all my learnings from v1, I thought I earned my right to drop the “no third-party build pipelines” rule and reach for existing tools. Turns out, 11ty has a treasure trove of features I need!

Data fetching at build time using .11tydata.js files;
Global data available to all my templates from a _data folder;
Hot reloading during development using browsersync;
Support for fancy HTML transforms;
…and countless other goodies.

If you’ve tried out bare-bones SSGs like Jekyll or Hugo, you should have a pretty good idea of how 11ty works. Only difference? 11ty uses JavaScript through-and-through.

11ty supports basically every template library out there, so it was happy to render all my Pug pages to .html routes. Its layout chaining option helped with my faux-single-page-app setup too. I just needed a single script for all my routes, and a “global” layout to import that script:

// _includes/base-layout.html
<html>
  <body>
    <!-- load every page's content between some body tags -->
    {{ content }}
    <!-- and apply the script tag just below this -->
    <script src="main.js"></script>
  </body>
</html>

// random-blog-post.pug
---
layout: base-layout
---

article
  h2 Welcome to my blog
  p Have you heard the story of Darth Plagueis the Wise?

As long as that main.js does all that link intercepting we explored, we have page transitions!

Oh, And The Data Cascade

So 11ty helped clean up all my spaghetti code from v1. But it brought another important piece: a clean API to load data into my layouts. This is the bread and butter of the Jamstack approach. Instead of fetching data in the browser with JavaScript + DOM manipulation, you can:

Fetch data at build-time using Node. This could be a call to some external API, a local JSON or YAML import, or even the content of other routes on your site (imagine updating a table-of-contents whenever new routes are added).
Slot that data into your routes. Recall that .render function we wrote earlier:

const html = pug.render('/index.pug', { navLinksPassedByJS: [
  { href: '/', text: 'Home' },
  { href: '/adopt', text: 'Adopt a Pug' }
] })

…but instead of calling pug.render with our data every time, we let 11ty do this behind-the-scenes.

Sure, I didn’t have a lot of data for my personal site. But it felt great to whip up a .yaml file for all my personal projects:

# _data/works.yaml
- title: Bits of Good Homepage
  hash: bog-homepage
  links:
    - href: https://bitsofgood.org
      text: Explore the live site
    - href: https://github.com/GTBitsOfGood/bog-web
      text: Scour the Svelt-ified codebase
  timeframe: May 2019 – present
  tags:
    - JAMstack
    - SvelteJS
- title: Dolphin Audio Visualizer

And access that data across any template:

// home.pug
.project-carousel
  each work in works
    h3 #{work.title}
    p #{work.timeframe}
    each tag in work.tags

Coming from the world of “clientside rendering” with create-react-app, this was a pretty big revelation. No more sending API keys or big JSON blobs to the browser.

I also added some goodies for JavaScript fetching and animation improvements over version 1 of my site. If you’re curious, here’s where my README stood at this point.

I Was Happy At This Point But Something Was Missing

I went surprisingly far by abandoning JS-based components and embracing templates (with animated page transitions to boot). But I know this won’t satisfy my needs forever. Remember that great divide I kicked us off with? Well, there’s clearly still that ravine between my build setup (firmly in camp #1) and the haven of JS-ified interactivity (the Next, SvelteKit, and more of camp #2). Say I want to add:

a pop-up modal with an open/close toggle,
a component-based design system like Material UI, complete with scoped styling,
a complex multi-step form, maybe driven by a state machine.

If you’re a plain-JS-purist, you probably have framework-less answers to all those use cases. But there’s a reason jQuery isn’t the norm anymore! There’s something appealing about creating discrete, easy-to-read components of HTML, scoped styles, and pieces of JavaScript “state” variables. React, Vue, Svelte, etc. offer so many niceties for debugging and testing that straight DOM manipulation can’t quite match.

So here’s my million dollar question: can we use straight HTML templates to start, and gradually add React / Vue / Svelte components where we want them?

The answer… is yes. Let’s try it.

11ty + Vite: A Match Made In Heaven ❤️

Here’s the dream that I’m imagining here. Wherever I want to insert something interactive, I want to leave a little flag in my template to “put X React component here.” This could be the shortcode syntax that 11ty supports:

# Super interesting programming tutorial

Writing paragraphs has been fun, but that’s no way to learn. Time for an interactive code example!

{% react './components/FancyLiveDemo.jsx' %}

But remember the one piece 11ty (purposely) avoids: a way to bundle all your JavaScript. Coming from the OG guild of bundling, your brain probably jumps to building Webpack, Rollup, or Babel processes here. Build a big ole entry point file, and output some beautiful optimized code, right?

Well yes, but this can get pretty involved. If we’re using React components, for instance, we’ll probably need some loaders for JSX, a fancy Babel process to transform everything, an interpreter for SASS and CSS module imports, something to help with live reloading, and so on.

If only there were a tool that could just see our .jsx files and know exactly what to do with them.

Enter: Vite

Vite’s been the talk of the town as of late. It’s meant to be the all-in-one tool for building just about anything in JavaScript. Here’s an example for you to try at home. Let’s make an empty directory somewhere on our machine and install some dependencies:

npm init -y # Make a new package.json with defaults set
npm i vite react react-dom # Grab Vite + some dependencies to use React

Now, we can make an index.html file to serve as our app’s “entry point.” We’ll keep it pretty simple:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
  </head>
  <body>
    <h1>Hello Vite! (wait is it pronounced "veet" or "vight"…)</h1>
    <div id="root"></div>
  </body>
</html>

The only interesting bit is that <div id="root"> in the middle. This will be the root of our React component in a moment!

If you want, you can fire up the Vite server to see our plain HTML file in your browser. Just run vite (or npx vite if the command didn’t get configured in your terminal), and you’ll see this helpful output:

vite vX.X.X dev server running at:

> Local: http://localhost:3000/
> Network: use `--host` to expose

ready in Xms.

Much like Browsersync or other popular dev servers, the name of each .html file corresponds to a route on our server. So if we renamed index.html to about.html, we would visit http://localhost:3000/about/ (yes, you’ll need a trailing slash!)

Now let’s do something interesting. Alongside that index.html file, add a basic React component of some sort. We’ll use React’s useState here to demonstrate interactivity:

// TimesWeMispronouncedVite.jsx
import React from 'react'

export default function TimesWeMispronouncedVite() {
  const [count, setCount] = React.useState(0)
  return (
    <div>
      <p>I've said Vite wrong {count} times today</p>
      <button onClick={() => setCount(count + 1)}>Add one</button>
    </div>
  )
}

Now, let’s load that component onto our page. This is all we have to add to our index.html:

<!DOCTYPE html>

<body>
  <h1>Hello Vite! (wait is it pronounced "veet" or "vight"…)</h1>
  <div id="root"></div>
  <!-- Don't forget type="module"! This lets us use ES import syntax in the browser -->
  <script type="module">
    // path to our component. Note we still use .jsx here!
    import Component from './TimesWeMispronouncedVite.jsx';
    import React from 'react';
    import ReactDOM from 'react-dom';
    const componentRoot = document.getElementById('root');
    ReactDOM.render(React.createElement(Component), componentRoot);
  </script>
</body>
</html>

Yep, that’s it. No need to transform our .jsx file to a browser-ready .js file ourselves! Wherever Vite sees a .jsx import, it’ll auto-convert that file to something browsers can understand. There isn’t even a dist or build folder when working in development; Vite processes everything on the fly — complete with hot module reloading every time we save our changes.

Okay, so we have an incredibly capable build tool. How can we bring this to our 11ty templates?

Running Vite Alongside 11ty

Before we jump into the good stuff, let’s discuss running 11ty and Vite side-by-side. Go ahead and install 11ty as a dev dependency into the same project directory from last section:

npm i -D @11ty/eleventy # yes, it really is 11ty twice

Now let’s do a little pre-flight check to see if 11ty’s working. To avoid any confusion, I’d suggest you:

Delete that index.html file from earlier;
Move that TimesWeMispronouncedVite.jsx inside a new directory. Say, components/;
Create a src folder for our website to live in;
Add a template to that src directory for 11ty to process.

For example, a blog-post.md file with the following contents:

# Hello world! It's markdown here

Your project structure should look something like this:

src/
  blog-post.md
components/
  TimesWeMispronouncedVite.jsx

Now, run 11ty from your terminal like so:

npx eleventy --input=src

If all goes well, you should see a build output like this:

_site/
  blog-post/
    index.html

Where _site is our default output directory, and blog-post/index.html is our markdown file beautifully converted for browsing.

Normally, we’d run npx eleventy –serve to spin up a dev server and visit that /blog-post page. But we’re using Vite for our dev server now! The goal here is to:

Have eleventy build our markdown, Pug, nunjucks, and more to the _site directory.
Point Vite at that same _site directory so it can process the React components, fancy style imports, and other things that 11ty didn’t pick up.

So it’s a two-step build process, with 11ty handing off to Vite. Here’s the CLI command you’ll need to start 11ty and Vite in “watch” mode simultaneously:

(npx eleventy --input=src --watch) & npx vite _site

You can also run these commands in two separate terminals for easier debugging.
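
To avoid retyping that compound command, both modes can live in package.json scripts. This is a sketch with my own script names; `vite build` accepting a root directory as a positional argument is standard Vite CLI behavior:

```json
{
  "scripts": {
    "dev": "(npx eleventy --input=src --watch) & npx vite _site",
    "build": "npx eleventy --input=src && npx vite build _site"
  }
}
```

Then `npm run dev` spins up both watchers, and `npm run build` produces the production output.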

With any luck, you should be able to visit http://localhost:3000/blog-post/ (again, don’t forget the trailing slash!) to see that processed Markdown file.

Partial Hydration With Shortcodes

Let’s do a brief rundown on shortcodes. Time to revisit that syntax from earlier:

{% react '/components/TimesWeMispronouncedVite.jsx' %}

For those unfamiliar with shortcodes: they’re about the same as a function call, where the function returns a string of HTML to slide into your page. The “anatomy” of our shortcode is:

{% … %}
Wrapper denoting the start and end of the shortcode.
react
The name of our shortcode function we’ll configure in a moment.
'/components/TimesWeMispronouncedVite.jsx'
The first (and only) argument to our shortcode function. You can have as many arguments as you’d like.

Let’s wire up our first shortcode! Add a .eleventy.js file to the base of your project, and add this config entry for our react shortcode:

// .eleventy.js, at the base of the project
module.exports = function (eleventyConfig) {
  eleventyConfig.addShortcode('react', function (componentPath) {
    // return any valid HTML to insert
    return `<div id="root">This is where we'll import ${componentPath}</div>`
  })

  return {
    dir: {
      // so we don't have to write `--input=src` in our terminal every time!
      input: 'src',
    }
  }
}

Now, let’s spice up our blog-post.md with our new shortcode. Paste this content into our markdown file:

# Super interesting programming tutorial

Writing paragraphs has been fun, but that’s no way to learn. Time for an interactive code example!

{% react '/components/TimesWeMispronouncedVite.jsx' %}

And if you run a quick npx eleventy, you should see this output in your _site directory under /blog-post/index.html:

<h1>Super interesting programming tutorial</h1>

<p>Writing paragraphs has been fun, but that’s no way to learn. Time for an interactive code example!</p>

<div id="root">This is where we'll import /components/TimesWeMispronouncedVite.jsx</div>

Writing Our Component Shortcode

Now let’s do something useful with that shortcode. Remember that script tag we wrote while trying out Vite? Well, we can do the same thing in our shortcode! This time we’ll use the componentPath argument to generate the import, but keep the rest pretty much the same:

// .eleventy.js
module.exports = function (eleventyConfig) {
  let idCounter = 0;
  // copy all our /components to the output directory
  // so Vite can find them. Very important step!
  eleventyConfig.addPassthroughCopy('components')

  eleventyConfig.addShortcode('react', function (componentPath) {
    // we'll use idCounter to generate unique IDs for each "root" div
    // this lets us use multiple components / shortcodes on the same page
    idCounter += 1;
    const componentRootId = `component-root-${idCounter}`
    return `
      <div id="${componentRootId}"></div>
      <script type="module">
        // use JSON.stringify to
        // 1) wrap our componentPath in quotes
        // 2) strip any invalid characters. Probably a non-issue, but good to be cautious!
        import Component from ${JSON.stringify(componentPath)};
        import React from 'react';
        import ReactDOM from 'react-dom';
        const componentRoot = document.getElementById('${componentRootId}');
        ReactDOM.render(React.createElement(Component), componentRoot);
      </script>
    `
  })

  eleventyConfig.on('beforeBuild', function () {
    // reset the counter for each new build
    // otherwise, it'll count up higher and higher on every live reload
    idCounter = 0;
  })

  return {
    dir: {
      input: 'src',
    }
  }
}

Now, a call to our shortcode (ex. {% react '/components/TimesWeMispronouncedVite.jsx' %}) should output something like this:

<div id="component-root-1"></div>
<script type="module">
  import Component from "/components/TimesWeMispronouncedVite.jsx";
  import React from 'react';
  import ReactDOM from 'react-dom';
  const componentRoot = document.getElementById('component-root-1');
  ReactDOM.render(React.createElement(Component), componentRoot);
</script>
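That JSON.stringify trick in the shortcode is worth a quick look on its own: it both wraps the path in quotes and escapes any characters that would otherwise break out of the generated import statement. A quick demonstration:

```javascript
// JSON.stringify quotes the string and escapes anything that could
// terminate it early inside the generated <script> block.
const safePath = JSON.stringify('/components/TimesWeMispronouncedVite.jsx');
console.log(`import Component from ${safePath};`);
// import Component from "/components/TimesWeMispronouncedVite.jsx";

// A path containing a stray quote gets escaped instead of breaking the import:
console.log(JSON.stringify('/components/"evil".jsx'));
// "/components/\"evil\".jsx"
```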

Visiting our dev server (run npx eleventy --watch & vite _site), we should find a beautifully clickable counter element. ✨

Buzzword Alert — Partial Hydration And Islands Architecture

We just demonstrated “islands architecture” in its simplest form. This is the idea that our interactive component trees don’t have to consume the entire website. Instead, we can spin up mini-trees, or “islands,” throughout our app depending on where we actually need that interactivity. Have a basic landing page of links without any state to manage? Great! No need for interactive components. But do you have a multi-step form that could benefit from a fancy React library? No problem. Use techniques like that react shortcode to spin up a Form.jsx island.

This goes hand-in-hand with the idea of “partial hydration.” You’ve likely heard the term “hydration” if you work with component-y SSGs like NextJS or Gatsby. In short, it’s a way to:

Render your components to static HTML first.
This gives the user something to view when they initially visit your website.
“Hydrate” this HTML with interactivity.
This is where we hook up our state hooks and renderers to, well, make button clicks actually trigger something.
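To make those two steps concrete, here's a framework-free sketch (hypothetical helper names, not any library's API): step one runs at build time and emits static markup; step two runs in the browser and attaches behavior to the markup that's already there.

```javascript
// Step 1 (build time): render state to static HTML — the user can
// see this before any JavaScript has loaded or parsed.
function renderToStaticHtml(count) {
  return `<button data-count="${count}">Clicked ${count} times</button>`;
}

// Step 2 (browser): "hydrate" — reuse the existing element and wire up
// interactivity, instead of rebuilding the DOM from scratch.
function hydrate(buttonEl) {
  let count = Number(buttonEl.dataset.count);
  buttonEl.addEventListener('click', () => {
    count += 1;
    buttonEl.textContent = `Clicked ${count} times`;
  });
}
```

Frameworks do the same dance with real component trees; React, for instance, pairs a server render with ReactDOM.hydrate on the client.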

This 1-2 punch makes JS-driven frameworks viable for static sites. As long as the user has something to view before your JavaScript is done parsing, you’ll get a decent score on those Lighthouse metrics.

Well, until you don’t. It can be expensive to “hydrate” an entire website since you’ll need a JavaScript bundle ready to process every last DOM element. But our scrappy shortcode technique doesn’t cover the entire page! Instead, we “partially” hydrate the content that’s there, inserting components only where necessary.

Don’t Worry, There’s A Plugin For All This — Slinkity

Let’s recap what we discovered here:

Vite is an incredibly capable bundler that can process most file types (jsx, vue, and svelte to name a few) without extra config.
Shortcodes are an easy way to insert chunks of HTML into our templates, component-style.
We can use shortcodes to render dynamic, interactive JS bundles wherever we want using partial hydration.

So what about optimized production builds? Properly loading scoped styles? Heck, using .jsx to create entire pages? Well, I’ve bundled all of this (and a whole lot more!) into a project called Slinkity. I’m excited to see the warm community reception to the project, and I’d love for you, dear reader, to give it a spin yourself!

Try the quick start guide

Astro’s Pretty Great Too

Readers with their eyes on cutting-edge tech probably thought about Astro at least once by now. And I can’t blame you! It’s built with a pretty similar goal in mind: start with plain HTML, and insert stateful components wherever you need them. Heck, they’ll even let you start writing React components inside Vue or Svelte components inside HTML template files! It’s like MDX Xtreme edition.

There’s one pretty major cost to their approach though: you need to rewrite your app from scratch. This means a new template format based on JSX (which you might not be comfortable with), a whole new data pipeline that’s missing a couple of niceties right now, and general bugginess as they work out the kinks.

But spinning up an 11ty + Vite cocktail with a tool like Slinkity? Well, if you already have an 11ty site, Vite should bolt into place without any rewrites, and shortcodes should cover many of the same use cases as .astro files. I’ll admit it’s far from perfect right now. But hey, it’s been useful so far, and I think it’s a pretty strong alternative if you want to avoid site-wide rewrites!

Wrapping Up

This Slinkity experiment has served my needs pretty well so far (and a few of y’all’s too!). Feel free to use whatever stack works for your JAM. I’m just excited to share the results of my year of build tool debauchery, and I’m so pumped to see how we can bridge the great Jamstack divide.

Further Reading

Want to dive deeper into partial hydration, or ESM, or SSGs in general? Check these out:

Islands Architecture
This blog post from Jason Miller really kicked off a discussion of “islands” and “partial hydration” in web development. It’s chock-full of useful diagrams and the philosophy behind the idea.
Simplify your static with a custom-made static site generator
Another SmashingMag article that walks you through crafting Node-based website builders from scratch. It was a huge inspiration to me!
How ES Modules have redefined web development
A personal post on how ES Modules have changed the web development game. This dives a little further into the “then and now” of import syntax on the web.
An introduction to web components
An excellent walkthrough on what web components are, how the shadow DOM works, and where web components prove useful. Used this guide to apply custom components to my own framework!