Popular Design News of the Week: August 30, 2021 – September 5, 2021

Original Source: https://www.webdesignerdepot.com/2021/09/popular-design-news-of-the-week-august-30-2021-september-5-2021/

Every day, design fans submit incredible industry stories to our sister site, Webdesigner News. Our colleagues sift through them, selecting the very best stories from the design, UX, tech, and development worlds and posting them live on the site.

The best way to keep up with the most important stories for web professionals is to subscribe to Webdesigner News or check out the site regularly. However, in case you missed a day this week, here’s a handy compilation of the top curated stories from the last seven days. Enjoy!

Next.js and Drupal Go Together Like PB & Jelly

Couleur.io – Harmonizing Color Palettes for Your Web Projects

11 Open-Source Static Site Generators You Can Use to Build Your Website

Buttons Generator – 100+ Buttons You Can Use In Your Project

3 Essential Design Trends, September 2021

12 CSS Box Shadow Examples

The Fixed Background Attachment Hack

The Internet ‘Died’ Five Years Ago

Media Queries in Responsive Design: A Complete Guide (2021)

The 7 Core Design Principles



The post Popular Design News of the Week: August 30, 2021 – September 5, 2021 first appeared on Webdesigner Depot.

HTTP/3: Practical Deployment Options (Part 3)

Original Source: https://smashingmagazine.com/2021/09/http3-practical-deployment-options-part3/

Hello, and welcome to the final installment of this three-part series on the new HTTP/3 and QUIC protocols! If after the previous two parts — HTTP/3 history and core concepts and HTTP/3 performance features — you’re convinced that starting to use the new protocols is a good idea (and you should be!), then this final piece includes all you need to know to get started!

First, we’ll discuss which changes you need to make to your pages and resources to optimally use the new protocols (that’s the easy part). Next, we’ll look at how to set up servers and clients (that’s the hard part unless you’re using a content delivery network (CDN)). Finally, we’ll see which tools you can use to evaluate the performance impact of the new protocols (that’s the almost impossible part, at least for now).

This series is divided into three parts:

HTTP/3 history and core concepts
This is targeted at people new to HTTP/3 and protocols in general, and it mainly discusses the basics.
HTTP/3 performance features
This is more in-depth and technical. People who already know the basics can start here.
Practical HTTP/3 deployment options (current article)
This explains the challenges involved in deploying and testing HTTP/3 yourself. It details how and if you should change your web pages and resources as well.

Changes To Pages And Resources

Let’s begin with some good news: If you’re already on HTTP/2, you probably won’t have to change anything about your pages or resources when moving to HTTP/3! This is because, as we’ve explained in part 1 and part 2, HTTP/3 is really more like HTTP/2-over-QUIC, and the high-level features of the two versions have stayed the same. As such, any changes or optimizations made for HTTP/2 will still work for HTTP/3 and vice versa.

However, if you’re still on HTTP/1.1, or your move to HTTP/2 is long behind you, or you never actually tweaked things for HTTP/2, then you might wonder what those changes were and why they were needed. You would, however, be hard-pressed even today to find a good article that details the nuanced best practices. This is because, as I stated in the introduction to part 1, much of the early HTTP/2 content was overly optimistic about how well it would work in practice, and some of it, quite frankly, had major mistakes and bad advice. Sadly, much of this misinformation persists today. That’s one of my main motivations for writing this series on HTTP/3: to help prevent that from happening again.

The best all-in-one nuanced source for HTTP/2 I can recommend at this time is the book HTTP/2 in Action by Barry Pollard. However, since that’s a paid resource and I don’t want you to be left guessing here, I’ve listed a few of the main points below, along with how they relate to HTTP/3:

1. Single Connection

The biggest difference between HTTP/1.1 and HTTP/2 was the switch from anywhere between 6 and 30 parallel TCP connections to a single underlying TCP connection. We discussed a bit in part 2 how a single connection can still be as fast as multiple connections, because of how congestion control can cause more or earlier packet loss with more connections (which undoes the benefits of their aggregated faster start). HTTP/3 continues this approach, but “just” switches from one TCP to one QUIC connection. This difference by itself doesn’t do all that much (it mainly reduces the overhead on the server side), but it leads to most of the following points.

2. Server Sharding and Connection Coalescing

The switch to the single connection set-up was quite difficult in practice because many pages were sharded across different hostnames and even servers (like img1.example.com and img2.example.com). This was because browsers only opened up to six connections for each individual hostname, so having multiple allowed for more connections! Without changes to this HTTP/1.1 set-up, HTTP/2 would still open up multiple connections, reducing how well other features, such as prioritization (see below), could actually work.

As such, the original recommendation was to undo server sharding and to consolidate resources on a single server as much as possible. HTTP/2 even provided a feature to make the transition from an HTTP/1.1 set-up easier, called connection coalescing. Roughly speaking, if two hostnames resolve to the same server IP (using DNS) and use a similar TLS certificate, then the browser can reuse a single connection even across the two hostnames.
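If you want to check whether two sharded hostnames are even candidates for coalescing, the DNS half of that requirement is easy to verify from the command line (using the placeholder hostnames from above):

dig +short img1.example.com
dig +short img2.example.com
# coalescing is only possible if these return overlapping IP addresses
# and the TLS certificate presented covers both hostnames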

In practice, connection coalescing can be tricky to get right, e.g. due to several subtle security issues involving CORS. Even if you do set it up properly, you could still easily end up with two separate connections. The thing is, that’s not always bad. First, due to poorly implemented prioritization and multiplexing (see below), the single connection could easily be slower than using two or more. Secondly, using too many connections could cause early packet loss due to competing congestion controllers. Using just a few (but still more than one), however, could nicely balance congestion growth with better performance, especially on high-speed networks. For these reasons, I believe that a little bit of sharding is still a good idea (say, two to four connections), even with HTTP/2. In fact, I think most modern HTTP/2 set-ups perform as well as they do because they still have a few extra connections or third-party loads in their critical path.

3. Resource Bundling and Inlining

In HTTP/1.1, you could have only a single active resource per connection, leading to HTTP-level head-of-line (HoL) blocking. Because the number of connections was capped at a measly 6 to 30, resource bundling (where smaller subresources are combined into a single larger resource) was a long-time best practice. We still see this today in bundlers such as Webpack. Similarly, resources were often inlined in other resources (for example, critical CSS was inlined in the HTML).

With HTTP/2, however, the single connection multiplexes resources, so you can have many more outstanding requests for files (put differently, a single request no longer takes up one of your precious few connections). This was originally interpreted as, “We no longer need to bundle or inline our resources for HTTP/2”. This approach was touted to be better for fine-grained caching because each subresource could be cached individually and the full bundle didn’t need to be redownloaded if one of them changed. This is true, but only to a relatively limited extent.

For example, you could reduce compression efficiency, because that works better with more data. Additionally, each extra request or file has an inherent overhead because it needs to be handled by the browser and server. These costs can add up for, say, hundreds of small files compared to a few large ones. In our own early tests, I found seriously diminishing returns at about 40 files. Though those numbers are probably a bit higher now, file requests are still not as cheap in HTTP/2 as originally predicted. Finally, not inlining resources has an added latency cost because the file needs to be requested. This, combined with prioritization and server push problems (see below), means that even today you’re still better off inlining some of your critical CSS. Maybe someday the Resource Bundles proposal will help with this, but not yet.

All of this is, of course, still true for HTTP/3 as well. Still, I’ve read people claim that many small files would be better over QUIC because more concurrently active independent streams mean more profits from the HoL blocking removal (as we discussed in part 2). I think there might be some truth to this, but, as we also saw in part 2, this is a highly complex issue with a lot of moving parameters. I don’t think the benefits would outweigh the other costs discussed, but more research is needed. (An outrageous thought would be to have each file be exactly sized to fit in a single QUIC packet, bypassing HoL blocking completely. I will accept royalties from any startup that implements a resource bundler that does this. ;))

4. Prioritization

To be able to download multiple files on a single connection, you need to somehow multiplex them. As discussed in part 2, in HTTP/2, this multiplexing is steered using its prioritization system. This is why it’s important to have as many resources as possible requested on the same connection as well — to be able to properly prioritize them among each other! As we also saw, however, this system was very complex, causing it to often be badly used and implemented in practice (see the image below). This, in turn, has meant that some other recommendations for HTTP/2 — such as reduced bundling, because requests are cheap, and reduced server sharding, to make optimal use of the single connection (see above) — have turned out to underperform in practice.

Sadly, this is something that you, as an average web developer, can’t do much about, because it’s mainly a problem in the browsers and servers themselves. You can, however, try to mitigate the issue by not using too many individual files (which will lower the chances for competing priorities) and by still using (limited) sharding. Another option is to use various priority-influencing techniques, such as lazy loading, JavaScript async and defer, and resource hints such as preload. Internally, these mainly change the priorities of the resources so that they get sent earlier or later. However, these mechanisms can (and do) suffer from bugs. Additionally, don’t expect to slap a preload on a bunch of resources and make things faster: If everything is suddenly a high priority, then nothing is! It’s even very easy to delay actually critical resources by using things like preload.
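To make that concrete, the priority-influencing techniques mentioned above are plain HTML annotations; here is a small illustrative sketch (the file names are hypothetical):

<!-- request this earlier and at higher priority than it would get by default -->
<link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
<!-- keep these scripts off the critical path (browsers typically also lower their fetch priority) -->
<script src="/js/widget.js" defer></script>
<script src="/js/analytics.js" async></script>
<!-- don't fetch below-the-fold images until they are (almost) needed -->
<img src="/img/below-the-fold.jpg" loading="lazy" alt="Illustration">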

As also explained in part 2, HTTP/3 fundamentally changes the internals of this prioritization system. We hope this means that there will be many fewer bugs and problems with its practical deployment, so at least some of this should be solved. We can’t be sure yet, however, because few HTTP/3 servers and clients fully implement this system today. Nevertheless, the fundamental concepts of prioritization won’t change. You still won’t be able to use techniques such as preload without really understanding what happens internally, because it might still mis-prioritize your resources.

5. Server Push and First Flight

Server push allows a server to send response data without first waiting for a request from the client. Again, this sounds great in theory, and it could be used instead of inlining resources (see above). However, as discussed in part 2, push is very difficult to use correctly due to issues with congestion control, caching, prioritization, and buffering. Overall, it’s best not to use it for general web page loading unless you really know what you’re doing, and even then it would probably be a micro-optimization. I still believe it could have a place with (REST) APIs, though, where you can push subresources linked to in the (JSON) response on a warmed-up connection. This is true for both HTTP/2 and HTTP/3.

To generalize a bit, I feel that similar remarks could be made for TLS session resumption and 0-RTT, be it over TCP + TLS or via QUIC. As discussed in part 2, 0-RTT is similar to server push (as it’s typically used) in that it tries to accelerate the very first stages of a page load. However, that also means it is equally limited in what it can achieve at that time (even more so in QUIC, due to security concerns). As such, it is again more of a micro-optimization: you would probably need to fine-tune things at a low level to really benefit from it. And to think I was once very excited to try out combining server push with 0-RTT.

What Does It All Mean?

All the above comes down to a simple rule of thumb: Apply most of the typical HTTP/2 recommendations that you find online, but don’t take them to the extreme.

Here are some concrete points that mostly hold for both HTTP/2 and HTTP/3:

Shard resources over about one to three connections on the critical path (unless your users are mostly on low-bandwidth networks), using preconnect and dns-prefetch where needed (see the snippet after this list).
Bundle subresources logically per path or feature, or per change frequency. Five to ten JavaScript and five to ten CSS resources per page should be just fine. Inlining critical CSS can still be a good optimization.
Use complex features, such as preload, sparingly.
Use a server that properly supports HTTP/2 prioritization. For HTTP/2, I recommend H2O. Apache and NGINX are mostly OK (although could do better), while Node.js is to be avoided for HTTP/2. For HTTP/3, things are less clear at this time (see below).
Make sure that TLS 1.3 is enabled on your HTTP/2 web server.
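As referenced in the first recommendation above, the connection hints for the few extra origins you do keep are one-liners (the hostname is hypothetical):

<!-- warm up the connection (DNS + TCP/QUIC + TLS) for an origin you will need soon -->
<link rel="preconnect" href="https://static.example.com" crossorigin>
<!-- cheaper fallback: resolve the hostname only -->
<link rel="dns-prefetch" href="https://static.example.com">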

As you can see, while far from simple, optimizing pages for HTTP/3 (and HTTP/2) is not rocket science. What will be more difficult, however, is correctly setting up HTTP/3 servers, clients, and tools.

Servers and Networks

As you probably understand by now, QUIC and HTTP/3 are quite complex protocols. Implementing them from scratch would involve reading (and understanding!) hundreds of pages spread over more than seven documents. Luckily, multiple companies have been working on open-source QUIC and HTTP/3 implementations for over five years now, so we have several mature and stable options to choose from.

Some of the most important and stable ones include the following:

Python: aioquic

Go: quic-go

Rust: quiche (Cloudflare), Quinn, Neqo (Mozilla)

C and C++: mvfst (Facebook), MsQuic (Microsoft), QUICHE (Google, https://quiche.googlesource.com/quiche/), ngtcp2, LSQUIC (LiteSpeed), picoquic, quicly (Fastly)

However, many (perhaps most) of these implementations mainly take care of the HTTP/3 and QUIC stuff; they are not really full-fledged web servers by themselves. When it comes to your typical servers (think NGINX, Apache, Node.js), things have been a bit slower, for several reasons. First, few of their developers were involved with HTTP/3 from the start, and now they have to play catch-up. Many bypass this by using one of the implementations listed above internally as libraries, but even that integration is difficult.

Secondly, many servers depend on third-party TLS libraries such as OpenSSL. This is, again, because TLS is very complex and has to be secure, so it’s best to reuse existing, verified work. However, while QUIC integrates with TLS 1.3, it uses it in ways much different from how TLS and TCP interact. This means that TLS libraries have to provide QUIC-specific APIs, which their developers have long been reluctant or slow to do. OpenSSL is the biggest problem here: it has postponed QUIC support, yet it is used by many servers. This problem got so bad that Akamai decided to start a QUIC-specific fork of OpenSSL, called quictls. While other options and workarounds exist, TLS 1.3 support for QUIC is still a blocker for many servers, and it is expected to remain so for some time.

A partial list of full web servers that you should be able to use out of the box, along with their current HTTP/3 support, follows:

Apache
Support is unclear at this time. Nothing has been announced. It likely also needs OpenSSL. (Note that there is an Apache Traffic Server implementation, though.)
NGINX
This is a custom implementation, which is relatively new and still highly experimental. It is expected to be merged into mainline NGINX by the end of 2021. Note that there is a patch to run Cloudflare’s quiche library on NGINX as well, which is probably more stable for now.
Node.js
This uses the ngtcp2 library internally. It is blocked by OpenSSL progress, although they plan to switch to the QUIC-TLS fork to get something working sooner.
IIS
Support is unclear at this time, and nothing has been announced. It will likely use the MsQuic library internally, though.
Hypercorn
This integrates aioquic, with experimental support.
Caddy
This uses quic-go, with full support.
H2O
This uses quicly, with full support.
Litespeed
This uses LSQUIC, with full support.

Note some important nuances:

Even “full support” means “as good as it gets at the moment”, not necessarily “production-ready”. For instance, many implementations don’t yet fully support connection migration, 0-RTT, server push, or HTTP/3 prioritization.
Other servers not listed, such as Tomcat, have (to my knowledge) made no announcement yet.
Of the web servers listed, only Litespeed, Cloudflare’s NGINX patch, and H2O were made by people intimately involved in QUIC and HTTP/3 standardization, so these are most likely to work best early on.

As you can see, the server landscape isn’t fully there yet, but there are certainly already options for setting up an HTTP/3 server. However, simply running the server is only the first step. Configuring it and the rest of your network is more difficult.

Network Configuration

As explained in part 1, QUIC runs on top of the UDP protocol to make it easier to deploy. This, however, mainly just means that most network devices can parse and understand UDP. Sadly, it does not mean that UDP is universally allowed. Because UDP is often used for attacks and is not critical to normal day-to-day work besides DNS, many (corporate) networks and firewalls block the protocol almost entirely. As such, UDP probably needs to be explicitly allowed to and from your HTTP/3 servers. QUIC can run on any UDP port, but expect port 443 (which is typically also used for HTTPS over TCP) to be the most common.
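As an illustration, explicitly allowing QUIC’s UDP traffic on a Linux host could look like the following (generic examples only; the right place and syntax depend entirely on your firewall setup):

# with ufw
ufw allow 443/udp
# or with plain iptables
iptables -A INPUT -p udp --dport 443 -j ACCEPT
iptables -A OUTPUT -p udp --sport 443 -j ACCEPT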

However, many network administrators will not want to just allow UDP wholesale. Instead, they will specifically want to allow QUIC over UDP. The problem there is that, as we’ve seen, QUIC is almost entirely encrypted. This includes QUIC-level metadata such as packet numbers, but also, for example, signals that indicate the closure of a connection. For TCP, firewalls actively track all of this metadata to check for expected behavior. (Did we see a full handshake before data-carrying packets? Do the packets follow expected patterns? How many open connections are there?) As we saw in part 1, this is exactly one of the reasons why TCP is no longer practically evolvable. However, due to QUIC’s encryption, firewalls can do much less of this connection-level tracking logic, and the few bits they can inspect are relatively complex.

As such, many firewall vendors currently recommend blocking QUIC until they can update their software. Even after that, though, many companies might not want to allow it, because firewall QUIC support will always be much less than the TCP features they’re used to.

This is all complicated even more by the connection migration feature. As we’ve seen, this feature allows for the connection to continue from a new IP address without having to perform a new handshake, by the use of connection IDs (CIDs). However, to the firewall, this will look as if a new connection is being used without first using a handshake, which might just as well be an attacker sending malicious traffic. Firewalls can’t just use the QUIC CIDs, because they also change over time to protect users’ privacy! As such, there will be some need for the servers to communicate with the firewall about which CIDs are expected, but none of these things exist yet.

There are similar concerns for load balancers for larger-scale set-ups. These machines distribute incoming connections over a large number of back-end servers. Traffic for one connection must, of course, always be routed to the same back-end server (the others wouldn’t know what to do with it!). For TCP, this could simply be done based on the 4-tuple, because that never changes. With QUIC connection migration, however, that’s no longer an option. Again, servers and load balancers will need to somehow agree on which CIDs to choose in order to allow deterministic routing. Unlike for firewall configuration, however, there is already a proposal to set this up (although this is far from widely implemented).

Finally, there are other, higher-level security considerations, mainly around 0-RTT and distributed denial-of-service (DDoS) attacks. As discussed in part 2, QUIC includes quite a few mitigations for these issues already, but ideally, they will also use extra lines of defense on the network. For example, proxy or edge servers might block certain 0-RTT requests from reaching the actual back ends to prevent replay attacks. Alternatively, to prevent reflection attacks or DDoS attacks that only send the first handshake packet and then stop replying (called a SYN flood in TCP), QUIC includes the retry feature. This allows the server to validate that it’s a well-behaved client, without having to keep any state in the meantime (the equivalent of TCP SYN cookies). This retry process best happens, of course, somewhere before the back-end server — for example, at the load balancer. Again, this requires additional configuration and communication to set up, though.

These are only the most prominent issues that network and system administrators will have with QUIC and HTTP/3. There are several more, some of which I’ve talked about. There are also two separate accompanying documents for the QUIC RFCs that discuss these issues and their possible (partial) mitigations.

What Does It All Mean?

HTTP/3 and QUIC are complex protocols that rely on a lot of internal machinery. Not all of that is ready for prime time just yet, although you already have some options to deploy the new protocols on your back ends. It will probably take a few months to even years for the most prominent servers and underlying libraries (such as OpenSSL) to get updated, however.

Even then, properly configuring the servers and other network intermediaries, so that the protocols can be used in a secure and optimal fashion, will be non-trivial in larger-scale set-ups. You will need a good development and operations team to correctly make this transition.

As such, especially in the early days, it is probably best to rely on a large hosting company or CDN to set up and configure the protocols for you. As discussed in part 2, that’s where QUIC is most likely to pay off anyway, and using a CDN is one of the key performance optimizations you can do. I would personally recommend using Cloudflare or Fastly because they have been intimately involved in the standardization process and will have the most advanced and well-tuned implementations available.

Clients and QUIC Discovery

So far, we have considered server-side and in-network support for the new protocols. However, several issues are also to be overcome on the client’s side.

Before getting to that, let’s start with some good news: Most of the popular browsers already have (experimental) HTTP/3 support! Specifically, at the time of writing, here is the status of support (see also caniuse.com):

Google Chrome (version 91+): Enabled by default.
Mozilla Firefox (version 89+): Enabled by default.
Microsoft Edge (version 90+): Enabled by default (uses Chromium internally).
Opera (version 77+): Enabled by default (uses Chromium internally).
Apple Safari (version 14): Behind a manual flag. Will be enabled by default in version 15, which is currently in technology preview.
Other Browsers: No signals yet that I’m aware of (although other browsers that use Chromium internally, such as Brave, could, in theory, also start enabling it).

Note some nuances:

Most browsers are rolling out gradually, whereby not all users will get HTTP/3 support enabled by default from the start. This is done to limit the risks that a single overlooked bug could affect many users or that server deployments become overloaded. As such, there is a small chance that, even in recent browser versions, you won’t get HTTP/3 by default and will have to manually enable it.
As with the servers, HTTP/3 support does not mean that all features have been implemented or are being used at this time. Particularly, 0-RTT, connection migration, server push, dynamic QPACK header compression, and HTTP/3 prioritization might still be missing, disabled, used sparingly, or poorly configured.
If you want to use client-side HTTP/3 outside of the browser (for example, in your native app), then you would have to integrate one of the libraries listed above or use cURL. Apple will soon bring native HTTP/3 and QUIC support to its built-in networking libraries on macOS and iOS, and Microsoft is adding QUIC to the Windows kernel and their .NET environment, but similar native support has (to my knowledge) not been announced for other systems like Android.

Alt-Svc

Even if you’ve set up an HTTP/3-compatible server and are using an updated browser, you might be surprised to find that HTTP/3 isn’t actually being used consistently. To understand why, let’s suppose you’re the browser for a moment. Your user has requested that you navigate to example.com (a website you’ve never visited before), and you’ve used DNS to resolve that to an IP. You send one or more QUIC handshake packets to that IP. Now several things can go wrong:

The server might not support QUIC.
One of the intermediate networks or firewalls might block QUIC and/or UDP completely.
The handshake packets might be lost in transit.

However, how would you know (which) one of these problems has occurred? In all three cases, you’ll never receive a reply to your handshake packet(s). The only thing you can do is wait, hoping that a reply might still come in. Then, after some waiting time (the timeout), you might decide there’s indeed a problem with HTTP/3. At that point, you would try to open a TCP connection to the server, hoping that HTTP/2 or HTTP/1.1 will work.

As you can see, this type of approach could introduce major delays, especially in the initial year(s) when many servers and networks won’t support QUIC yet. An easy but naïve solution would simply be to open both a QUIC and TCP connection at the same time and then use whichever handshake completes first. This method is called “connection racing” or “happy eyeballs”. While this is certainly possible, it does have considerable overhead. Even though the losing connection is almost immediately closed, it still takes up some memory and CPU time on both the client and server (especially when using TLS). On top of that, there are also other problems with this method involving IPv4 versus IPv6 networks and the previously discussed replay attacks (which my talk covers in more detail).

As such, for QUIC and HTTP/3, browsers prefer to play it safe and only try QUIC if they know the server supports it. The first time a new server is contacted, the browser will therefore only use HTTP/2 or HTTP/1.1 over a TCP connection. The server can then let the browser know it also supports HTTP/3 for subsequent connections. This is done by setting a special HTTP header on the responses sent back over HTTP/2 or HTTP/1.1. This header is called Alt-Svc, which stands for “alternative services”. Alt-Svc can be used to let a browser know that a certain service is also reachable via another server (IP and/or port), but it also allows for the indication of alternative protocols. This can be seen below in figure 1.
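A typical Alt-Svc response header advertising HTTP/3 looks roughly like this (the port, the ma lifetime, and the extra draft token are illustrative; at the time of writing, many deployments still advertise draft versions such as h3-29 alongside the final h3):

Alt-Svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400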

Upon receipt of a valid Alt-Svc header indicating HTTP/3 support, the browser will cache this and try to set up a QUIC connection from then on. Some clients will do this as soon as possible (even during the initial page load — see below), while others will wait until the existing TCP connection(s) are closed. This means that the browser will only ever use HTTP/3 after it has downloaded at least a few resources via HTTP/2 or HTTP/1.1 first.

Even then, it’s not smooth sailing. The browser now knows that the server supports HTTP/3, but that doesn’t mean the intermediate network won’t block it. As such, connection racing is still needed in practice, and you might still end up with HTTP/2 if the network somehow delays the QUIC handshake enough. Additionally, if the QUIC connection fails to establish a few times in a row, some browsers will put the Alt-Svc cache entry on a denylist for some time, not trying HTTP/3 for a while. As such, it can be helpful to manually clear your browser’s cache if things are acting up, because that should also empty the Alt-Svc bindings.

Finally, Alt-Svc has been shown to pose some serious security risks. For this reason, some browsers impose extra restrictions on, for instance, which ports can be used (in Chrome, your HTTP/2 and HTTP/3 servers need to be either both on a port below 1024 or both on a port above or equal to 1024; otherwise, Alt-Svc will be ignored). All of this logic varies and evolves wildly between browsers, meaning that getting consistent HTTP/3 connections can be difficult, which also makes it challenging to test new set-ups.

There is ongoing work to improve this two-step Alt-Svc process somewhat. The idea is to use new DNS records called SVCB and HTTPS, which will contain information similar to what is in Alt-Svc. As such, the client can discover that a server supports HTTP/3 during the DNS resolution step instead, meaning that it can try QUIC from the very first page load instead of first having to go through HTTP/2 or HTTP/1.1. For more information on this and Alt-Svc, see last year’s Web Almanac chapter on HTTP/2.

As you can see, Alt-Svc and the HTTP/3 discovery process add a layer of complexity to your already challenging QUIC server deployment, because:

you will always need to deploy your HTTP/3 server next to an HTTP/2 and/or HTTP/1.1 server;
you will need to configure your HTTP/2 and HTTP/1.1 servers to set the correct Alt-Svc headers on their responses.

While that should be manageable in production-level set-ups (because, for example, a single Apache or NGINX instance will likely support all three HTTP versions at the same time), it might be much more annoying in (local) test set-ups (I can already see myself forgetting to add the Alt-Svc headers or messing them up). This problem is compounded by a (current) lack of browser error logs and DevTools indicators, which means that figuring out why exactly the set-up isn’t working can be difficult.
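For reference, on an NGINX front end serving HTTP/1.1 or HTTP/2, that advertisement is a single extra directive; a minimal sketch, assuming your HTTP/3 endpoint listens on UDP port 443:

# inside the server {} block that handles HTTPS over TCP
add_header Alt-Svc 'h3=":443"; ma=86400' always;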

Additional Issues

As if that wasn’t enough, another issue will make local testing more difficult: Chrome makes it very difficult for you to use self-signed TLS certificates for QUIC. This is because non-official TLS certificates are often used by companies to decrypt their employees’ TLS traffic (so that they can, for example, have their firewalls scan inside encrypted traffic). However, if companies started doing that with QUIC, we would again have custom middlebox implementations that make their own assumptions about the protocol. This could lead to them potentially breaking protocol support in the future, which is exactly what we tried to prevent by encrypting QUIC so extensively in the first place! As such, Chrome takes a very opinionated stance on this: If you’re not using an official TLS certificate (signed by a certificate authority or root certificate that is trusted by Chrome, such as Let’s Encrypt), then you cannot use QUIC. This, sadly, also includes self-signed certificates, which are often used for local test set-ups.

It is still possible to bypass this with some freaky command-line flags (because the common --ignore-certificate-errors doesn’t work for QUIC yet), by using per-developer certificates (although setting this up can be tedious), or by setting up the real certificate on your development PC (but this is rarely an option for big teams because you would have to share the certificate’s private key with each developer). Finally, while you can install a custom root certificate, you would then also need to pass both the --origin-to-force-quic-on and --ignore-certificate-errors-spki-list flags when starting Chrome (see below). Luckily, for now, only Chrome is being so strict, and hopefully, its developers will loosen their approach over time.
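For a local test against a self-signed certificate, that combination of flags could look roughly like this (the certificate path, hostname, port, and executable name are hypothetical; the SPKI hash recipe is the one documented for Chromium):

# compute the base64 SHA-256 hash of the certificate's public key (SPKI)
openssl x509 -in cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary | base64

# then start Chrome, forcing QUIC for your local origin and trusting that SPKI
chrome --origin-to-force-quic-on=localhost:4433 \
       --ignore-certificate-errors-spki-list=<hash-from-above>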

If you are having problems with your QUIC set-up from inside a browser, it’s best to first validate it using a tool such as cURL. cURL has excellent HTTP/3 support (you can even choose between two different underlying libraries) and also makes it easier to observe Alt-Svc caching logic.
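For example, with a curl build that has HTTP/3 enabled (check the Features line of curl --version for HTTP3), a quick sanity check could look like this (the URL is a placeholder):

# request only the headers over HTTP/3
curl --http3 -I https://your-http3-server.example/
# keep an Alt-Svc cache between invocations to observe the discovery logic
curl --alt-svc altsvc-cache.txt -I https://your-http3-server.example/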

What Does It All Mean?

Next to the challenges involved with setting up HTTP/3 and QUIC on the server-side, there are also difficulties in getting browsers to use the new protocols consistently. This is due to a two-step discovery process involving the Alt-Svc HTTP header and the fact that HTTP/2 connections cannot simply be “upgraded” to HTTP/3, because the latter uses UDP.

Even if a server supports HTTP/3, however, clients (and website owners!) need to deal with the fact that intermediate networks might block UDP and/or QUIC traffic. As such, HTTP/3 will never completely replace HTTP/2. In practice, keeping a well-tuned HTTP/2 set-up will remain necessary both for first-time visitors and visitors on non-permissive networks. Luckily, as we discussed, there shouldn’t be many page-level changes between HTTP/2 and HTTP/3, so this shouldn’t be a major headache.

What could become a problem, however, is testing and verifying whether you are using the correct configuration and whether the protocols are being used as expected. This is true in production, but especially in local set-ups. As such, I expect that most people will continue to run HTTP/2 (or even HTTP/1.1) development servers, switching only to HTTP/3 in a later deployment stage. Even then, however, validating protocol performance with the current generation of tools won’t be easy.

Tools and Testing

As was the case with many major servers, the makers of the most popular web performance testing tools have not been keeping up with HTTP/3 from the start. Consequently, as of July 2021, few tools have dedicated support for the new protocol, although most can be used with it to a certain degree.

Google Lighthouse

First, there is the Google Lighthouse tool suite. While this is an amazing tool for web performance in general, I have always found it somewhat lacking in aspects of protocol performance. This is mostly because it simulates slow networks in a relatively unrealistic way, in the browser (the same way that Chrome’s DevTools handle this). While this approach is quite usable and typically “good enough” to get an idea of the impact of a slow network, testing low-level protocol differences is not realistic enough. Because the browser doesn’t have direct access to the TCP stack, it still downloads the page on your normal network, and it then artificially delays the data from reaching the necessary browser logic. This means, for example, that Lighthouse emulates only delay and bandwidth, but not packet loss (which, as we’ve seen, is a major point where HTTP/3 could potentially differ from HTTP/2).

Alternatively, Lighthouse uses a highly advanced simulation model to guesstimate the real network impact, because, for example, Google Chrome has some complex logic that tweaks several aspects of a page load if it detects a slow network. This model has, to the best of my knowledge, not been adjusted to handle IETF QUIC or HTTP/3 yet. As such, if you use Lighthouse today for the sole purpose of comparing HTTP/2 and HTTP/3 performance, then you are likely to get erroneous or oversimplified results, which could lead you to wrong conclusions about what HTTP/3 can do for your website in practice.

The silver lining is that, in theory, this can be improved massively in the future, because the browser does have full access to the QUIC stack, and thus Lighthouse could add much more advanced simulations (including packet loss!) for HTTP/3 down the line. For now, though, while Lighthouse can, in theory, load pages over HTTP/3, I would recommend against it.

WebPageTest

Secondly, there is WebPageTest. This amazing project lets you load pages over real networks from real devices across the world, and it also allows you to add packet-level network emulation on top, including aspects such as packet loss! As such, WebPageTest is conceptually in a prime position to be used to compare HTTP/2 and HTTP/3 performance. However, while it can indeed already load pages over the new protocol, HTTP/3 has not yet been properly integrated into the tooling or visualizations. For example, there are currently no easy ways to force a page load over QUIC, to easily view how Alt-Svc was actually used, or even to see QUIC handshake details. In some cases, even seeing whether a response used HTTP/3 or HTTP/2 can be challenging. Still, in April, I was able to use WebPageTest to run quite a few tests on facebook.com and see HTTP/3 in action, which I’ll go over now.

First, I ran a default test for facebook.com, enabling the “repeat view” option. As explained above, I would expect the first page load to use HTTP/2, which will include the Alt-Svc response header. As such, the repeat view should use HTTP/3 from the start. In Firefox version 89, this is more or less what happens. However, when looking at individual responses, we see that even during the first page load, Firefox will switch to using HTTP/3 instead of HTTP/2! As you can see in figure 2, this happens from the 20th resource onwards. This means that Firefox establishes a new QUIC connection as soon as it sees the Alt-Svc header, and it switches to it once it succeeds. If you scroll down to the connection view, it also seems to show that Firefox even opened two QUIC connections: one for credentialed CORS requests and one for no-CORS requests. This would be expected because, as we discussed above, even for HTTP/2 and HTTP/3, browsers will open multiple connections due to security concerns. However, because WebPageTest doesn’t provide more details in this view, it’s difficult to confirm without manually digging through the data. Looking at the repeat view (second visit), it starts by directly using HTTP/3 for the first request, as expected.

Next, for Chrome, we see similar behavior for the first page load, although here Chrome already switches on the 10th resource, much earlier than Firefox. It’s less clear here whether it switches as soon as possible or only when a new connection is needed (for example, for requests with different credentials), because, unlike for Firefox, the connection view doesn’t seem to show multiple QUIC connections. For the repeat view, we see some weirder things. Unexpectedly, Chrome starts off using HTTP/2 there as well, switching to HTTP/3 only after a few requests! I performed a few more tests on other pages as well, to confirm that this is indeed consistent behavior. This could be due to several things: It might just be Chrome’s current policy, it might be that Chrome “raced” a TCP and QUIC connection and TCP won initially, or it might be that the Alt-Svc cache from the first view was unused for some reason. At this point, there is, sadly, no easy way to determine what the problem really is (and whether it can even be fixed).

Another interesting thing I noticed here is the apparent connection coalescing behavior. As discussed above, both HTTP/2 and HTTP/3 can reuse connections even if they go to other hostnames, to prevent downsides from hostname sharding. However, as shown in figure 3, WebPageTest reports that, for this Facebook load, connection coalescing is used over HTTP/3 for facebook.com and fbcdn.net, but not over HTTP/2 (as Chrome opens a secondary connection for the second domain). I suspect this is a bug in WebPageTest, however, because facebook.com and fbcdn.net resolve to different IPs and, as such, can’t really be coalesced.

The figure also shows that some key QUIC handshake information is missing from the current WebPageTest visualization.

Note: As we see, getting “real” HTTP/3 going can be difficult sometimes. Luckily, for Chrome specifically, we have additional options we can use to test QUIC and HTTP/3, in the form of command-line parameters.

On the bottom of WebPageTest’s “Chromium” tab, I used the following command-line options:

--enable-quic --quic-version=h3-29 --origin-to-force-quic-on=www.facebook.com:443,static.xx.fbcdn.net:443

The results from this test show that this indeed forces a QUIC connection from the start, even in the first view, thus bypassing the Alt-Svc process. Interestingly, you will notice I had to pass two hostnames to --origin-to-force-quic-on. In the version where I didn’t, Chrome, of course, still first opened an HTTP/2 connection to the fbcdn.net domain, even in the repeat view. As such, you’ll need to manually indicate all QUIC origins in order for this to work!

We can see even from these few examples that a lot of stuff is going on with how browsers actually use HTTP/3 in practice. It seems they even switch to the new protocol during the initial page load, abandoning HTTP/2 either as soon as possible or when a new connection is needed. As such, it’s difficult not only to get a full HTTP/3 load, but also to get a pure HTTP/2 load on a set-up that supports both! Because WebPageTest doesn’t show much HTTP/3 or QUIC metadata yet, figuring out what’s going on can be challenging, and you can’t trust the tools and visualizations at face value either.

So, if you use WebPageTest, you’ll need to double-check the results to verify which protocols were actually used. Consequently, I think this means that it’s too early to really test HTTP/3 performance at this time (and especially too early to compare it to HTTP/2). This belief is strengthened by the fact that not all servers and clients have implemented all protocol features yet. Because WebPageTest doesn’t yet have easy ways of showing whether advanced aspects such as 0-RTT were used, it will be tricky to know what you’re actually measuring. This is especially true for the HTTP/3 prioritization feature, which isn’t implemented properly in all browsers yet and which many servers also lack full support for. Because prioritization can be a major aspect driving web performance, it would be unfair to compare HTTP/3 to HTTP/2 without making sure that at least this feature works properly (for both protocols!). This is just one aspect, though, as my research shows how big the differences between QUIC implementations can be. If you do any comparison of this sort yourself (or if you read articles that do), make 100% sure that you’ve checked what’s actually going on.

Finally, also note that other higher-level tools (or data sets such as the amazing HTTP Archive) are often based on WebPageTest or Lighthouse (or use similar methods), so I suspect that most of my comments here will be widely applicable to most web performance tooling. Even for those tool vendors announcing HTTP/3 support in the coming months, I would be a bit skeptical and would validate that they’re actually doing it correctly. For some tools, things are probably even worse, though; for example, Google’s PageSpeed Insights only got HTTP/2 support this year, so I wouldn’t expect HTTP/3 support to arrive anytime soon.

Wireshark, qlog and qvis

As the discussion above shows, it can be tricky to analyze HTTP/3 behavior by just using Lighthouse or WebPageTest at this point. Luckily, other lower-level tools are available to help with this. First, the excellent Wireshark tool has advanced support for QUIC, and it can experimentally dissect HTTP/3 as well. This allows you to observe which QUIC and HTTP/3 packets are actually going over the wire. However, in order for that to work, you need to obtain the TLS decryption keys for a given connection, which most implementations (including Chrome and Firefox) allow you to extract by using the SSLKEYLOGFILE environment variable. While this can be useful for some things, really figuring out what’s happening, especially for longer connections, could entail a lot of manual work. You would also need a pretty advanced understanding of the protocols’ inner workings.
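The key extraction itself is just an environment variable; a minimal sketch (the file path is arbitrary):

# start the browser from a shell that has SSLKEYLOGFILE set
export SSLKEYLOGFILE=/tmp/tls-keys.log
firefox &   # or google-chrome &
# then point Wireshark's TLS "(Pre)-Master-Secret log filename" preference at the same file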

Luckily, there is a second option, qlog and qvis. qlog is a JSON-based logging format specifically for QUIC and HTTP/3 that is supported by the majority of QUIC implementations. Instead of looking at the packets going over the wire, qlog captures this information on the client and server directly, which allows it to include some additional information (for example, congestion control details). Typically, you can trigger qlog output when starting servers and clients with the QLOGDIR environment variable. (Note that in Firefox, you need to set the network.http.http3.enable_qlog preference. Apple devices and Safari use QUIC_LOG_DIRECTORY instead. Chrome doesn’t yet support qlog.)
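In shell terms, enabling qlog is usually as simple as the following (the directory is arbitrary, and support depends on the specific implementation):

mkdir -p /tmp/qlog
export QLOGDIR=/tmp/qlog
# start a qlog-capable QUIC client or server from this shell;
# the resulting .qlog files can then be uploaded to qvis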

These qlog files can then be uploaded to the qvis tool suite at qvis.quictools.info. There, you’ll get a number of advanced interactive visualizations that make it easier to interpret QUIC and HTTP/3 traffic. qvis also has support for uploading Wireshark packet captures (.pcap files), and it has experimental support for Chrome’s netlog files, so you can also analyze Chrome’s behavior. A full tutorial on qlog and qvis is beyond the scope of this article, but more details can be found in tutorial form, as a paper, and even in talk-show format. You can also ask me about them directly because I’m the main implementer of qlog and qvis. 😉

However, I am under no illusion that most readers here should ever use Wireshark or qvis, because these are quite low-level tools. Still, as we have few alternatives at the moment, I strongly recommend not extensively testing HTTP/3 performance without using this type of tool, to make sure you really know what’s happening on the wire and whether what you’re seeing is really explained by the protocol’s internals and not by other factors.

What Does It All Mean?

As we’ve seen, setting up and using HTTP/3 over QUIC can be a complex affair, and many things can go wrong. Sadly, no good tool or visualization is available that exposes the necessary details at an appropriate level of abstraction. This makes it very difficult for most developers to assess the potential benefits that HTTP/3 can bring to their website at this time or even to validate that their set-up works as expected.

Relying only on high-level metrics is very dangerous because these could be skewed by a plethora of factors (such as unrealistic network emulation, a lack of features on clients or servers, only partial HTTP/3 usage, etc.). Even if everything did work better, as we’ve seen in part 2, the differences between HTTP/2 and HTTP/3 will likely be relatively small in most cases, which makes it even more difficult to get the necessary information from high-level tools without targeted HTTP/3 support.

As such, I recommend leaving HTTP/2 versus HTTP/3 performance measurements alone for a few more months and focusing instead on making sure that our server-side set-ups are functioning as expected. For this, it’s easiest to use WebPageTest in combination with Google Chrome’s command-line parameters, with a fallback to curl for potential issues — this is currently the most consistent set-up I can find.

Conclusion and Takeaways

Dear reader, if you’ve read the full three-part series and made it here, I salute you! Even if you’ve only read a few sections, I thank you for your interest in these new and exciting protocols. Now, I will summarize the key takeaways from this series, provide a few key recommendations for the coming months and year, and finally provide you with some additional resources, in case you’d like to know more.

Summary

First, in part 1, we discussed that HTTP/3 was needed mainly because of the new underlying QUIC transport protocol. QUIC is the spiritual successor to TCP, and it integrates all of its best practices, as well as TLS 1.3. This was mainly needed because TCP, due to its ubiquitous deployment and integration in middleboxes, has become too inflexible to evolve. QUIC’s usage of UDP and almost full encryption means that we (hopefully) only have to update the endpoints in the future in order to add new features, which should be easier. QUIC, however, also adds some interesting new capabilities. First, QUIC’s combined transport and cryptographic handshake is faster than TCP + TLS, and it can make good use of the 0-RTT feature. Secondly, QUIC knows it is carrying multiple independent byte streams and can be smarter about how it handles loss and delays, mitigating the head-of-line blocking problem. Thirdly, QUIC connections can survive users moving to a different network (called connection migration) by tagging each packet with a connection ID. Finally, QUIC’s flexible packet structure (employing frames) makes it more efficient but also more flexible and extensible in the future. In conclusion, it’s clear that QUIC is the next-generation transport protocol and will be used and extended for many years to come.

Secondly, in part 2, we took a bit of a critical look at these new features, especially their performance implications. First, we saw that QUIC’s use of UDP doesn’t magically make it faster (nor slower) because QUIC uses congestion control mechanisms very similar to TCP to prevent overloading the network. Secondly, the faster handshake and 0-RTT are more micro-optimizations, because they are really only one round trip faster than an optimized TCP + TLS stack, and QUIC’s true 0-RTT is further affected by a range of security concerns that can limit its usefulness. Thirdly, connection migration is really only needed in a few specific cases, and it still means resetting send rates because the congestion control doesn’t know how much data the new network can handle. Fourthly, the effectiveness of QUIC’s head-of-line blocking removal severely depends on how stream data is multiplexed and prioritized. Approaches that are optimal to recover from packet loss seem detrimental to general use cases of web page loading performance and vice versa, although more research is needed. Fifthly, QUIC could easily be slower to send packets than TCP + TLS, because UDP APIs are less mature and QUIC encrypts each packet individually, although this can be largely mitigated in time. Sixthly, HTTP/3 itself doesn’t really bring any major new performance features to the table, but mainly reworks and simplifies the internals of known HTTP/2 features. Finally, some of the most exciting performance-related features that QUIC allows (multipath, unreliable data, WebTransport, forward error correction, etc.) are not part of the core QUIC and HTTP/3 standards, but rather are proposed extensions that will take some more time to be available. In conclusion, this means QUIC will probably not improve performance much for users on high-speed networks, but will mainly be important for those on slow and less-stable networks.

Finally, in this part 3, we looked at how to practically use and deploy QUIC and HTTP/3. First, we saw that most best practices and lessons learned from HTTP/2 should just carry over to HTTP/3. There is no need to change your bundling or inlining strategy, nor to consolidate or shard your server farm. Server push is still not the best feature to use, and preload can similarly be a powerful footgun. Secondly, we’ve discussed that it might take a while before off-the-shelf web server packages provide full HTTP/3 support (partly due to TLS library support issues), although plenty of open-source options are available for early adopters and several major CDNs have a mature offering. Thirdly, it’s clear that most major browsers have (basic) HTTP/3 support, even enabled by default. There are major differences in how and when they practically use HTTP/3 and its new features, though, so understanding their behavior can be challenging. Fourthly, we’ve discussed that this is worsened by a lack of explicit HTTP/3 support in popular tools such as Lighthouse and WebPageTest, making it especially difficult to compare HTTP/3 performance to HTTP/2 and HTTP/1.1 at this time. In conclusion, HTTP/3 and QUIC are probably not quite ready for primetime yet, but they soon will be.

Recommendations

From the summary above, it might seem like I am making strong arguments against using QUIC or HTTP/3. However, that is quite the opposite of the point I want to make.

First, as discussed at the end of part 2, even though your “average” user might not encounter major performance gains (depending on your target market), a significant portion of your audience will likely see impressive improvements. 0-RTT might only save a single round trip, but that can still mean several hundred milliseconds for some users. Connection migration might not sustain consistently fast downloads, but it will definitely help people trying to fetch that PDF on a high-speed train. Packet loss on cable might be bursty, but wireless links might benefit more from QUIC’s head-of-line blocking removal. What’s more, these users are those who would typically encounter the worst performance of your product and, consequently, be most heavily affected by it. If you wonder why that matters, read Chris Zacharias’ famous web performance anecdote.

Secondly, QUIC and HTTP/3 will only get better and faster over time. Version 1 has focused on getting the basic protocol done, keeping more advanced performance features for later. As such, I feel it pays to start investing in the protocols now, to make sure you can use them and the new features to optimal effect when they become available down the line. Given the complexity of the protocols and their deployment aspects, it would be good to give yourself some time to get acquainted with their quirks. Even if you don’t want to get your hands dirty quite yet, several major CDN providers offer mature “flip the switch” HTTP/3 support (particularly, Cloudflare and Fastly). I struggle to find a reason not to try that out if you’re using a CDN (which, if you care about performance, you really should be).

As such, while I wouldn’t say that it’s crucial to start using QUIC and HTTP/3 as soon as possible, I do feel there are plenty of benefits already to be had, and they will only increase in the future.

Further Reading

While this has been a long body of text, sadly, it really only scratches the technical surface of the complex protocols that QUIC and HTTP/3 are.

Below you will find a list of additional resources for continued learning, more or less in order of ascending technical depth:

“HTTP/3 Explained,” Daniel Stenberg
This e-book, by the creator of cURL, summarizes the protocol.
“HTTP/2 in Action,” Barry Pollard
This excellent all-round book on HTTP/2 has reusable advice and a section on HTTP/3.
@programmingart, Twitter
My tweets are mostly dedicated to QUIC, HTTP/3, and web performance (including news) in general. See for example my recent threads on QUIC features.
“YouTube,” Robin Marx
My over 10 in-depth talks cover various aspects of the protocols.
“The Cloudflare Blog”
This is the main product of a company that also runs a CDN on the side.
“The Fastly Blog”
This blog has excellent discussions of technical aspects, embedded in the wider context.
QUIC, the actual RFCs
You’ll find links to the IETF QUIC and HTTP/3 RFC documents and other official extensions.
IIJ Engineers Blog
Excellent deep technical explanations of QUIC feature details.
HTTP/3 and QUIC academic papers, Robin Marx
My research papers cover stream multiplexing and prioritization, tooling, and implementation differences.
QUIPS, EPIQ 2018, and EPIQ 2020
These papers from academic workshops contain in-depth research on security, performance, and extensions of the protocols.

With that, I leave you, dear reader, with a hopefully much-improved understanding of this brave new world. I am always open to feedback, so please let me know what you think of this series!

This series is divided into three parts:

HTTP/3 history and core concepts
This is targeted at people new to HTTP/3 and protocols in general, and it mainly discusses the basics.
HTTP/3 performance features
This is more in-depth and technical. People who already know the basics can start here.
Practical HTTP/3 deployment options (current article)
This explains the challenges involved in deploying and testing HTTP/3 yourself. It details how and if you should change your web pages and resources as well.

Exciting New Tools for Designers, September 2021

Original Source: https://www.webdesignerdepot.com/2021/09/exciting-new-tools-for-designers-september-2021/

Since school is back in session, this month’s roundup has a learning focus. In addition to tools, many of the resources include guides, tutorials, and cheat sheets to help make design work easier.

Here’s what’s new for designers this month.

ScrollingMockup.io

ScrollingMockup.io generates high-definition, animated scrolling mockups in minutes. All you have to do is paste your website URL, select from the expanding template gallery, add some music, and post. You can create three mockups for free; after that, the tool uses a subscription model. The paid plan allows for custom branding on mockups and more.

FilterSS

FilterSS is a curated collection of CSS image filters for use in projects. Upload an image, sort through the list, and then copy the code for the filter you want to use. It’s that easy!

Buttons Generator

Buttons Generator is a fun tool with so many button options in one place. Choose from three-dimensional, gradient, shadow borders, neumorphic, retro, animated, ghost, with arrows, and more all in one place. Click the one you like, and the code is copied right to your clipboard and ready to use in projects.

UI Cheat Sheet: Spacing Friendships

UI Cheat Sheet: Spacing Friendships is a fun and memorable approach to figuring out spacing. This guide shows how close or far apart elements should be based on “friend” circles, with a couple of relatable scenarios. It’s one of the clearest illustrations of this concept out there, and it emphasizes the importance of spacing in design.

PrettyMaps

PrettyMaps is a minimal Python library that allows you to draw customized maps from OpenStreetMap data. This tool can help you take online map design to the next level with cool, unique map visuals. It’s based on osmnx, matplotlib, shapely, and vsketch libraries.

Card.UX/UI

Card.UX/UI is a card-style generator with more than 20 templates and elements to create custom cards. Use the on-screen tools to design it the way you want and then copy the code for easy use.

Couleur.io

Couleur.io is a simple color palette builder tool that lets you pick a starting color and build a scheme around it. One of the best elements of the tool might be the quick preview, which shows your choices using the palette in context and in dark mode. Get it looking the way you want, and then snag the CSS to use in your projects.

CSS Accent-Color

CSS Accent-Color can help you tint elements with one line of CSS. It’s a time-saving trick that allows for greater brand customization in website design projects. Plus, it works equally well in dark or light color schemes. It supports checkboxes, radio buttons, range inputs, and progress bars.

Vytal

Vytal shows what traces your browser leaves behind while surfing the web. This scan lets you understand how easy it is to identify and track your browser even while using private mode. In addition, it scans for digital fingerprints, connections, and system info.

Imba

Imba is a programming language for the web that’s made to be fast. It’s packed with time-saving syntax tags and a memoized DOM. Everything compiles to JavaScript, works with Node and npm, and has amazing performance. While the language is still in active development, the community around it is pretty active and growing.

SVG Shape Dividers Creator

SVG Shape Dividers Creator is a tool that allows you to create interesting shapes with SVG so that your colors and backgrounds aren’t always rectangles. You can adjust each side, change the color and axis, and flip or animate the shape. Then snag the CSS, and you are ready to go.

Image Cropper

Image Cropper is a tool that allows you to crop and rotate images using a Flutter plugin. It works on Android and iOS.

Noteli

Noteli is a CLI-based notes application that uses TypeScript, MongoDB, and Auth0. The tool is just out of beta.

Yofte

Yofte is a set of components for Tailwind CSS that helps you create great e-commerce stores. The UI kit is packed with customizable components featuring clean, colorful designs. The code is clean and easy to export. This premium kit comes with a lifetime license or a monthly plan.

UI Deck

UI Deck is a collection of free and premium landing page templates, themes, and UI kits for various projects. This is a premium resource with paid access to all of the tools. It includes access to more than 80 templates.

Star Rating: An SVG Solution

Star Rating: An SVG Solution is a tutorial that solves a common design dilemma: How to create great star rating icons for pages. This code takes you through creating an imageless element that’s resizable, accessible, includes partial stars, and is easy to maintain with CSS. It’s a great solution to a common design need.

Designing Accessible WCAG-Compliant Focus Indicators

Designing Accessible WCAG-Compliant Focus Indicators is another convenient guide/tutorial for an everyday application. Here’s why it is important: “By designing and implementing accessible focus indicators, we can make our products accessible to keyboard users, as well as users of assistive technology that works through a keyboard or emulates keyboard functionality, such as voice control, switch controls, mouth sticks, and head wands, to mention a few.”

Blockchain Grants

Blockchain Grants is a tool for anyone developing blockchain applications and in need of funding. It’s a database of grants from a variety of organizations for different applications. Start looking through this free resource to help secure additional funding for your projects.

Basement Grotesque

Basement Grotesque is a beautiful slab with a great heavy weight and plenty of character. There are 413 characters in the set with plenty of accents, numbers, and variable capitals.

Gadimon

Gadimon is a fun, almost comic book-style layered script. The font package includes a regular and extrude style.

Lagom

Lagom is a sleek and functional serif typeface with 16 styles in the robust family from ultralight to extra bold italic. It’s readable and has a lot of personality.

Striped Campus

Striped Campus fits our back-to-school theme with a fun, scholastic look and feel. The block letters have a thick outline stroke and some fun inline texture.

Source


The post Exciting New Tools for Designers, September 2021 first appeared on Webdesigner Depot.

3D & Motion Neons Lines by Stato ®

Original Source: http://feedproxy.google.com/~r/abduzeedo/~3/ZaxSF3GkPhI/3d-motion-neons-lines-stato

3D & Motion Neons Lines by Stato ®
3D & Motion Neons Lines by Stato ®

AoiroStudio 09.06.21

Stato ® is a duo of creative director Jimena Passadore and 3D generalist José León Molfino, based in Buenos Aires, Argentina. They have 13 years of experience in the motion graphics market as a creative duo leading a talented team of artists, transforming ideas into upscaled visual experiences and pushing the limits. We are featuring the project they worked on for ‘Auth0’, for which they created an outstanding animation piece, from concept design to final animation, for all screens and media: broadcast branding, all-media shows, and commercials, plus site-specific animations.

Links

Studio site
Behance
3D & Motion



UX vs. UI – Which Should You Focus On?

Original Source: https://www.hongkiat.com/blog/ui-or-ux-in-web-design/

(Guest writer: Tarif Kahn) Web design can get complicated, especially when it’s difficult to know where to put your focus. As a designer, having a concrete goal to work towards is vital….

Visit hongkiat.com for full content.

10+ New Features You Must Know about Windows 11

Original Source: https://www.hongkiat.com/blog/windows-11-features/

Windows 11 is finally here with a new design and several exciting features. With a newer, fresher, and simpler design language and user experience, Windows 11 empowers productivity and inspires…

Visit hongkiat.com for full content.

Gmail Chrome Extensions — Top Picks for Personal and Business Use

Original Source: https://www.hongkiat.com/blog/personal-business-gmail-chrome-extensions/

(Guest writer: Roman Shvydun) An average US employee spends 28% of their workday on email, amounting to 13 hours per week. Combine the time you spend on both personal and business correspondence, and…

Visit hongkiat.com for full content.

The Apple iPhone 13 may offer game-changing satellite communications

Original Source: http://feedproxy.google.com/~r/CreativeBloq/~3/vlT1hk06IdU/iphone13-satellite-communications

But will we get to use it?

Eerie yet Stunning 3D Compositions by Huleeb

Original Source: http://feedproxy.google.com/~r/abduzeedo/~3/QTaJZMjLcLQ/eerie-yet-stunning-3d-compositions-huleeb

Eerie yet Stunning 3D Compositions by Huleeb
Eerie yet Stunning 3D Compositions by Huleeb

abduzeedo 08.27.21

Huleeb is a freelance digital artist based in Montreal, Canada, creating daily renders in Blender with the use of various image manipulation software. Huleeb’s work is simply stunning, with an eerie mood and impeccable compositions. We selected a few samples for you to get inspired. Huleeb also takes on client work.

For more information make sure to check out Huleeb on:

Behance
Instagram


Creating A Public/Private Multi-Monorepo For PHP Projects

Original Source: https://smashingmagazine.com/2021/08/public-private-multi-monorepo-php-projects/

To make the development experience faster, I moved all the PHP packages required by my projects to a monorepo. When each package is hosted on its own repo (the “multirepo” approach), it would need to be developed and tested on its own, and then published to Packagist before I could install it in other packages via Composer. With the monorepo, because all packages are hosted together, these can be developed, tested, versioned and released at the same time.

The monorepo hosting my PHP packages is public, accessible to anyone on GitHub. Git repos cannot grant different access to different assets; it’s all either public or private. As I plan to release a PRO WordPress plugin, I want its packages to be kept private, meaning they can’t be added to the public monorepo.

The solution I found is to use a “multi-monorepo” approach, comprising two monorepos: one public and one private, with the private monorepo embedding the public one as a Git submodule, allowing it to access its files. The public monorepo can be considered the “upstream”, and the private monorepo the “downstream”.

As I kept iterating on my code, the repo set-up I needed at each stage of the project also had to evolve. Hence, I didn’t arrive at the multi-monorepo approach on day 1; it was a process that spanned several years and took a fair amount of effort, going from a single repo, to multiple repos, to the monorepo, and finally to the multi-monorepo.

In this article, I will describe how I set up my multi-monorepo using the Monorepo builder, which works for PHP projects based on Composer.

Reusing Code In The Multi-Monorepo

The public monorepo leoloso/PoP is where I keep all my PHP projects.

This monorepo contains workflow generate_plugins.yml, which generates multiple WordPress plugins for distribution when creating a new release on GitHub.

The workflow configuration is not hard-coded within the YAML but injected via PHP code:

- id: output_data
  run: |
    echo "::set-output name=plugin_config_entries::$(vendor/bin/monorepo-builder plugin-config-entries-json)"

And the configuration is provided via a custom PHP class:

class PluginDataSource
{
    public function getPluginConfigEntries(): array
    {
        return [
            // GraphQL API for WordPress
            [
                'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-for-wp',
                'zip_file' => 'graphql-api.zip',
                'main_file' => 'graphql-api.php',
                'dist_repo_organization' => 'GraphQLAPI',
                'dist_repo_name' => 'graphql-api-for-wp-dist',
            ],
            // GraphQL API – Extension Demo
            [
                'path' => 'layers/GraphQLAPIForWP/plugins/extension-demo',
                'zip_file' => 'graphql-api-extension-demo.zip',
                'main_file' => 'graphql-api-extension-demo.php',
                'dist_repo_organization' => 'GraphQLAPI',
                'dist_repo_name' => 'extension-demo-dist',
            ],
        ];
    }
}

Generating multiple WordPress plugins all together, and configuring the workflow via PHP, has reduced the amount of time needed to manage the project. The workflow currently handles two plugins (the GraphQL API and its extension demo), but it could handle 200 without additional effort on my side.

It is this set-up that I want to reuse for my private monorepo leoloso/GraphQLAPI-PRO, so that the PRO plugins can also be generated without effort.

The code to reuse will comprise:

The GitHub Actions workflows to generate the WordPress plugins (including scoping, downgrading from PHP 8.0 to 7.1 and uploading to the releases page).
The custom PHP services to configure the workflows.

The private monorepo can then generate the PRO WordPress plugins, simply by triggering the workflows from the public monorepo, and overriding their configuration in PHP.

Linking Monorepos Via Git Submodules

To embed the public repo within the private one we use Git submodules:

git submodule add <public repo URL>

I embedded the public repo under subfolder submodules of the private monorepo, allowing me to add more upstream monorepos in the future if needed. In GitHub, the folder displays the submodule’s specific commit, and clicking on it will take me to that commit on leoloso/PoP.

Since it contains submodules, to clone the private repo we must provide the --recursive option:

git clone --recursive <private repo URL>

Reusing The GitHub Actions Workflows

GitHub Actions only loads workflows from under .github/workflows. Because the public workflows in the downstream monorepo are under submodules/PoP/.github/workflows, these must be duplicated into the expected location.

In order to keep the upstream workflows as the single source of truth, we can limit ourselves to copying the files downstream under .github/workflows and never editing them there. If any change needs to be made, it must be done in the upstream monorepo and then copied over.

As a side note, notice how this means that the multi-monorepo leaks: the upstream monorepo is not fully autonomous, and will need to be adapted to suit the downstream monorepo.

In my first iteration to copy the workflows, I created a simple Composer script:

{
    "scripts": {
        "copy-workflows": [
            "php -r \"copy('submodules/PoP/.github/workflows/generate_plugins.yml', '.github/workflows/generate_plugins.yml');\"",
            "php -r \"copy('submodules/PoP/.github/workflows/split_monorepo.yaml', '.github/workflows/split_monorepo.yaml');\""
        ]
    }
}

Then, after editing the workflows in the upstream monorepo, I would copy them to downstream by executing:

composer copy-workflows

But then I realized that just copying the workflows is not enough: they must also be modified in the process. This is because checking out the downstream monorepo requires the option --recurse-submodules, so as to also check out the submodules.

In GitHub Actions, the checkout for downstream is done like this:

- uses: actions/checkout@v2
  with:
    submodules: recursive

So checking out the downstream repo needs input submodules: recursive, but the upstream one does not, and they both use the same source file.

The solution I found is to provide the value for input submodules via an environment variable CHECKOUT_SUBMODULES, which is by default empty for the upstream repo:

env:
  CHECKOUT_SUBMODULES: ""

jobs:
  provide_data:
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: ${{ env.CHECKOUT_SUBMODULES }}

Then, when copying the workflows from upstream to downstream, the value of CHECKOUT_SUBMODULES is replaced with “recursive”:

env:
  CHECKOUT_SUBMODULES: "recursive"

When modifying the workflow, it’s a good idea to use a regex, so that it works for different formats in the source file (such as CHECKOUT_SUBMODULES: "", CHECKOUT_SUBMODULES: "some-value", or a bare CHECKOUT_SUBMODULES:), so as not to introduce bugs from this kind of assumed-to-be-harmless change.
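
To make this concrete, here is a minimal sanity check of such a tolerant regex (a sketch for illustration, using plain preg_replace; the copying service shown further below applies the same kind of pattern via Nette’s Strings::replace):

// The pattern tolerates an existing double-quoted value, extra whitespace,
// or no value at all after the colon.
$pattern     = '#CHECKOUT_SUBMODULES:(\s+".*")?#';
$replacement = 'CHECKOUT_SUBMODULES: "recursive"';

$variants = [
    'CHECKOUT_SUBMODULES: ""',
    'CHECKOUT_SUBMODULES:   "some-previous-value"',
    'CHECKOUT_SUBMODULES:',
];

foreach ($variants as $variant) {
    // Each line prints: CHECKOUT_SUBMODULES: "recursive"
    echo preg_replace($pattern, $replacement, $variant), PHP_EOL;
}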

Then, the copy-workflows Composer script seen above is not good enough to handle this complexity.

In my next iteration, I created a PHP command CopyUpstreamMonorepoFilesCommand, to be executed via the Monorepo builder:

vendor/bin/monorepo-builder copy-upstream-monorepo-files

This command uses a custom service FileCopierSystem to copy all files from a source folder to the indicated destination, while optionally replacing their contents:

namespace PoP\GraphQLAPIPRO\Extensions\Symplify\MonorepoBuilder\SmartFile;

use Nette\Utils\Strings;
use Symplify\SmartFileSystem\Finder\SmartFinder;
use Symplify\SmartFileSystem\SmartFileSystem;

final class FileCopierSystem
{
    public function __construct(
        private SmartFileSystem $smartFileSystem,
        private SmartFinder $smartFinder,
    ) {
    }

    /**
     * @param array $patternReplacements a regex pattern to search, and its replacement
     */
    public function copyFilesFromFolder(
        string $fromFolder,
        string $toFolder,
        array $patternReplacements = []
    ): void {
        $smartFileInfos = $this->smartFinder->find([$fromFolder], '*');

        foreach ($smartFileInfos as $smartFileInfo) {
            $fromFile = $smartFileInfo->getRealPath();
            $fileContent = $this->smartFileSystem->readFile($fromFile);

            foreach ($patternReplacements as $pattern => $replacement) {
                $fileContent = Strings::replace($fileContent, $pattern, $replacement);
            }

            $toFile = $toFolder . substr($fromFile, strlen($fromFolder));
            $this->smartFileSystem->dumpFile($toFile, $fileContent);
        }
    }
}

When invoking this method to copy all workflows downstream, I also replace the value of CHECKOUT_SUBMODULES:

/**
 * Copy all workflows to `.github/`, and convert:
 *   `CHECKOUT_SUBMODULES: ""`
 * into:
 *   `CHECKOUT_SUBMODULES: "recursive"`
 */
$regexReplacements = [
    '#CHECKOUT_SUBMODULES:(\s+".*")?#' => 'CHECKOUT_SUBMODULES: "recursive"',
];
(new FileCopierSystem())->copyFilesFromFolder(
    'submodules/PoP/.github/workflows',
    '.github/workflows',
    $regexReplacements
);

Workflow generate_plugins.yml needs an additional replacement. When the WordPress plugin is generated, its code is downgraded from PHP 8.0 to 7.1 by invoking script ci/downgrade/downgrade_code.sh:

- name: Downgrade code for production (to PHP 7.1)
  run: ci/downgrade/downgrade_code.sh "${{ matrix.pluginConfig.rector_downgrade_config }}" "" "${{ matrix.pluginConfig.path }}" "${{ matrix.pluginConfig.additional_rector_configs }}"

In the downstream monorepo, this file will be located under submodules/PoP/ci/downgrade/downgrade_code.sh. Then, we have the downstream workflow point to the right path with this replacement:

$regexReplacements = [
    // ...
    '#(ci/downgrade/downgrade_code.sh)#' => 'submodules/PoP/$1',
];

Configuring Packages In Monorepo Builder

File monorepo-builder.php — placed at the root of the monorepo — holds the configuration for the Monorepo builder. In it we must indicate where the packages (and plugins, clients, or anything else) are located:

use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;
use Symplify\MonorepoBuilder\ValueObject\Option;

return static function (ContainerConfigurator $containerConfigurator): void {
    $parameters = $containerConfigurator->parameters();
    $parameters->set(Option::PACKAGE_DIRECTORIES, [
        __DIR__ . '/packages',
        __DIR__ . '/plugins',
    ]);
};

The private monorepo must have access to all code: its own packages, plus those from the public monorepo. Then, it must define all packages from both monorepos in the config file. The ones from the public monorepo are located under “/submodules/PoP”:

return static function (ContainerConfigurator $containerConfigurator): void {
    $parameters = $containerConfigurator->parameters();
    $parameters->set(Option::PACKAGE_DIRECTORIES, [
        // public code
        __DIR__ . '/submodules/PoP/packages',
        __DIR__ . '/submodules/PoP/plugins',
        // private code
        __DIR__ . '/packages',
        __DIR__ . '/plugins',
        __DIR__ . '/clients',
    ]);
};

As can be seen, the configurations for upstream and downstream are pretty much the same, with the difference that the downstream one will:

Change the path to the public packages.
Add the private packages.

Then, it makes sense to rewrite the configuration using object-oriented programming, so that we make code DRY (don’t repeat yourself) by having a PHP class in the public repo be extended in the private repo.

Recreating The Configuration Via OOP

Let’s refactor the configuration. In the public repo, file monorepo-builder.php will simply reference a new class ContainerConfigurationService where all action will happen:

use PoP\PoP\Config\Symplify\MonorepoBuilder\Configurators\ContainerConfigurationService;
use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;

return static function (ContainerConfigurator $containerConfigurator): void {
    $containerConfigurationService = new ContainerConfigurationService(
        $containerConfigurator,
        __DIR__
    );
    $containerConfigurationService->configureContainer();
};

The __DIR__ param points to the root of the monorepo. It will be needed to obtain the full path to the package directories.

Class ContainerConfigurationService is now in charge of producing the configuration:

namespace PoP\PoP\Config\Symplify\MonorepoBuilder\Configurators;

use PoP\PoP\Config\Symplify\MonorepoBuilder\DataSources\PackageOrganizationDataSource;
use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;
use Symplify\MonorepoBuilder\ValueObject\Option;

class ContainerConfigurationService
{
    public function __construct(
        protected ContainerConfigurator $containerConfigurator,
        protected string $rootDirectory,
    ) {
    }

    public function configureContainer(): void
    {
        $parameters = $this->containerConfigurator->parameters();
        if ($packageOrganizationConfig = $this->getPackageOrganizationDataSource()) {
            $parameters->set(
                Option::PACKAGE_DIRECTORIES,
                $packageOrganizationConfig->getPackageDirectories()
            );
        }
    }

    protected function getPackageOrganizationDataSource(): ?PackageOrganizationDataSource
    {
        return new PackageOrganizationDataSource($this->rootDirectory);
    }
}

The configuration can be split across several classes. In this case, ContainerConfigurationService retrieves the package configuration through class PackageOrganizationDataSource, which has this implementation:

namespace PoP\PoP\Config\Symplify\MonorepoBuilder\DataSources;

class PackageOrganizationDataSource
{
    public function __construct(protected string $rootDir)
    {
    }

    public function getPackageDirectories(): array
    {
        return array_map(
            fn (string $packagePath) => $this->rootDir . '/' . $packagePath,
            $this->getRelativePackagePaths()
        );
    }

    public function getRelativePackagePaths(): array
    {
        return [
            'packages',
            'plugins',
        ];
    }
}
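
As a quick illustration of what this class produces (a hypothetical usage sketch; the root path is just an example):

// Hypothetical usage – the root path is illustrative only
use PoP\PoP\Config\Symplify\MonorepoBuilder\DataSources\PackageOrganizationDataSource;

$dataSource = new PackageOrganizationDataSource('/home/user/PoP');

var_export($dataSource->getPackageDirectories());
// array (
//   0 => '/home/user/PoP/packages',
//   1 => '/home/user/PoP/plugins',
// )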

Overriding The Configuration In The Downstream Monorepo

Now that the configuration in the public monorepo is setup via OOP, we can extend it to suit the needs of the private monorepo.

In order to allow the private monorepo to autoload the PHP code from the public monorepo, we must first configure the downstream composer.json to reference the source code from the upstream, which is under path submodules/PoP/src:

{
    "autoload": {
        "psr-4": {
            "PoP\\GraphQLAPIPRO\\": "src",
            "PoP\\PoP\\": "submodules/PoP/src"
        }
    }
}

Below is file monorepo-builder.php for the private monorepo. Notice that the referenced class ContainerConfigurationService in the upstream repo belongs to the PoP\PoP namespace, but now it switched to the PoP\GraphQLAPIPRO namespace. This class must receive the additional input $upstreamRelativeRootPath (with value “submodules/PoP”) so as to recreate the full path to the public packages:

use PoP\GraphQLAPIPRO\Config\Symplify\MonorepoBuilder\Configurators\ContainerConfigurationService;
use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;

return static function (ContainerConfigurator $containerConfigurator): void {
    $containerConfigurationService = new ContainerConfigurationService(
        $containerConfigurator,
        __DIR__,
        'submodules/PoP'
    );
    $containerConfigurationService->configureContainer();
};

The downstream class ContainerConfigurationService overrides which PackageOrganizationDataSource class is used in the configuration:

namespace PoP\GraphQLAPIPRO\Config\Symplify\MonorepoBuilder\Configurators;

use PoP\PoP\Config\Symplify\MonorepoBuilder\Configurators\ContainerConfigurationService as UpstreamContainerConfigurationService;
use PoP\GraphQLAPIPRO\Config\Symplify\MonorepoBuilder\DataSources\PackageOrganizationDataSource;
use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;

class ContainerConfigurationService extends UpstreamContainerConfigurationService
{
    public function __construct(
        ContainerConfigurator $containerConfigurator,
        string $rootDirectory,
        protected string $upstreamRelativeRootPath
    ) {
        parent::__construct(
            $containerConfigurator,
            $rootDirectory
        );
    }

    protected function getPackageOrganizationDataSource(): ?PackageOrganizationDataSource
    {
        return new PackageOrganizationDataSource(
            $this->rootDirectory,
            $this->upstreamRelativeRootPath
        );
    }
}

Finally, downstream class PackageOrganizationDataSource contains the full path to both public and private packages:

namespace PoP\GraphQLAPIPRO\Config\Symplify\MonorepoBuilder\DataSources;

use PoP\PoP\Config\Symplify\MonorepoBuilder\DataSources\PackageOrganizationDataSource as UpstreamPackageOrganizationDataSource;

class PackageOrganizationDataSource extends UpstreamPackageOrganizationDataSource
{
    public function __construct(
        string $rootDir,
        protected string $upstreamRelativeRootPath
    ) {
        parent::__construct($rootDir);
    }

    public function getRelativePackagePaths(): array
    {
        return array_merge(
            // Public packages – Prepend them with "submodules/PoP/"
            array_map(
                fn ($upstreamPackagePath) => $this->upstreamRelativeRootPath . '/' . $upstreamPackagePath,
                parent::getRelativePackagePaths()
            ),
            // Private packages
            [
                'packages',
                'plugins',
                'clients',
            ]
        );
    }
}
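
And a quick illustration of the merged result from the downstream class (again a hypothetical usage sketch; the root path is just an example):

// Hypothetical usage – paths are illustrative only
use PoP\GraphQLAPIPRO\Config\Symplify\MonorepoBuilder\DataSources\PackageOrganizationDataSource;

$dataSource = new PackageOrganizationDataSource(
    '/home/user/GraphQLAPI-PRO',
    'submodules/PoP'
);

var_export($dataSource->getRelativePackagePaths());
// array (
//   0 => 'submodules/PoP/packages',
//   1 => 'submodules/PoP/plugins',
//   2 => 'packages',
//   3 => 'plugins',
//   4 => 'clients',
// )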

Injecting The Configuration From PHP Into GitHub Actions

Monorepo builder offers command packages-json, which we can use to inject the package paths into the GitHub Actions workflow:

jobs:
  provide_data:
    steps:
      - id: output_data
        name: Calculate matrix for packages
        run: |
          echo "::set-output name=matrix::$(vendor/bin/monorepo-builder packages-json)"

    outputs:
      matrix: ${{ steps.output_data.outputs.matrix }}

This command produces a stringified JSON. In the workflow it must be converted to a JSON object via fromJson:

jobs:
  split_monorepo:
    needs: provide_data
    strategy:
      matrix:
        package: ${{ fromJson(needs.provide_data.outputs.matrix) }}

Unfortunately, command packages-json outputs the package names but not their paths, which works when all packages are under the same folder (such as packages/). It doesn’t work in our case, since public and private packages are located in different folders.

Fortunately, the Monorepo builder can be extended with custom PHP services. So I created a custom command package-entries-json (via class PackageEntriesJsonCommand) which does output the path to the package.
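
The command class itself is not shown in this article, but conceptually it boils down to JSON-encoding a list of name/path entries. Below is a minimal sketch, assuming a plain Symfony Console command and a hypothetical PackageEntriesProviderInterface service; the real class plugs into the Monorepo builder’s own command and package-discovery infrastructure, and the namespace used here is also an assumption:

namespace PoP\PoP\Extensions\Symplify\MonorepoBuilder\Command;

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

final class PackageEntriesJsonCommand extends Command
{
    protected static $defaultName = 'package-entries-json';

    public function __construct(
        // Hypothetical service returning entries like
        // ['name' => 'access-control', 'path' => 'layers/Engine/packages/access-control']
        private PackageEntriesProviderInterface $packageEntriesProvider,
    ) {
        parent::__construct();
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        // Print the stringified JSON, to be consumed by fromJson in the workflow
        $output->write(json_encode($this->packageEntriesProvider->provide()));

        return Command::SUCCESS;
    }
}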

The workflow was then updated with the new command:

run: |
  echo "::set-output name=matrix::$(vendor/bin/monorepo-builder package-entries-json)"

Executed on the public monorepo, it produces the following packages (among many others):

[
  {
    "name": "graphql-api-for-wp",
    "path": "layers/GraphQLAPIForWP/plugins/graphql-api-for-wp"
  },
  {
    "name": "extension-demo",
    "path": "layers/GraphQLAPIForWP/plugins/extension-demo"
  },
  {
    "name": "access-control",
    "path": "layers/Engine/packages/access-control"
  },
  {
    "name": "api",
    "path": "layers/API/packages/api"
  },
  {
    "name": "api-clients",
    "path": "layers/API/packages/api-clients"
  }
]

Executed on the private monorepo, it produces the following entries (among many others):

[
  {
    "name": "graphql-api-for-wp",
    "path": "submodules/PoP/layers/GraphQLAPIForWP/plugins/graphql-api-for-wp"
  },
  {
    "name": "extension-demo",
    "path": "submodules/PoP/layers/GraphQLAPIForWP/plugins/extension-demo"
  },
  {
    "name": "access-control",
    "path": "submodules/PoP/layers/Engine/packages/access-control"
  },
  {
    "name": "api",
    "path": "submodules/PoP/layers/API/packages/api"
  },
  {
    "name": "api-clients",
    "path": "submodules/PoP/layers/API/packages/api-clients"
  },
  {
    "name": "graphql-api-pro",
    "path": "layers/GraphQLAPIForWP/plugins/graphql-api-pro"
  },
  {
    "name": "convert-case-directives",
    "path": "layers/Schema/packages/convert-case-directives"
  },
  {
    "name": "export-directive",
    "path": "layers/GraphQLByPoP/packages/export-directive"
  }
]

As can be appreciated, it works well: the configuration for the downstream monorepo contains both public and private packages, and the paths to the public ones were prepended with “submodules/PoP”.

Skipping Public Packages In The Downstream Monorepo

So far, the downstream monorepo has included both public and private packages in its configuration. However, not every command needs to be executed on the public packages.

Take static analysis, for instance. The public monorepo already executes PHPStan on all public packages via workflow phpstan.yml, as shown in this run. If the downstream monorepo runs PHPStan on the public packages once again, it is a waste of computing time. The phpstan.yml workflow, then, needs to run on the private packages only.

That means that depending on the command to execute in the downstream repo, we may want to either include both public and private packages, or only private ones.

To include the public packages or not in the downstream configuration, we adapt the downstream class PackageOrganizationDataSource to check this condition via input $includeUpstreamPackages:

namespace PoP\GraphQLAPIPRO\Config\Symplify\MonorepoBuilder\DataSources;

use PoP\PoP\Config\Symplify\MonorepoBuilder\DataSources\PackageOrganizationDataSource as UpstreamPackageOrganizationDataSource;

class PackageOrganizationDataSource extends UpstreamPackageOrganizationDataSource
{
    public function __construct(
        string $rootDir,
        protected string $upstreamRelativeRootPath,
        protected bool $includeUpstreamPackages
    ) {
        parent::__construct($rootDir);
    }

    public function getRelativePackagePaths(): array
    {
        return array_merge(
            // Add the public packages?
            $this->includeUpstreamPackages ?
                // Public packages – Prepend them with "submodules/PoP/"
                array_map(
                    fn ($upstreamPackagePath) => $this->upstreamRelativeRootPath . '/' . $upstreamPackagePath,
                    parent::getRelativePackagePaths()
                ) : [],
            // Private packages
            [
                'packages',
                'plugins',
                'clients',
            ]
        );
    }
}

Next, we need to provide value $includeUpstreamPackages as either true or false depending on the command to execute.

We can do this by replacing config file monorepo-builder.php with two other config files: monorepo-builder-with-upstream-packages.php (which passes $includeUpstreamPackages => true) and monorepo-builder-without-upstream-packages.php (which passes $includeUpstreamPackages => false):

// File monorepo-builder-without-upstream-packages.php
use PoP\GraphQLAPIPRO\Config\Symplify\MonorepoBuilder\Configurators\ContainerConfigurationService;
use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;

return static function (ContainerConfigurator $containerConfigurator): void {
    $containerConfigurationService = new ContainerConfigurationService(
        $containerConfigurator,
        __DIR__,
        'submodules/PoP',
        false, // This is $includeUpstreamPackages
    );
    $containerConfigurationService->configureContainer();
};
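
For completeness, the companion file monorepo-builder-with-upstream-packages.php is identical, except that it passes true for $includeUpstreamPackages:

// File monorepo-builder-with-upstream-packages.php
use PoP\GraphQLAPIPRO\Config\Symplify\MonorepoBuilder\Configurators\ContainerConfigurationService;
use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;

return static function (ContainerConfigurator $containerConfigurator): void {
    $containerConfigurationService = new ContainerConfigurationService(
        $containerConfigurator,
        __DIR__,
        'submodules/PoP',
        true, // This is $includeUpstreamPackages
    );
    $containerConfigurationService->configureContainer();
};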

We then update ContainerConfigurationService to receive parameter $includeUpstreamPackages and pass it along to PackageOrganizationDataSource:

namespace PoP\GraphQLAPIPRO\Config\Symplify\MonorepoBuilder\Configurators;

use PoP\PoP\Config\Symplify\MonorepoBuilder\Configurators\ContainerConfigurationService as UpstreamContainerConfigurationService;
use PoP\GraphQLAPIPRO\Config\Symplify\MonorepoBuilder\DataSources\PackageOrganizationDataSource;
use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;

class ContainerConfigurationService extends UpstreamContainerConfigurationService
{
    public function __construct(
        ContainerConfigurator $containerConfigurator,
        string $rootDirectory,
        protected string $upstreamRelativeRootPath,
        protected bool $includeUpstreamPackages,
    ) {
        parent::__construct(
            $containerConfigurator,
            $rootDirectory,
        );
    }

    protected function getPackageOrganizationDataSource(): ?PackageOrganizationDataSource
    {
        return new PackageOrganizationDataSource(
            $this->rootDirectory,
            $this->upstreamRelativeRootPath,
            $this->includeUpstreamPackages,
        );
    }
}

Next, we should invoke the monorepo-builder with either config file, by providing the --config option:

jobs:
  provide_data:
    steps:
      - id: output_data
        name: Calculate matrix for packages
        run: |
          echo "::set-output name=matrix::$(vendor/bin/monorepo-builder package-entries-json --config=monorepo-builder-without-upstream-packages.php)"

However, as we saw earlier on, we want to keep the GitHub Actions workflows in the upstream monorepo as the single source of truth, and they clearly do not need these changes.

The solution I found to this issue is to always provide a --config option in the upstream repo, with each command getting its own config file, such as the validate command receiving the validate.php config file:

- name: Run validation
  run: vendor/bin/monorepo-builder validate --config=config/monorepo-builder/validate.php

Now, there are no config files in the upstream monorepo, since it doesn’t need them. But it will not break, because the Monorepo builder checks if the config file exists and, if it does not, it loads the default config file instead. So we will either override the config, or nothing happens.

The downstream repo does provide the config files for each command, specifying whether to add the upstream packages or not (as a side note, this is another example of how the multi-monorepo leaks):

// File config/monorepo-builder/validate.php
return require_once __DIR__ . '/monorepo-builder-with-upstream-packages.php';
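
Similarly, a workflow that should only cover the private packages (static analysis, for instance, as discussed above) can point its monorepo-builder invocation to a config file that skips the upstream packages; the file name below is an assumption for illustration:

// File config/monorepo-builder/phpstan.php (hypothetical file name)
return require_once __DIR__ . '/monorepo-builder-without-upstream-packages.php';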

Overriding The Configuration

We are almost done. By now the downstream monorepo can override the configuration from the upstream monorepo. So all that’s left to do is to provide the new configuration.

In class PluginDataSource I override the configuration of which WordPress plugins must be generated, providing the PRO ones instead:

namespace PoP\GraphQLAPIPRO\Config\Symplify\MonorepoBuilder\DataSources;

use PoP\PoP\Config\Symplify\MonorepoBuilder\DataSources\PluginDataSource as UpstreamPluginDataSource;

class PluginDataSource extends UpstreamPluginDataSource
{
    public function getPluginConfigEntries(): array
    {
        return [
            // GraphQL API PRO
            [
                'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-pro',
                'zip_file' => 'graphql-api-pro.zip',
                'main_file' => 'graphql-api-pro.php',
                'dist_repo_organization' => 'GraphQLAPI-PRO',
                'dist_repo_name' => 'graphql-api-pro-dist',
            ],
            // GraphQL API Extensions
            // Google Translate
            [
                'path' => 'layers/GraphQLAPIForWP/plugins/google-translate',
                'zip_file' => 'graphql-api-google-translate.zip',
                'main_file' => 'graphql-api-google-translate.php',
                'dist_repo_organization' => 'GraphQLAPI-PRO',
                'dist_repo_name' => 'graphql-api-google-translate-dist',
            ],
            // Events Manager
            [
                'path' => 'layers/GraphQLAPIForWP/plugins/events-manager',
                'zip_file' => 'graphql-api-events-manager.zip',
                'main_file' => 'graphql-api-events-manager.php',
                'dist_repo_organization' => 'GraphQLAPI-PRO',
                'dist_repo_name' => 'graphql-api-events-manager-dist',
            ],
        ];
    }
}

Creating a new release on GitHub will trigger the generate_plugins.yml workflow and generate the PRO plugins on my private monorepo:

Tadaaaaaaaa!

Conclusion

As always, there is no “best” solution, only solutions that may work better depending on the context. The multi-monorepo approach is not suitable for every kind of project or team. I believe the biggest beneficiaries are plugin creators who release public plugins to be upgraded to their PRO versions, and agencies customizing plugins for their clients.

In my case, I’m quite happy with this approach. It takes a bit of time and effort to get right, but it’s a one-off investment. Once the set-up is over, I can just focus on building my PRO plugins, and the time savings concerning project management can be huge.