How to Set Up a Reverse NGINX Proxy on Alibaba Cloud

This article was created in partnership with Alibaba Cloud. Thank you for supporting the partners who make SitePoint possible.

Think you got a better tip for making the best use of Alibaba Cloud services? Tell us about it and go in for your chance to win a Macbook Pro (plus other cool stuff). Find out more here.

Need to serve many websites from a single Linux box, optimizing resources, and automating the site launch process? Let’s get serious then, and set up a production-ready environment using Ubuntu, NGINX, and Docker — all of it on Alibaba Cloud.

This is a somewhat advanced tutorial, and we’ll assume some knowledge of networking, server administration, and software containers.

Understanding the Scenario

If you are looking at this guide, chances are that you need to manage a cluster of servers, or an increasing number of websites — if not both — and are looking at what your options are for a secure, performant, and flexible environment. Well then, you came to the right place!

Why a Reverse Proxy

In a nutshell, a reverse proxy takes a request from a client (normally from the Internet), forwards it to a server that can fulfill it (normally on an Intranet), and finally returns the server’s response back to the client.

[Image: reverse proxy diagram]

Those making requests to the proxy may not be aware of the internal network.

It is, in a way, similar to a load balancer — but implementing a load balancer only makes sense when you have multiple servers. You can deploy a reverse proxy with just one web server, and this can be particularly useful when there are different configuration requirements behind those end servers. So the reverse proxy is the “public face” sitting at the edge of the app’s network, handling all of the requests.

There are some benefits to this approach:

Performance. A number of web acceleration techniques can be implemented, including:

Compression: compressing server responses before returning them to the client reduces bandwidth.
SSL termination: decrypting requests and encrypting responses on the proxy frees up resources on the back end while securing the connection.
Caching: returning stored copies of content when the same request is placed by another client decreases response time and load on the back-end server.

Security. Malicious clients cannot directly access your web servers, with the proxy effectively acting as an additional defense; and the number of connections can be limited, minimizing the impact of distributed denial-of-service (DDoS) attacks.
Flexibility. A single URL can be the access point to multiple servers, regardless of the structure of the network behind them. This also allows requests to be distributed, maximizing speed and preventing overload. Clients also only get to know the reverse proxy’s IP address, so you can transparently change the configuration for your back-end as it better suits your traffic or architecture needs.
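To make this concrete, a manually configured reverse proxy in NGINX boils down to a server block that forwards requests to an upstream server. The following is a minimal sketch; the domain, back-end address, and cache zone are hypothetical placeholders, not values from this guide:

```nginx
# Minimal reverse-proxy sketch. example.com and app-backend:8080 are
# hypothetical placeholders for your public domain and internal server.
proxy_cache_path /var/cache/nginx keys_zone=appcache:10m;

server {
    listen 80;
    server_name example.com;

    location / {
        # Forward requests to the back-end server on the internal network
        proxy_pass http://app-backend:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Two of the acceleration techniques listed above
        gzip on;
        proxy_cache appcache;
    }
}
```

Multiply that by every site you host, plus SSL certificates and DNS, and it becomes clear why automating this configuration is attractive.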



NGINX Plus and NGINX are the best-in-class reverse-proxy solutions used by high-traffic websites such as Dropbox, Netflix, and Zynga. More than 287 million websites worldwide, including the majority of the 100,000 busiest websites, rely on NGINX Plus and NGINX to deliver their content quickly, reliably, and securely.

What Is a Reverse Proxy Server? by NGINX.

Apache is great and probably the best at what it's for — a multi-purpose web server, all batteries included. But for this very reason, it can be more resource-hungry as well. Also, Apache is multi-threaded even for single websites, which is not a bad thing in and of itself, especially on multi-core systems, but this can add a lot of CPU and memory overhead when hosting multiple sites.

Tweaking Apache for performance is possible, but it takes savvy and time. NGINX takes the opposite approach in its design — a minimalist web server that you need to tweak in order to add more features in, which to be fair, also takes some savvy. If the topic interests you, a well-established hosting company wrote an interesting piece comparing the two: Apache vs NGINX: Practical Considerations.

In short, NGINX beats Apache big time out of the box, both in performance and in resource consumption. For a single site you can choose not to care; on a cluster, or when hosting many sites, NGINX will surely make a difference.

Why Alibaba Cloud


Part of the Alibaba Group (AliExpress), Alibaba Cloud has been around for nearly a decade at the time of this writing. It is China's largest public cloud service provider, and the third largest in the world; so it isn't exactly a "new player" in the cloud services arena.

However, it wasn't until somewhat recently that Alibaba rebranded its Aliyun cloud services company, put together a fully comprehensive set of products and services, and decidedly stepped out of the Chinese and Asian markets to dive into the "Western world".

In our Side-by-Side Comparison of AWS, Google Cloud and Azure, we did a full review of what you can do in the cloud — elastic computing, database services, storage and CDN, application services, domains and websites, security, networking, analytics… and yes, Alibaba Cloud covers it all.

Deploying to Alibaba Cloud

You’ll need an Alibaba Cloud account before you can set up your Linux box. And the good news is that you can get one for free! For the full details see How to Sign Up and Get Started.

For this guide we'll use Ubuntu Linux, so you can follow the How to Set Up Your First Ubuntu 16.04 Server on Alibaba Cloud guide. Mind you, you could also use Debian or CentOS; in fact, you can go ahead and check 3 Ways to Set Up a Linux Server on Alibaba Cloud.

Once you get your Alibaba Cloud account and your Linux box is up and running, you’re good to go.

Hands On!
Installing NGINX

If we were to do the whole process ourselves, we would first need to install NGINX.

On Ubuntu we’d use the following commands:

$ sudo apt-get update
$ sudo apt-get install nginx

And you can check the status of the web server with systemctl:

$ systemctl status nginx

With systemctl you can also stop/start/restart the server, and enable/disable the launch of NGINX at boot time.
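For example, to restart the server and have it launch at boot (command names here are standard systemd usage, shown as an illustration):

```shell
# Restart NGINX after a configuration change
$ sudo systemctl restart nginx

# Start NGINX automatically at boot time
$ sudo systemctl enable nginx
```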

These are the two main directories of interest for us:

/var/www/html NGINX default website location.
/etc/nginx NGINX configuration directory.

Now, setting up a reverse proxy can be a somewhat cumbersome enterprise (and there are several guides that cover this process), as there are a number of network settings we need to go through, and files to update as we add sites/nodes behind our proxy.

That is, of course, unless we automate the whole thing using software containers…

Docker to the Rescue

Before we can start using software containers to automate our workflow, we first need to install Docker, which on Ubuntu is a fairly straightforward process.

Uninstall any old version:

$ sudo apt-get remove docker docker-engine

Install the latest Docker CE version:

$ sudo apt-get update
$ sudo apt-get install docker-ce

If you want to install a specific Docker version, or set up the Docker repository, see Get Docker CE for Ubuntu.

Setting the Network

Part of setting up a reverse proxy infrastructure is properly setting networking rules.

So let’s create a network with Docker:

$ docker network create nginx-proxy

And believe it or not, the network is set!


Now that we have Docker running on our Ubuntu server, we can streamline the process of installing, setting up the reverse proxy, and launching new sites.

Jason Wilder did an awesome job putting together a Docker image that does exactly that: jwilder/nginx-proxy, an automated NGINX proxy for Docker containers using docker-gen, which works perfectly out of the box.

Here’s how you can run the proxy:

$ docker run -d -p 80:80 -p 443:443 --name nginx-proxy --net nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy


we told Docker to run NGINX as a daemon/service (-d),
mapped the proxy's HTTP and HTTPS ports (80 and 443) to the ports of the web server(s) behind it (-p 80:80 -p 443:443),
named the NGINX proxy for future reference (--name nginx-proxy),
used the network we previously set up (--net nginx-proxy),
mapped the UNIX socket the Docker daemon is listening on, read-only, so the proxy can monitor containers (-v /var/run/docker.sock:/tmp/docker.sock:ro).

And believe it or not, the NGINX reverse proxy is up and running!
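If you prefer declarative configuration, the same `docker run` invocation can be captured in a Compose file. This is a sketch of my own, assuming Docker Compose is installed; the file name and layout are not from the original guide:

```yaml
# docker-compose.yml -- sketch equivalent of the docker run command above
version: "2"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - nginx-proxy

networks:
  nginx-proxy:
    external: true   # the network we created earlier with docker network create
```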

Launching Sites, Lots of Sites

Normally when using Docker you would launch a "containerized" application, be it a standard WordPress site, a specific Moodle configuration, or one of your own images with your own custom apps.

Launching a proxied container now is as easy as specifying the virtual domain with the VIRTUAL_HOST environment variable when starting it.

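For instance, launching a WordPress site behind the proxy could look like this; blog.example.com and the container name are hypothetical placeholders:

```shell
# Launch a site behind the proxy. VIRTUAL_HOST tells jwilder/nginx-proxy
# which hostname should route to this container.
$ docker run -d \
    --name blog \
    --net nginx-proxy \
    -e VIRTUAL_HOST=blog.example.com \
    wordpress
```

The proxy watches the Docker socket, detects the new container, and regenerates its NGINX configuration automatically; pointing the domain's DNS at your server's IP address is all that's left.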
The post How to Set Up a Reverse NGINX Proxy on Alibaba Cloud appeared first on SitePoint.
