My Webserver Setup
December 20, 2017


Hello World!

Hi! I decided to write about the webserver setup I finally settled on, and I’m pretty sure someone else will find it useful. So here you go. Oh, and this is my first blog post ever.

This post is inspired by Reyk Floeter’s blog entry #1.

Docker is amazing

Application containers are very administrator-friendly, at least IMO. They’re great for a number of reasons:

Almost everything great comes with a downside, so here are some cons that come to mind:

Having the ability to abstract all parts required for an application is very powerful. What I like so much about it: I can try different solutions to a problem, and don’t have to remember all the steps required to undo my experiments. An example:

Assume you want to try out a new PHP project that seems promising. Either you don’t use PHP yet, or you have a setup that’s built for the apps you’re currently running. Either way, when you’re done experimenting and decide not to use the new project, you have to clean up: remove unused files and configuration. Maybe you installed a few new packages via apt and modules for Nginx. You also have to make sure that you don’t break your existing projects. Installing a new PHP release doesn’t always go smoothly, and older PHP applications have to deal with deprecated functions. With Docker, you just remove the container and image you’ve been playing with. In most cases, your system is clean again. And you can’t break your currently running apps, because they are isolated in their own containers.
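To make this concrete, here’s roughly what such a throwaway experiment looks like on the command line (the image name and port are placeholders, not my actual setup):

# Start the project in a throwaway container, published only on localhost.
docker run -d --name php-experiment -p 127.0.0.1:10080:80 some/php-project

# Poke at it for a while, and once you decide against it:
docker stop php-experiment
docker rm php-experiment
docker rmi some/php-project

# The host system is untouched: no stray packages, no leftover configuration.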

Introducing My Web Deployment Setup With Docker

This is how I deploy different web applications on my VPS:

            +-------------------------+                       
            | Other Peoples Computers |                       
            +-------------------------+                       
                         |                                  
                         |                                  
                   Port 80, 443                             
                  +------|------+                           
                  | Caddy Proxy | -> Auto Let's Encrypt \o/
                  |             |    (for all services)
                  +-------------+                           
                    /--- | ---\                               
                 /--     |     --\                           
              /--        |        --\                       
          /---           |           ---\                   
 +-------------+  +------|------+  +-------------+          
 |  Container  |  |  Container  |  |  Container  |          
 |      A      |  |      B      |  |      C      |          
 +-------------+  +-------------+  +-------------+          

 # Every container has one or more ports that are exposed to Caddy.
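As a rough sketch of how a container ends up behind Caddy: its service port gets published on the loopback interface only, so Caddy can reach it but the outside world can’t (the image name is a placeholder, the internal port is assumed to be 80, and 10001 matches the example Caddyfile further down):

# Publish the container's web port (assumed to be 80 here) on 127.0.0.1 only.
docker run -d --name blog -p 127.0.0.1:10001:80 my/hugo-blog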

A small (and soon-to-be-incomplete) list of services I have running:

Some of the above images I’ve built myself, based on the Alpine image: a Go application that needs certain C runtime libraries, for example. The other images are all vanilla from the application vendors, like GitLab and NextCloud (with the :fpm tag). You can even use the scratch image, which contains nothing at all; that way you run static binaries directly.
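For the scratch case, a Dockerfile can be as small as this (a sketch with a hypothetical, statically linked binary called myapp; nothing else ends up in the image):

FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]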

For security-by-obscurity reasons I’m trying to avoid actual hostnames and ports here.

(Brief) Explanation

I have a Caddy webserver running, which proxies all requests to my server to the different containers, based on the hostname. Caddy is the only part of my setup that’s not dockerised. I don’t mind though, because I think a 100% purist solution is never the answer. Running it outside Docker saves me some back-and-forth configuration of ports, links and networks.
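For reference, starting the non-dockerised Caddy boils down to something like this (a sketch assuming the pre-2.0 Caddy command-line flags from that era; the path and e-mail address are placeholders):

# -conf points at the Caddyfile, -agree accepts the Let's Encrypt
# terms and -email sets the ACME account address.
caddy -conf /etc/caddy/Caddyfile -agree -email you@example.com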

Why Caddy? Because Caddy is really awesome!

Caddy is incredibly convenient. Primarily it’s a webserver, and like many other webservers it can also work as a transparent proxy. That means it takes requests and forwards them to different backends, based on some criteria. The criteria in my case are the hostnames of the incoming requests. So, why Caddy and not one of the others?
Here’s another list.

  1. Caddy has this outstanding feature I never want to miss again: it automatically retrieves Let’s Encrypt certificates for you when you serve a web application for the first time under a specific hostname. I need to emphasize how great this is: it’s really, really great. Don’t get me wrong, Let’s Encrypt made certificate handling much easier than before, but not as easy as it could be (you still have to select and configure a client and handle the certificates in your applications). Caddy does almost everything by itself. You register the hostname via DNS, tell Caddy to serve stuff under that hostname and you’ve got yourself a new web application, secured by modern TLS. I know this setup might not be to everybody’s liking, but I sure am happy with it.
  2. Caddy’s configuration can be kept very minimal. In my case, it’s a few repeating lines, telling Caddy to map hostnames to a few backends via different ports. See below for an example. No extensive includes, no explaining where to find the crt, key and what-have-you files.
  3. Caddy’s written in Go. I like Go.

blog.ls42.de {
    proxy / localhost:10001 {
        # Docker publishes the internal port of the hugo blog
        # to 127.0.0.1:10001 so Caddy can access the service
        # inside the container.
        transparent
    }
    gzip
}
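With more services, the whole file stays just as boring: the same block repeated for each hostname, pointing at a different local port (the hostnames and ports below are made up, see the note about obscurity above):

git.example.org {
    proxy / localhost:10002 {
        transparent
    }
    gzip
}

cloud.example.org {
    proxy / localhost:10003 {
        transparent
    }
    gzip
}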

That’s it

This concludes my introduction. If you’re interested in a complete setup guide or have some remarks, please send me an e-mail or comment on lobste.rs. I’d also really appreciate feedback on my first blog post in general. My next posts may be about

Thanks!