This guide walks you through setting up multiple Ghost blogs on a single machine. Docker is used to containerize the various components and websites, and Dropbox is used to sync the live websites and store nightly backups.

Initial Setup

Provision Server

I like DigitalOcean as a hosting provider. They call their virtual machines Droplets, and you can follow their guide on how to create a new droplet. I run Ubuntu 18.04 on my droplet, and you can follow this guide on how to set up your server after you provision it.

You can start with their smallest droplet (1 vCPU and 1 GB of RAM for $5/month). This is enough to run all our core containers along with two Ghost blogs. Droplets are easy to resize, so when you need to add more sites, you can scale up.

Install Docker

Docker is the containerization tool we will use to separate the different components of our server and to isolate the different websites it will host.

Install Docker by following Docker's installation steps available here. After installing, go through the post-installation steps.

Install Docker-Compose

Docker-compose is a tool for defining and running multi-container Docker applications.

Install docker-compose by following this guide.

Creating Containers

First, create a Git repository with the following file structure:

├── core
│   └── docker-compose.yml
└── websites
    ├── awesome-blog
    │   └── docker-compose.yml
    └── cool-blog
        └── docker-compose.yml

This is an effective way to keep your compose files saved in case you move servers and want to bring your containers back up quickly. You can even add a README.md file to keep notes on which sites you have running.
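If you want to scaffold this layout from the command line, something like the following works (the folder names are just the examples from this guide):

```shell
# Create the repository layout described above
mkdir -p core websites/awesome-blog websites/cool-blog
touch core/docker-compose.yml \
      websites/awesome-blog/docker-compose.yml \
      websites/cool-blog/docker-compose.yml
```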

Core Containers

Open core/docker-compose.yml, where we will configure the following three core services:

1. core-proxy

A container based on the image dannycarrera/nginx-proxy (github) which includes the following components:

  • nginx: A reverse proxy that listens for incoming requests and forwards them to the correct container based on the request's URL
  • docker-gen: A file generator that runs when containers are brought up or taken down. For our use case, it watches for containers that set the environment variable VIRTUAL_HOST and generates configuration files that tell nginx which container should receive requests for each URL

Note: This image is forked from jwilder/nginx-proxy (github). The fork adds support for virtual host forwarding via the VIRTUAL_HOST_ALIAS environment variable (Ex. forwarding www.dannycarrera.com to dannycarrera.com)

It seems jwilder is no longer maintaining this repo very actively. I've put in a PR for VIRTUAL_HOST_ALIAS, but until it gets merged, go ahead and use my fork. If you don't need virtual host aliases, you may use the original image.

2. core-letsencrypt

A container based on the image jrcs/letsencrypt-nginx-proxy-companion (github) which watches for containers that have set the environment variable LETSENCRYPT_HOST and automates the generation and renewal of SSL certificates from Let's Encrypt.

3. core-dropbox

A container based on the image otherguy/dropbox (github), which is forked from janeczku/dropbox. The original image doesn't seem to work any longer (issue) and its owner hasn't updated it in quite a while. The otherguy fork fixed some issues and is working great.

Note: You must provide the environment variables DROPBOX_UID and DROPBOX_GID, and the same user ID must be used to run your website containers; otherwise Dropbox will not be able to sync and back up your files.
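The compose files in this guide use 1000 for both values, which is typically the ID of the first user created on an Ubuntu install. You can check your own IDs like this:

```shell
# Print the numeric user and group IDs of the current user;
# substitute these for DROPBOX_UID / DROPBOX_GID if they differ from 1000
id -u
id -g
```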

version: '3.7'

services:
  nginx-proxy:
    image: dannycarrera/nginx-proxy
    container_name: core-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ~/data/nginx/certs:/etc/nginx/certs
      - ~/data/nginx/vhost.d:/etc/nginx/vhost.d
      - ~/data/nginx/html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - webproxy

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: core-letsencrypt
    restart: always
    environment:
      NGINX_PROXY_CONTAINER: core-proxy
    volumes:
      - ~/data/nginx/certs:/etc/nginx/certs
      - ~/data/nginx/vhost.d:/etc/nginx/vhost.d
      - ~/data/nginx/html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

  dropbox:
    image: "otherguy/dropbox"
    container_name: core-dropbox
    restart: always
    environment:
      - DROPBOX_UID=1000
      - DROPBOX_GID=1000
    volumes:
      - type: bind
        source: ~/data/dropbox/config
        target: /opt/dropbox/.dropbox
      - type: bind
        source: ~/data/dropbox/Dropbox
        target: /opt/dropbox/Dropbox


networks:
    webproxy:
      external: true

Before starting these containers, we need to create the external network webproxy by running docker network create webproxy. Once that's done, start the containers as daemons by running docker-compose up -d from the core folder.

You can confirm the containers are running by running docker stats. You should see something like the following:

CONTAINER ID        NAME                           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
9ae825505104        core-dropbox                   0.32%               185.7MiB / 1.947GiB   9.32%               62.1MB / 238MB      3.56GB / 1.11GB     94
0008c6631c87        core-letsencrypt               0.19%               7.289MiB / 1.947GiB   0.37%               8.17MB / 0B         264MB / 8.19kB      11
172b9ff719ec        core-proxy                     0.19%               10.2MiB / 1.947GiB    0.51%               800MB / 793MB       82.7MB / 77.8kB     19

Website Containers

Now navigate to the websites folder and create a folder named after the website you will be hosting. In this guide, we will continue with the website name awesome-blog. In this folder, create a new docker-compose.yml file where we will configure two services:

1. ghost

A container based on the image ghost (github), which runs the Ghost application. The following describes the configuration for this service:

  • user: The same user which was set for the core-dropbox service
  • environment:
    • url: the canonical url for the website
    • VIRTUAL_HOST: The host for which nginx will forward requests to this container
    • VIRTUAL_HOST_ALIAS: Any hosts that will redirect (301) to the VIRTUAL_HOST
    • LETSENCRYPT_HOST: All hosts that require an SSL-encrypted connection (should include VIRTUAL_HOST and all hosts specified in VIRTUAL_HOST_ALIAS)
  • volumes: bind the host source directory (which is a folder we will create in our dropbox folder) to the ghost content directory in the container
  • networks: set this to webproxy, the same network as the core containers

2. backupper

A container based on the image dannycarrera/nightly-web-backupper (github), which sets up a cron job that zips up the ghost content folder and uploads it to Dropbox, keeping the 3 most recent backups at a time.

Here is a description of the environment variables:

  • CRON_TIMER - The timer for your backup, in the cron format. Defaults to 30 2 * * *
  • USER_ID - The user id that will create the archives. Ex. 1000
  • GROUP_ID - The group id that will be assigned to the created user. Ex. 1000
  • WEBSITE_NAME - The name that will prepend the archives names

Two volumes need to be bound as well. The first points to the live website. The second points to the backup folder.
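I haven't reproduced the image's internals here, but the job it runs each night amounts to something like the following sketch (the local paths and tar format are illustrative, not the image's exact behavior):

```shell
# Sketch of a nightly backup: archive the live folder with a timestamp,
# then prune so only the 3 newest archives remain.
WEBSITE_NAME=awesome-blog
LIVE=live            # would be /var/lib/dropbox/live in the container
BACKUPS=backups      # would be /var/lib/dropbox/backups in the container
mkdir -p "$LIVE" "$BACKUPS"
echo "demo content" > "$LIVE/post.txt"

# Archive the live site into the backups folder
tar -czf "$BACKUPS/${WEBSITE_NAME}-$(date +%Y%m%d%H%M%S).tar.gz" "$LIVE"

# Keep only the 3 most recent archives (list newest first, delete the rest)
ls -1t "$BACKUPS" | tail -n +4 | while read -r f; do
  rm -- "$BACKUPS/$f"
done
```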

version: '3.7'
services: 
    
    ghost:
        image: "ghost"
        user: "1000"
        environment: 
            - url=https://awesome-blog.com
            - VIRTUAL_HOST=awesome-blog.com
            - VIRTUAL_HOST_ALIAS=www.awesome-blog.com
            - LETSENCRYPT_HOST=awesome-blog.com,www.awesome-blog.com
            - LETSENCRYPT_EMAIL=admin@awesome-blog.com
        volumes:
            - type: bind
              source: ~/data/dropbox/Dropbox/websites/awesome-blog/live
              target: /var/lib/ghost/content
        networks:
            - webproxy

    backupper:
        image: "dannycarrera/nightly-web-backupper"
        environment:
            - CRON_TIMER=30 7 * * *
            - USER_ID=1000
            - GROUP_ID=1000
            - WEBSITE_NAME=awesome-blog
        volumes:
            - type: bind
              source: ~/data/dropbox/Dropbox/websites/awesome-blog/live
              target: /var/lib/dropbox/live
            - type: bind
              source: ~/data/dropbox/Dropbox/websites/awesome-blog/backups
              target: /var/lib/dropbox/backups


networks:
    webproxy:
        external: true

Before starting these containers, we need to create two folders on the host machine within the Dropbox folder. You can choose your own file structure here; just be sure to keep the folders consistent between the two containers. In this example, we need to create the directories ~/data/dropbox/Dropbox/websites/awesome-blog/live and ~/data/dropbox/Dropbox/websites/awesome-blog/backups.
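For the directory structure used in this example:

```shell
# Create the live and backup folders inside the synced Dropbox directory
mkdir -p ~/data/dropbox/Dropbox/websites/awesome-blog/live
mkdir -p ~/data/dropbox/Dropbox/websites/awesome-blog/backups
```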

Once the folders are created, start the containers by running docker-compose up -d. If all goes well, you will see 2 more containers: awesome-blog_ghost_1 and awesome-blog_backupper_1.

Post Configuration Check Up

You can now navigate to your newly hosted blog in a browser! If you get a 503 error, just wait a couple of minutes for core-letsencrypt to procure and set up the SSL certificates.

You can also take a look at your Dropbox account and see the synced ghost content files. You won't see any zipped backups until the backup job first runs at its scheduled time.

Adding New Websites

You can now easily add more sites by copying the website docker-compose.yml file and changing the configs to match the new site.
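For example, assuming the repository layout from earlier (new-blog is a made-up name):

```shell
# Start a new site from an existing one's compose file
mkdir -p websites/awesome-blog
touch websites/awesome-blog/docker-compose.yml   # placeholder for this sketch
cp -r websites/awesome-blog websites/new-blog
# Then edit websites/new-blog/docker-compose.yml: update url, VIRTUAL_HOST,
# LETSENCRYPT_HOST, WEBSITE_NAME, and the bind-mount paths
```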

Note: You can add other web platforms too, such as WordPress, without changing anything in your core containers. All you need to do is add the VIRTUAL_HOST and LETSENCRYPT_HOST environment variables to the new container, and core-proxy along with core-letsencrypt will automatically set the proper configs.
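As a sketch only (the domain is made up, and a real WordPress site would also need a database service and the WORDPRESS_DB_* variables), a minimal compose file for a WordPress site would look like this:

```yaml
version: '3.7'
services:
    wordpress:
        image: "wordpress"
        environment:
            - VIRTUAL_HOST=my-wordpress-site.com
            - LETSENCRYPT_HOST=my-wordpress-site.com
            - LETSENCRYPT_EMAIL=admin@my-wordpress-site.com
        networks:
            - webproxy

networks:
    webproxy:
        external: true
```

Because the proxy and Let's Encrypt companion key off environment variables rather than anything platform-specific, nothing in core/docker-compose.yml changes.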

Conclusion

I hope this guide thoroughly walked you through the process of hosting multiple websites on a single machine. My goal was to create the simplest and most automated way of adding multiple sites, while keeping server costs low by only hosting on a single machine.

Thanks for reading!