Yes! A month ago, GitHub announced that we can finally have free private repositories. I was very excited about this news. There are a few projects I would have preferred to keep private, but I was too cheap to pay for private repositories. Problem solved… Or is it?
As soon as I made some repositories private, I realized that doing so came at a cost:
NOBODY CAN SEE YOUR REPO (DUH!)… BUT NO MORE FREE TRAVIS CI, NO OTHER COOL OPEN-SOURCE TOOLS :(
As I rely heavily on those tools to build and deploy, I had 3 options:
- Start paying for Travis CI; (ahaha… no!)
- Revert to a public repository and keep my deployment pipeline;
- Move on to a new deployment pipeline.
Me being me, I decided to just move away from Travis CI and the likes and find an alternative way to build and deploy my software. For the purpose of this post, I am going to use the building and deploying of this blog as an example. Inception much?
Build using Multi-Stage Dockerfile
This blog is built using Hugo. In short, you have a few Markdown files, you sprinkle a bit of theme over it and you run
hugo -> no seriously, that is the only command needed ;-)
Using that command, Hugo generates the HTML for the website. I chose Hugo because it is simple and I can focus on content, rather than on HTML & my archenemy CSS.
On the build and deploy side, I use Docker to bundle my website into a runnable container. In my Dockerfile I use Multi-Stage Builds, which dramatically reduce the complexity of the build pipeline. Not a lot of people seem to know about Multi-Stage Builds, so here is the Dockerfile I use for this website.
It has 2 stages:
- Use a Hugo base image to build the HTML of my website.
- Use an NGINX base image to serve the HTML.
# Dockerfile
# Stage 1: Copy the content of the blog to the container and build it using Hugo.
FROM jojomi/hugo as builder
WORKDIR /builddir
COPY blog .
RUN hugo --minify
# Stage 2: Copy the HTML built in the Hugo container into an NGINX based container.
FROM nginx:alpine
COPY --from=builder /builddir/public/ /www/data/
Using this multi-stage Dockerfile, I only need to run 1 command to build and tag my website.
docker build -t repo/imagename .
After building, I can use docker push to push the Docker image to a Docker registry.
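For reference, the push is just the standard command, using the repo/imagename tag from the build above as a placeholder for your own Docker Hub repository:
# Log in once, then push the tagged image to the registry
docker login
docker push repo/imagename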
The Build and Deployment Pipeline
For my pipeline, I only need a couple of components:
- A repository to host my code / Dockerfile.
- A way to automatically execute docker build when I push code to Git.
- A location to run the containers and automatically pull new versions.
Repo | Build | Deploy / Run |
---|---|---|
GitHub Private Repository | Docker Hub Builds | Linux VM on Azure |
Building using Docker Hub Builds
NOTE! This is no longer free! I’m reworking this at the moment and will update this blog post to reflect that change.
Docker Hub has the capability of building containers for you, using Automated Builds. Unlike Travis CI, Hub Builds doesn’t require you to pay for building from Private Repos.
The capabilities are roughly the same, with the added benefit that, by linking your code to the Docker Hub repository, you do not need to explicitly docker tag or docker push to put the image in your Docker registry.
Builds run on Git Push by default, which is exactly what I wanted. If you do not want this, there are some filter capabilities available for you.
If you do not want to run your code as a Docker container, you can still use Docker Hub's build capabilities. A Docker build is essentially just a shell script executing. Nobody is stopping you from pushing files to an FTP server during the build. The example below uses git-ftp, a cool tool which only pushes changed files to your FTP server.
From the official docs:
If you use Git and you need to upload your files to an FTP server, Git-ftp can save you some time and bandwidth by uploading only those files that changed since the last upload. It keeps track of the uploaded files by storing the commit id in a log file on the server. It uses Git to determine which local files have changed.
source – https://github.com/git-ftp/git-ftp
# Building HTML and pushing to FTP
# Stage 1: Building the website
FROM jojomi/hugo as builder
WORKDIR /builddir
COPY blog .
RUN hugo --minify
# Stage 2: Copy the entire Git project and the HTML built in stage 1, then push it to the FTP server.
# The credentials are expected as build arguments supplied by your build tool; never put credentials in your Dockerfile!
FROM dotsunited/git-ftp
ARG FTP_USER
ARG FTP_PASSWORD
ARG FTP_HOST
COPY . .
COPY --from=builder /builddir/public/ /source
RUN git ftp push -v --syncroot source/ --user $FTP_USER --passwd $FTP_PASSWORD $FTP_HOST
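To try this locally, you could pass the credentials as build arguments; this is just a sketch, and the values and image tag below are placeholders you would replace with your own:
# Build locally, supplying the FTP credentials as build arguments
docker build \
  --build-arg FTP_USER=myuser \
  --build-arg FTP_PASSWORD=mypassword \
  --build-arg FTP_HOST=ftp.example.com \
  -t repo/blog-ftp-push .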
You are technically creating a useless image now… but hey, you are deploying your code to the FTP server! Thanks for the build resources Docker Hub! ◕‿◕
Automatically updating the running Docker container
I am lucky enough to have some free Azure resources, so I usually deploy everything on a Linux VM in Azure.
To automatically update my containers, I use another container called Watchtower. The purpose of Watchtower is pretty simple:
- After a fixed delay (default 5 minutes), look at all running containers and their images
- Check on Docker Hub (or another registry) if there is a newer image version available.
- If there is a new version, stop the current container and start a new container with the latest version. Run variables are copied as well!
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower
If your Docker image is private, you need to supply your Docker Hub credentials to the Watchtower container. See the documentation for the ways you can achieve this.
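As a sketch of one option, assuming the REPO_USER and REPO_PASS environment variables mentioned in the Watchtower documentation (check the docs for the current names and alternatives such as mounting your Docker config):
# Pass registry credentials to Watchtower via environment variables (placeholders)
docker run -d --name watchtower \
  -e REPO_USER=mydockerhubuser \
  -e REPO_PASS=mydockerhubpassword \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower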
This allows continuous deployment with little to no effort. #awesomesauce
For the IT-blog my wife is hosting, I chose a slightly different approach. Instead of hosting it on a Linux VM with Watchtower we used Azure Websites which allows continuous deployment using Docker Containers. You can read more about this in an upcoming follow-up post!