Published on Mar 20, 2024

Why I built Slick Deploy

tl;dr: Despite a barrage of startups claiming to make it easy to deploy your app, using any of them is still a pain. Despite all the gains in computing power and the proliferation of container tech, the fundamentally simple problem of running a container still needs solving. Slick Deploy addresses this by letting you handle the initial server setup and taking over from there. It’s a CLI tool that lets you declaratively deploy containers with Caddy.

State of Infrastructure tooling

Throughout my career so far, I have worked with various tools, from Ansible to Kubernetes, and with everything from bare metal to dynos on Heroku. In most places, the tooling was in place before I joined the team. I have always been a fan of infrastructure as code, but the reality is that it can be challenging to get started with. K8s is a beast that is not easy to tame, and failure stories of its complexity causing severe downtime are not uncommon; but hey, I am not deploying at scale, I just want to run a container. Nomad is great, to be honest. Our friends at Zerodha tech use Nomad and have had a great experience with it, but it’s still a bit too much for my needs. I just want to run a container. And I want to do it easily.

Services at my disposal today

Let’s take a look at the most popular tools for deploying containers:

Service      | Configuration           | Storage               | Base Price     | Compute Price
Fly.io       | 1 shared CPU + 2 GB RAM | 3 GB free + $0.15/GB  | -              | $10
Fly.io       | 1 vCPU + 2 GB RAM       | 3 GB free + $0.15/GB  | -              | $31
Railway      | 1 vCPU + 2 GB RAM       | $0.25/GB              | $20/user/month | $40
Render       | 1 vCPU + 2 GB RAM       | $0.25/GB              | $19/user/month | $25
DigitalOcean | 1 vCPU + 2 GB RAM       | 50 GB free + $0.1/GB  | -              | $12

There are more fish in the ocean, but the picture is the same. Assume we want storage parity with DigitalOcean (50 GB) on each platform and add up compute, the per-user fee, and storage: Fly.io comes to close to $38, Railway to $72.5, and Render to $56.5. For a couple of side projects and a single user, i.e. me, that is a lot to pay.

All of these services charge a significant margin on top of what the base infrastructure costs. Sure, they have a business to run, and they provide a lot of services around the infra, but I don’t need all of that; I just want to run a container. And I want to do it quickly. While many of these services offer free tiers, the infra tooling companies of the ZIRP era are not the same as the tooling companies of today (which I think is good in a lot of ways), and you don’t have to look far for companies that killed their free plans; here’s the latest one at the time of writing this article.

Beyond the cost component of hosting projects, there’s the cost of complexity. I’ve tried all the services mentioned in the table above; in each case, the setup was more challenging than I would have liked and not as easy as advertised. Fly.io relies heavily on its CLI and has a custom DSL for its config in fly.toml; I tried deploying a Django project of mine, and despite being able to run the container locally, it didn’t work on their VM. While Railway and Render are easier to use, with a more approachable UI and better debugging tools, I still had failures deploying some apps. All of this is most likely a skill issue on my end, but then again, deploying a container should not be this hard.

If there were something that gave the experience of deploying to Heroku but with the flexibility of running it on my own hardware, that would be great. That’s what I set out to find, and ultimately to build.

Coolify

In straightforward terms, Coolify is a self-hosted Heroku: a versatile deployment platform that lets you launch and manage all kinds of applications on any server, whether your own hardware, a VPS, or a cloud provider. It offers a wide range of features, such as Git integration, automatic SSL certificates, database backups, webhooks, and a robust API, to streamline your deployment process.

It’s fully open-source, built by an indie hacker, and has a great community around it. It’s a great project, and I am a big fan of it. For most startups running fairly uncomplicated infrastructure, Coolify is all you need.

I used Coolify for four months until I decided to build Slick. While Coolify is fantastic, two things stood out that led me to take on the task of building Slick Deploy.

  1. Coolify uses Traefik (Caddy support is now in beta). Traefik is extremely powerful, but it’s complicated to understand, especially for someone like me who has never worked with it. I wanted something more straightforward, and Caddy is just that.
  2. Coolify is a full-fledged platform; it’s not just a deployment tool, and that’s great. Using Coolify felt like bringing a tank to a knife fight.

Kamal Deploy

Just around the time I was conceptualizing Slick, the folks at 37Signals launched Kamal Deploy. Basecamp has been in the process of exiting the cloud, and they have done it quite successfully. What does Kamal do? Their website has the best answer:

Kamal offers zero-downtime deploys, rolling restarts, asset bridging, remote builds, accessory service management, and everything else you need to deploy and manage your web app in production with Docker. Originally built for Rails apps, Kamal will work with any type of web app that can be containerized.

Kamal handles a lot of things well and basically allows you to run an entire deployment pipeline with a single command and minimal configuration. Despite its straightforward approach, it can still get pretty complex. Kamal used Traefik, which I didn’t like (skill issue, I know); it also had a lot of bells and whistles for building containers and managing envs from the CLI; and it worked by SSHing into the server from your machine. It’s a great tool, but I wanted something else.

Also, I already started enjoying the idea of building Slick. So I built it.

Building Slick

Slick Deploy is a CLI tool that declaratively deploys containers with Caddy. It’s a simple tool that does one thing and does it well (at least for me). I have always loved the simplicity of Caddy and the universality of Docker, and I have always wanted to learn Go. So, I built Slick. It provides the following features:

  • Zero-downtime deployment: Update your running application without interrupting service.
  • Support for private registries: Pull images from a private registry; you just need to provide the credentials in the ENV.
  • Easy configuration: Use simple YAML files to manage deployment settings.
  • Health checks: Ensure your application is running correctly before switching over.
  • Rollback capability: Quickly revert to the previous version if something goes wrong.

A typical Slick Deploy config looks like this:

app:
  name: "memos"
  image: "ghcr.io/usememos/memos"
  container_port: 5230
  env:
    - AWS_S3_ACCESS_KEY_ID
    - AWS_S3_BUCKET_NAME
    - AWS_S3_CUSTOM_DOMAIN
    - AWS_S3_ENDPOINT_URL
    - AWS_S3_REGION_NAME
    - AWS_S3_SECRET_ACCESS_KEY
  port_range:
    start: 8000
    end: 9000

caddy:
  admin_api: "http://localhost:2019"
  rules:
    - match: "memos.shivam.dev"
      reverse_proxy:
        - path: ""
          to: "http://localhost:{port}"

health_check:
  endpoint: "/health"
  timeout_seconds: 5

Slick has the following commands:

  • deploy: Deploy your application with zero downtime
  • help: Help about any command
  • logs: Tail and follow app logs
  • status: Get the status of your application

Why Caddy?

Caddy has a very simple config and provides automatic SSL certificates. One of the best parts of Caddy is the Admin API, which lets you update the Caddy config on the fly; this makes Caddy an excellent choice for a deployment tool. The one problem is that Caddy does not support Docker labels, so without something driving the Admin API you would be updating the config by hand on every deployment.
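
To make that concrete, here is a minimal sketch of driving that API from Go: Caddy’s admin endpoint accepts a complete JSON config via POST /load and applies it without a restart. This is an illustration of the Admin API, not Slick’s actual code, and the config file name is made up.

package main

import (
    "bytes"
    "fmt"
    "net/http"
    "os"
)

// loadCaddyConfig POSTs a complete JSON config to Caddy's admin API;
// the /load endpoint swaps in the new config without restarting Caddy.
func loadCaddyConfig(adminAPI string, config []byte) error {
    resp, err := http.Post(adminAPI+"/load", "application/json", bytes.NewReader(config))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("caddy admin API returned %s", resp.Status)
    }
    return nil
}

func main() {
    config, err := os.ReadFile("caddy.json") // hypothetical generated config
    if err != nil {
        panic(err)
    }
    if err := loadCaddyConfig("http://localhost:2019", config); err != nil {
        panic(err)
    }
}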

How Slick works

The deploy command is the main command; it does the following:

  1. Find an existing container for the said image and keep it in memory
  2. Pull the latest image
  3. Find an available port from the range based on app.port_range from the config
  4. Load the ENV variables from the .env file, based on app.env from the config
  5. Start a new container with the new image on the port with the env
  6. Wait for the new container to be healthy
  7. Build a new Caddy config and update it; this routes the traffic to the new container
  8. Stop the old container if present

If any of these steps fail, the app rolls back (a sketch of the health-check wait from step 6 is below). You can take a look at deploy.go, which handles all of this; the other commands are fairly straightforward and can be found in the repository.
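
The health-check wait could look roughly like this: poll the new container’s health endpoint until it returns 200 OK or the configured timeout elapses. This is my own illustration, not the code in deploy.go, and the hard-coded port is just an example.

package main

import (
    "fmt"
    "net/http"
    "time"
)

// waitHealthy polls http://localhost:<port><endpoint> until it answers
// 200 OK or the timeout elapses. Illustrative only, not Slick's deploy.go.
func waitHealthy(port int, endpoint string, timeout time.Duration) error {
    url := fmt.Sprintf("http://localhost:%d%s", port, endpoint)
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := http.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil // healthy: safe to point Caddy at the new container
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("container not healthy within %s", timeout)
}

func main() {
    // With the example config above: endpoint "/health", a 5 second timeout,
    // and a port from the 8000-9000 range (8000 here for illustration).
    if err := waitHealthy(8000, "/health", 5*time.Second); err != nil {
        fmt.Println("rolling back:", err)
    }
}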

Limitations

  1. Slick Deploy is not a full-fledged deployment platform. It’s a simple tool that does one thing and does it well. It’s not a replacement for Coolify, Kamal, or any other platform.
  2. Initial setup like env, running Postgres or Redis, or handling the storage volume has to be done by you; Slick does not handle that.
  3. In-flight requests may fail during the switchover, but for side projects and small apps this is a rare occurrence.
  4. It manages only one instance of one app per VM; it cannot run multiple images or multiple instances of the same image.
  5. Needs better docs

The limitations are by design and, in a way, are Slick’s strengths; after all, it’s a simple tool. I have been using Slick for a while now, and it has been great. I have been able to deploy my apps with ease, and I hope you find it useful, too. You can install it on your VM using the following command:

curl -fsSL https://dub.sh/install-slick | bash

What’s Next?

Slick, for me, is excellent, and it’s almost complete. I love how Kamal manages services like Postgres and Redis, and I would love to add that to Slick; built-in templates for standard services are a good idea, so perhaps I will take a shot at that next.

Another thing: converting YAML to a Caddy config is sub-optimal, both for the CLI and for users, since it’s simply easier to write a Caddyfile directly; the YAML config feels like an unnecessary abstraction. I want to add support for a Caddy template that uses Go templating to build a new Caddyfile. This would also allow arbitrary config, since right now the config is limited to what I have implemented.

Give it a star, please?

I hope you find Slick interesting, and if you do, give it a star. You can find the source code on GitHub. If you have any questions or need help, feel free to reach out to me on Twitter.