Abir Taheer • February 17, 2026 • 8 min read
We deploy the Centure API several times a week. Each deployment needs to happen without dropping any API requests—no maintenance windows, no "we'll deploy at 2am when traffic is low" planning. Just push a release and have it go live seamlessly.
After trying a few different approaches, we landed on a deployment strategy that uses GitHub releases, versioned Docker networks, and nginx's atomic reload capability. The whole process is mostly automated, and rollbacks take about 30 seconds if something goes wrong.
This post walks through how we set this up and why it works well for our use case.
The simplest deployment approach is to stop the old container and start a new one:
docker stop my-api
docker run -d --name my-api -p 3001:3001 my-api:latest
The problem is the 5-10 second gap between the old container stopping and the new one accepting connections. For an API handling real-time traffic, that's hundreds of dropped requests.
Docker's host port binding (-p 3001:3001) doesn't help either: only one container can hold the port at a time, so Docker has to unbind it from the old container and rebind it to the new one, creating the same gap. The port itself becomes the bottleneck.
We use GitHub releases to trigger deployments. When we tag a new version (like v0.1.11), our deployment automation kicks off. The key insight is eliminating host port bindings entirely:
centure-api-v0.1.11 gets its own isolated network. This means old and new versions coexist on separate networks, and we control traffic through nginx rather than container lifecycle events.
When we want to deploy, we create a GitHub release with a version tag like v0.1.11. Our deployment script handles the rest:
The script pulls the release tag and builds a Docker image:
git fetch --tags
git checkout v0.1.11
docker build -t centure-api:v0.1.11 .
A standard Docker build; the release tag becomes the image tag.
docker network create centure-api-v0.1.11
Each version gets its own network. The old container (say, v0.1.10) is still running on centure-api-v0.1.10 and continues handling production traffic.
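One hardening we'd suggest here (a sketch of ours, not part of the script above): make the network step idempotent, so re-running a deploy that failed halfway doesn't abort on "network already exists".

```shell
# Create the version network only if it doesn't already exist.
ensure_network() {
  docker network inspect "$1" > /dev/null 2>&1 || docker network create "$1"
}

# ensure_network centure-api-v0.1.11
```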
docker run -d \
--name centure-api-v0.1.11 \
--network centure-api-v0.1.11 \
--restart always \
centure-api:v0.1.11
No -p flag—the container isn't exposed to the host. It only exists on its private network. Production traffic still hits the old version.
The script grabs the new container's internal IP and runs health checks:
NEW_IP=$(docker inspect centure-api-v0.1.11 \
--format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')
curl -f http://${NEW_IP}:3001/health || exit 1
This is the safety net. If the new version doesn't pass health checks, the script exits and the old version keeps running. We can debug the new container without affecting users.
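If the app takes a few seconds to boot, a single curl can fail spuriously. A retry wrapper avoids that; this is a sketch (the function name, retry count, and delay are our choices, not values from the script above):

```shell
# Retry the health check a few times before declaring the deploy failed.
# Usage: wait_for_health URL [RETRIES] [DELAY_SECONDS]
wait_for_health() {
  local url=$1 retries=${2:-10} delay=${3:-3} i
  for i in $(seq 1 "$retries"); do
    if curl -sf --max-time 5 "$url" > /dev/null; then
      echo "healthy after ${i} attempt(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "unhealthy after ${retries} attempts" >&2
  return 1
}

# In the deploy script:
# wait_for_health "http://${NEW_IP}:3001/health" || exit 1
```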
Once health checks pass, the script updates nginx to point at the new container:
# Update nginx config to new container IP
sed -i "s|proxy_pass http://.*:3001;|proxy_pass http://${NEW_IP}:3001;|" \
/etc/nginx/sites-available/api.conf
# Test and reload
nginx -t && systemctl reload nginx
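It can help to preview the rewrite on a scratch copy before touching the live config. A sketch, with a slightly stricter pattern than the `.*` above so it can't overrun the line (the paths and IP are examples):

```shell
# Rewrite the proxy_pass target in a config file (GNU sed assumed).
# $1 = config file, $2 = new upstream IP
rewrite_proxy_pass() {
  sed -i "s|proxy_pass http://[0-9.]*:3001;|proxy_pass http://$2:3001;|" "$1"
}

# cp /etc/nginx/sites-available/api.conf /tmp/api.conf.preview
# rewrite_proxy_pass /tmp/api.conf.preview 172.21.0.5
# diff /etc/nginx/sites-available/api.conf /tmp/api.conf.preview
```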
The nginx configuration just has a standard reverse proxy setup:
location / {
proxy_pass http://172.21.0.2:3001; # Container IP
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
# ... standard proxy headers
}
When nginx reloads, the master process keeps the listening sockets open and starts new worker processes with the updated configuration; the old workers stop accepting new connections, finish their in-flight requests, and exit. No requests are dropped during this process.
# Quick check that public endpoint works
curl -f https://api.centure.ai/health || exit 1
# Stop old container
docker stop centure-api-v0.1.10
docker rm centure-api-v0.1.10
# Clean up old network (optional, can wait)
docker network rm centure-api-v0.1.10
The script waits a few seconds to verify the public endpoint, then cleans up the old version. The whole process takes about 30-45 seconds.
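The cleanup step can be generalized into a helper that removes every versioned container except the live one. This is our own sketch, not the script from the post (the naming scheme matches the examples above):

```shell
# Remove all centure-api version containers (and their networks) except $1.
cleanup_old() {
  local keep=$1 name
  docker ps -a --format '{{.Names}}' \
    | grep '^centure-api-v' \
    | grep -v "^${keep}\$" \
    | while read -r name; do
        docker stop "$name" && docker rm "$name"
        docker network rm "$name" || true   # network may already be gone
      done
}

# cleanup_old centure-api-v0.1.11
```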
Three things make this approach reliable:
Network Isolation: Each version lives in its own network namespace. The new container starting doesn't touch the old one—they're on separate networks with different IP ranges.
Atomic Traffic Switching: Nginx's reload is designed for zero-downtime updates. The master process never closes the listening sockets; it hands them to new workers running the new configuration and gracefully drains the old ones, so connections are never dropped mid-flight.
Validation Before Traffic: Since both versions run simultaneously, we test the new version thoroughly before it sees production traffic. If health checks fail, the old version is completely unaffected.
If we catch an issue after deployment, we just point nginx back to the old container and reload:
# Look up the old container's IP and point nginx back at it
OLD_IP=$(docker inspect centure-api-v0.1.10 \
  --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')
sed -i "s|proxy_pass http://.*:3001;|proxy_pass http://${OLD_IP}:3001;|" \
  /etc/nginx/sites-available/api.conf
nginx -t && systemctl reload nginx
The old container is still running until we explicitly remove it, so rollback is just another config change.
This deployment strategy fits stateless APIs on a single server, where nginx already fronts the service and a full orchestrator would be overkill.
If you're running dozens of servers behind a load balancer, you probably want Kubernetes. But for a lot of production APIs, this hits a good spot between reliability and simplicity.
A few things we've learned running this in production:
Document current state: We keep a file that tracks which version is running, on which network, at which IP. Makes debugging and rollbacks way easier.
Wait before cleanup: We leave the old container running for 1-2 hours after deployment. That gives us time to spot issues and roll back quickly if needed.
Watch metrics during deployment: Even with zero-downtime deploys, keep an eye on response times and error rates. Health checks don't catch everything.
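The state file mentioned above can be as simple as an append-only TSV where the last line is the current state. A sketch (the path and column layout are our invention):

```shell
# Append-only deploy log: timestamp, version, network, container IP.
STATE=/var/lib/centure/deploy-state.tsv   # hypothetical path

record_deploy() {
  # $1 = version, $2 = network, $3 = container IP
  printf '%s\t%s\t%s\t%s\n' "$(date -u +%FT%TZ)" "$1" "$2" "$3" >> "$STATE"
}

current_deploy() {
  # The latest line is always the live deployment.
  tail -n 1 "$STATE"
}
```

Because it's append-only, the file doubles as a deployment history, which is handy when bisecting a regression across releases.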
You don't need Kubernetes to do zero-downtime deployments. Docker networks plus nginx reloads get you pretty far for single-server APIs.
This setup has worked well for us through hundreds of deployments. It's simple enough to understand when debugging at 2am, and reliable enough that we don't worry about breaking things when we deploy during business hours.
If you're running a stateless API on one server and deployment downtime is a problem, this approach might work for you. The scripting takes an afternoon to set up, but the deployment confidence is worth it.