Docker
Deploy ADK-TS agents using Docker containers
Docker lets you package your ADK-TS agent with all its dependencies into a portable container that can run anywhere. It is best suited for flexible deployment, any hosting provider, and local development: Docker offers platform-agnostic, reproducible builds with wide hosting options, though it requires you to manage containers and set up orchestration yourself. This guide covers creating optimized Docker images and deploying them to various platforms.
What You'll Need
Before deploying to Docker, ensure you have:
- Docker installed and configured on your local machine
- A Container Registry Account (e.g., Docker Hub, GitHub Container Registry, AWS ECR) for storing your Docker images
- A Cloud Platform Account (e.g., Railway, AWS, Google Cloud) for deploying your Docker containers
- Your ADK-TS project ready for deployment
Deployment Options
Docker-based deployments can be done in two main ways:
- Deploy via Container Registry: Build locally, push to registry, deploy to cloud
- Deploy to Docker Host: Deploy directly to your own server or VPS running Docker
Option 1: Deploy via Container Registry
Build your Docker image locally, push it to a container registry, and deploy to cloud platforms.
Step 1: Create Dockerfile
Create a Dockerfile in your project root:
# Use Node.js 20 Alpine as the base image (lightweight Linux distribution)
FROM node:20-alpine
# Enable pnpm package manager and set a specific version for consistency
RUN corepack enable && corepack prepare pnpm@9.12.0 --activate
# Set the working directory inside the container
WORKDIR /app
# Copy package files first for better Docker layer caching
COPY package.json pnpm-lock.yaml ./
# Install dependencies using frozen lockfile for reproducible builds
RUN pnpm install --frozen-lockfile
# Copy TypeScript config and source code
COPY tsconfig.json ./
COPY src ./src
# Build the project
RUN pnpm build
# Set production environment
ENV NODE_ENV=production
# Define the command to run when the container starts
CMD ["node", "dist/index.js"]Step 2: Build Docker Image
Docker needs to build your code into an image. The architecture (CPU type) matters because your local machine might be different from where you'll deploy.
If you're just testing locally, build for your machine:
docker build -t my-adk-agent:latest .
For deployment to cloud platforms (Railway, AWS, Google Cloud, etc.), which mostly run AMD64 processors, build specifically for that architecture:
docker buildx build --platform linux/amd64 -t my-adk-agent:latest .
Why AMD64?
If you're on an Apple Silicon Mac (M1/M2/M3), your computer uses ARM64. But
most cloud providers use AMD64 processors. Building with --platform linux/amd64 ensures your image works on cloud servers.
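If you are unsure what a local image was built for, Docker can tell you (assuming the image tag from the build step above):

```shell
# Print the OS and CPU architecture an image targets, e.g. linux/amd64
docker inspect --format '{{.Os}}/{{.Architecture}}' my-adk-agent:latest
```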
Step 3: Push to Container Registry
Choose ONE registry to store your Docker image. This makes it accessible to cloud platforms.
Option A: Docker Hub (Easiest for beginners)
- Create a free account at hub.docker.com
- Run these commands (replace username with your Docker Hub username):
# Login to Docker Hub
docker login
# Tag your image with your username
docker tag my-adk-agent:latest username/my-adk-agent:latest
# Upload to Docker Hub
docker push username/my-adk-agent:latest
Option B: GitHub Container Registry (If you use GitHub)
- Create a GitHub Personal Access Token with write:packages permission
- Run these commands (replace USERNAME and $GITHUB_TOKEN with your values):
# Login to GitHub Container Registry
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
# Tag your image
docker tag my-adk-agent:latest ghcr.io/username/my-adk-agent:latest
# Upload to GitHub
docker push ghcr.io/username/my-adk-agent:latest
Option C: AWS ECR (If you use AWS)
Requires AWS CLI installed and configured. Replace region and aws_account_id:
# Login to AWS ECR
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
# Tag your image
docker tag my-adk-agent:latest aws_account_id.dkr.ecr.region.amazonaws.com/my-adk-agent:latest
# Upload to AWS
docker push aws_account_id.dkr.ecr.region.amazonaws.com/my-adk-agent:latest
Step 4: Deploy to Cloud Platform
Now that your image is in a registry, deploy it to a cloud platform.
For Railway, Render, or Fly.io:
- Go to your platform's dashboard
- Create a new project/service
- Select "Deploy from Docker Image" or similar option
- Enter your image name (e.g., username/my-adk-agent:latest)
- Add environment variables: each platform has a Variables/Environment section where you can add required variables
- Click Deploy
The platform will pull your image from the registry and run it.
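Before relying on the platform, you can smoke-test the pushed image locally the same way the platform will run it (the .env file here is an assumption; substitute however you normally supply your variables):

```shell
# Pull the exact image the platform will use and run it in the foreground
docker pull username/my-adk-agent:latest
docker run --rm --env-file .env username/my-adk-agent:latest
```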
For AWS (ECS), Google Cloud (Cloud Run), or Azure (Container Instances):
- Navigate to the container service in the cloud console
- Create a new container instance/service
- Specify your container registry image URL
- Configure environment variables: Add all required variables (API keys, tokens, model names, etc.) in the environment configuration section
- Set memory and CPU limits based on your agent's needs
- Deploy the service
Each platform has slightly different interfaces, but the core concept is the same: point to your Docker image, configure environment variables, and set resource limits.
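As one concrete sketch, a Cloud Run deployment of a registry image can be driven entirely from the CLI; the project ID, region, and environment variable values below are placeholders for your own:

```shell
# Deploy the registry image to Cloud Run with env vars and resource limits
gcloud run deploy my-adk-agent \
  --image gcr.io/PROJECT_ID/my-adk-agent:latest \
  --region us-central1 \
  --memory 512Mi --cpu 1 \
  --set-env-vars GOOGLE_API_KEY=your_api_key_here
```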
Option 2: Deploy to Docker Host
Deploy directly to your own server or VPS running Docker instead of using a managed cloud platform.
Step 1: Prepare Your Server
First, install Docker on your Linux server. SSH into your server and run:
# For Ubuntu/Debian servers
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
For other operating systems, see Docker's installation guide.
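Optionally, add your user to the docker group so you can run Docker commands without sudo (log out and back in for the change to take effect):

```shell
# Allow the current user to run docker commands without sudo
sudo usermod -aG docker $USER
```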
Step 2: Deploy Your Container
You have two options for running your container on the server:
Option A: Pull from Registry
If you pushed your image to a registry (Step 3 from Option 1):
# Pull your image from the registry
docker pull username/my-adk-agent:latest
# Run it in the background
docker run -d --restart unless-stopped --name my-adk-agent username/my-adk-agent:latest
Option B: Use Docker Compose
For easier management, create a docker-compose.yml file on your server:
version: "3.8"
services:
  agent:
    image: username/my-adk-agent:latest
    container_name: my-adk-agent
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      - ./data:/app/data
    networks:
      - agent-network
networks:
  agent-network:
    driver: bridge
Then start your agent:
docker-compose up -d
Step 3: Configure Environment Variables
Create a .env file on your server with your agent's configuration:
GOOGLE_API_KEY=your_api_key_here
LLM_MODEL=gemini-2.5-flash
ADK_DEBUG=false
DISCORD_TOKEN=your_token_here
Never Commit Secrets
Add .env to your .gitignore to prevent committing sensitive data. Never
share this file publicly.
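On the server itself, it also helps to restrict the file so only your user can read it:

```shell
# Create the file if it does not exist, then limit access to the owner only
touch .env
chmod 600 .env
```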
Verifying Deployment
After deploying, verify your agent is running correctly.
Check Container Status
List all running containers to confirm yours is active:
docker ps
You should see your container in the list. If not, check if it exited:
docker ps -a
View Logs
Check your agent's output to ensure it started properly:
# See recent logs
docker logs my-adk-agent
# Watch logs in real-time
docker logs -f my-adk-agent
Test Your Agent
Depending on your agent type:
- Discord bots: Send a command in your Discord server
- Telegram bots: Send a message to your bot
- API agents: Make a test HTTP request to your endpoint
- CLI agents: Check logs for expected startup messages
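For an API-style agent, a quick check from the host might look like this (the port and path depend entirely on how your agent exposes HTTP; 3000 and /health are assumptions):

```shell
# -f makes curl exit non-zero on HTTP errors, so this works in scripts
curl -f http://localhost:3000/health
```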
Monitoring and Updates
Updating Your Deployment
When you have new code changes:
For registry-based deployments:
- Rebuild your image locally with changes
- Push the new version to your registry
- On your deployment platform or server, pull the new image and restart:
docker pull username/my-adk-agent:latest
docker stop my-adk-agent
docker rm my-adk-agent
docker run -d --restart unless-stopped --name my-adk-agent username/my-adk-agent:latest
For Docker Compose:
docker-compose pull
docker-compose up -d
Monitoring Container Health
Keep an eye on resource usage to prevent issues:
- View real-time stats: docker stats my-adk-agent
- Check detailed configuration: docker inspect my-adk-agent
- Review logs regularly for errors
Advanced Configuration
Optimized Multi-Stage Dockerfile
For production deployments, use multi-stage builds to create smaller, more secure images. This separates the build process from the final runtime image:
# Build stage - compiles TypeScript
FROM node:20-alpine AS builder
RUN corepack enable && corepack prepare pnpm@9.12.0 --activate
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
COPY tsconfig.json ./
COPY src ./src
RUN pnpm build
# Production stage - runs the compiled code
FROM node:20-alpine
RUN corepack enable && corepack prepare pnpm@9.12.0 --activate
WORKDIR /app
# Only install production dependencies
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile --prod
# Copy compiled code from build stage
COPY --from=builder /app/dist ./dist
ENV NODE_ENV=production
CMD ["node", "dist/index.js"]
Benefits: Smaller images (no dev dependencies), faster deployments, improved security.
Health Checks
Add a health check so Docker can mark unhealthy containers; orchestrators (Docker Swarm, or Compose with a restart policy) can then restart them:
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD node -e "console.log('healthy')" || exit 1
Resource Limits and Persistent Data
When running containers, you can set resource limits and mount volumes for data persistence. See your platform's documentation for specific instructions on configuring these settings through their dashboard or CLI.
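When you manage the container yourself with docker run, the same limits and persistence can be set with flags; the values below are illustrative, not recommendations:

```shell
# Cap memory and CPU, mount a host directory for persistent data
docker run -d \
  --restart unless-stopped \
  --name my-adk-agent \
  --memory 512m --cpus 0.5 \
  -v "$(pwd)/data:/app/data" \
  --env-file .env \
  username/my-adk-agent:latest
```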
Docker Troubleshooting
- Out of disk space: Clean up unused Docker resources regularly with docker system prune -a
- Registry Push Failures: Confirm you're logged in (docker login) and verify the image name format matches your registry's requirements
- Permission Denied: On Linux, ensure your user is in the docker group or use sudo for Docker commands
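Before pruning, it can be worth seeing where the space is actually going:

```shell
# Summarize disk usage by images, containers, volumes, and build cache
docker system df
```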
Docker Best Practices
- Image Optimization: Use Alpine-based images and multi-stage builds to minimize image size and attack surface. Add a .dockerignore file to exclude unnecessary files like node_modules and .git
- Security: Never run containers as the root user. Use the USER instruction in your Dockerfile to switch to a non-privileged user
- Workflow: Tag images with git commit SHAs in your CI/CD pipeline for perfect traceability between code and running containers
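A minimal sketch of the non-root pattern on Alpine, appended near the end of the Dockerfile (the app user and group names are arbitrary):

```dockerfile
# Create an unprivileged user and group, then drop root before starting
RUN addgroup -S app && adduser -S app -G app
USER app
CMD ["node", "dist/index.js"]
```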
Example .dockerignore
Exclude unnecessary files from your Docker build to reduce image size:
node_modules
dist
.git
.env
.env.*
*.log
.DS_Store
coverage
.vscode
.idea
README.md
.gitignore