Telegram Bots
Deploy ADK-TS Telegram bots using polling mode
ADK-TS Telegram bots use the Model Context Protocol (MCP) to connect your agent to Telegram. The @iqai/mcp-telegram server handles communication with Telegram's Bot API using the Telegraf library. Best for: Chat-based agents, customer support bots, and interactive assistants. This guide covers deploying polling-based Telegram bots to various platforms.
Important: Background Worker Required
Telegram bots using polling mode do not expose HTTP endpoints. They must be deployed as background workers, not web services. Deploying as a web service causes health check failures and service restarts.
What You'll Need
Before deploying your Telegram bot, ensure you have:
- Telegram Bot Token: Obtained from @BotFather
- LLM API Key: Google AI, OpenAI, Anthropic, or another supported provider
- ADK-TS Telegram Bot Project: Get started with one of the following:
  - Use the ADK CLI to create a new project: adk new my-agent --template telegram-bot
  - Or clone the Telegram Personal Assistant sample
- Cloud Platform Account: Railway, Render, Heroku, or Docker hosting
How Telegram Polling Works
Understanding the architecture helps you choose the right deployment settings.
┌─────────────────┐ polls for ┌─────────────────┐
│ Your Agent │ ←───────────────── │ Telegram API │
│ (MCP Server) │ │ │
│ │ ──────────────────→│ │
└─────────────────┘ sends responses └─────────────────┘
↑
│ LLM requests
↓
┌─────────────────┐
│ LLM Provider │
│ (Google, OpenAI)│
└─────────────────┘

Key characteristics of polling mode:
- The MCP server actively polls Telegram's API for new messages
- No incoming HTTP requests — your bot initiates all connections
- No port binding or HTTPS certificates required
- Runs as a long-lived background process
This is why web service deployments fail — they expect your app to respond to HTTP health checks, but polling bots don't listen for incoming requests.
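The polling cycle can be sketched as a loop against Telegram's getUpdates endpoint. This is a minimal illustration of the mechanism, not the actual @iqai/mcp-telegram implementation:

```typescript
// Hypothetical minimal long-polling loop against Telegram's Bot API.
type Update = { update_id: number; message?: { chat: { id: number }; text?: string } };

// After processing a batch, poll from the highest update_id + 1 so
// Telegram does not redeliver updates we have already handled.
function nextOffset(current: number, updates: Update[]): number {
  return updates.reduce((acc, u) => Math.max(acc, u.update_id + 1), current);
}

async function pollLoop(token: string, handle: (u: Update) => Promise<void>) {
  let offset = 0;
  for (;;) {
    // timeout=30 holds the HTTP request open (long polling) until
    // Telegram has updates or 30 seconds pass -- no inbound port needed.
    const res = await fetch(
      `https://api.telegram.org/bot${token}/getUpdates?offset=${offset}&timeout=30`
    );
    const body = (await res.json()) as { ok: boolean; result: Update[] };
    if (body.ok) {
      for (const u of body.result) await handle(u);
      offset = nextOffset(offset, body.result);
    }
  }
}
```

Because the bot always initiates the connection, the process needs no port binding, which is exactly why it must run as a background worker.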
Environment Variables
Configure these environment variables on your deployment platform:
| Variable | Description | Required |
|---|---|---|
| TELEGRAM_BOT_TOKEN | Bot token from @BotFather | ✅ |
| GOOGLE_API_KEY | Google AI API key (or your LLM provider's key) | ✅ |
| LLM_MODEL | Model to use (e.g., gemini-2.5-flash) | ✅ |
| ADK_DEBUG | Enable debug logging (true/false) | Optional |
Using Other LLM Providers
Replace GOOGLE_API_KEY with your provider's environment variable (e.g.,
OPENAI_API_KEY, ANTHROPIC_API_KEY) and update LLM_MODEL accordingly.
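Since a worker with a missing variable fails silently rather than serving an error page, it helps to validate configuration at startup. A minimal sketch (the `loadConfig` helper is hypothetical, not part of ADK-TS):

```typescript
// Fail fast at startup if a required variable is missing -- a crash
// with a clear message beats a bot that silently does nothing.
const REQUIRED = ["TELEGRAM_BOT_TOKEN", "GOOGLE_API_KEY", "LLM_MODEL"];

function loadConfig(env: Record<string, string | undefined>) {
  const missing = REQUIRED.filter((k) => !env[k]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return {
    botToken: env.TELEGRAM_BOT_TOKEN!,
    apiKey: env.GOOGLE_API_KEY!,
    model: env.LLM_MODEL!,
    debug: env.ADK_DEBUG === "true", // optional, defaults to false
  };
}
```

Call it with `loadConfig(process.env)` before starting the polling loop.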
Deploy to Railway
Railway automatically detects long-running processes, making it ideal for Telegram bots.
Step 1: Create Your Project
- Log in to Railway
- Click "New" or "New Project"
- Select "Deploy from GitHub repo" and connect your repository
Step 2: Configure Build Settings
Railway typically auto-detects Node.js projects. Verify or set:
- Build Command: pnpm install && pnpm build
- Start Command: node dist/index.js
Step 3: Add Environment Variables
Go to your service's Variables tab and add the required environment variables:
TELEGRAM_BOT_TOKEN=your_bot_token_here
GOOGLE_API_KEY=your_api_key_here
LLM_MODEL=gemini-2.5-flash
ADK_DEBUG=false # optional

Step 4: Deploy
Railway deploys automatically when you push to your connected branch. Check the Deployments tab to monitor progress.
Step 5: Verify
Confirm your bot is running correctly:
- Check the logs for successful initialization
- Send a message to your bot on Telegram
- Confirm you receive a response
Deploy to Render
Render requires explicit configuration for background workers.
Use Background Worker
You must select "Background Worker" as the service type. Web Services require port binding and will fail. For more context on this configuration, see GitHub Issue #345.
Step 1: Create a Background Worker
- Log in to Render
- Click "Add new" → "Background Worker"
- Connect your GitHub repository
Step 2: Configure the Service
Configure the following settings for your service:
| Setting | Value |
|---|---|
| Name | Your bot name |
| Language | Node |
| Build Command | pnpm install && pnpm build |
| Start Command | node dist/index.js |
Step 3: Add Environment Variables
In the Environment section, add the required environment variables:
TELEGRAM_BOT_TOKEN=your_bot_token_here
GOOGLE_API_KEY=your_api_key_here
LLM_MODEL=gemini-2.5-flash
ADK_DEBUG=false # optional

Step 4: Deploy
Click "Deploy Background Worker". Render will build and deploy your bot.
Step 5: Verify
- Check the logs for successful initialization
- Send a message to your bot on Telegram
- Confirm you receive a response
Deploy to Heroku
Heroku uses a Procfile to define process types. Use worker instead of web.
Step 1: Install Heroku CLI
If you haven't already, install the Heroku CLI:
npm install -g heroku

Then log in to your Heroku account:
heroku login

Step 2: Create Your App
Create a new Heroku app with a unique name:
heroku create your-bot-name

Step 3: Create a Procfile
Create a Procfile in your project root (no file extension) with the following content:
worker: node dist/index.js

Step 4: Add Environment Variables
Set the required environment variables for your bot:
heroku config:set TELEGRAM_BOT_TOKEN=your_bot_token_here
heroku config:set GOOGLE_API_KEY=your_api_key_here
heroku config:set LLM_MODEL=gemini-2.5-flash
heroku config:set ADK_DEBUG=false # optional

Step 5: Deploy
Push your code to Heroku and scale the worker process:
git push heroku main

Scale the worker process (workers start at 0 by default):
heroku ps:scale worker=1

Step 6: Verify
Check the logs to confirm your bot is running:
heroku logs --tail

Verify the worker process is active:
heroku ps

Deploy with Docker
Docker provides maximum flexibility for deploying anywhere.
Step 1: Create a Dockerfile
Create a Dockerfile in your project root with the following content:
FROM node:20-alpine
# Enable pnpm
RUN corepack enable && corepack prepare pnpm@9.12.0 --activate
WORKDIR /app
# Install dependencies
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
# Build the project
COPY tsconfig.json ./
COPY src ./src
RUN pnpm build
ENV NODE_ENV=production
CMD ["node", "dist/index.js"]

Step 2: Build the Image
Build the Docker image for your target platform.
For local testing, build for your machine's architecture:
docker build -t my-telegram-bot .

For cloud deployment (most platforms use AMD64), specify the platform:
docker buildx build --platform linux/amd64 -t my-telegram-bot .

Step 3: Test Locally
Test your bot locally before deploying.
Run using an env file:
docker run --env-file .env my-telegram-bot

Or pass variables directly:
docker run \
-e TELEGRAM_BOT_TOKEN=your_token \
-e GOOGLE_API_KEY=your_key \
-e LLM_MODEL=gemini-2.5-flash \
my-telegram-bot

Step 4: Push to a Registry
Tag your image and push it to Docker Hub:
docker tag my-telegram-bot username/my-telegram-bot:latest
docker push username/my-telegram-bot:latest

Step 5: Deploy
Deploy from your registry to Railway, Render, or any Docker-compatible platform by specifying your image URL (e.g., username/my-telegram-bot:latest).
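On a plain Docker host (a VPS, for example), one way to run the pushed image is with a restart policy so the bot recovers from crashes and reboots. A sketch, reusing the image name from the steps above:

```shell
# Run the bot detached; Docker restarts it on crash or host reboot.
docker run -d \
  --name my-telegram-bot \
  --restart unless-stopped \
  --env-file .env \
  username/my-telegram-bot:latest
```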
Telegram Troubleshooting
- Service Keeps Restarting: If your bot restarts repeatedly, it's likely deployed as a Web Service instead of a Background Worker. Web services expect a port to be bound, while polling bots do not bind to any port.
- Bot Not Responding: Confirm the TELEGRAM_BOT_TOKEN is valid by contacting @BotFather. Ensure your bot has permission to read messages in group chats if applicable.
- Rate Limiting: Telegram has strict rate limits for bots. Check logs for 429 Too Many Requests errors if your bot is handling high traffic.
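Telegram's 429 responses include a retry_after hint in the response body. A hedged sketch of honoring it (these helper names are illustrative, not ADK-TS APIs):

```typescript
// Prefer Telegram's retry_after hint on 429 responses; otherwise
// fall back to exponential backoff, capped at 30 seconds.
function backoffMs(attempt: number, retryAfterSec?: number): number {
  if (retryAfterSec !== undefined) return retryAfterSec * 1000;
  return Math.min(1000 * 2 ** attempt, 30_000);
}

async function sendWithRetry(send: () => Promise<Response>, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await send();
    if (res.status !== 429) return res;
    const body = (await res.json()) as { parameters?: { retry_after?: number } };
    await new Promise((r) => setTimeout(r, backoffMs(attempt, body.parameters?.retry_after)));
  }
  throw new Error("Rate limited: giving up after repeated 429 responses");
}
```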
Telegram Best Practices
- Separation of Concerns: Keep your bot's logic in the ADK agent and use the Telegram server primarily as a transport layer.
- Error Recovery: Implement a retry strategy for LLM calls to handle intermittent API failures gracefully without crashing the polling loop.
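The retry strategy above can be as simple as a generic wrapper around the LLM call, so a transient provider error never crashes the polling loop. A minimal sketch (the `withRetry` helper is hypothetical, not part of ADK-TS):

```typescript
// Retry a flaky async call with exponential backoff; rethrow only
// after maxAttempts, so the caller can send a fallback reply instead
// of letting the polling loop die.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastErr;
}
```

Wrap each LLM request, e.g. `await withRetry(() => agent.run(message))`, and catch the final error to reply with a friendly "please try again" message.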
Next Steps
🤖 Telegram Bot API
Official Telegram documentation
📚 MCP Telegram Documentation
Learn about MCP Telegram features and configuration
🐳 Docker Deployment Guide
Advanced Docker deployment options
🚂 Railway Deployment Guide
Detailed Railway setup
📋 Production Best Practices
Learn production best practices for your agents