# Callahan CI
Callahan CI is an open-source, AI-native, serverless CI/CD platform that runs entirely on your machine. No cloud account. No Kubernetes cluster. No plugin graveyard. One command and you're building.
## Installation
Callahan runs on Linux, macOS, and Windows (WSL2). The only requirement is Docker Desktop (for the installer and Docker Compose paths) or Go 1.22+ and Node.js (to build from source).
### One-line installer

```shell
# Downloads the binary, sets permissions, and starts Callahan
curl -fsSL https://callahanci.com | sh
```
### Docker Compose

```shell
git clone https://github.com/goughliam1813/callahan
cd callahan
docker compose up
```
### From source

```shell
git clone https://github.com/goughliam1813/callahan
cd callahan/backend
go mod tidy
go run ./cmd/callahan

# In a second terminal:
cd callahan/frontend
npm install
npm run dev
```
## Quickstart
Once installed, open http://localhost:3000 in your browser. You'll see the Callahan dashboard.
- Click Connect Repository and paste your Git repo URL
- Callahan auto-detects your language and suggests a pipeline
- Click Run Build — your first pipeline executes in an ephemeral container
- Add an LLM API key in Settings to unlock all AI agents
## Configuration
Callahan is configured via environment variables. Copy .env.example to .env and fill in your values.
```shell
# LLM providers — add at least one
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GROQ_API_KEY=gsk_...

# Or use local Ollama (no API key needed)
OLLAMA_HOST=http://localhost:11434
DEFAULT_MODEL=llama3

# Storage
DATA_DIR=./data
PORT=8080
```
## Callahanfile.yaml
Pipelines are defined in a Callahanfile.yaml at the root of your repository. The syntax is a superset of GitHub Actions — if you know Actions, you already know 90% of it.
```yaml
name: my-pipeline
on: [push, pull_request]

jobs:
  build:
    runs-on: callahan:latest
    steps:
      - name: Install
        run: npm ci
      - name: Test
        run: npm test
      - name: Build
        run: npm run build

# Callahan AI extensions
ai:
  review: true
  explain-failures: true
  security-scan: true
```
## LLM Setup
Callahan supports any major LLM provider. It tries them in this order: Anthropic → OpenAI → Groq → Ollama. The first one with a valid API key is used.
If you point OLLAMA_HOST at a local Ollama instance, all AI features work with no internet connection and no API costs.
## Pipeline Architect
The Pipeline Architect agent generates a complete Callahanfile.yaml from a plain English description. Access it from the dashboard by clicking Callahan AI in the sidebar.
Example prompt: "Build my Next.js app, run Playwright end-to-end tests, scan with Trivy, and deploy to Fly.io on green PRs."
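As a rough sketch, the Architect's output for that prompt might look like the following (illustrative only; the pipeline name, step names, and how the "green PRs" gating is expressed are assumptions, and the exact output varies by model):

```yaml
name: nextjs-e2e-deploy
on: [push, pull_request]

jobs:
  build:
    runs-on: callahan:latest
    steps:
      - name: Install
        run: npm ci
      - name: Build
        run: npm run build
      - name: Playwright e2e
        run: npx playwright test
      - name: Deploy to Fly.io
        uses: callahan/deploy-fly@v1
        with:
          app: my-app-name
          token: ${{ secrets.FLY_API_TOKEN }}

ai:
  security-scan: true
```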
## Build Debugger
When a build step fails, click the AI Explain button next to the failed step. The Debugger reads your full log context and returns a plain-English diagnosis plus a suggested fix.
## Code Reviewer
Enable code review in your Callahanfile.yaml with ai.review: true. On every pull request, Callahan posts structured review comments to your Git provider — bugs, security issues, performance concerns.
Supports: GitHub, GitLab, Bitbucket, Gitea.
## Security Analyst
Enable with ai.security-scan: true. Runs Trivy (container/dependency scanning), Semgrep (SAST), and gitleaks (secret detection). Findings are explained in plain English. Critical findings block the pipeline.
## Secrets
Add secrets in the dashboard under Project → Secrets. They are stored encrypted and injected as environment variables at runtime. Reference them in your pipeline:
```yaml
- name: Deploy
  run: flyctl deploy
  env:
    FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
```
## Deploy to Fly.io

```yaml
- name: Deploy to Fly.io
  uses: callahan/deploy-fly@v1
  with:
    app: my-app-name
    token: ${{ secrets.FLY_API_TOKEN }}
```
## Deploy to Vercel

```yaml
- name: Deploy to Vercel
  uses: callahan/deploy-vercel@v1
  with:
    token: ${{ secrets.VERCEL_TOKEN }}
```
## Environment Variables Reference
| Variable | Default | Description |
|---|---|---|
| PORT | 8080 | API server port |
| DATA_DIR | ./data | SQLite database and artifact storage |
| ANTHROPIC_API_KEY | — | Anthropic Claude API key |
| OPENAI_API_KEY | — | OpenAI API key |
| GROQ_API_KEY | — | Groq API key |
| OLLAMA_HOST | — | Ollama endpoint for local AI |
| DEFAULT_MODEL | — | Model served by Ollama (e.g. llama3) |
| DOCKER_SOCK | /var/run/docker.sock | Docker socket path |
## REST API
Callahan exposes a full REST API at http://localhost:8080/api/v1. All endpoints return JSON.
```
# List all projects
GET  /api/v1/projects

# Trigger a build
POST /api/v1/projects/:id/builds

# Stream build logs (WebSocket)
WS   /api/v1/builds/:id/logs

# AI explain a failed build
POST /api/v1/builds/:id/explain
```
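Any HTTP client can drive the API. For example, a small Go helper that constructs the build-trigger request (the project ID `42` is a placeholder, and the response body shape is not documented here):

```go
package main

import (
	"fmt"
	"net/http"
)

// triggerBuildRequest builds the request for POST /api/v1/projects/:id/builds.
// Send it with http.DefaultClient.Do(req) against a running Callahan instance.
func triggerBuildRequest(base, projectID string) (*http.Request, error) {
	url := fmt.Sprintf("%s/api/v1/projects/%s/builds", base, projectID)
	return http.NewRequest(http.MethodPost, url, nil)
}

func main() {
	req, err := triggerBuildRequest("http://localhost:8080", "42")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL) // POST http://localhost:8080/api/v1/projects/42/builds
}
```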