Server

Server configuration for Crow CI.

Crow authenticates users via OAuth2 from a Git forge:

| Forge | Environment Variable |
| --- | --- |
| Forgejo | `CROW_FORGEJO=true` |
| Gitea | `CROW_GITEA=true` |
| GitHub | `CROW_GITHUB=true` |
| GitLab | `CROW_GITLAB=true` |
| Bitbucket DC | `CROW_BITBUCKET=true` |

There is no manual user registration; all users must authenticate via the forge.

Registration is closed by default (CROW_OPEN=false). When open, any forge user can register.

Grant admin rights to specific users:

```
CROW_ADMIN=johnDoe,janeSmith
```

Admins can also be promoted via UI: Settings → Users → Edit User.

| Driver | Configuration |
| --- | --- |
| SQLite (default) | Data stored in `/var/lib/crow/` |
| PostgreSQL (≥11) | `CROW_DATABASE_DRIVER=postgres` |
| MySQL/MariaDB | `CROW_DATABASE_DRIVER=mysql` |

```
CROW_DATABASE_DRIVER=postgres
CROW_DATABASE_DATASOURCE=postgres://user:pass@localhost:5432/crow?sslmode=disable
```

```
CROW_DATABASE_DRIVER=mysql
CROW_DATABASE_DATASOURCE=user:pass@tcp(localhost:3306)/crow?parseTime=true
```

Crow encrypts sensitive data at rest using Google Tink:

  • Secrets
  • Registry credentials
  • User OAuth tokens
  • Forge client secrets
Point Crow at the keyset file:

```
CROW_ENCRYPTION_TINK_KEYSET_FILE=/path/to/keyset.json
```

Generate a keyset with Tinkey:

```sh
tinkey create-keyset --key-template AES256_GCM_SIV --out keyset.json --out-format json
```

Rotate the primary key with:

```sh
tinkey rotate-keyset --in keyset.json --out keyset.json --key-template AES256_GCM_SIV
```

Crow re-encrypts data with the new primary key on restart.

To disable encryption, set:

```
CROW_ENCRYPTION_DISABLE=true
```

On restart, this decrypts all data back to plaintext.

The backend determines where pipelines execute.

| Backend | Use Case |
| --- | --- |
| `docker` | Default; containers on the agent host |
| `kubernetes` | Pods in a K8s cluster |
| `local` | Direct execution (development only) |

Each step runs in a separate container on the agent.

Use CROW_DOCKER_CONFIG to pass Docker credential helpers, or configure credentials in the Crow UI.

```yaml
steps:
  - name: test
    image: alpine
    backend_options:
      docker:
        user: 65534:65534
```

Crow doesn’t auto-clean Docker images. Add to your maintenance routine:

```sh
# Remove dangling images
docker image prune -f

# Remove orphaned Crow volumes
docker volume rm $(docker volume ls --filter 'name=^crow_*' --filter dangling=true -q)
```

Each step runs in a separate Pod. A temporary PVC transfers files between steps.

```
CROW_BACKEND_K8S_PULL_SECRET_NAMES=my-registry-secret
```

The secret must be of type `kubernetes.io/dockerconfigjson` and live in the namespace set by `CROW_BACKEND_K8S_NAMESPACE`.

```yaml
steps:
  - name: build
    image: alpine
    backend_options:
      kubernetes:
        resources:
          requests:
            memory: 200Mi
            cpu: 100m
          limits:
            memory: 400Mi
            cpu: 1000m
```

Additional supported options: `nodeSelector`, `tolerations`, `securityContext`, `annotations`, `labels`, `runtimeClassName`.

Select the local backend with:

```
CROW_BACKEND=local
```

The `image` field specifies the shell (e.g., `bash`, `fish`). Plugins work as executables in `$PATH`.
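With the local backend, a plugin is simply a binary the agent can find on `$PATH`. In Woodpecker-derived CI systems, plugin settings are conventionally passed as `PLUGIN_`-prefixed environment variables; assuming Crow follows that convention (check the plugin docs to confirm), a minimal plugin sketch looks like:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// settings extracts PLUGIN_-prefixed variables from an environment
// list into a name -> value map.
func settings(environ []string) map[string]string {
	out := map[string]string{}
	for _, kv := range environ {
		if strings.HasPrefix(kv, "PLUGIN_") {
			parts := strings.SplitN(kv, "=", 2)
			out[strings.TrimPrefix(parts[0], "PLUGIN_")] = parts[1]
		}
	}
	return out
}

func main() {
	// A real plugin would act on its settings; this one just lists them.
	for k, v := range settings(os.Environ()) {
		fmt.Printf("setting %s = %s\n", k, v)
	}
}
```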

Crow exposes Prometheus metrics at /metrics.

Protect the endpoint with a token, and configure Prometheus to send it:

```
CROW_PROMETHEUS_AUTH_TOKEN=your-token
```

```yaml
scrape_configs:
  - job_name: crow
    bearer_token: your-token
    static_configs:
      - targets: ['crow.example.com']
```
| Metric | Description |
| --- | --- |
| `crow_pipeline_count` | Pipeline count by repo/branch/status |
| `crow_pipeline_time` | Build time |
| `crow_pipeline_total_count` | Total pipelines |
| `crow_pending_steps` | Pending steps |
| `crow_running_steps` | Running steps |
| `crow_repo_count` | Total repos |
| `crow_user_count` | Total users |
| `crow_worker_count` | Connected agents |
To serve HTTPS directly, point Crow at a certificate and key:

```
CROW_SERVER_CERT=/path/to/server.crt
CROW_SERVER_KEY=/path/to/server.key
```

Expose port 443 and mount certificates when using containers.

| Variable | Default | Description |
| --- | --- | --- |
| `CROW_LOG_LEVEL` | `info` | Log level |
| `CROW_LOG_FILE` | `stderr` | Log output destination |
| `CROW_LOG_STORE` | `database` | Pipeline log storage |
| `CROW_LOG_STORE_FILE_PATH` | - | File path for pipeline logs |

Alpine images auto-rotate logs when `CROW_LOG_FILE` is set to a file path:

| Variable | Default | Description |
| --- | --- | --- |
| `CROW_LOGROTATE_SCHEDULE` | `0 0 * * *` | Cron schedule |
| `CROW_LOGROTATE_RETAIN_DAYS` | `7` | Days to keep |

A server-side hook that lets Crow admins dynamically generate or modify pipeline configurations before execution.

```
CROW_CONFIG_SERVICE_ENDPOINT=https://example.com/ciconfig
```

Typical use cases:

  • Centralized pipeline templates — inject standard steps (security scanning, linting) across all repos
  • Dynamic configuration — generate pipelines based on changed files or branch patterns
  • Policy enforcement — validate or modify pipelines to meet organizational standards
  • Monorepo support — generate targeted pipelines based on which packages changed
```mermaid
sequenceDiagram
    participant Forge as Git Forge
    participant Crow as Crow Server
    participant Config as Config Service
    participant Agent as Crow Agent

    Forge->>Crow: Webhook (push, PR, etc.)
    Crow->>Config: POST /ciconfig
    Note right of Config: Evaluate branch,<br/>changed files, etc.
    alt Return custom config
        Config->>Crow: HTTP 200 + configs
    else Use existing
        Config->>Crow: HTTP 204
    end
    Crow->>Agent: Execute pipeline
```
  1. Before each pipeline run, Crow sends a POST request to the endpoint with repo and pipeline metadata

  2. The config service processes the request and can make dynamic decisions based on branch, changed files, event type, etc.

  3. The service returns new configs (HTTP 200) or signals “use existing” (HTTP 204)

Requests are signed with ed25519. Get the public key for verification from /api/signature/public-key.

The config service is a separate application you deploy and maintain. It can run anywhere accessible to the Crow server:

| Deployment Option | Notes |
| --- | --- |
| Kubernetes pod | Same cluster as Crow; use internal service URL |
| Standalone VM | Any HTTP-accessible server |
| Serverless | AWS Lambda, Cloud Functions, etc. |
| Sidecar | Container alongside Crow server |

Setup:

  1. Deploy your config service (see example below)
  2. Ensure it’s reachable from the Crow server
  3. Set CROW_CONFIG_SERVICE_ENDPOINT to the service URL
  4. Restart Crow server
```
# Example: Crow server configuration
CROW_CONFIG_SERVICE_ENDPOINT=http://config-service.crow.svc:8080
```
The request body sent to the config service looks like:

```json
{
  "repo": {
    "id": 100,
    "name": "my-repo",
    "owner": "my-org",
    "private": true,
    "default_branch": "main"
  },
  "pipeline": {
    "event": "push",
    "branch": "main",
    "commit": "abc123...",
    "author": "user",
    "message": "commit message",
    "changed_files": ["src/main.go", "pkg/api/handler.go"]
  },
  "netrc": {
    "machine": "github.com",
    "login": "x-token",
    "password": "ghp_..."
  }
}
```

Return HTTP 200 with new configs:

```json
{
  "configs": [
    {
      "name": ".crow/build.yaml",
      "data": "steps:\n  - name: build\n    image: golang:1.22\n    commands:\n      - go build ./..."
    }
  ]
}
```

Return HTTP 204 (no body) to use the existing repository configs.

A simple Go service that injects security scanning for main branch pushes:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type Request struct {
	Repo struct {
		Name string `json:"name"`
	} `json:"repo"`
	Pipeline struct {
		Event        string   `json:"event"`
		Branch       string   `json:"branch"`
		ChangedFiles []string `json:"changed_files"`
	} `json:"pipeline"`
}

type Response struct {
	Configs []struct {
		Name string `json:"name"`
		Data string `json:"data"`
	} `json:"configs"`
}

func handler(w http.ResponseWriter, r *http.Request) {
	var req Request
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	// Only modify main branch pushes
	if req.Pipeline.Branch != "main" || req.Pipeline.Event != "push" {
		w.WriteHeader(http.StatusNoContent) // Use existing config
		return
	}

	// Inject security scanning step
	config := `steps:
  - name: security-scan
    image: aquasec/trivy
    commands:
      - trivy fs --severity HIGH,CRITICAL .
  - name: build
    image: golang:1.22
    commands:
      - go build ./...
`

	resp := Response{
		Configs: []struct {
			Name string `json:"name"`
			Data string `json:"data"`
		}{{Name: ".crow/build.yaml", Data: config}},
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Run multiple server instances with PostgreSQL or MySQL for HA. The database coordinates queue processing and cron execution.

```
CROW_ENABLE_HA=true
```

| Variable | Default | Description |
| --- | --- | --- |
| `CROW_QUEUE_LOCK_TTL` | `30s` | Queue lock timeout |
| `CROW_CRON_LOCK_TTL` | `90s` | Cron lock timeout |

With Helm, set `server.replicaCount` > 1.

Built-in maintenance jobs accessible via Settings → Maintenance.

Reclaims database space after log deletion. Essential for SQLite.

Removes orphaned resources (PVCs, secrets, services) older than 7 days.

```
CROW_MAINTENANCE_KUBERNETES_CLEANUP_AGE=168h
```