Crow CI

Logging

Logging configuration variables.

  • Name: CROW_LOG_LEVEL
  • Description: Logging level. Possible values are trace, debug, info, warn, error, fatal, panic, and disabled.
  • Default: none

  • Name: CROW_LOG_FILE
  • Description: Output destination for logs. stdout and stderr can be used as special keywords.
  • Default: stderr
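As a minimal sketch, the two variables above can be combined to send verbose logs to a file instead of stderr (the log path here is a hypothetical example):

```shell
# Hypothetical example: debug-level logging written to a file
export CROW_LOG_LEVEL=debug
export CROW_LOG_FILE=/var/log/crow/server.log
```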

  • Name: CROW_LOG_STORE
  • Description: Log store to use (database, file, or s3).
  • Default: database

Pipeline logs can grow quickly, especially with many active repositories. The three backends have different trade-offs:

Database

Stores log entries as rows in the log_entries table.

  • Pros: Simplest setup — no extra infrastructure needed. Supports live log streaming in HA mode (the distributed logger polls the database for new entries).
  • Cons: Database size grows proportionally with log volume. On busy instances this can significantly increase backup times, disk usage, and query latency for unrelated tables. Databases like SQLite are particularly affected since the entire file grows.
  • Best for: Small to medium instances, or when database size is not a concern.

File

Stores one NDJSON file per pipeline step on the server’s local filesystem.

  • Pros: Moves log data out of the database entirely. Fast local I/O with append-only writes. Easy to inspect logs manually.
  • Cons: Logs are tied to a single server’s disk — not suitable for HA setups with multiple servers unless using a shared filesystem (e.g. NFS). You are responsible for managing disk space and backups.
  • Best for: Single-server deployments where you want to keep the database lean.
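The sketch below illustrates the file store's NDJSON shape, with one standalone JSON object per line. The file name and entry fields are illustrative assumptions, not the actual schema, and a temporary directory stands in for a real storage path:

```shell
# Enable the file log store (directory is a throwaway temp dir for this sketch)
export CROW_LOG_STORE=file
export CROW_LOG_STORE_FILE_PATH="$(mktemp -d)"

# Simulate what a per-step log file might look like: one JSON object per line
printf '%s\n' \
  '{"line": 1, "data": "git clone https://example.com/repo.git"}' \
  '{"line": 2, "data": "build finished"}' \
  > "$CROW_LOG_STORE_FILE_PATH/step-42.ndjson"

# Because each line is standalone JSON, plain text tools suffice for inspection
wc -l < "$CROW_LOG_STORE_FILE_PATH/step-42.ndjson"
```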

S3

Stores logs in any S3-compatible object storage (AWS S3, MinIO, Ceph, DigitalOcean Spaces, etc.).

  • Pros: Decouples log storage from both the database and local disk. Scales to very large volumes without affecting database performance. Works well for HA setups since all servers can access the same bucket. Object storage is typically cheaper than database storage at scale.
  • Cons: Higher latency per read compared to local disk or database — each log view requires listing and fetching objects from S3. Writes create one small object per batch (~1 second of log output), so a long-running step will produce many objects. Not suitable for the distributed logger’s live streaming poll (live streaming still uses the in-memory multiplexer regardless of log store).
  • Best for: Large or multi-server deployments where database size is a concern and you have S3-compatible storage available.

  • Name: CROW_LOG_STORE_FILE_PATH
  • Description: Directory used for file-based log storage.
  • Default: none

  • Name: CROW_LOG_STORE_S3_ENDPOINT
  • Description: S3-compatible endpoint for log storage (e.g. s3.amazonaws.com or minio.example.com:9000).
  • Default: none

  • Name: CROW_LOG_STORE_S3_BUCKET
  • Description: S3 bucket name for log storage.
  • Default: none

  • Name: CROW_LOG_STORE_S3_ACCESS_KEY
  • Description: S3 access key for log storage.
  • Default: none

  • Name: CROW_LOG_STORE_S3_SECRET_KEY
  • Description: S3 secret key for log storage.
  • Default: none

  • Name: CROW_LOG_STORE_S3_SSL
  • Description: Use SSL for S3 log storage connection.
  • Default: true

  • Name: CROW_LOG_STORE_S3_PATH_STYLE
  • Description: Use path-style addressing for S3 log storage (required for MinIO and some S3-compatible providers).
  • Default: false

  • Name: CROW_LOG_STORE_S3_PREFIX
  • Description: Key prefix for log objects in the S3 bucket.
  • Default: logs
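Putting the S3 variables together, a MinIO-backed configuration might look like the following sketch. The endpoint, bucket, and credentials are placeholder values; adjust the SSL and path-style settings to match your provider:

```shell
# Hypothetical example: store pipeline logs in a self-hosted MinIO bucket
export CROW_LOG_STORE=s3
export CROW_LOG_STORE_S3_ENDPOINT=minio.example.com:9000
export CROW_LOG_STORE_S3_BUCKET=crow-logs
export CROW_LOG_STORE_S3_ACCESS_KEY=crow
export CROW_LOG_STORE_S3_SECRET_KEY=change-me
export CROW_LOG_STORE_S3_SSL=false        # plain HTTP for an internal MinIO
export CROW_LOG_STORE_S3_PATH_STYLE=true  # MinIO needs path-style addressing
```

CROW_LOG_STORE_S3_PREFIX is left at its default here, so log objects land under the logs/ prefix in the bucket.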

  • Name: CROW_LOGROTATE_ENABLED
  • Description: Enable automatic log rotation (Alpine images only). Set to auto to enable when CROW_LOG_FILE points to a file, true to always enable, or false to disable.
  • Default: auto

  • Name: CROW_LOGROTATE_SCHEDULE
  • Description: Cron schedule for log rotation (Alpine images only). Uses standard cron syntax (minute hour day month weekday).
  • Default: 0 0 * * * (daily at midnight)

  • Name: CROW_LOGROTATE_RETAIN_DAYS
  • Description: Number of days to retain rotated logs before deletion (Alpine images only).
  • Default: 7
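For instance, rotation on an Alpine image could be tuned to run weekly and keep two weeks of archives. This is a sketch; the log path is a hypothetical example, and auto only activates rotation because CROW_LOG_FILE points at a regular file rather than stdout or stderr:

```shell
# Hypothetical example: weekly rotation, two weeks of retained archives
export CROW_LOG_FILE=/var/log/crow/server.log  # rotation requires a real file
export CROW_LOGROTATE_ENABLED=auto             # active, since CROW_LOG_FILE is a file
export CROW_LOGROTATE_SCHEDULE='0 3 * * 0'     # Sundays at 03:00 (minute hour day month weekday)
export CROW_LOGROTATE_RETAIN_DAYS=14
```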