Pipelines
Maravilla pipelines provide GitHub Actions-style CI/CD directly from your git repositories. Push a pipeline file to your repo, and the system automatically parses it, builds a dependency graph, and runs each job in an isolated Docker container.
Pipeline File Locations
Maravilla looks for pipeline definitions in these locations:
| Path | Description |
|---|---|
.maravilla/pipeline.yml | Primary pipeline file |
.maravilla/pipeline/*.yml | Additional pipeline files (any .yml or .yaml) |
You can use a single .maravilla/pipeline.yml for simple projects, or split pipelines into multiple files under .maravilla/pipeline/ for larger projects (e.g., build.yml, deploy.yml, lint.yml). All files are discovered automatically and sorted alphabetically.
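The discovery order described above can be sketched in Python. This is an illustrative helper, not Maravilla's actual implementation; in particular, listing the primary file first is an assumption:

```python
from pathlib import Path

def discover_pipeline_files(repo_root: str) -> list[Path]:
    # Sketch of the documented discovery order: the primary pipeline.yml first,
    # then any .yml/.yaml files under .maravilla/pipeline/, sorted alphabetically.
    root = Path(repo_root) / ".maravilla"
    found = []
    primary = root / "pipeline.yml"
    if primary.is_file():
        found.append(primary)
    extra_dir = root / "pipeline"
    if extra_dir.is_dir():
        found.extend(sorted(
            p for p in extra_dir.iterdir()
            if p.is_file() and p.suffix in (".yml", ".yaml")
        ))
    return found
```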
How It Works
- You push code containing `.maravilla/pipeline.yml` (or files in `.maravilla/pipeline/`) to a Maravilla-hosted repo
- The receive-pack hook reads the YAML from the commit
- The system parses, validates, and matches the trigger config against the push event
- A PipelineRun is created with a sequential run number
- Jobs are expanded (matrix) and organized into a DAG (topological levels)
- Each level executes in parallel; every job becomes a task submitted to the scheduler
- For each job, the engine checks out the git repo on the host, restores artifacts from dependency jobs, then starts the container with `/workspace` bind-mounted
- Each container runs `/bin/sh -c` with `set -e` and the job commands
- On success, artifact paths are tarred and uploaded for downstream jobs
- Logs are captured in real time and persisted
- On completion, the pipeline state is computed: Succeeded, Failed, Degraded, or Cancelled
Quick Example
# .maravilla/pipeline.yml
name: ci
image: node:22-alpine
on:
push:
branches: [main]
jobs:
install:
commands:
- npm ci
artifacts:
paths: [node_modules/]
test:
needs: install
commands:
- npm test
build:
needs: [install, test]
commands:
- npm run build
Push this file and the pipeline runs install first, then test, and finally build once test succeeds. No git binary is needed inside the container — checkout happens on the host before your container starts.
YAML Reference
Top-Level Fields
| Field | Type | Required | Description |
|---|---|---|---|
name | string | yes | Pipeline name (displayed in UI and logs) |
image | string | no | Default Docker image inherited by all jobs that don’t specify their own |
checkout | object | no | Checkout configuration (see below) |
on | object | no | Trigger configuration (when to run) |
env | map | no | Environment variables injected into every job |
services | map | no | Sidecar containers (databases, caches) |
secrets | list | no | Secret names resolved from the pipeline secret store |
webhook | map | no | Arbitrary key-value data included in all webhook payloads for this pipeline |
jobs | map | yes | Map of job_name to job definition |
Checkout
Controls host-side git checkout behavior.
checkout:
submodules: false # default: false
The engine always performs a shallow clone (--depth 1) and fetches the exact commit SHA.
Triggers (on)
on:
push:
branches: [main, develop, "release/*"]
paths: ["src/**", "Cargo.toml"]
pull_request:
branches: [main]
events: [opened, synchronize]
manual: true
schedule:
- cron: "0 2 * * *"
- cron: "0 14 * * 1-5"
| Trigger | Fields | Description |
|---|---|---|
push | branches, paths | Fires on git push. Branches support glob patterns (*, **). Paths filter by changed files. Empty lists match all. |
pull_request | branches, events | Fires on PR events against target branches. Events: opened, synchronize, closed. |
manual | boolean | When true, the pipeline can be triggered via the API or UI. |
schedule | list of {cron} | Cron-based scheduling (UTC). Standard 5-field cron syntax. |
Glob patterns:
- `main`: exact match
- `release/*`: matches `release/1.0` but not `release/1.0/hotfix`
- `src/**`: matches `src/lib.rs`, `src/deep/nested/file.rs`
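These matching rules can be sketched as a small translator, assuming `*` stops at `/` while `**` crosses it. This is an illustrative reimplementation, not the engine's code:

```python
import re

def glob_to_regex(pattern: str) -> re.Pattern:
    # '**' matches across '/' separators, '*' matches within one path segment;
    # everything else is taken literally.
    out, i = [], 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(out) + "$")

def matches(pattern: str, name: str) -> bool:
    return glob_to_regex(pattern).match(name) is not None
```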
Jobs
jobs:
test:
needs: install
image: node:22-alpine
commands:
- npm test
env:
DATABASE_URL: postgres://localhost:5432/test
artifacts:
paths: [coverage/]
cache:
key: "npm-{{ hash('package-lock.json') }}"
paths: [node_modules]
resources:
cpu: "2000m"
memory: "2Gi"
timeout: 1800
failure_strategy: continue
| Field | Type | Required | Description |
|---|---|---|---|
image | string | no | Docker image. Inherits pipeline-level image if absent. One of the two must be set. |
commands | list | no | Shell commands executed in order with set -e |
needs | string or list | no | Job name(s) this job depends on. Accepts needs: install or needs: [a, b] |
env | map | no | Job-specific environment variables (override pipeline-level env) |
artifacts | object | no | Artifact upload configuration |
cache | object | no | Dependency caching configuration |
resources | object | no | CPU and memory limits for the container |
matrix | map | no | Matrix expansion — runs the job for every combination |
if | string | no | Condition expression — skip job if false |
failure_strategy | string | no | What to do when job fails: stop (default), continue, ignore |
timeout | integer | no | Maximum execution time in seconds (default: 1800) |
steps | list | no | Named steps with name and commands fields. Alternative to top-level commands for structured output. |
paths | list | no | Workspace-relative paths for path-based job filtering |
Job Steps
Jobs can use steps instead of (or alongside) commands for named, structured execution:
jobs:
build:
steps:
- name: Install dependencies
commands:
- npm ci
- name: Build application
commands:
- npm run build
Each step has a name (displayed in the UI and logs) and its own commands list.
Webhook Data
The top-level webhook field attaches arbitrary key-value data to all webhook payloads for the pipeline. Maravilla does not interpret the values — consumers (e.g., the deployment system) read them.
webhook:
deploy: true
environment: production
Artifacts
Artifacts from all needs: jobs are automatically downloaded and extracted into the workspace before the current job’s commands run.
jobs:
build:
commands: [npm run build]
artifacts:
paths: [dist/, coverage/]
expire_in: "1 week" # optional; default: 30 days
deploy:
needs: build
commands: [kubectl apply -f k8s/]
# dist/ and coverage/ are automatically restored into /workspace
| Field | Type | Description |
|---|---|---|
paths | list | Workspace-relative paths (files or directories) to tar and upload after success |
expire_in | string | Retention override, e.g. "1 day", "1 week" |
Artifacts are retained for 30 days by default.
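The `expire_in` format is only shown by example, so the exact grammar is an assumption. A minimal parser for `"<count> <unit>"` strings might look like:

```python
from datetime import timedelta

# Assumed unit table; the doc only demonstrates "1 day" and "1 week".
_UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600, "day": 86400, "week": 604800}

def parse_expire_in(value: str) -> timedelta:
    # Hypothetical parser: "2 days" -> timedelta(days=2). Trailing 's' on the
    # unit is optional.
    count, unit = value.split()
    return timedelta(seconds=int(count) * _UNIT_SECONDS[unit.rstrip("s")])

DEFAULT_RETENTION = parse_expire_in("30 days")
```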
Cache
cache:
key: "npm-{{ hash('package-lock.json') }}"
restore_keys: [npm-]
paths: [node_modules]
| Field | Type | Description |
|---|---|---|
key | string | Exact cache key. Supports {{ hash('filename') }} templates that SHA-256 hash the file contents. |
restore_keys | list | Prefix fallback keys. If exact key misses, tries these prefixes in order. |
paths | list | Directories or files to cache. |
Cache is stored per-repository with zstd compression. A per-tenant size limit (default 5 GB) enforces LRU eviction when exceeded.
Matrix Builds
jobs:
build:
commands:
- rustup target add $TARGET
- cargo build --release --target $TARGET
matrix:
target:
- x86_64-unknown-linux-gnu
- aarch64-unknown-linux-gnu
- x86_64-apple-darwin
artifacts:
paths: [target/]
This expands the job into 3 parallel copies. Each copy gets the matrix values as uppercase environment variables (e.g. TARGET=x86_64-unknown-linux-gnu).
Limits: Maximum 25 combinations per job. Maximum 100 jobs per pipeline.
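The expansion semantics can be sketched as a Cartesian product over the matrix axes; this is an illustration of the documented behavior (uppercase env vars, 25-combination cap), not the engine's code:

```python
from itertools import product

MAX_COMBINATIONS = 25  # documented per-job limit

def expand_matrix(matrix: dict[str, list[str]]) -> list[dict[str, str]]:
    # One job copy per combination; each matrix key becomes an uppercase
    # environment variable in that copy.
    keys = list(matrix)
    combos = [
        {k.upper(): v for k, v in zip(keys, values)}
        for values in product(*(matrix[k] for k in keys))
    ]
    if len(combos) > MAX_COMBINATIONS:
        raise ValueError(f"matrix expands to {len(combos)} combinations "
                         f"(max {MAX_COMBINATIONS})")
    return combos
```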
Conditions (if)
if: "branch == 'main' && event == 'push'"
Simple expressions supporting:
- `branch == 'value'` / `branch != 'value'`
- `event == 'push'` / `event == 'pull_request'` / `event == 'manual'`
- `tag == 'v1.0'`
- `&&` for AND (up to 10 conditions)
When a condition evaluates to false, the job is skipped (not failed).
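An evaluator for this grammar might look like the sketch below, assuming the only variables are `branch`, `event`, and `tag` and the only operators are `==`, `!=`, and `&&`:

```python
import re

# One clause: <name> ==|!= '<value>'
_CLAUSE = re.compile(r"^\s*(branch|event|tag)\s*(==|!=)\s*'([^']*)'\s*$")

def evaluate(expr: str, ctx: dict) -> bool:
    # AND together up to 10 simple comparison clauses against the run context.
    clauses = expr.split("&&")
    if len(clauses) > 10:
        raise ValueError("too many conditions")
    for clause in clauses:
        m = _CLAUSE.match(clause)
        if not m:
            raise ValueError(f"unsupported clause: {clause!r}")
        name, op, value = m.groups()
        equal = ctx.get(name) == value
        if (op == "==") != equal:
            return False
    return True
```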
Failure Strategy
| Value | Behavior |
|---|---|
stop | (default) Pipeline fails immediately. Downstream jobs are cancelled. |
continue | Pipeline continues. Final state is Degraded if any job failed. |
ignore | Failure is ignored entirely. Pipeline can still be Succeeded. |
Resources
resources:
cpu: "2000m" # 2 CPU cores (millicores)
memory: "4Gi" # 4 GB RAM
Defaults: 1000m CPU, 512Mi memory. These are Docker container limits.
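The resource units follow the Kubernetes-style convention (millicores, binary suffixes); assuming that convention, parsing them reduces to:

```python
def parse_cpu(spec: str) -> float:
    # "2000m" -> 2.0 cores; a bare number is whole cores.
    return int(spec[:-1]) / 1000 if spec.endswith("m") else float(spec)

def parse_memory(spec: str) -> int:
    # Binary suffixes: "512Mi" -> bytes. A bare number is raw bytes.
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if spec.endswith(suffix):
            return int(spec[:-2]) * factor
    return int(spec)
```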
Service Sidecars
services:
postgres:
image: postgres:16
env:
POSTGRES_PASSWORD: test
POSTGRES_DB: mydb
redis:
image: redis:7-alpine
Service containers run alongside your job containers on the same Docker network. Access them via their name as hostname (e.g., postgres:5432).
Secrets
Declare secrets your pipeline uses. They are resolved from the pipeline secret store and injected as environment variables.
secrets: [DATABASE_URL, DEPLOY_TOKEN]
Pipeline secrets are AES-256-GCM encrypted at rest. They never appear in API responses — only names are listed. Avoid printing secrets with echo in your commands, as log output is not filtered.
Manage secrets via the REST API:
| Method | Path | Description |
|---|---|---|
| POST | /api/v1/repos/{ns}/{repo}/pipelines/secrets | Set secret |
| GET | /api/v1/repos/{ns}/{repo}/pipelines/secrets | List secret names |
| DELETE | /api/v1/repos/{ns}/{repo}/pipelines/secrets/{name} | Delete secret |
Built-in Environment Variables
Every job automatically receives these environment variables:
| Variable | Example | Description |
|---|---|---|
PIPELINE_RUN_ID | a1b2c3d4-... | Unique ID of this pipeline run |
PIPELINE_REF | refs/heads/main | Full git ref that triggered the run |
PIPELINE_SHA | abc123def456 | Commit SHA being built |
PIPELINE_BRANCH | main | Branch name (extracted from ref) |
PIPELINE_EVENT | push | Trigger type: push, pull_request, manual, schedule, retry |
PIPELINE_JOB_NAME | build | Name of the current job |
Priority order (highest wins): Built-in vars > Run env_vars (vault) > Job env > Pipeline env
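The precedence rule is equivalent to merging the four layers from lowest to highest priority, with later layers overwriting earlier ones:

```python
def job_environment(pipeline_env: dict, job_env: dict,
                    run_env: dict, builtin: dict) -> dict:
    # Later layers win, matching the documented priority:
    # pipeline env < job env < run env_vars (vault) < built-in variables.
    merged: dict = {}
    for layer in (pipeline_env, job_env, run_env, builtin):
        merged.update(layer)
    return merged
```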
DAG Execution
Jobs are organized into levels based on needs dependencies:
Level 0: [install] -- no dependencies, runs first
Level 1: [lint, test] -- both need install, run in parallel
Level 2: [build] -- needs lint + test, runs after both complete
Level 3: [deploy] -- needs build
Within a level, all jobs are submitted to the scheduler simultaneously. If a job fails with failure_strategy: stop, all jobs in subsequent levels are cancelled.
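The leveling above is a standard topological sort by layers (Kahn's algorithm). A sketch, with alphabetical ordering inside a level added here purely for determinism:

```python
def dag_levels(jobs: dict[str, list[str]]) -> list[list[str]]:
    # jobs maps job name -> list of `needs` dependencies.
    remaining = {name: set(deps) for name, deps in jobs.items()}
    levels = []
    while remaining:
        # Jobs whose dependencies are all satisfied form the next level.
        ready = sorted(n for n, deps in remaining.items() if not deps)
        if not ready:
            raise ValueError("dependency cycle detected")
        levels.append(ready)
        for n in ready:
            del remaining[n]
        for deps in remaining.values():
            deps.difference_update(ready)
    return levels
```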
Pipeline States
| State | Description |
|---|---|
| Pending | Run created, not yet started |
| Running | At least one job is executing |
| Succeeded | All jobs completed successfully |
| Failed | A required job failed (stop strategy) |
| Degraded | Some jobs failed but execution continued (continue strategy) |
| Cancelled | Cancelled by user or system |
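Combining this table with the failure strategies, the final state of a run that was not cancelled can be sketched as:

```python
def pipeline_state(job_results: list[tuple[str, str]]) -> str:
    # job_results: (status, failure_strategy) pairs for completed jobs.
    # A 'stop' failure makes the run Failed; 'continue' failures degrade it;
    # 'ignore' failures leave the outcome untouched.
    state = "Succeeded"
    for status, strategy in job_results:
        if status == "failed":
            if strategy == "stop":
                return "Failed"
            if strategy == "continue":
                state = "Degraded"
    return state
```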
Security
- Tenant isolation: All pipeline data is scoped by tenant ID. Cross-tenant access is prevented at the store level.
- Container hardening: Jobs run with
cap_drop: ALL,no-new-privileges,readonly_rootfs,pids_limit: 256. - Network isolation: Each job runs in an isolated Docker network.
- Path traversal protection: Artifact names and cache keys are validated —
..,/,\, null bytes are rejected. - YAML limits: Max 1 MB YAML size, max 100 jobs, max 25 matrix combinations, max 512-char conditions.
- Rate limiting: Per-repo cooldown of 5 seconds between pipeline triggers.
Real-World Example
This is a real pipeline configuration from a SvelteKit application:
# .maravilla/pipeline.yml
name: build-and-deploy
image: node:22-alpine
on:
push:
branches: [main]
webhook:
deploy: true
jobs:
build:
commands:
- export NODE_OPTIONS="--max-old-space-size=4096"
- env
steps:
- name: Install dependencies
commands:
- npm ci
- name: Build SvelteKit app
commands:
- npm run build
artifacts:
paths: [build/]
resources:
cpu: "2000m"
memory: "4Gi"
Full-Stack Node.js with Services
name: fullstack-ci
image: node:22-alpine
on:
push:
branches: [main]
pull_request:
branches: [main]
events: [opened, synchronize]
services:
postgres:
image: postgres:16
env:
POSTGRES_PASSWORD: test
POSTGRES_DB: app_test
secrets: [DATABASE_URL, DEPLOY_TOKEN]
jobs:
install:
commands:
- npm ci
artifacts:
paths: [node_modules/]
lint:
needs: install
commands:
- npx eslint src/ --max-warnings 0
- npx prettier --check src/
failure_strategy: continue
test:
needs: install
commands:
- npm test
env:
DATABASE_URL: postgres://postgres:test@postgres:5432/app_test
timeout: 600
build:
needs: [lint, test]
commands:
- npm run build
artifacts:
paths: [dist/]
resources:
cpu: "2000m"
memory: "2Gi"
deploy:
image: alpine/curl
needs: build
commands:
- curl -sSf -X POST "https://deploy.example.com/api/deploy"
-H "Authorization: Bearer $DEPLOY_TOKEN"
-d "{\"version\":\"$PIPELINE_SHA\"}"
if: "branch == 'main' && event == 'push'"
Rust Project
name: rust-ci
image: rust:1.82
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
env:
CARGO_TERM_COLOR: always
RUSTFLAGS: "-D warnings"
jobs:
check:
commands:
- cargo check --workspace
test:
needs: check
commands:
- cargo test --workspace
clippy:
needs: check
commands:
- rustup component add clippy
- cargo clippy --workspace -- -D warnings
build-release:
needs: [test, clippy]
commands:
- cargo build --release
artifacts:
paths: [target/release/myapp]
if: "branch == 'main' && event == 'push'"