Pipelines

Maravilla pipelines provide GitHub Actions-style CI/CD directly from your git repositories. Push a pipeline file to your repo, and the system automatically parses it, builds a dependency graph, and executes each job as an isolated Docker container.

Pipeline File Locations

Maravilla looks for pipeline definitions in these locations:

| Path | Description |
|------|-------------|
| .maravilla/pipeline.yml | Primary pipeline file |
| .maravilla/pipeline/*.yml | Additional pipeline files (any .yml or .yaml) |

You can use a single .maravilla/pipeline.yml for simple projects, or split pipelines into multiple files under .maravilla/pipeline/ for larger projects (e.g., build.yml, deploy.yml, lint.yml). All files are discovered automatically and sorted alphabetically.
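For example, a larger project might keep its build pipeline in its own file. This is a minimal sketch (assuming each file under .maravilla/pipeline/ is a complete, standalone pipeline definition, as the per-file discovery above implies; the file name build.yml is illustrative):

```yaml
# .maravilla/pipeline/build.yml
name: build
image: node:22-alpine

on:
  push:
    branches: [main]

jobs:
  build:
    commands:
      - npm run build
```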

How It Works

  1. You push code containing .maravilla/pipeline.yml (or files in .maravilla/pipeline/) to a Maravilla-hosted repo
  2. The receive-pack hook reads the YAML from the commit
  3. The system parses, validates, and matches the trigger config against the push event
  4. A PipelineRun is created with a sequential run number
  5. Jobs are expanded (matrix) and organized into a DAG (topological levels)
  6. Each level executes in parallel — every job becomes a task submitted to the scheduler
  7. For each job, the engine checks out the git repo on the host, restores artifacts from dependency jobs, then starts the container with /workspace bind-mounted
  8. Each container runs /bin/sh -c with set -e and the job commands
  9. On success, artifact paths are tarred and uploaded for downstream jobs
  10. Logs are captured in real-time and persisted
  11. On completion, the pipeline state is computed: Succeeded, Failed, Degraded, or Cancelled

Quick Example

# .maravilla/pipeline.yml
name: ci
image: node:22-alpine

on:
  push:
    branches: [main]

jobs:
  install:
    commands:
      - npm ci
    artifacts:
      paths: [node_modules/]

  test:
    needs: install
    commands:
      - npm test

  build:
    needs: [install, test]
    commands:
      - npm run build

Push this file and the pipeline runs install first, then test, and finally build once test succeeds. No git binary is needed inside the container — checkout happens on the host before your container starts.


YAML Reference

Top-Level Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | yes | Pipeline name (displayed in UI and logs) |
| image | string | no | Default Docker image inherited by all jobs that don’t specify their own |
| checkout | object | no | Checkout configuration (see below) |
| on | object | no | Trigger configuration (when to run) |
| env | map | no | Environment variables injected into every job |
| services | map | no | Sidecar containers (databases, caches) |
| secrets | list | no | Secret names resolved from the pipeline secret store |
| webhook | map | no | Arbitrary key-value data included in all webhook payloads for this pipeline |
| jobs | map | yes | Map of job_name to job definition |

Checkout

Controls host-side git checkout behavior.

checkout:
  submodules: false   # default: false

The engine always performs a shallow clone (--depth 1) and fetches the exact commit SHA.

Triggers (on)

on:
  push:
    branches: [main, develop, "release/*"]
    paths: ["src/**", "Cargo.toml"]

  pull_request:
    branches: [main]
    events: [opened, synchronize]

  manual: true

  schedule:
    - cron: "0 2 * * *"
    - cron: "0 14 * * 1-5"

| Trigger | Fields | Description |
|---------|--------|-------------|
| push | branches, paths | Fires on git push. Branches support glob patterns (*, **). Paths filter by changed files. Empty lists match all. |
| pull_request | branches, events | Fires on PR events against target branches. Events: opened, synchronize, closed. |
| manual | boolean | When true, the pipeline can be triggered via the API or UI. |
| schedule | list of {cron} | Cron-based scheduling (UTC). Standard 5-field cron syntax. |

Glob patterns:

  • main — exact match
  • release/* — matches release/1.0 but not release/1.0/hotfix
  • src/** — matches src/lib.rs, src/deep/nested/file.rs

Jobs

jobs:
  test:
    needs: install
    image: node:22-alpine
    commands:
      - npm test
    env:
      DATABASE_URL: postgres://localhost:5432/test
    artifacts:
      paths: [coverage/]
    cache:
      key: "npm-{{ hash('package-lock.json') }}"
      paths: [node_modules]
    resources:
      cpu: "2000m"
      memory: "2Gi"
    timeout: 1800
    failure_strategy: continue

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| image | string | no | Docker image. Inherits pipeline-level image if absent. One of the two must be set. |
| commands | list | no | Shell commands executed in order with set -e |
| needs | string or list | no | Job name(s) this job depends on. Accepts needs: install or needs: [a, b] |
| env | map | no | Job-specific environment variables (override pipeline-level env) |
| artifacts | object | no | Artifact upload configuration |
| cache | object | no | Dependency caching configuration |
| resources | object | no | CPU and memory limits for the container |
| matrix | map | no | Matrix expansion — runs the job for every combination |
| if | string | no | Condition expression — skip job if false |
| failure_strategy | string | no | What to do when job fails: stop (default), continue, ignore |
| timeout | integer | no | Maximum execution time in seconds (default: 1800) |
| steps | list | no | Named steps with name and commands fields. Alternative to top-level commands for structured output. |
| paths | list | no | Workspace-relative paths for path-based job filtering |

Job Steps

Jobs can use steps instead of (or alongside) commands for named, structured execution:

jobs:
  build:
    steps:
      - name: Install dependencies
        commands:
          - npm ci
      - name: Build application
        commands:
          - npm run build

Each step has a name (displayed in the UI and logs) and its own commands list.


Webhook Data

The top-level webhook field attaches arbitrary key-value data to all webhook payloads for the pipeline. Maravilla does not interpret the values — consumers (e.g., the deployment system) read them.

webhook:
  deploy: true
  environment: production

Artifacts

Artifacts from all needs: jobs are automatically downloaded and extracted into the workspace before the current job’s commands run.

jobs:
  build:
    commands: [npm run build]
    artifacts:
      paths: [dist/, coverage/]
      expire_in: "1 week"          # optional; default: 30 days

  deploy:
    needs: build
    commands: [kubectl apply -f k8s/]
    # dist/ and coverage/ are automatically restored into /workspace

| Field | Type | Description |
|-------|------|-------------|
| paths | list | Workspace-relative paths (files or directories) to tar and upload after success |
| expire_in | string | Retention override, e.g. "1 day", "1 week" |

Artifacts are retained for 30 days by default.


Cache

cache:
  key: "npm-{{ hash('package-lock.json') }}"
  restore_keys: [npm-]
  paths: [node_modules]

| Field | Type | Description |
|-------|------|-------------|
| key | string | Exact cache key. Supports {{ hash('filename') }} templates that SHA-256 hash the file contents. |
| restore_keys | list | Prefix fallback keys. If exact key misses, tries these prefixes in order. |
| paths | list | Directories or files to cache. |

Cache is stored per-repository with zstd compression. A per-tenant size limit (default 5 GB) enforces LRU eviction when exceeded.


Matrix Builds

jobs:
  build:
    commands:
      - rustup target add $TARGET
      - cargo build --release --target $TARGET
    matrix:
      target:
        - x86_64-unknown-linux-gnu
        - aarch64-unknown-linux-gnu
        - x86_64-apple-darwin
    artifacts:
      paths: [target/]

This expands the job into 3 parallel copies. Each copy gets the matrix values as uppercase environment variables (e.g. TARGET=x86_64-unknown-linux-gnu).

Limits: Maximum 25 combinations per job. Maximum 100 jobs per pipeline.
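Matrix axes can also be combined. This sketch assumes multiple axes expand to the cross product, as "every combination" above suggests — here 2 × 2 = 4 parallel jobs, each receiving TARGET and PROFILE:

```yaml
jobs:
  build:
    commands:
      - cargo build --profile $PROFILE --target $TARGET
    matrix:
      target:
        - x86_64-unknown-linux-gnu
        - aarch64-unknown-linux-gnu
      profile:
        - dev
        - release
```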


Conditions (if)

if: "branch == 'main' && event == 'push'"

Simple expressions supporting:

  • branch == 'value' / branch != 'value'
  • event == 'push' / event == 'pull_request' / event == 'manual'
  • tag == 'v1.0'
  • && for AND (up to 10 conditions)

When a condition evaluates to false, the job is skipped (not failed).
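A release job gated on pushes to main might be sketched like this (job name and script are illustrative):

```yaml
jobs:
  release:
    needs: build
    commands:
      - ./scripts/release.sh   # hypothetical release script
    if: "event == 'push' && branch == 'main'"
```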


Failure Strategy

| Value | Behavior |
|-------|----------|
| stop | (default) Pipeline fails immediately. Downstream jobs are cancelled. |
| continue | Pipeline continues. Final state is Degraded if any job failed. |
| ignore | Failure is ignored entirely. Pipeline can still be Succeeded. |
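For instance, a non-blocking lint job can be sketched as follows (job names and commands are illustrative):

```yaml
jobs:
  lint:
    commands:
      - npx eslint src/
    failure_strategy: continue   # a lint failure marks the run Degraded, not Failed

  benchmark:
    commands:
      - npm run bench
    failure_strategy: ignore     # a failure here never affects the final pipeline state
```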

Resources

resources:
  cpu: "2000m"     # 2 CPU cores (millicores)
  memory: "4Gi"    # 4 GB RAM

Defaults: 1000m CPU, 512Mi memory. These are Docker container limits.


Service Sidecars

services:
  postgres:
    image: postgres:16
    env:
      POSTGRES_PASSWORD: test
      POSTGRES_DB: mydb
  redis:
    image: redis:7-alpine

Service containers run alongside your job containers on the same Docker network. Access them via their name as hostname (e.g., postgres:5432).
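A job that talks to the sidecars above might be sketched like this (credentials match the service env shown; the job itself is illustrative):

```yaml
jobs:
  test:
    commands:
      - npm test
    env:
      # service names double as hostnames on the shared Docker network
      DATABASE_URL: postgres://postgres:test@postgres:5432/mydb
      REDIS_URL: redis://redis:6379
```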


Secrets

Declare secrets your pipeline uses. They are resolved from the pipeline secret store and injected as environment variables.

secrets: [DATABASE_URL, DEPLOY_TOKEN]

Pipeline secrets are AES-256-GCM encrypted at rest. They never appear in API responses — only names are listed. Avoid printing secrets with echo in your commands, as log output is not filtered.

Manage secrets via the REST API:

POST   /api/v1/repos/{ns}/{repo}/pipelines/secrets       Set secret
GET    /api/v1/repos/{ns}/{repo}/pipelines/secrets       List secret names
DELETE /api/v1/repos/{ns}/{repo}/pipelines/secrets/{name} Delete secret

Built-in Environment Variables

Every job automatically receives these environment variables:

| Variable | Example | Description |
|----------|---------|-------------|
| PIPELINE_RUN_ID | a1b2c3d4-... | Unique ID of this pipeline run |
| PIPELINE_REF | refs/heads/main | Full git ref that triggered the run |
| PIPELINE_SHA | abc123def456 | Commit SHA being built |
| PIPELINE_BRANCH | main | Branch name (extracted from ref) |
| PIPELINE_EVENT | push | Trigger type: push, pull_request, manual, schedule, retry |
| PIPELINE_JOB_NAME | build | Name of the current job |

Priority order (highest wins): Built-in vars > Run env_vars (vault) > Job env > Pipeline env
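The priority order can be illustrated with a sketch (variable name is illustrative):

```yaml
env:
  LOG_LEVEL: info          # pipeline-level default

jobs:
  build:
    env:
      LOG_LEVEL: debug     # job env overrides pipeline env, so this wins
    commands:
      - echo "$LOG_LEVEL"        # prints: debug
      - echo "$PIPELINE_BRANCH"  # built-ins are always injected and win on conflict
```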


DAG Execution

Jobs are organized into levels based on needs dependencies:

Level 0:  [install]           -- no dependencies, runs first
Level 1:  [lint, test]        -- both need install, run in parallel
Level 2:  [build]             -- needs lint + test, runs after both complete
Level 3:  [deploy]            -- needs build

Within a level, all jobs are submitted to the scheduler simultaneously. If a job fails with failure_strategy: stop, all jobs in subsequent levels are cancelled.
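The needs wiring that would produce the levels above looks like this (a sketch; commands and the deploy script are illustrative):

```yaml
jobs:
  install:
    commands: [npm ci]

  lint:
    needs: install
    commands: [npx eslint src/]

  test:
    needs: install
    commands: [npm test]

  build:
    needs: [lint, test]
    commands: [npm run build]

  deploy:
    needs: build
    commands: [./deploy.sh]   # hypothetical deploy script
```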


Pipeline States

| State | Description |
|-------|-------------|
| Pending | Run created, not yet started |
| Running | At least one job is executing |
| Succeeded | All jobs completed successfully |
| Failed | A required job failed (stop strategy) |
| Degraded | Some jobs failed but execution continued (continue strategy) |
| Cancelled | Cancelled by user or system |

Security

  • Tenant isolation: All pipeline data is scoped by tenant ID. Cross-tenant access is prevented at the store level.
  • Container hardening: Jobs run with cap_drop: ALL, no-new-privileges, readonly_rootfs, pids_limit: 256.
  • Network isolation: Each job runs in an isolated Docker network.
  • Path traversal protection: Artifact names and cache keys are validated — .., /, \, null bytes are rejected.
  • YAML limits: Max 1 MB YAML size, max 100 jobs, max 25 matrix combinations, max 512-char conditions.
  • Rate limiting: Per-repo cooldown of 5 seconds between pipeline triggers.

Real-World Example

This is a real pipeline configuration from a SvelteKit application:

# .maravilla/pipeline.yml
name: build-and-deploy
image: node:22-alpine

on:
  push:
    branches: [main]

webhook:
  deploy: true

jobs:
  build:
    commands:
      - export NODE_OPTIONS="--max-old-space-size=4096"
      - env
    steps:
      - name: Install dependencies
        commands:
          - npm ci
      - name: Build SvelteKit app
        commands:
          - npm run build
    artifacts:
      paths: [build/]
    resources:
      cpu: "2000m"
      memory: "4Gi"

Full-Stack Node.js with Services

name: fullstack-ci
image: node:22-alpine

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
    events: [opened, synchronize]

services:
  postgres:
    image: postgres:16
    env:
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app_test

secrets: [DATABASE_URL, DEPLOY_TOKEN]

jobs:
  install:
    commands:
      - npm ci
    artifacts:
      paths: [node_modules/]

  lint:
    needs: install
    commands:
      - npx eslint src/ --max-warnings 0
      - npx prettier --check src/
    failure_strategy: continue

  test:
    needs: install
    commands:
      - npm test
    env:
      DATABASE_URL: postgres://postgres:test@postgres:5432/app_test
    timeout: 600

  build:
    needs: [lint, test]
    commands:
      - npm run build
    artifacts:
      paths: [dist/]
    resources:
      cpu: "2000m"
      memory: "2Gi"

  deploy:
    image: alpine/curl
    needs: build
    commands:
      - curl -sSf -X POST "https://deploy.example.com/api/deploy"
        -H "Authorization: Bearer $DEPLOY_TOKEN"
        -d "{\"version\":\"$PIPELINE_SHA\"}"
    if: "branch == 'main' && event == 'push'"

Rust Project

name: rust-ci
image: rust:1.82

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  CARGO_TERM_COLOR: always
  RUSTFLAGS: "-D warnings"

jobs:
  check:
    commands:
      - cargo check --workspace

  test:
    needs: check
    commands:
      - cargo test --workspace

  clippy:
    needs: check
    commands:
      - rustup component add clippy
      - cargo clippy --workspace -- -D warnings

  build-release:
    needs: [test, clippy]
    commands:
      - cargo build --release
    artifacts:
      paths: [target/release/myapp]
    if: "branch == 'main' && event == 'push'"