# Maravilla Documentation > Complete documentation for Maravilla — an edge runtime platform for full-stack applications. --- # Getting Started Source: https://www.maravilla.cloud/docs/getting-started Section: Getting Started Description: Get up and running with Maravilla Cloud in minutes Maravilla is an edge runtime platform that lets you deploy full-stack web applications with built-in platform services like KV storage and a document database. It runs your app with sub-100ms cold starts. ## Prerequisites - **Node.js >= 22.12.0** (required for the CLI and build tooling) - A supported platform: Linux x64, macOS x64/arm64, or Windows x64 ## Install the CLI ```bash npm install -g @maravilla-labs/cli ``` To see installation progress with npm 7+, add the `--foreground-scripts` flag: ```bash npm install -g @maravilla-labs/cli --foreground-scripts ``` Verify the installation: ```bash maravilla --version ``` ## Initialize a project Run `maravilla init` inside an existing project to set up the Maravilla configuration: ```bash maravilla init ``` This detects your framework and scaffolds the necessary adapter configuration and project structure. ## Start the dev server Launch the local development environment, which starts the platform dev server (KV, Database) alongside your framework's dev server: ```bash maravilla dev ``` Your app runs on `http://localhost:5173` with platform services available on port 3001. Hot module replacement works as usual. ## Build for production Create an optimized production bundle by running your framework's standard build command — the Maravilla adapter (configured in `svelte.config.js` / `nuxt.config.ts` / `vite.config.ts`) hooks into it: ```bash npm run build ``` This produces a `build/` directory containing your server bundle, static assets, and a platform manifest with the `auth`, `database`, and `transforms` blocks from `maravilla.config.ts` already merged in. 
You can preview the build locally: ```bash maravilla preview ``` Optional post-build steps: ```bash # Validate output structure maravilla check # Generate a V8 snapshot for 5-10x faster cold starts maravilla snapshot ``` ## Deploy Push your code to trigger a CI/CD pipeline build. Maravilla uses a `.maravilla/pipeline.yml` file to define build steps and deployment configuration. A typical pipeline installs dependencies, runs `npm run build`, and produces the `.maravilla/` artifact for deployment. ## Next steps - [Installation](/docs/installation) -- system requirements, authentication, and project structure details - [Frameworks](/docs/frameworks) -- set up SvelteKit, React Router, SolidStart, or TanStack Start --- # Installation Source: https://www.maravilla.cloud/docs/installation Section: Getting Started Description: Install the Maravilla CLI and set up your first project ## System requirements | Requirement | Details | |-------------|---------| | Node.js | >= 22.12.0 | | Linux | x64 (GNU) | | macOS | x64 and arm64 | | Windows | x64 | ## Install the CLI Install the CLI globally using your preferred package manager: ```bash npm install -g @maravilla-labs/cli # or pnpm add -g @maravilla-labs/cli # or yarn global add @maravilla-labs/cli ``` Verify the installation: ```bash maravilla --version ``` ## Initialize a project Run `maravilla init` inside an existing project directory. The command detects your framework and configures the appropriate adapter: ```bash cd my-app maravilla init ``` ## Project structure after init After initialization, your project will contain a `.maravilla/` configuration directory. 
When you build, the adapter produces the following output structure: ``` .maravilla/ ├── manifest.json # Platform configuration ├── pipeline.yml # CI/CD pipeline definition ├── server.js # Bundled server application ├── server.js.map # Source maps ├── functions/ # Serverless functions (when present) └── static/ # Static assets and prerendered pages ``` The `manifest.json` describes your app's routing, features, and environment configuration. The runtime reads this file to serve your application. ## Authentication Register for an account and authenticate the CLI: ```bash # Create a new account maravilla register # Log in to an existing account maravilla login # Check the currently authenticated user maravilla whoami ``` ## CLI commands The CLI provides the following core commands: | Command | Description | |---------|-------------| | `maravilla init` | Initialize Maravilla in an existing project | | `maravilla dev` | Start the dev server with platform services (port 3001) | | `maravilla doctor` | Diagnose Maravilla setup (`--fix` for safe autofixes) | | `maravilla snapshot` | Generate a V8 snapshot from a built `server.js` for faster cold starts | | `maravilla check` | Validate that a build output directory has the expected structure | | `maravilla preview` | Preview a production build locally | | `maravilla platform` | Manage platform services (kv, db subcommands) | | `maravilla update` | Update the CLI to the latest version | > Production bundles are produced by your framework's normal build (`npm run build`) — the Maravilla adapter hooks in. There is no `maravilla build` command. See [CLI Reference → Building your project](./cli-reference#building-your-project). 
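The `manifest.json` described above is the contract between your build output and the runtime. As a purely illustrative sketch (the field names below are invented for orientation, not a schema reference; inspect a generated manifest for the real shape):

```json
{
  "name": "my-app",
  "entry": "server.js",
  "static": "static/",
  "routes": ["/*"],
  "env": { "public_prefix": "PUBLIC_" }
}
```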
### Platform service management

Browse KV and database records interactively from the terminal:

```bash
# Launch interactive KV browser
maravilla platform kv

# Launch interactive database browser
maravilla platform db

# Show platform service status
maravilla platform status
```

## Updating the CLI

Keep the CLI up to date with a single command:

```bash
maravilla update
```

## Next steps

- [Frameworks](/docs/frameworks) -- configure SvelteKit, React Router, or Nitro-based frameworks
- [Getting Started](/docs/getting-started) -- quick start walkthrough

---

# Frameworks

Source: https://www.maravilla.cloud/docs/frameworks
Section: Getting Started
Description: Framework adapters for SvelteKit, React Router, and more

Maravilla supports modern full-stack frameworks through dedicated adapters and a Nitro preset. Each integration produces a `.maravilla/` build output that the runtime can serve at the edge.

## SvelteKit

Fully supported via `@maravilla-labs/adapter-sveltekit`.

### Install

```bash
npm install -D @maravilla-labs/adapter-sveltekit
npm install @maravilla-labs/platform
```

### Configure

Update your `svelte.config.js` to use the Maravilla adapter:

```javascript
import adapter from '@maravilla-labs/adapter-sveltekit';

const config = {
  kit: {
    adapter: adapter({
      out: 'build',        // Output directory (default: '.maravilla')
      precompress: false,  // Pre-compress static assets (default: false)
      polyfill: true       // Include polyfills (default: true)
    })
  }
};

export default config;
```

### Adapter options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `out` | `string` | `'.maravilla'` | Output directory for the build |
| `precompress` | `boolean` | `false` | Pre-compress static assets with gzip/brotli |
| `polyfill` | `boolean` | `true` | Include runtime polyfills |
| `envPrefix` | `string` | `'PUBLIC_'` | Prefix for client-exposed environment variables |
| `include` / `exclude` | `string[]` | `['/*']` / `[]` | Route include/exclude patterns |
| `external` | `string[]` | `[]` | Dependencies to exclude from the server bundle |

### Use platform services

Access KV and Database in your server-side load functions:

```typescript
// src/routes/+page.server.ts
import { getPlatform } from '@maravilla-labs/platform';
import type { PageServerLoad } from './$types';

export const load: PageServerLoad = async () => {
  const platform = getPlatform();

  // KV store
  const cached = await platform.KV.demo.get('homepage:data');

  // Document database (document queries)
  const users = await platform.DB.find('users', { active: true });

  return { cached, users };
};
```

---

## React Router

Fully supported via `@maravilla-labs/adapter-react-router`. Works with React Router 7 (the framework formerly known as Remix).

### Install

```bash
npm install -D @maravilla-labs/adapter-react-router
npm install @maravilla-labs/platform
```

### Configure

Add the Maravilla Vite plugin to your `vite.config.ts`. The plugin runs after React Router's build to produce the `.maravilla/` bundle:

```typescript
import { defineConfig } from 'vite';
import { reactRouter } from '@react-router/dev/vite';
import { maravillaReactRouter } from '@maravilla-labs/adapter-react-router/vite';

export default defineConfig({
  plugins: [
    reactRouter(),
    maravillaReactRouter({
      out: '.maravilla',               // Output directory (default: '.maravilla')
      serverBuildPath: 'build/server', // React Router server output (default)
      clientBuildPath: 'build/client', // React Router client output (default)
    })
  ]
});
```

### Adapter options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `out` | `string` | `'.maravilla'` | Output directory |
| `serverBuildPath` | `string` | `'build/server'` | Path to React Router server build |
| `clientBuildPath` | `string` | `'build/client'` | Path to React Router client build |
| `serverBuildFile` | `string` | `'index.js'` | Server build entry file name |
| `envPrefix` | `string` | `'PUBLIC_'` | Prefix for client-exposed environment variables |
| `polyfill` | `boolean` | `true` | Include runtime polyfills |
| `precompress` | `boolean` | `false` | Pre-compress static assets |
| `external` | `string[]` | `[]` | Dependencies to exclude from the bundle |

---

## Nitro-based frameworks (SolidStart, TanStack Start)

Frameworks built on [Nitro](https://nitro.build) are supported through `@maravilla-labs/preset-nitro`. This includes SolidStart and TanStack Start.

### Install

```bash
npm install -D @maravilla-labs/preset-nitro
npm install @maravilla-labs/platform
```

### SolidStart

Configure the Nitro preset in your `app.config.ts`:

```typescript
import { defineConfig } from '@solidjs/start/config';

export default defineConfig({
  server: {
    preset: '@maravilla-labs/preset-nitro'
  }
});
```

### TanStack Start

Configure the Nitro preset in your `app.config.ts`:

```typescript
import { defineConfig } from '@tanstack/start/config';

export default defineConfig({
  server: {
    preset: '@maravilla-labs/preset-nitro'
  }
});
```

### Standalone Nitro

You can also use the preset directly in a Nitro config:

```typescript
// nitro.config.ts
export default defineNitroConfig({
  preset: '@maravilla-labs/preset-nitro'
});
```

The preset automatically detects which framework is in use (SolidStart, TanStack Start, or plain Nitro) and configures the build accordingly.

---

## What About Next.js?

Next.js is not currently supported. The framework has historically been deeply coupled to Vercel's proprietary infrastructure, making it impractical to run on alternative runtimes without relying on undocumented internals that break between releases.

Next.js 16.2 (March 2026) introduced a stable Adapter API, and the OpenNext working group — with Cloudflare, Netlify, AWS, and Google — is building verified adapters for non-Vercel platforms. This is a promising step toward true portability. We are monitoring the Adapter API as it matures.
Once the ecosystem stabilizes and verified adapters prove reliable in production, we will re-evaluate adding Next.js support. In the meantime, React Router provides a fully supported alternative for React-based applications on Maravilla.

---

## Platform services across frameworks

Regardless of framework, use `@maravilla-labs/platform` to access platform services in server-side code:

```bash
npm install @maravilla-labs/platform
```

The platform client auto-detects whether it is running in development (connecting to the local platform dev server on port 3001) or production (using the runtime's built-in services). Import and use it in any server-side handler:

```typescript
import { getPlatform } from '@maravilla-labs/platform';

const platform = getPlatform();

// KV Store -- key-value storage with optional TTL
await platform.KV.demo.put('key', 'value', { expirationTtl: 3600 });
const value = await platform.KV.demo.get('key');

// Database -- document operations
await platform.DB.insertOne('users', { name: 'Alice', active: true });
const users = await platform.DB.find('users', { active: true }, { sort: { createdAt: -1 }, limit: 20 });
```

---

## CI/CD pipeline

All frameworks use the same pipeline format. Place a `pipeline.yml` in your `.maravilla/` directory:

```yaml
name: build-and-deploy
image: node:22-alpine
on:
  push:
    branches: [main]
jobs:
  build:
    steps:
      - name: Install dependencies
        commands:
          - npm ci
      - name: Build app
        commands:
          - npm run build
    artifacts:
      paths: [build/]
    resources:
      cpu: "2000m"
      memory: "4Gi"
```

## Next steps

- [Getting Started](/docs/getting-started) -- quick start walkthrough
- [Installation](/docs/installation) -- CLI installation and authentication

---

# Authentication

Source: https://www.maravilla.cloud/docs/auth
Section: Platform
Description: Add user authentication to your apps with email/password, OAuth, and session management

Add user authentication to your apps.
Supports email/password, OAuth providers (Google, GitHub, Okta, OIDC), session management, and user administration.

```javascript
import { getPlatform } from '@maravilla-labs/platform';

const platform = getPlatform();

const session = await platform.auth.login({
  email: 'jane@example.com',
  password: 'securePassword123'
});
// session.access_token — short-lived JWT
// session.refresh_token — single-use refresh token
```

## Quick Start

**1. Enable Auth** in project settings on Maravilla Cloud.

**2. Use hosted pages** — every project gets branded login pages at `/_auth/login` and `/_auth/register`. No code needed.

**Or build your own UI** using the `platform.auth` API:

```javascript
// SvelteKit example: src/routes/login/+page.server.ts
import { redirect, fail } from '@sveltejs/kit';
import { getPlatform } from '@maravilla-labs/platform';

export const actions = {
  default: async ({ request, cookies }) => {
    const platform = getPlatform();
    const data = await request.formData();

    let session;
    try {
      session = await platform.auth.login({
        email: data.get('email'),
        password: data.get('password')
      });
    } catch {
      // Only the login call is inside the try, so the thrown redirect
      // below can never be swallowed by this catch.
      return fail(401, { error: 'Invalid credentials' });
    }

    cookies.set('__session', session.access_token, {
      httpOnly: true,
      secure: true,
      sameSite: 'lax',
      path: '/',
      maxAge: 60 * 60 * 24 * 30
    });
    throw redirect(303, '/');
  }
};
```

**3.
Protect routes** with `withAuth` or manual validation:

```javascript
// Middleware helper — extracts token from Authorization header or __session cookie
import { getPlatform } from '@maravilla-labs/platform';

const platform = getPlatform();

export default {
  fetch: platform.auth.withAuth(async (request) => {
    // request.user is guaranteed
    return Response.json({ hello: request.user.email });
  })
};
```

## Hosted Login Pages

Customizable through **Auth Settings > Branding**:

- **Layouts** — Centered, split-left, split-right, fullscreen
- **Theming** — Colors, logo, dark mode, Google Fonts, background image/gradient
- **Legal** — Terms and privacy policy links

## OAuth / OIDC

Configure providers in **Auth Settings > OAuth Providers**. Hosted login pages automatically show provider buttons.

| Provider | Setup |
|----------|-------|
| **Google** | Client ID + Secret from Google Cloud Console |
| **GitHub** | Client ID + Secret from GitHub Developer Settings |
| **Okta** | Client ID + Secret + Issuer URL |
| **Custom OIDC** | Client ID + Secret + Discovery URL |

For custom UIs, start the OAuth flow programmatically:

```javascript
const { auth_url } = await platform.auth.getOAuthUrl('google');
return Response.redirect(auth_url);
// User authenticates with provider → redirected back with session cookie
```

When an OAuth email matches an existing account, the user is prompted to confirm linking.

## API Reference

### Registration & Login

| Method | Description |
|--------|-------------|
| `register({ email, password, profile? })` | Create account. Returns `AuthUser` |
| `login({ email, password })` | Authenticate. Returns `AuthSession` |
| `validate(accessToken)` | Verify token. Returns `AuthUser` or throws |
| `refresh(refreshToken)` | New token pair. Returns `AuthSession` |
| `logout(sessionId)` | Revoke session |

### OAuth

| Method | Description |
|--------|-------------|
| `getOAuthUrl(provider, options?)` | Get authorization URL for redirect |
| `handleOAuthCallback(provider, { code, state })` | Exchange code for session |

### Email & Password

| Method | Description |
|--------|-------------|
| `sendVerification(userId)` | Returns `{ token }` — you deliver it |
| `verifyEmail(token)` | Mark email as verified |
| `sendPasswordReset(email)` | Returns `{ token }` |
| `resetPassword(token, newPassword)` | Reset password, invalidate sessions |
| `changePassword(userId, old, new)` | Change with current password |

### User Management

| Method | Description |
|--------|-------------|
| `getUser(userId)` | Get user or `null` |
| `listUsers(filter?)` | Paginated list. Filter by `status`, `email_contains`, `group_id` |
| `updateUser(userId, { email?, status?, profile? })` | Update user |
| `deleteUser(userId)` | Delete user and sessions |
| `getFieldConfig()` | Get configured registration fields |

### `withAuth(handler)`

Middleware that validates the token and sets `request.user`. Returns 401 if unauthenticated.

## Types

```typescript
interface AuthUser {
  id: string;
  email: string;
  email_verified: boolean;
  status: 'active' | 'suspended' | 'deactivated';
  provider: string; // "email", "google", "github", etc.
  groups: string[];
  created_at: number;
  updated_at: number;
  last_login_at?: number;
}

interface AuthSession {
  access_token: string;  // short-lived JWT (default 15 min)
  refresh_token: string; // single-use (default 30 days)
  expires_in: number;
  user: AuthUser;
}
```

## Framework Examples

**React Router 7:**

```javascript
// app/routes/login.tsx
import { redirect } from 'react-router';
import { getPlatform } from '@maravilla-labs/platform';

export async function action({ request }) {
  const formData = await request.formData();
  const session = await getPlatform().auth.login({
    email: formData.get('email'),
    password: formData.get('password')
  });
  return redirect('/', {
    headers: {
      'Set-Cookie': `__session=${session.access_token}; HttpOnly; Secure; SameSite=Lax; Path=/; Max-Age=2592000`
    }
  });
}
```

**Nuxt 3:**

```javascript
// server/api/login.post.ts
import { getPlatform } from '@maravilla-labs/platform';

export default defineEventHandler(async (event) => {
  const { email, password } = await readBody(event);
  const session = await getPlatform().auth.login({ email, password });
  setCookie(event, '__session', session.access_token, {
    httpOnly: true,
    secure: true,
    sameSite: 'lax',
    path: '/',
    maxAge: 60 * 60 * 24 * 30
  });
  return { user: session.user };
});
```

## Security

- **Passwords** are hashed with industry-standard algorithms and never stored in plaintext
- **Tokens** are short-lived and automatically rotated on refresh
- **OAuth flows** use PKCE and verify provider ID tokens cryptographically
- **Client secrets** are encrypted at rest
- **Session limits** and password policies are configurable per project

## Next Steps

- [Authorization](./authorization) — Decide *what* signed-in users can do with per-resource policies
- [KV Store](./kv-store) — Store session data and user preferences
- [Database](./database) — Query and store application data per user
- [Storage](./storage) — Upload and manage user files
- [Channels](./channels) — Real-time pub/sub and presence
- [Deployment](./deployment) — Deploy your app with auth enabled

---

# Authorization

Source: https://www.maravilla.cloud/docs/authorization
Section: Platform
Description: Per-resource policies that decide who can read, write, or act on your data Authentication tells you *who* the caller is. Authorization decides *what they can do*. Maravilla evaluates **policies** you attach to your **resources** every time user code touches KV, the database, a realtime channel, or a media room. ```javascript // In your Maravilla admin console, on the "documents" resource, set a policy: // auth.user_id == node.owner || auth.is_admin // That's it — the rule runs on every kv.get / kv.put / db.find / etc. // scoped to that resource. ``` ## How it works There are three layers. Most apps only think about the second one. 1. **Tenant isolation** — automatic. Data from one project is never visible to another, period. You don't configure this. 2. **Per-resource policies** — you define rules for each resource. Maravilla evaluates them at every read/write. This is what you'll write 90% of the time. 3. **App-level checks** — you can also ask "would this be allowed?" from inside your code with `platform.auth.can(...)`. Policies are opt-in. A resource with no policy only gets Layer 1 (tenant isolation). Attach a policy when you want per-user access control. ## Two ways to configure: UI or code You can set everything up from the admin UI, or declare it in a `maravilla.config.{ts,yaml,json}` at the root of your project and let the deploy do it for you. Most teams end up with both — UI for quick iteration, config file for PR review and environment parity. **Config file wins for anything it declares. Anything it omits, the DB keeps.** That lets you adopt incrementally: drop in just `resources` today, layer in `groups`, `branding`, etc. as you need them. 
Supported out of the box: | Framework | Adapter | How it picks up the config | |---|---|---| | SvelteKit | `@maravilla-labs/adapter-sveltekit` | Automatic on `vite build` | | React Router 7 | `@maravilla-labs/adapter-react-router` | Automatic on `vite build` | | Nuxt / SolidStart / TanStack Start | `@maravilla-labs/preset-nitro` | Automatic on `nitro build` | If you're on one of these, you already have `@maravilla-labs/platform` as a dep — the `defineConfig` helper lives at the `/config` subpath: ```typescript // maravilla.config.ts — at your project root import { defineConfig } from '@maravilla-labs/platform/config'; export default defineConfig({ auth: { resources: [ { name: 'todos', title: 'Todos', actions: ['read', 'write', 'delete'], policy: 'auth.user_id == node.owner || auth.is_admin', }, ], groups: [ { name: 'moderators', permissions: [{ resource_name: 'todos', actions: ['read', 'delete'] }] }, ], relations: [ { relation_name: 'STEWARDS', title: 'Stewards', category: 'family', implies_stewardship: true, bidirectional: false }, ], registration: { fields: [ { key: 'email', label: 'Email', field_type: 'email', required: true, show_on_register: true }, { key: 'display_name', label: 'Display name', field_type: 'text', required: true, show_on_register: true }, ], }, oauth: { google: { enabled: true, client_id: '1234567890.apps.googleusercontent.com', client_secret: { env: 'GOOGLE_CLIENT_SECRET' }, // resolved server-side scopes: ['openid', 'email', 'profile'], }, }, security: { password_policy: { min_length: 12, require_uppercase: true, require_number: true, require_special: false }, session: { access_token_ttl_secs: 900, refresh_token_ttl_secs: 2592000, max_sessions_per_user: 5, require_email_verification: true }, }, branding: { app_name: 'HoneyBee', primary_color: '#f59e0b', layout: 'centered' }, }, }); ``` **Every section is optional.** Declare what you want to own in the repo; leave the rest in the UI. 
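Incremental adoption can start very small. A sketch of a config that owns nothing but a single resource, with everything else left UI-managed (resource name and policy here are illustrative):

```typescript
// maravilla.config.ts — declares only `resources`; groups, branding, etc. remain in the UI
import { defineConfig } from '@maravilla-labs/platform/config';

export default defineConfig({
  auth: {
    resources: [
      {
        name: 'todos',
        title: 'Todos',
        actions: ['read', 'write'],
        policy: 'auth.user_id == node.owner',
      },
    ],
  },
});
```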
Runtime data (user-to-group memberships, circles, stewardship overrides) stays in the UI — it's tied to individual users, not something you version in your repo. **OAuth secrets.** Accept either `"${env.VAR_NAME}"` (string shorthand) or `{ env: "VAR_NAME" }` (object). Resolved from the tenant's environment at reconcile time — never plaintext in your repo. ### What happens on deploy 1. You run `npm run build` (or your framework's build command). The adapter finds `maravilla.config.*`, validates it, and bakes the `auth` block into the framework's output `manifest.json` (typically `build/manifest.json`; older projects may use `.maravilla/manifest.json`). 2. Your deploy pipeline ships the manifest. Delivery reads the `auth` block on the first request to the new deployment. 3. Declared items are upserted by natural key (resource name, group name, relation name). Singleton sections (registration fields, security, branding) replace the current config only for the fields you declared. 4. Anything the admin UI created but your config doesn't mention is **kept**, never auto-deleted. The deploy log lists it under `authz.config.drift` so you can reconcile manually. 5. Malformed policy expressions fail the deploy with a line/column error before anything ships. **YAML** — same schema, no TS types: ```yaml # maravilla.config.yaml auth: resources: - name: todos title: Todos actions: [read, write, delete] policy: auth.user_id == node.owner || auth.is_admin ``` ### Sync without a full deploy While iterating, a full build-and-deploy cycle is slow. The CLI has two subcommands that go straight to the admin API: ```bash # Show what's different between your maravilla.config.* and the project maravilla auth diff --project # Apply it (same logic as a deploy's reconcile) maravilla auth sync --project ``` - `diff` exits non-zero when there are differences — wire it into CI to gate PRs. - `sync` is safe to re-run — it's an upsert, not a replace. 
Drift (admin-UI-only items) is reported, not deleted. - Both read your **built** `manifest.json` (default `build/manifest.json`, fallback `.maravilla/manifest.json`) — so run your build first. Keeps the CLI thin; it never has to re-parse TypeScript configs. - Requires `maravilla login` first. ## Defining a resource (UI) Resources live in your project's **Auth Settings → Resources** tab. 1. Click **Add Resource**. 2. **Title**: a human name like "Documents" or "Messages". 3. **Slug**: used in code. Typically the KV namespace or DB collection name (e.g. `documents`). 4. **Service type** (optional): which platform service this resource gates — `kv`, `database`, `realtime`, `media`, `vector`, `storage`, `queue`, `push`, `workflow`, `transforms`. When set, the editor offers per-service action presets and the reconciler validates that the policy only references legal `node.*` fields for that service. Omit for legacy / cross-service resources. 5. **Actions**: the operations you care about — `read`, `write`, `delete`, etc. 6. **Policy** (optional): a REL expression (see below). Leave empty to skip policy checks. Save. The policy is live the moment you hit save — existing deployments pick it up on their next request. ### Why the `type` matters Without a `type`, a KV namespace called `todos` and a DB collection also called `todos` will silently share a policy — the policy can disambiguate by checking `node.namespace` vs `node.collection`, but that's a footgun. Setting `type: 'kv'` (or `'database'`) makes the binding explicit, lets the UI offer service-correct action presets and policy snippets, and lets the reconciler reject obvious mismatches before they ship. ## Writing a policy Policies are boolean expressions. `true` → allow; anything else → deny. 
Maravilla makes two objects available to every policy: `auth`, describing the caller, and `node`, describing the operation:

| Variable | What it is |
|---|---|
| `auth.user_id` | The caller's user id (`""` if no one's signed in) |
| `auth.email` | The caller's email |
| `auth.is_admin` | `true` if the caller has the admin flag |
| `auth.roles` | Array of role names on the current project |
| `auth.groups` | Array of group names the caller belongs to |
| `auth.circles` | Array of circle ids the caller belongs to |
| `node.*` | Resource-shaped data for this specific operation (see below) |

### Examples

```text
# Anyone signed in can read, only admins can write
auth.user_id != "" && (node.action == "read" || auth.is_admin)
```

```text
# Only the owner of the row, or someone in the "editors" group
auth.user_id == node.owner || auth.groups.contains("editors")
```

```text
# Public read for published items, owner-only writes
(node.action == "read" && node.status == "published") || auth.user_id == node.owner
```

```text
# Admins, plus users whose email ends in your domain
auth.is_admin || auth.email.endsWith("@acme.com")
```

### Operators & methods

- Comparison: `==`, `!=`, `>`, `<`, `>=`, `<=`
- Logic: `&&`, `||`, `!`
- Null-safe chaining: `node.meta.published == true` returns `false` (not an error) if `node.meta` is missing.
- On strings: `.contains(s)`, `.startsWith(s)`, `.endsWith(s)`
- On arrays: `.contains(x)`
- On paths: `.descendantOf("/content/blog")`

### What's in `node`?

It depends on which op fired. For all of them, `node.action` is a string like `"read"` / `"write"` / `"delete"` / `"list"`.

| Service | What `node` looks like |
|---|---|
| KV | `{ namespace, key, action, value, value_new? }` (list ops: `{ namespace, prefix, limit, action: "list" }`) |
| Database | `{ collection, filter, action, document?, update? }` |
| Realtime | `{ channel, action: "publish" \| "subscribe" \| "presence:join" \| …}` |
| Media | `{ room, role, action: "join" \| "create" \| "record:start" \| …}` |
| Vector | `{ collection, action: "index:read" \| "index:admin" }` |
| Storage | `{ bucket, key, action: "get" \| "put" \| "delete" \| "list" \| "upload-url" \| …}` |
| Queue | `{ queue, action: "enqueue" }` |
| Push | `{ action: "subscribe" \| "send" \| "schedule" \| …, target?, payload? }` |
| Workflow | `{ workflow, input?, action: "start" \| "send-event" }` |
| Transforms | `{ bucket, src_key, action: "transcode" \| "thumbnail" \| …}` |

Reference the fields you care about. Policies that ignore `node` still work — they just act as tenant-wide rules.

#### Pre-fetched value/document

For ops where the platform can resolve the record before evaluating the policy, it does — and exposes the result as `node.value` (KV) or `node.document` (Database). This lets you write the natural rule:

```text
auth.user_id == node.value.owner || node.value.public == true
```

Without this, on a KV `get`, `node.value` would be undefined and `node.value.owner` would never match — your policy would silently fall through to the rest of the expression. The pre-fetch makes ownership clauses fire correctly.

| Op | Pre-fetched? | Field exposed |
|---|---|---|
| `kv.get` | yes (the read itself) | `node.value` |
| `kv.put` | yes when policy attached (extra read) | `node.value` (existing), `node.value_new` (incoming) |
| `kv.delete` | yes when policy attached (extra read) | `node.value` |
| `kv.list` | no — list has no per-record context | — |
| `db.findOne` | yes (the read itself) | `node.document` |
| `db.find` | no — list has no per-record context | use `read_filter` instead, see below |
| `db.insertOne` | n/a — record doesn't exist yet | `node.document` (incoming) |
| `db.updateOne` | yes when policy attached (extra read) | `node.document` (existing), `node.update` (delta) |
| `db.deleteOne` | yes when policy attached (extra read) | `node.document` |

The "extra read when policy attached" cases pay nothing when the resource has no policy — the pre-fetch is gated on a single SELECT for policy presence.

## Scoping reads with `read_filter`

A REL policy is a per-record predicate — it answers "should this caller see this row?" That works fine for `findOne` and single-key `get`, where the platform fetches the record and runs the predicate. But for `find` and `list` returning many rows, evaluating a free-form predicate per row would be expensive. Maravilla takes a different route: declare a **filter** the runtime ANDs into the caller's query before it runs.

```typescript
{
  name: 'documents',
  type: 'database',
  actions: ['read', 'write', 'delete'],
  policy: 'auth.user_id == node.document.owner || node.document.public == true',
  read_filter: '{"$or":[{"owner":"$auth.user_id"},{"public":true}]}',
}
```

What that does:

- A caller who runs `db.find('documents', { status: 'draft' })` is silently rewritten to: `{ $and: [ { status: 'draft' }, { $or: [ { owner: 'u1' }, { public: true } ] } ] }`
- The caller can't see rows they're not allowed to see — the filter rewrite happens server-side; their query has to AND with it.
- `policy` still gates per-record reads (e.g.
`findOne`) and writes — `read_filter` only adds the bulk-read scoping. **Allowed `$auth.X` placeholders.** The runtime substitutes these from the caller's identity at request time: | Placeholder | Resolves to | |---|---| | `$auth.user_id` | string | | `$auth.email` | string | | `$auth.is_admin` | boolean | | `$auth.roles` | array of strings | Any other `$auth.X` reference fails validation when you save the resource. **When to use which.** Use `read_filter` when the caller can run unbounded queries against a collection. Use `policy` (with `node.document` / `node.value`) when single-record access decisions need richer logic than a relational predicate can express. Most apps that share data across users will want both. ## Identity in your code Policies need to know *who* is making the call. Three ways to bind identity: ```javascript // 1. Sign someone in — this binds identity implicitly. await platform.auth.login({ email, password }); // 2. You already have a JWT from a session cookie or Authorization header. await platform.auth.setCurrentUser(token); // 3. To clear (log out). await platform.auth.setCurrentUser(null); ``` `platform.auth.validate(token)` is **pure** — it verifies a token and returns the user, but does *not* change who policies see as the caller. Use `setCurrentUser` when you want the identity to apply. ```javascript // Read who policies will see right now const caller = platform.auth.getCurrentUser(); // { user_id, email, is_admin, roles, is_anonymous } ``` **Scope:** identity binding lasts for the duration of one inbound HTTP request. Two concurrent requests on your deployment see two separate identities — they never bleed into each other. ## Checking permission in code ```javascript // Can the current caller delete this document? 
const ok = await platform.auth.can("delete", "documents", { owner: doc.owner, status: doc.status }); if (!ok) { return new Response("Forbidden", { status: 403 }); } ``` `platform.auth.can(action, resourceId, node)` runs the same policy the platform itself would run. Returns a plain boolean — no exceptions. **When you'd reach for this:** the platform pre-fetches the record on `kv.get` / `db.findOne` and on policy-attached writes, so most rules fire correctly without you doing anything. But for richer per-record decisions — e.g. a custom service handler that already loaded a document for unrelated reasons, or a cross-resource check ("can this caller see this *other* record?") — call `auth.can(...)` with the resolved data and the policy fires against your `node`. Same engine, you provide the shape. ## How denials surface When a policy denies a direct KV/DB/Realtime/Media call, the op **throws**. Catch it like any other platform error: ```javascript try { await platform.KV.documents.put(id, doc); } catch (e) { if (String(e).includes("authz denied")) { return new Response("Forbidden", { status: 403 }); } throw e; } ``` Prefer `platform.auth.can(...)` when you want a boolean you can branch on cleanly. ## Advanced: graph relationships For scenarios like "guardian can read ward's data" or "manager can update reports from their team", policies can traverse your configured relationships: ```text # Allow if the caller is a steward of whoever owns the resource node.owner RELATES auth.user_id VIA "STEWARDS" DEPTH 1..2 ``` Set up stewardship in **Auth Settings → Stewardship**. The `VIA` name matches the relation type you configured. Keep `DEPTH` small — deep traversal is slow. ## Escape hatch: turning policies off Sometimes you have code paths (admin jobs, first-run seeders) that need to operate without policy checks. Inside your app code: ```javascript platform.policy.setEnabled(false); // ...trusted work here — Layer 2 is bypassed... 
platform.policy.setEnabled(true); ``` **Important:** - This **only** disables per-resource policies (Layer 2). Tenant isolation (Layer 1) is always enforced — you still can't cross tenants. - It's scoped to the current request. A new request starts with policies re-enabled. - Every flip is logged server-side with the caller's identity so bypasses are auditable. - Don't pass user input into this. It's for your code, not theirs. ## Groups Groups are named sets of users managed in **Auth Settings → Groups**. Add users to a group from the Users tab. Groups become available to policies as `auth.groups`. ```text auth.groups.contains("moderators") || auth.user_id == node.owner ``` Groups can also have resource-level permissions set directly in the admin UI, no policy needed — useful for broad roles like "all editors can write all documents". ## Bad policies don't ship When you save a malformed policy, Maravilla rejects it with a line/column error. Your deployment never sees a broken expression. A policy that evaluates to an error at runtime (missing field on a null, type mismatch, …) is treated as a **deny** — never as an allow. If your policies start denying unexpectedly, check the request logs for the parse or eval error. 
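As a hypothetical illustration of the deny-on-error rule (the nested `owner.id` shape below is made up for this example, not taken from the policies above):

```text
# Suppose some 'documents' records carry no owner at all.
# On those records this expression errors at eval time, and an
# eval error is a DENY: the record is never exposed by accident.
auth.user_id == node.document.owner.id
```

In other words, a policy bug fails closed: reads that should succeed start returning denials, and the parse or eval error shows up in the request logs.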
## Quick reference **Inside your app code:** ```javascript // Identity await platform.auth.login({ email, password }); // implicit bind await platform.auth.setCurrentUser(token); // explicit bind await platform.auth.setCurrentUser(null); // clear platform.auth.getCurrentUser(); // snapshot // Checks await platform.auth.can(action, resourceId, node); // boolean // Per-request policy toggle platform.policy.setEnabled(false); // Layer 2 off platform.policy.isEnabled(); // current state ``` **In your project config:** ```typescript // maravilla.config.ts import { defineConfig } from '@maravilla-labs/platform/config'; export default defineConfig({ auth: { resources: [ /* name, title, actions, policy */ ], groups: [ /* name, description, permissions */ ], relations: [ /* relation_name, title, implies_stewardship, ... */ ], registration: { fields: [ /* ... */ ] }, oauth: { google: { client_id, client_secret: { env: 'X' }, ... } }, security: { password_policy, session }, branding: { app_name, primary_color, layout, ... }, }, }); ``` **From the CLI:** ```bash maravilla login # once per machine maravilla auth diff --project # preview, CI-safe maravilla auth sync --project # apply ``` ## Next steps - [Authentication](./auth) — Sign users in to populate `auth.*` - [KV Store](./kv-store) — Policies run on every `kv.get/put/delete/list` - [Database](./database) — Policies run on every `db.find/insert/update/delete` - [Realtime](./realtime) — Policies gate `publish`, `subscribe`, and presence - [Media](./media) — Policies decide who gets a LiveKit join token --- # Platform Services Source: https://www.maravilla.cloud/docs/platform-overview Section: Platform Description: Overview of built-in KV, database, and storage services Maravilla provides three built-in platform services that give your application persistent storage without managing infrastructure. Every service is accessed through a single entry point and works identically in development and production. 
## Accessing Platform Services All services are available through the `getPlatform()` helper from `@maravilla-labs/platform`: ```javascript import { getPlatform } from '@maravilla-labs/platform'; const platform = getPlatform(); // KV Store — fast key-value storage await platform.KV.myapp.put('key', 'value'); // Database — document queries await platform.DB.find('users', { active: true }); // Storage — object/file storage with presigned URLs await platform.STORAGE.put('uploads/photo.jpg', fileData); ``` ## The Three Services ### KV Store A namespaced key-value store for fast reads and writes. Ideal for session data, feature flags, caching, and any data that maps naturally to key-value pairs. Supports optional TTL-based expiration and prefix-based listing with cursor pagination. ```javascript // Store and retrieve a value await platform.KV.sessions.put('user:abc', JSON.stringify({ role: 'admin' })); const session = JSON.parse(await platform.KV.sessions.get('user:abc')); ``` ### Database A document database with a powerful query API. Supports comparison, logical, and element operators for filtering, plus sort, limit, and skip for pagination. Includes MongoDB-style secondary indexes and native vector search for semantic similarity queries. Write queries once -- the platform handles everything. ```javascript // Insert and query documents await platform.DB.insertOne('products', { name: 'Widget', price: 29.99, inStock: true }); const affordable = await platform.DB.find('products', { price: { $lte: 50 }, inStock: true }, { sort: { price: 1 }, limit: 20 }); // Hybrid vector + metadata search const similar = await platform.DB.find('products', { inStock: true }, { vector: { field: 'embedding', value: queryEmbedding, k: 10 } }, ); ``` ### Storage Object storage for files of any size. Supports direct server uploads, presigned URLs for browser-to-storage uploads, download URL generation, and file metadata. 
```javascript // Upload a file await platform.STORAGE.put('reports/q1.pdf', pdfBuffer, { contentType: 'application/pdf', metadata: { generatedBy: 'reporting-service' } }); // Generate a temporary download link const { url } = await platform.STORAGE.generateDownloadUrl('reports/q1.pdf', 3600); ``` ## Development vs Production In development, the CLI runs the full platform locally. In production, Maravilla Cloud handles everything. Your code works identically in both environments. ```javascript // This query works the same in development and production await platform.DB.find('users', { age: { $gte: 18 }, active: true }); ``` You never need to worry about the underlying infrastructure -- just write your code once and it runs everywhere. ## Tenant Isolation All platform services enforce automatic tenant isolation. Every operation is scoped to the current tenant without any extra code on your part. ```javascript // Your code: await platform.DB.find('users', { active: true }); // What the platform actually executes: // { active: true, _tenant_id: "current-tenant-id" } ``` This applies to all three services: - **KV Store** -- keys are prefixed and isolated per tenant - **Database** -- queries automatically include tenant ID filters - **Storage** -- object keys are scoped to the tenant's namespace Cross-tenant data access is not possible through the API. Cursors are also validated to ensure they belong to the requesting tenant. ## Multi-Layer Caching In production, both the KV Store and Database benefit from an integrated multi-layer cache: 1. **L1 (in-process)** -- fast hot lookups within a single runtime instance 2. **L2 (distributed)** -- shared across all processes and nodes for cluster-wide coherence Caching uses a versioned invalidation strategy: writes bump a version counter, making all previously cached entries instantly unreachable. This guarantees no stale data is ever served after a write. 
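The version-counter scheme can be sketched in a few lines. This is an illustrative model, not Maravilla's actual implementation; `cacheKey`, `readThrough`, and `onWrite` are invented names:

```javascript
// Illustrative sketch of versioned cache invalidation (not Maravilla's
// internals). Each collection has a version counter; cache keys embed it,
// so bumping the version on a write strands every previously cached entry.
const versions = new Map(); // collection -> version counter
const cache = new Map();    // versioned key -> cached value

function cacheKey(collection, query) {
  const v = versions.get(collection) ?? 0;
  return `${collection}:v${v}:${JSON.stringify(query)}`;
}

function readThrough(collection, query, fetchFn) {
  const key = cacheKey(collection, query);
  if (!cache.has(key)) cache.set(key, fetchFn()); // miss: populate
  return cache.get(key);
}

function onWrite(collection) {
  // No deletions needed: the old keys can simply never be produced again.
  versions.set(collection, (versions.get(collection) ?? 0) + 1);
}
```

Because the version is baked into the key, a write invalidates in O(1) no matter how many entries were cached; stranded entries simply age out of the underlying store.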
If the distributed cache is unavailable, the system operates without it transparently. ## Next Steps - [KV Store API Reference](/docs/kv-store) -- full API for key-value operations - [Database API Reference](/docs/database) -- document query, mutation, and indexing API - [Vector Search](/docs/vector-search) -- semantic similarity search with embeddings - [Storage API Reference](/docs/storage) -- file uploads, downloads, and presigned URLs - [Event Handlers](/docs/event-handlers) -- react to platform events with `onDbChange`, `onAuth`, `onQueue`, `onSchedule` - [Workflows](/docs/workflows) -- durable, multi-step business logic that survives crashes and sleeps --- # KV Store Source: https://www.maravilla.cloud/docs/kv-store Section: Platform Description: Key-value storage API reference The KV Store provides fast, namespaced key-value storage. Access it through `platform.KV.{namespace}` where `{namespace}` is any name you choose to organize your data. ```javascript import { getPlatform } from '@maravilla-labs/platform'; const platform = getPlatform(); // "demo" is the namespace await platform.KV.demo.put('greeting', 'hello world'); const value = await platform.KV.demo.get('greeting'); ``` ## API Reference ### `get(key)` Retrieves a value by key. Returns the stored value as a string, or `null` if the key does not exist. ```javascript const value = await platform.KV.myapp.get('user:abc'); if (value === null) { console.log('Key not found'); } ``` ### `put(key, value, options?)` Stores a value under the given key. Overwrites any existing value for that key. 
**Parameters:** - `key` (string) -- the key to store the value under - `value` (string) -- the value to store - `options` (object, optional): - `ttl` (number) -- time-to-live in seconds; the key will be automatically deleted after this duration ```javascript // Store a value permanently await platform.KV.sessions.put('session:xyz', JSON.stringify({ userId: '123' })); // Store a value that expires in 1 hour await platform.KV.sessions.put('session:xyz', JSON.stringify({ userId: '123' }), { ttl: 3600 }); ``` ### `delete(key)` Deletes a key and its associated value. No error is thrown if the key does not exist. ```javascript await platform.KV.demo.delete('todo:abc123'); ``` ### `list(options?)` Lists keys in the namespace. Supports prefix filtering and cursor-based pagination for iterating over large datasets. **Parameters (options object):** - `prefix` (string, optional) -- only return keys that start with this prefix - `limit` (number, optional) -- maximum number of keys to return (max 1000) - `cursor` (string, optional) -- pagination cursor from a previous `list` call **Returns:** ```typescript { result: Array<{ name: string; expiration?: number }>; success: boolean; result_info: { cursor?: string; // present if there are more results count: number; // number of keys returned }; } ``` ```javascript // List all keys with a prefix const result = await platform.KV.demo.list({ prefix: 'todo:', limit: 50 }); for (const key of result.result) { console.log(key.name); // e.g. 
"todo:abc123" console.log(key.expiration); // Unix timestamp (seconds), if set } ``` #### Paginating Through All Keys Use the returned `cursor` to fetch subsequent pages: ```javascript let cursor = undefined; const allKeys = []; do { const result = await platform.KV.myapp.list({ prefix: 'user:', limit: 100, cursor }); allKeys.push(...result.result); cursor = result.result_info.cursor; } while (cursor); console.log(`Found ${allKeys.length} keys`); ``` ## Data Serialization The KV Store stores values as strings. To store objects, arrays, or other complex types, serialize them with `JSON.stringify` and deserialize with `JSON.parse`: ```javascript // Storing an object const todo = { id: 'abc', text: 'Buy groceries', done: false }; await platform.KV.demo.put('todo:abc', JSON.stringify(todo)); // Retrieving an object const raw = await platform.KV.demo.get('todo:abc'); const parsed = raw ? JSON.parse(raw) : null; ``` ## Namespace Patterns Namespaces let you logically separate different types of data. Use descriptive names that reflect the purpose: ```javascript // Different namespaces for different concerns platform.KV.sessions // user sessions platform.KV.cache // application cache platform.KV.config // configuration values platform.KV.demo // demo/example data ``` Within a namespace, use key prefixes with a delimiter (typically `:`) to create a hierarchical structure: ``` todo:abc123 -- a specific todo item todo:def456 -- another todo item user:123:profile -- user profile user:123:prefs -- user preferences ``` This makes prefix-based listing very effective: ```javascript // Get all todos const todos = await platform.KV.demo.list({ prefix: 'todo:' }); // Get everything for user 123 const userData = await platform.KV.myapp.list({ prefix: 'user:123:' }); ``` ## Real-World Example: Todo Application This example is taken from the Maravilla demo application and shows a complete CRUD implementation using the KV Store. 
### Loading Todos

```javascript
import { getPlatform } from '@maravilla-labs/platform';

const platform = getPlatform();

// List all todo keys (list() returns { result, success, result_info })
const res = await platform.KV.demo.list({ prefix: 'todo:' });
const keys = res?.result || [];

// Fetch each todo's value
const todos = [];
for (const key of keys) {
  const raw = await platform.KV.demo.get(key.name);
  if (raw) {
    try {
      todos.push(typeof raw === 'string' ? JSON.parse(raw) : raw);
    } catch {
      // skip malformed entries
    }
  }
}

// Sort by creation date (newest first)
todos.sort((a, b) => b.createdAt.localeCompare(a.createdAt));
```

### Creating a Todo

```javascript
const id = crypto.randomUUID().slice(0, 8);
const todo = {
  id,
  text: 'Buy groceries',
  done: false,
  createdAt: new Date().toISOString()
};
await platform.KV.demo.put(`todo:${id}`, JSON.stringify(todo));
```

### Toggling a Todo

```javascript
const raw = await platform.KV.demo.get(`todo:${id}`);
if (!raw) throw new Error('Todo not found');

const todo = typeof raw === 'string' ? JSON.parse(raw) : raw;
todo.done = !todo.done;
await platform.KV.demo.put(`todo:${id}`, JSON.stringify(todo));
```

### Deleting a Todo

```javascript
await platform.KV.demo.delete(`todo:${id}`);
```

### REST API Endpoint

```javascript
import { getPlatform } from '@maravilla-labs/platform';
import { json } from '@sveltejs/kit'; // framework JSON helper (SvelteKit shown here)

// GET /api/todos -- list all todos
export const GET = async () => {
  const platform = getPlatform();
  const res = await platform.KV.demo.list({ prefix: 'todo:' });
  const keys = res?.result || [];

  const todos = [];
  for (const key of keys) {
    const raw = await platform.KV.demo.get(key.name);
    if (raw) {
      todos.push(typeof raw === 'string' ?
JSON.parse(raw) : raw); } } return json({ todos, count: todos.length }); }; // POST /api/todos -- create a new todo export const POST = async ({ request }) => { const platform = getPlatform(); const body = await request.json(); const id = crypto.randomUUID().slice(0, 8); const todo = { id, text: body.text.trim(), done: false, createdAt: new Date().toISOString() }; await platform.KV.demo.put(`todo:${id}`, JSON.stringify(todo)); return json(todo, { status: 201 }); }; ``` ## Limits | Parameter | Development | Production | |-----------|-------------|------------| | Max value size | 1 MB | 16 MB | | Max keys per list | 1,000 | 1,000 | | Key format | UTF-8 string | UTF-8 string | ## Next Steps - [Platform Services Overview](/docs/platform-overview) -- how all three services fit together - [Database API Reference](/docs/database) -- for structured document queries - [Storage API Reference](/docs/storage) -- for file and object storage --- # Database Source: https://www.maravilla.cloud/docs/database Section: Platform Description: Document database API reference The Database service provides a document query API. Write queries once and the platform handles everything -- your code works identically in development and production. ```javascript import { getPlatform } from '@maravilla-labs/platform'; const platform = getPlatform(); await platform.DB.insertOne('users', { name: 'Alice', age: 30 }); const user = await platform.DB.findOne('users', { name: 'Alice' }); ``` ## API Reference ### `find(collection, filter?, options?)` Finds multiple documents matching a filter. **Parameters:** - `collection` (string) -- the collection name - `filter` (object, optional) -- document query filter - `options` (object, optional): - `sort` (object) -- field-to-direction map (`1` for ascending, `-1` for descending) - `limit` (number) -- maximum documents to return (default/max: 1000) - `skip` (number) -- number of documents to skip **Returns:** an array of matching documents. 
```javascript const users = await platform.DB.find('users', { active: true }, { sort: { createdAt: -1 }, limit: 20, skip: 0 }); ``` ### `findOne(collection, filter)` Finds a single document matching the filter. Returns the document or `null`. ```javascript const user = await platform.DB.findOne('users', { email: 'alice@example.com' }); ``` ### `insertOne(collection, doc)` Inserts a single document. Returns the generated document ID as a string. ```javascript const id = await platform.DB.insertOne('users', { name: 'Alice', email: 'alice@example.com', age: 30, tags: ['premium', 'early-adopter'] }); console.log(id); // generated document ID string ``` ### `updateOne(collection, filter, update)` Updates a single document matching the filter. Supports update operators. ```javascript // Simple field replacement await platform.DB.updateOne('users', { email: 'alice@example.com' }, { age: 31, lastLogin: Date.now() } ); // Using update operators await platform.DB.updateOne('users', { email: 'alice@example.com' }, { $set: { status: 'inactive' }, $inc: { loginCount: 1 } } ); ``` ### `deleteOne(collection, filter)` Deletes a single document matching the filter. 
```javascript await platform.DB.deleteOne('users', { email: 'alice@example.com' }); ``` ## Query Operators ### Comparison Operators ```javascript // $eq -- equals (implicit or explicit) await platform.DB.find('users', { age: 30 }); // implicit await platform.DB.find('users', { age: { $eq: 30 } }); // explicit // $ne -- not equals await platform.DB.find('users', { status: { $ne: 'deleted' } }); // $gt -- greater than await platform.DB.find('products', { price: { $gt: 100 } }); // $gte -- greater than or equal await platform.DB.find('users', { age: { $gte: 18 } }); // $lt -- less than await platform.DB.find('products', { stock: { $lt: 10 } }); // $lte -- less than or equal await platform.DB.find('users', { score: { $lte: 100 } }); // Combining operators on the same field await platform.DB.find('products', { price: { $gte: 50, $lte: 200 } // between 50 and 200 }); ``` ### Array/Set Operators ```javascript // $in -- value matches any element in the array await platform.DB.find('users', { status: { $in: ['active', 'premium', 'vip'] } }); // $nin -- value does not match any element await platform.DB.find('users', { role: { $nin: ['admin', 'moderator'] } }); ``` ### Logical Operators ```javascript // $or -- matches if any condition is true await platform.DB.find('products', { $or: [ { category: 'electronics' }, { price: { $lt: 20 } } ] }); // $and -- matches if all conditions are true (explicit) await platform.DB.find('users', { $and: [ { age: { $gte: 18 } }, { status: 'active' } ] }); // Implicit $and -- multiple fields in a single filter await platform.DB.find('users', { age: { $gte: 18 }, status: 'active', verified: true }); ``` ### Element Operators ```javascript // $exists -- check if a field exists await platform.DB.find('users', { email: { $exists: true } // documents that have an email field }); await platform.DB.find('users', { phone: { $exists: false } // documents without a phone field }); ``` ### String Operators ```javascript // $regex -- pattern matching 
await platform.DB.find('users', { email: { $regex: '.*@company.com' } }); ``` ### Array Operators ```javascript // $size -- match arrays of a specific length await platform.DB.find('posts', { tags: { $size: 3 } // posts with exactly 3 tags }); ``` ## Update Operators Use these with `updateOne` to perform targeted modifications instead of replacing entire documents. ### `$set` -- Set Field Values ```javascript await platform.DB.updateOne('users', { id: '123' }, { $set: { status: 'inactive', updatedAt: Date.now() } } ); ``` ### `$unset` -- Remove Fields ```javascript await platform.DB.updateOne('users', { id: '123' }, { $unset: { temporaryFlag: '' } } ); ``` ### `$inc` -- Increment Numeric Values ```javascript await platform.DB.updateOne('users', { id: '123' }, { $inc: { loginCount: 1, score: 5 } } ); ``` ### `$push` -- Add to Array ```javascript await platform.DB.updateOne('users', { id: '123' }, { $push: { tags: 'verified' } } ); ``` ### `$pull` -- Remove from Array ```javascript await platform.DB.updateOne('users', { id: '123' }, { $pull: { tags: 'unverified' } } ); ``` ### `$addToSet` -- Add to Array (No Duplicates) ```javascript await platform.DB.updateOne('users', { id: '123' }, { $addToSet: { tags: 'premium' } } // only adds if not already present ); ``` ## Indexes Indexes make reads fast. Without them, the database scans every document in a collection for each query. With them, lookups on indexed fields are instant — even on large collections. You have two ways to create indexes: declare them in `maravilla.config.ts` (recommended — they provision automatically on deploy and when the dev server starts), or create them at runtime with the imperative API. 
### Declarative: `maravilla.config.ts` ```typescript // maravilla.config.ts — at your project root import { defineConfig } from '@maravilla-labs/platform/config'; export default defineConfig({ database: { indexes: [ // Lookup users by email, and enforce uniqueness { collection: 'users', keys: { email: 1 }, unique: true }, // Compound index for "posts by author, newest first" { collection: 'posts', keys: [['authorId', 1], ['createdAt', -1]] }, // Only index published posts (partial index) { collection: 'posts', keys: { category: 1 }, partial: { status: 'published' }, }, // Auto-delete expired sessions after 1 hour { collection: 'sessions', keys: { createdAt: 1 }, expireAfterSeconds: 3600, }, ], }, }); ``` When you run `maravilla dev` or deploy your app, Maravilla reconciles the declared indexes into the database. Declared indexes are upsert-only — existing indexes with matching configuration are left alone, and removing a declaration never auto-drops an index (use `dropIndex()` or the CLI for that). 
### Imperative: `createIndex()`, `dropIndex()`, `listIndexes()` For ad-hoc or test-only indexes, or when you need to create an index after your app is already deployed: ```javascript // Simple single-field index await platform.DB.createIndex('users', { keys: { email: 1 }, unique: true, }); // Compound index — key order matters for performance await platform.DB.createIndex('posts', { keys: [['authorId', 1], ['createdAt', -1]], }); // Partial index — only includes rows matching the predicate await platform.DB.createIndex('posts', { keys: { category: 1 }, partial: { status: 'published' }, }); // Sparse index — only includes documents where every key field is non-null await platform.DB.createIndex('users', { keys: { phoneNumber: 1 }, sparse: true, }); // TTL index — auto-deletes old documents await platform.DB.createIndex('sessions', { keys: { createdAt: 1 }, expireAfterSeconds: 3600, }); // List every index on a collection const indexes = await platform.DB.listIndexes('users'); // Drop an index by name await platform.DB.dropIndex('users', 'users_email_unique'); ``` ### Index Options | Option | Type | Description | |--------|------|-------------| | `keys` | object or `[field, direction][]` | Field(s) to index. `1` for ascending, `-1` for descending. Use the tuple array for compound indexes to guarantee key order. | | `unique` | boolean | Reject inserts/updates that would create a duplicate in the indexed columns. | | `partial` | object | MongoDB-style filter. Only documents matching the filter are indexed. Supports `$eq`, `$ne`, `$gt/$gte/$lt/$lte`, `$in/$nin`, `$exists`, `$and`, `$or`. | | `sparse` | boolean | Shorthand for "only index documents where every key field exists". | | `expireAfterSeconds` | number | TTL in seconds. The field must hold a Unix timestamp (seconds). Requires a single-field index. | | `name` | string | Optional custom index name. Falls back to an auto-derived name based on fields and options. 
| ### When to Use Which Index - **Single-field equality** (`{ email: 1 }`) — the bread and butter. Use for `findOne({ email })`, `find({ email })`. - **Compound** (`[['tenantId', 1], ['createdAt', -1]]`) — speeds up queries that filter by the first field and sort by the second. Also handles queries that filter by only the first field. - **Unique** — enforces data integrity at the database level. Faster than a read-before-write check in your application. - **Partial** — when you frequently query only a subset of documents (e.g., only published posts). Smaller index, faster lookups. - **Sparse** — when a field is only present on some documents (optional fields). Avoids indexing `null`s. - **TTL** — for session tokens, ephemeral cache entries, or any data that should auto-expire. ## Complex Query Examples ### Pagination ```javascript const pageSize = 20; const pageNumber = 2; const results = await platform.DB.find('users', { active: true }, { sort: { createdAt: -1 }, limit: pageSize, skip: pageSize * (pageNumber - 1) } ); ``` ### Compound Filters ```javascript // Find adult users in specific cities with premium status const premiumAdults = await platform.DB.find('users', { age: { $gte: 18 }, city: { $in: ['New York', 'Los Angeles', 'Chicago'] }, status: 'premium', active: true }); ``` ### Nested Logical Operators ```javascript // Cheap products OR highly-rated electronics in stock const products = await platform.DB.find('products', { $or: [ { price: { $lt: 20 } }, { $and: [ { category: 'electronics' }, { rating: { $gte: 4.5 } }, { inStock: true } ] } ] }); ``` ### Optional Field Queries ```javascript // Users with no email field, or with a verified email const users = await platform.DB.find('users', { $or: [ { email: { $exists: false } }, { emailVerified: true } ] }); ``` ## Vector Search The database natively supports **vector search** — semantic similarity queries over embeddings, with optional metadata pre-filtering and support for quantization, matryoshka 
embeddings, and multi-vector (ColBERT-style) indexes. ```javascript // Declare a vector index await platform.DB.createVectorIndex('products', { field: 'embedding', dimensions: 1536, metric: 'cosine', }); // Hybrid search: metadata filter + vector similarity in one call const hits = await platform.DB.find('products', { category: 'electronics', inStock: true }, { vector: { field: 'embedding', value: queryEmbedding, k: 10 }, }, ); ``` See [Vector Search](/docs/vector-search) for the full API — quantization, matryoshka embeddings, multi-vector (ColBERT), and declarative config. ## Tenant Isolation All queries are automatically scoped to the current tenant. You never need to manage tenant IDs manually: ```javascript // You write: await platform.DB.find('users', { active: true }); // The platform executes: // { active: true, _tenant_id: "current-tenant-id" } ``` Cross-tenant data access is not possible through the API. ## Type Handling The platform correctly handles all standard JavaScript types: | JavaScript Type | Supported | |----------------|-----------| | String | Yes | | Number | Yes | | Boolean | Yes | | Object | Yes | | Array | Yes | | null | Yes | ## Best Practices 1. **Write queries once** -- your code runs in any environment 2. **Leverage query operators** -- use `$gte`, `$in`, etc. instead of fetching all documents and filtering in JavaScript 3. **Always use `limit`** -- avoid unbounded queries on large collections 4. **Batch reads with `find`** -- use filters instead of multiple `findOne` calls 5. **Use `$set` for partial updates** -- avoid overwriting entire documents when updating a few fields 6. **Declare indexes in `maravilla.config.ts`** -- for every field you query on often. Indexes are the single biggest performance knob available to you. 
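Best practice 4 above ("batch reads with `find`") as a concrete sketch; the `getUsersByIds` helper and the `id` field are illustrative assumptions, not part of the platform API:

```javascript
// Sketch: batch N lookups into one $in query instead of N findOne
// round trips. Assumes documents carry an 'id' field (illustrative).
async function getUsersByIds(platform, ids) {
  // One query, one round trip; $in matches any id in the list.
  return platform.DB.find('users', { id: { $in: ids } }, { limit: ids.length });
}

// Anti-pattern for comparison: N sequential round trips.
// const users = await Promise.all(ids.map((id) => platform.DB.findOne('users', { id })));
```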
## Limits | Parameter | Value | |-----------|-------| | Max documents per query | 1,000 | | Automatic indexing | `_tenant_id` on every collection | ## Next Steps - [Vector Search](/docs/vector-search) -- semantic similarity queries with embeddings - [Platform Services Overview](/docs/platform-overview) -- how all three services fit together - [KV Store API Reference](/docs/kv-store) -- for simple key-value storage - [Storage API Reference](/docs/storage) -- for file and object storage --- # Vector Search Source: https://www.maravilla.cloud/docs/vector-search Section: Platform Description: Hybrid semantic + metadata search with built-in quantization, matryoshka, and multi-vector support Maravilla's database has native vector search. Store embeddings alongside your documents, declare a vector index, and query by semantic similarity — optionally combined with regular metadata filters in a single call. Bring your own embeddings (from OpenAI, Mistral, Cohere, a local model — anything). Maravilla handles the indexing, storage, and search. ```javascript import { getPlatform } from '@maravilla-labs/platform'; const platform = getPlatform(); // 1. Declare a vector index (one-time setup) await platform.DB.createVectorIndex('products', { field: 'embedding', dimensions: 1536, metric: 'cosine', }); // 2. Insert documents with embeddings — vectors sync automatically await platform.DB.insertOne('products', { name: 'Wireless Headphones', category: 'electronics', embedding: [0.12, -0.45, /* ...1536 numbers */], }); // 3. Search: metadata filter + vector similarity in one call const hits = await platform.DB.find('products', { category: 'electronics', inStock: true }, { vector: { field: 'embedding', value: queryEmbedding, k: 10, }, }, ); console.log(hits[0]._score); // 0–1 similarity score console.log(hits[0]._distance); // raw distance ``` Every hit is a regular document with `_score` and `_distance` added. 
`_score` is normalized to `[0, 1]` — higher means more similar — so you can apply consistent thresholds across metrics. ## Creating a Vector Index ```javascript await platform.DB.createVectorIndex('products', { field: 'embedding', // JSON path to the vector field inside each doc dimensions: 1536, // must match your embedding model's output metric: 'cosine', // 'cosine' | 'l2' | 'hamming' — default 'cosine' storage: 'float32', // 'float32' | 'int8' | 'bit' — default 'float32' matryoshka: false, // allow queries with shorter vectors multiVector: false, // each document holds an array of vectors }); ``` Creating the same index twice is a no-op. Creating an index with the same field but different configuration errors — drop the old one first with `dropVectorIndex()` if you want to change shape. ### Declarative config — recommended Declare vector indexes in `maravilla.config.ts` so they provision automatically when you run `maravilla dev` or deploy: ```typescript // maravilla.config.ts import { defineConfig } from '@maravilla-labs/platform/config'; export default defineConfig({ database: { vectorIndexes: [ { collection: 'products', field: 'embedding', dimensions: 1536, metric: 'cosine', storage: 'int8', }, ], }, }); ``` ## Inserting Documents Just set the field you declared. Maravilla syncs the vector into the index transparently, in the same transaction as the document write — either both commit, or neither does. ```javascript await platform.DB.insertOne('products', { name: 'Wireless Headphones', category: 'electronics', price: 199, embedding: [0.12, -0.45, /* ... */], }); ``` A few things to know: - **Dimension mismatches are rejected.** If your index is 1536-dim and you pass a 768-dim array, insert fails with a clear error. - **Missing vector fields are tolerated.** A document that doesn't carry the declared field just skips the vector side; the document still gets stored, and it won't appear in vector search results. 
- **Updates sync automatically.** `updateOne()` that changes the vector field rewrites the vector row in the index. - **Deletes cascade.** `deleteOne()` removes the document and its vector rows together. ## Searching ### Hybrid search — metadata filter + vector Pass a `vector` clause inside the existing `find()` options. The metadata filter is applied alongside the vector ranking so you can narrow to a segment of your collection before (or during) the similarity search. ```javascript const hits = await platform.DB.find('products', // metadata filter — same operators you'd use without vectors { category: 'electronics', inStock: true, price: { $lte: 500 } }, { limit: 10, vector: { field: 'embedding', value: queryEmbedding, k: 10, metric: 'cosine', // optional — defaults to the index's metric minScore: 0.7, // optional — drop low-similarity results }, }, ); ``` ### Pure vector search — `findSimilar()` When there's no metadata filter, `findSimilar()` is a slightly cleaner shape: ```javascript const similar = await platform.DB.findSimilar('products', { field: 'embedding', value: queryEmbedding, k: 10, filter: { category: 'electronics' }, // optional minScore: 0.7, // optional }); ``` ### Query options | Option | Type | Description | |--------|------|-------------| | `field` | string | Must match a registered vector index field on the collection. | | `value` | `number[]` or `number[][]` | The query vector. Use a flat array for single-vector queries; use an array of arrays for late-interaction (see Multi-vector). | | `k` | number | Top-k result count. Must be greater than 0. | | `metric` | string | Per-query override: `'cosine'`, `'l2'`, or `'hamming'`. Defaults to the index's configured metric. | | `minScore` | number | Drop results below this normalized `_score`. Applied after scoring. | | `queryMode` | string | `'single'` (default) or `'late-interaction'` for ColBERT-style queries. 
| | `aggregation` | string | `'max-sim'` (default) or `'sum'` — how multi-vector distances combine per document. | ## Storage Options ### Float32 — default Full precision. 4 bytes per dimension. Highest accuracy. Use this unless you have a reason not to. ### Int8 quantization — 4× smaller, ~same quality ```javascript await platform.DB.createVectorIndex('products', { field: 'embedding', dimensions: 1536, metric: 'cosine', storage: 'int8', }); ``` You still insert and query with regular float arrays — Maravilla quantizes on write and compares correctly. Typical accuracy loss is under 2% for normalized embeddings. ### Bit quantization — 32× smaller For large-scale candidate retrieval where you can tolerate significant precision loss: ```javascript await platform.DB.createVectorIndex('articles', { field: 'embedding_bits', dimensions: 1536, metric: 'hamming', // required for bit storage storage: 'bit', }); ``` Common pattern: bit index for fast candidate retrieval, then rerank top candidates with a float32 model. ## Matryoshka Embeddings Matryoshka-trained embeddings (OpenAI `text-embedding-3-*`, Mistral, Nomic) let you truncate a vector to a shorter length without retraining. Maravilla supports this as an index-level opt-in: ```javascript await platform.DB.createVectorIndex('docs', { field: 'embedding', dimensions: 1536, matryoshka: true, }); // Query with any length <= 1536 — Maravilla slices the stored vectors to match const shorter = queryEmbedding.slice(0, 256); const hits = await platform.DB.findSimilar('docs', { field: 'embedding', value: shorter, k: 10, }); ``` Without `matryoshka: true`, query-length and index-length must match exactly. Use matryoshka when you want to trade a bit of accuracy for much smaller query vectors on the hot path. 
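Truncated matryoshka vectors are generally no longer unit-length, so if you rely on cosine `minScore` thresholds it can help to re-normalize the slice on the client before querying. A minimal sketch under that assumption — `truncateEmbedding` is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper, not part of the Maravilla SDK: slice a
// matryoshka embedding to `dims` and re-normalize it to unit length
// so cosine scores stay comparable to full-length queries.
function truncateEmbedding(vec: number[], dims: number): number[] {
  const sliced = vec.slice(0, dims);
  const norm = Math.hypot(...sliced);
  if (norm === 0) return sliced; // degenerate zero vector: return as-is
  return sliced.map((x) => x / norm);
}

// Usage with the matryoshka index declared above:
// const hits = await platform.DB.findSimilar('docs', {
//   field: 'embedding',
//   value: truncateEmbedding(queryEmbedding, 256),
//   k: 10,
// });
```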
## Multi-Vector (ColBERT-style)

For late-interaction retrieval — where each document stores an array of vectors (one per chunk, sentence, or token) and queries are compared against every stored vector:

```javascript
// Declare a multi-vector index
await platform.DB.createVectorIndex('passages', {
  field: 'tokenEmbeddings',
  dimensions: 128,
  metric: 'cosine',
  multiVector: true,
});

// Each document holds an array of vectors
await platform.DB.insertOne('passages', {
  title: 'Introduction to Widgets',
  tokenEmbeddings: [
    [0.1, 0.2, /* ... */], // chunk 1
    [0.3, 0.1, /* ... */], // chunk 2
    [0.0, 0.4, /* ... */], // chunk 3
    // ...
  ],
});

// Single-vector query — finds docs with any chunk close to the query
const hits = await platform.DB.find('passages', {}, {
  vector: {
    field: 'tokenEmbeddings',
    value: queryVector,
    k: 10,
    aggregation: 'max-sim', // default — rank docs by their closest chunk
  },
});

// Late-interaction (true ColBERT) — compare each query token to every stored chunk
const lateHits = await platform.DB.find('passages', {}, {
  vector: {
    field: 'tokenEmbeddings',
    value: queryTokenEmbeddings, // number[][] — one vector per query token
    k: 10,
    queryMode: 'late-interaction',
    aggregation: 'sum', // sum the per-token max-sim scores
  },
});
```

## Managing Indexes

```javascript
// List every vector index on a collection
const indexes = await platform.DB.listVectorIndexes('products');

// Drop a vector index (does not touch the documents — just the vector data)
await platform.DB.dropVectorIndex('products', 'embedding');

// `listIndexes()` returns both document and vector indexes in one unified list
const all = await platform.DB.listIndexes('products');
// → [{ kind: 'document', ... }, { kind: 'vector', ... }]
```

## Working with the CLI

```bash
# Create a vector index
maravilla platform vector create-index products embedding \
  --dimensions 1536 --metric cosine --storage int8

# Search from the terminal — handy for debugging
maravilla platform vector search products embedding \
  --vector '[0.1, 0.2, 0.3, ...]' --k 10

# List vector indexes
maravilla platform vector list products

# Drop an index
maravilla platform vector drop-index products embedding
```

## Metric Selection Guide

| Metric | When to use | Storage required |
|--------|-------------|------------------|
| `cosine` | Text embeddings (OpenAI, Mistral, Cohere, sentence-transformers). The default. | `float32` or `int8` |
| `l2` | Image or audio embeddings where magnitude carries meaning. | `float32` or `int8` |
| `hamming` | Bit-quantized indexes for fast candidate retrieval. | `bit` (required) |

If your embedding model is documented as working with "dot product" or "inner product" — normalize your vectors to unit length and use `cosine`. The ranking is equivalent.

## Limits

| Parameter | Value |
|-----------|-------|
| Max dimensions per index | 4,096 |
| Supported metrics | `cosine`, `l2`, `hamming` |
| Supported storage precisions | `float32`, `int8`, `bit` |

## Best Practices

1. **Declare indexes in `maravilla.config.ts`** — keeps your vector schema versioned in git and provisions automatically on deploy.
2. **Match `dimensions` to your embedding model exactly** — there's no silent truncation or padding.
3. **Use `int8` storage** for text embeddings — 4× smaller, accuracy loss is usually negligible.
4. **Normalize vectors for cosine similarity** — if your model doesn't already produce unit vectors, normalize on the client side before insert.
5. **Combine metadata filters with vector search** — narrows the result set and usually improves relevance for domain-specific queries.
6. **Set `minScore`** — vector search always returns `k` results even when none are genuinely similar.
A threshold stops spurious matches from reaching your UI. ## Next Steps - [Database API Reference](/docs/database) -- the full document-query API that hybrid search builds on - [Platform Services Overview](/docs/platform-overview) -- how all services fit together - [CLI Reference](/docs/cli-reference) -- `maravilla platform vector` commands --- # Storage Source: https://www.maravilla.cloud/docs/storage Section: Platform Description: Object storage and file upload API reference The Storage service provides object storage for files of any size. It supports direct server uploads, presigned URLs for browser-to-storage uploads, download URL generation, and file metadata. In development, the CLI stores files locally. In production, Maravilla Cloud handles everything. Your code works identically in both environments. ```javascript import { getPlatform } from '@maravilla-labs/platform'; const platform = getPlatform(); await platform.STORAGE.put('docs/report.pdf', pdfBuffer, { contentType: 'application/pdf' }); ``` ## API Reference ### `put(key, data, metadata?)` Uploads a file from the server. Use this for small files or when the data is already available on the server side. **Parameters:** - `key` (string) -- unique file path/key - `data` (Uint8Array | ArrayBuffer) -- the file content - `metadata` (object, optional): - `contentType` (string) -- MIME type - `filename` (string) -- original filename - `tags` (object) -- custom tags ```javascript await platform.STORAGE.put('uploads/photo.jpg', imageBuffer, { contentType: 'image/jpeg', filename: 'vacation.jpg', tags: { uploadedBy: 'user-123' } }); ``` ### `get(key)` Retrieves a file and its metadata. **Returns:** `{ data: Uint8Array, metadata: { ... } }` ```javascript const file = await platform.STORAGE.get('uploads/photo.jpg'); // file.data -- Uint8Array of the file content // file.metadata -- associated metadata object ``` ### `delete(key)` Deletes a file from storage. 
```javascript await platform.STORAGE.delete('uploads/photo.jpg'); ``` ### `confirm(key)` Belt-and-braces idempotent re-verifier. **Most apps never need this** — every upload path through Maravilla's API (direct `put()`, token upload, presigned URL) publishes a `storage.put` event on its own the moment the file lands, and `onStorage` handlers + declarative `transforms` blocks fire automatically. `confirm()` exists for edge cases where a file was placed into the bucket outside the platform's API (e.g. by a bulk migration script) and you want to retroactively announce it. It HEADs the object to verify it actually exists, then publishes the same `storage.put` event a regular upload would have produced. ```javascript await platform.STORAGE.confirm('imports/backfill/2024-q1.csv'); ``` ### `list(prefix?, limit?)` Lists files in storage, optionally filtered by prefix. **Parameters:** - `prefix` (string, optional, default: `''`) -- only return files whose keys start with this prefix - `limit` (number, optional, default: 100, max: 1000) -- maximum number of results ```javascript const files = await platform.STORAGE.list('uploads/', 50); ``` ### `getMetadata(key)` Returns only the metadata for a file, without downloading the file content. Useful for checking file details without transferring the data. ```javascript const metadata = await platform.STORAGE.getMetadata('uploads/photo.jpg'); // { contentType, filename, size, ... } ``` ### `generateUploadUrl(key, contentType, size)` Generates a presigned URL that allows a client (typically a browser) to upload a file directly to storage, bypassing your server. This is the recommended approach for large files. 
**Parameters:** - `key` (string) -- the storage key the file will be saved under - `contentType` (string) -- the MIME type of the file being uploaded - `size` (number) -- maximum allowed file size in bytes **Returns:** `{ url, method, headers }` ```javascript const uploadUrl = await platform.STORAGE.generateUploadUrl( 'uploads/avatar.png', 'image/png', 5 * 1024 * 1024 // 5 MB max ); // Return uploadUrl to the client for direct upload // uploadUrl.url -- the presigned URL // uploadUrl.method -- HTTP method to use (typically "PUT") // uploadUrl.headers -- headers the client must include ``` ### `generateDownloadUrl(key, expiresInSeconds?)` Generates a temporary presigned URL for downloading a file. Useful for serving private files to authenticated users without exposing your storage credentials. **Parameters:** - `key` (string) -- the file key - `expiresInSeconds` (number, optional) -- seconds until the URL expires (default: 900 / 15 minutes) **Returns:** `{ url, method, headers, expiresIn }` ```javascript const download = await platform.STORAGE.generateDownloadUrl( 'reports/q1.pdf', 3600 // 1 hour ); // Redirect user to download.url ``` ## Upload Patterns ### Server Upload -- Recommended The simplest and most common pattern. The file goes through your server, where you can validate, preview, or process it before storing. 
```javascript // Server endpoint (e.g., SvelteKit +server.ts) import { json } from '@sveltejs/kit'; import { getPlatform } from '@maravilla-labs/platform'; export const POST = async ({ request }) => { const platform = getPlatform(); const formData = await request.formData(); const file = formData.get('file'); const key = `uploads/${Date.now()}-${file.name}`; // Validate if (file.size > 10 * 1024 * 1024) { return json({ error: 'File too large' }, { status: 400 }); } // Store await platform.STORAGE.put(key, new Uint8Array(await file.arrayBuffer()), { contentType: file.type, filename: file.name }); return json({ success: true, key }); }; ``` This approach lets you: - Validate file type and size before storing - Generate previews or thumbnails - Transform or resize images - Apply business logic (permissions, quotas) ### Token Upload For larger files, generate an upload token on the server and let the client upload directly to the platform's upload endpoint. This avoids loading the full file into your server's memory. **Step 1: Server generates the upload token** ```javascript // Server endpoint (e.g., SvelteKit +server.ts) import { json } from '@sveltejs/kit'; import { getPlatform } from '@maravilla-labs/platform'; export const POST = async ({ request }) => { const platform = getPlatform(); const { key, content_type, size } = await request.json(); const uploadUrl = await platform.STORAGE.generateUploadUrl( key, content_type, size || 10 * 1024 * 1024 ); return json({ upload_url: uploadUrl.url, method: uploadUrl.method, headers: uploadUrl.headers }); }; ``` **Step 2: Client uploads using the token** The returned `upload_url` points to the platform's upload endpoint (e.g., `/api/storage/upload/{token}`). 
The client uploads to this URL: ```javascript async function uploadFile(file) { const key = `uploads/${Date.now()}-${file.name}`; // Get upload token from your server const res = await fetch('/api/uploads/token', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ key, content_type: file.type, size: file.size }) }); const { upload_url, method, headers } = await res.json(); // Upload to the platform's upload endpoint await fetch(upload_url, { method: method || 'PUT', headers: headers || {}, body: file }); return key; } ``` The token is validated server-side (expiration, size limit, content type) before the file is stored. Tokens expire after 15 minutes by default. ### Preview Before Storing When you need users to preview a file (e.g., crop an image, confirm a document) before committing it to storage: ```javascript // 1. Client previews locally using a blob URL const previewUrl = URL.createObjectURL(file); // Show preview in the UI... // 2. User confirms — then upload via server const formData = new FormData(); formData.append('file', file); await fetch('/api/uploads', { method: 'POST', body: formData }); // 3. Clean up preview URL.revokeObjectURL(previewUrl); ``` This keeps the file in the browser until the user is ready, avoiding unnecessary storage writes and cleanup. 
### Combining Storage with KV for Metadata

Store file metadata in the KV Store for fast lookups without reading the file:

```javascript
const platform = getPlatform();

// Upload file to storage
const key = `uploads/${crypto.randomUUID()}-${file.name}`;
await platform.STORAGE.put(key, fileData, {
  contentType: file.type,
  filename: file.name
});

// Store metadata in KV for quick access
await platform.KV.files.put(`metadata:${key}`, JSON.stringify({
  filename: file.name,
  contentType: file.type,
  size: file.size,
  uploadedAt: new Date().toISOString()
}));
```

## Security Best Practices

### File Validation

Always validate uploads on the server side, even when using presigned URLs:

```javascript
const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'image/webp'];
const MAX_SIZE = 10 * 1024 * 1024; // 10 MB

if (!ALLOWED_TYPES.includes(file.type)) {
  throw new Error('Invalid file type');
}
if (file.size > MAX_SIZE) {
  throw new Error('File too large');
}
```

### Secure Key Generation

Use unique, collision-resistant keys:

```javascript
import { randomUUID } from 'crypto';

const key = `uploads/${userId}/${randomUUID()}-${file.name}`;
```

### Presigned URL Expiration

Keep presigned URL lifetimes short. Upload tokens expire after 15 minutes by default; for download URLs, pass a short expiry explicitly:

```javascript
const uploadUrl = await platform.STORAGE.generateUploadUrl(
  key,
  contentType,
  10 * 1024 * 1024 // max size in bytes, enforced when the client uploads
);

const download = await platform.STORAGE.generateDownloadUrl(
  key,
  300 // 5 minutes
);
```

## Environment Details

| Setting | Value |
|---------|-------|
| Max file size (Free) | 10 MB |
| Max file size (Pro) | 50 MB |
| Max file size (Enterprise) | 500 MB |

The platform automatically enforces upload rate limits per tenant. Default: 60 uploads per minute.

## Reacting to Uploads

Every upload — whether via `put()`, a token upload, or a presigned URL — fires a `storage.put` event automatically.
React to it with an [event handler](/docs/event-handlers#onstorage) — run any code you like when a file lands — or let Maravilla run [media transforms](/docs/media-transforms) automatically via a declarative `transforms` block in `maravilla.config.ts`.

```typescript
// events/on-photo-upload.ts
import { onStorage } from '@maravilla-labs/platform/events';

export const generateThumb = onStorage(
  { keyPattern: 'uploads/photos/*', op: 'put' },
  async (event, ctx) => {
    await ctx.platform.media.transforms.resize(event.key, { width: 400, format: 'webp' });
  },
);
```

## Next Steps

- [Platform Services Overview](/docs/platform-overview) -- how all three services fit together
- [KV Store API Reference](/docs/kv-store) -- for key-value storage
- [Database API Reference](/docs/database) -- for document queries
- [Media Transforms](/docs/media-transforms) -- transcode, thumbnail, resize, OCR uploaded files
- [Event Handlers](/docs/event-handlers) -- react to uploads and other platform events

---

# Real-Time Events (REN)

Source: https://www.maravilla.cloud/docs/realtime
Section: Platform
Description: Resource Event Notifications for real-time updates via Server-Sent Events

REN (Resource Event Notifications) is Maravilla's built-in real-time event system. It delivers platform resource mutations — KV writes, database changes, storage uploads — to connected clients via Server-Sent Events (SSE). Use REN to build live dashboards, collaborative features, or any UI that needs to react to data changes without polling.

## How It Works

1. Your client opens an SSE connection to `/api/maravilla/ren`
2. When any platform service mutates data (KV put, DB insert, storage upload), an event is published
3. Connected clients receive the event in real time
4. Clients can filter by resource type and distinguish their own mutations from others

In multi-node deployments, events are distributed across all nodes automatically.
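On the wire, step 3 is plain `text/event-stream` frames: an `event:` line naming the type, a `data:` line carrying JSON, and a blank line terminating the frame. `RenClient` and the browser's `EventSource` parse this for you; the sketch below is an illustration of the format only, not code you would write yourself:

```typescript
// Illustration of the SSE wire format only. RenClient / EventSource
// already do this parsing for you.
interface SseFrame {
  event: string;
  data: string;
}

function parseSseChunk(chunk: string): SseFrame[] {
  const frames: SseFrame[] = [];
  for (const block of chunk.split('\n\n')) {
    let event = 'message'; // SSE default event name
    let data = '';
    for (const line of block.split('\n')) {
      if (line.startsWith(':')) continue; // comment lines, e.g. ":ping" heartbeats
      if (line.startsWith('event:')) event = line.slice(6).trim();
      if (line.startsWith('data:')) data += line.slice(5).trim();
    }
    if (data) frames.push({ event, data });
  }
  return frames;
}
```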
## Quick Start

### Using the REN Client

Import `RenClient` from the platform package:

```typescript
import { RenClient } from '@maravilla-labs/platform/ren';

const ren = new RenClient({
  subscriptions: ['kv', 'db'], // only receive KV and database events
});

const unsubscribe = ren.on((event) => {
  console.log(event.t, event.k); // e.g. "kv.put", "todo:abc123"
});

// Later: clean up
unsubscribe();
ren.close();
```

### Using Native EventSource

You can also use the browser's `EventSource` API directly:

```javascript
const es = new EventSource('/api/maravilla/ren?s=kv,storage');

es.addEventListener('kv.put', (e) => {
  const event = JSON.parse(e.data);
  console.log('KV updated:', event.k);
});

es.addEventListener('storage.put', (e) => {
  const event = JSON.parse(e.data);
  console.log('File uploaded:', event.k);
});
```

## Event Schema

Every REN event has the following structure:

```typescript
interface RenEvent {
  t: string;   // event type, e.g. "kv.put", "db.document.created"
  r: string;   // resource domain: "kv", "db", "storage", "realtime", "presence"
  k?: string;  // resource key (e.g.
object key, document ID) ns?: string; // namespace, collection, or bucket name v?: string; // version or etag ts?: number; // timestamp in unix milliseconds src?: string; // origin client/node ID (for self-filtering) ch?: string; // channel name (for realtime channels) data?: any; // arbitrary payload (for realtime messages) uid?: string; // user identity (for presence events) } ``` ## Event Types ### KV Store Events | Event Type | Fired When | |------------|------------| | `kv.put` | A key-value pair is created or updated | | `kv.delete` | A key is deleted | | `kv.expired` | A key expires via TTL | ### Database Events | Event Type | Fired When | |------------|------------| | `db.document.created` | A document is inserted | | `db.document.updated` | A document is updated | | `db.document.deleted` | A document is deleted | ### Storage Events | Event Type | Fired When | |------------|------------| | `storage.put` | A file is uploaded or overwritten (or a presigned upload is confirmed) | | `storage.delete` | A file is deleted | ### Media Transform Events Fired during [media transform](/docs/media-transforms) jobs (transcode, thumbnail, resize, OCR). Subscribe to update loading states, progress bars, or swap placeholder posters the moment the output lands. 
| Event Type | Fired When | |------------|------------| | `transform.queued` | A transform job has been accepted; `outputKey` is already known | | `transform.started` | A worker picked up the job | | `transform.progress` | Periodic update during long-running work (`{ percent, stage }`) | | `transform.complete` | Output is written to storage | | `transform.failed` | Terminal failure after retries are exhausted | ### Realtime Channel Events | Event Type | Fired When | |------------|------------| | `realtime.message` | A message is published to a channel | ### Presence Events | Event Type | Fired When | |------------|------------| | `presence.join` | A user joins a channel | | `presence.update` | A user's metadata changes | | `presence.leave` | A user leaves a channel | > For channels and presence, see the full [Realtime Channels](/docs/channels) documentation. ## RenClient Options ```typescript const ren = new RenClient({ endpoint: '/api/maravilla/ren', // SSE endpoint (auto-detected) subscriptions: ['kv', 'db'], // resource filters, ['*'] = all (default) clientId: 'my-client-id', // optional, auto-generated if omitted autoReconnect: true, // reconnect on disconnect (default: true) maxBackoffMs: 15000, // max reconnect delay (default: 15s) debug: false, // enable console debug logging }); ``` ### Subscription Filtering Filter which resource domains you receive events for: ```typescript // All events (default) new RenClient({ subscriptions: ['*'] }); // Only KV and storage events new RenClient({ subscriptions: ['kv', 'storage'] }); // Only database events new RenClient({ subscriptions: ['db'] }); ``` You can also filter via the SSE URL query parameter: ``` /api/maravilla/ren?s=kv,storage /api/maravilla/ren?s=* ``` ### Self-Filtering Use the `src` field to distinguish your own mutations from others. 
Pass a client ID header on mutations using `renFetch`: ```typescript import { renFetch, RenClient } from '@maravilla-labs/platform/ren'; const ren = new RenClient(); // Use renFetch for mutations — it adds X-Ren-Client header await renFetch('/api/todos', { method: 'POST', body: JSON.stringify({ text: 'Buy groceries' }), }); // In your event handler, filter out self-originated events ren.on((event) => { if (event.src === ren.getClientId()) return; // skip own mutations // Handle event from another client/tab refreshUI(); }); ``` ### Auto-Reconnect The REN client automatically reconnects with exponential backoff when the connection drops: - First retry: 1 second - Subsequent retries: doubles each time - Maximum delay: 15 seconds (configurable via `maxBackoffMs`) - Backoff resets on successful connection The client ID persists in `localStorage` across reconnects and page reloads. ## SSE Endpoint Reference ### `GET /api/maravilla/ren` Opens a Server-Sent Events stream. **Query Parameters:** - `s` -- comma-separated resource filters (e.g., `kv,storage`). Use `*` or omit for all events. - `cid` -- client ID for correlation **Response:** `text/event-stream` with events formatted as: ``` event: kv.put data: {"t":"kv.put","r":"kv","k":"todo:abc","ns":"demo","ts":1710000000000} ``` Heartbeat pings (`:ping` comments) are sent every 15 seconds to keep the connection alive. ## Example: Live Todo List Build a todo list that updates in real time across browser tabs: ```typescript // In your Svelte component or page script import { RenClient } from '@maravilla-labs/platform/ren'; import { invalidateAll } from '$app/navigation'; const ren = new RenClient({ subscriptions: ['kv'], }); ren.on((event) => { if (event.src === ren.getClientId()) return; // Another tab/user modified a todo — refresh the list if (event.k?.startsWith('todo:')) { invalidateAll(); // SvelteKit: re-run all load functions } }); ``` ## Development vs. 
Production | | Development | Production | |---|---|---| | **Endpoint** | `http://localhost:3001/api/maravilla/ren` | `/api/maravilla/ren` | | **Fan-out** | Single-process | Distributed across all nodes | | **Detection** | Automatic (Vite port 5173 → dev server port 3001) | Automatic (relative URL) | The `RenClient` auto-detects the correct endpoint based on the runtime environment. ## Debugging Enable debug logging to see connection state, events, and reconnect attempts: ```typescript new RenClient({ debug: true }); ``` Or set `localStorage.setItem('REN_DEBUG', '1')` in the browser console. ## Next Steps - [Realtime Channels](/docs/channels) -- pub/sub channels, presence tracking, and WebSocket API for building chat, collaboration, and live features - [KV Store](/docs/kv-store) -- key-value storage that triggers REN events on writes - [Database](/docs/database) -- document database with REN change notifications --- # Event Handlers Source: https://www.maravilla.cloud/docs/event-handlers Section: Platform Description: Run code in response to database changes, auth events, queues, and schedules Event Handlers let you run code automatically when something happens in your app. A user signs up, a row changes in the database, a message lands in a queue, or a cron timer fires — your handler runs. Create an `events/` folder in your project, drop in a file, export a handler, and Maravilla wires it up on deploy. No infrastructure, no subscribe/unsubscribe calls, no background process to keep alive. 
```typescript // events/hello.ts import { onAuth, onDbChange, onSchedule } from '@maravilla-labs/platform/events'; export const sendWelcome = onAuth({ op: 'registered' }, async (event) => { console.log(`Welcome ${event.data?.email}!`); }); export const auditUsers = onDbChange({ collection: 'users' }, async (event) => { console.log(`[${event.op}] user ${event.id}`); }); export const heartbeat = onSchedule('*/10 * * * * *', async () => { console.log('tick'); }); ``` ## Quick Start **1. Install the events builder.** The adapter discovers `events/` at build time via `@maravilla-labs/functions`. Without it, your `events/` folder builds silently as a no-op and nothing gets wired into the deployment manifest. ```bash npm install --save-dev @maravilla-labs/functions ``` > **Check your build output.** Run `npm run build` and look for a line like `✅ Events build completed: 1 handler` in the output, and for an `events` section in `build/manifest.json`. If you don't see either, the dev dependency is missing — the build pipeline silently skips event bundling when `@maravilla-labs/functions` isn't resolvable. **2. Create an `events/` folder** in your project (next to `package.json`). Or drop a single `events.ts` at the project root. Split handlers across as many files as you like — every `.ts`/`.js` file inside is picked up automatically. **3. Export named handlers** using any of the `on*` helpers. Each exported name becomes the handler's identity — keep them stable. **4. Deploy.** Handlers are discovered at build time, registered with the runtime on first request, and start firing immediately. ```typescript // events/kv.ts import { onKvChange } from '@maravilla-labs/platform/events'; export const logKvChange = onKvChange( { namespace: 'demo', keyPattern: 'item:*' }, async (event) => { console.log(`kv ${event.op} on ${event.namespace}/${event.key}`); }, ); ``` That's it. No config, no registration step, no server process to keep running. 
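Filters like `keyPattern: 'item:*'` in the example above are simple globs. As a working assumption (the platform evaluates these for you, and its exact semantics may differ), `*` matches any run of characters and everything else is literal:

```typescript
// Illustrative sketch under the stated assumption; not the
// platform's actual matcher. '*' matches any run of characters,
// everything else matches literally.
function matchesKeyPattern(pattern: string, key: string): boolean {
  const escaped = pattern.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // escape regex metachars
  return new RegExp('^' + escaped.replace(/\\\*/g, '.*') + '$').test(key);
}
```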
## Handler Signature Every handler receives the event payload and a context object. `ctx` exposes the same platform services you use on the request path (`getPlatform()`), so you rarely need to reach back into `getPlatform()` from a handler. ```typescript async (event, ctx) => { // ── Metadata ──────────────────────────────── ctx.env; // per-tenant env vars (Record) ctx.traceId; // correlation ID — include in every log line ctx.tenant; // your tenant ID ctx.handlerId; // the handler's identity (the exported name) // ── Platform services ────────────────────── ctx.kv; // KV store — get / put / delete / list ctx.database; // MongoDB-style document DB — find / insertOne / updateOne / etc. ctx.storage; // object storage — putObject / getObject / deleteObject ctx.queue; // durable queue producer — send(name, payload, opts?) ctx.auth; // auth service — getUser / register / listUsers / updateUser / etc. ctx.push; // Web Push — send notifications to subscribed clients ctx.platform; // full platform object — escape hatch for anything above } ``` > **Version note.** The service shortcuts on `ctx` (`kv`, `database`, `auth`, `queue`, `storage`, `push`) landed in `@maravilla-labs/platform@0.2.4`. On earlier versions the only shortcut was `ctx.env` and you had to reach through `ctx.platform` for everything else — bump to `^0.2.4` before relying on them. ### `ctx.kv` Same shape as `getPlatform().kv`. Namespace is the first argument: ```typescript const raw = await ctx.kv.get('demo', `user:${id}`); await ctx.kv.put('demo', `user:${id}`, JSON.stringify(user)); await ctx.kv.delete('demo', `user:${id}`); ``` ### `ctx.database` Same shape as `getPlatform().database`. 
MongoDB-style API: ```typescript const id = await ctx.database.insertOne('audit_log', { userId: event.userId, op: event.op, ts: event.ts, }); const user = await ctx.database.findOne('users', { _id: userId }); await ctx.database.updateOne('users', { _id: userId }, { $set: { lastSeen: Date.now() } }); ``` ### `ctx.auth` Same shape as `getPlatform().auth`. Useful for looking up the authenticated user inside an `onAuth` or `onDbChange` handler: ```typescript export const audit = onAuth({}, async (event, ctx) => { const user = await ctx.auth.getUser(event.userId); await ctx.database.insertOne('audit_log', { userId: event.userId, email: user?.email, op: event.op, ts: event.ts, }); await ctx.kv.delete('cache', `user:${event.userId}`); }); ``` ### `ctx.queue` Fire-and-forget producer — hand off expensive work to a durable queue so the handler returns fast: ```typescript await ctx.queue.send('emails', { to: event.data?.email, subject: 'Welcome!', }, { delayMs: 5000, maxAttempts: 5, }); ``` ## Available Triggers | Trigger | Fires when | Durable | |---|---|---| | `onKvChange` | A KV key is put, deleted, or expires | No | | `onDbChange` | A document is created, updated, or deleted | No | | `onAuth` | A user registers, logs in, logs out, or is deleted | No | | `onStorage` | A file is uploaded (or confirmed) or deleted | No | | `onQueue` | A message is enqueued with `platform.queue.send()` | ✅ at-least-once | | `onSchedule` | A cron expression matches | ✅ missed ticks catch up | | `onChannel` | A message is published to a realtime channel | No | | `onDeploy` | A new deployment becomes ready (or drains/stops) | No | | `defineEvent` | Escape hatch for matching any platform `RenEvent` | No | Durable triggers persist their work. If your server restarts mid-way, pending queue messages and overdue cron ticks are picked up where they left off. ## `onKvChange` React to [KV Store](/docs/kv-store) writes. Filter by namespace, key pattern (glob), and/or operation. 
```typescript // events/kv.ts import { onKvChange } from '@maravilla-labs/platform/events'; export const logKvChange = onKvChange( { namespace: 'demo', keyPattern: 'item:*', op: 'put' }, async (event) => { // event.op — 'put' | 'delete' | 'expired' // event.namespace — e.g. 'demo' // event.key — the key that changed // event.value — present on 'put' // event.ts — unix ms timestamp console.log(`kv ${event.op} on ${event.namespace}/${event.key}`); }, ); ``` All filter fields are optional. Omit `namespace` to match any namespace, omit `op` to match all three operations, omit `keyPattern` to match every key. ## `onDbChange` React to [Database](/docs/database) mutations on a specific collection. ```typescript // events/db.ts import { onDbChange } from '@maravilla-labs/platform/events'; export const auditUsers = onDbChange( { collection: 'users' }, async (event) => { // event.op — 'insert' | 'update' | 'delete' // event.collection — 'users' // event.id — the document ID // event.doc — the current document (insert/update) // event.before — the previous document (update/delete), when available // event.after — the new document (update), when available // event.ts console.log(`[${event.op}] users/${event.id}`); }, ); ``` Scope to one operation with `{ collection: 'users', op: 'insert' }`, or omit `op` to react to all three. ## `onAuth` React to [Authentication](/docs/auth) events. Common use: send a welcome email, audit sign-ins, clean up data when a user is deleted. ```typescript // events/auth.ts import { onAuth } from '@maravilla-labs/platform/events'; export const sendWelcome = onAuth({ op: 'registered' }, async (event) => { // event.op — 'registered' | 'logged_in' | 'logged_out' // | 'logged_out_all' | 'deleted' | 'updated' // event.userId // event.data?.email // event.data?.provider — on OAuth registrations // event.data?.profile — custom fields you passed to register() const profile = event.data?.profile ?? 
{};
  console.log(`welcome ${event.data?.email}, profile=${JSON.stringify(profile)}`);
});

// Audit everything — pass an empty config
export const auditAuth = onAuth({}, async (event) => {
  console.log(`auth.${event.op} user=${event.userId}`);
});
```

Custom fields you pass to `platform.auth.register({ profile: {...} })` come through on `event.data.profile`, so you can build signup flows that collect extra data (display name, referral code, etc.) and react to them in one place.

## `onStorage`

React to [Storage](/docs/storage) writes and deletes. Filter by key pattern (glob) and/or operation. Every upload path — direct `put()`, token upload, presigned URL — fires the event automatically.

```typescript
// events/on-upload.ts
import { onStorage } from '@maravilla-labs/platform/events';

export const onPhotoUpload = onStorage(
  { keyPattern: 'uploads/photos/*', op: 'put' },
  async (event) => {
    // event.op — 'put' | 'delete'
    // event.key — the storage key that changed
    // event.data — { size, contentType } on 'put'
    // event.ts — unix ms timestamp
    console.log(`photo landed: ${event.key} (${event.data?.size} bytes)`);
  },
);
```

Common pattern: kick off a [media transform](/docs/media-transforms) as soon as the file arrives:

```typescript
// events/on-video-upload.ts
import { onStorage } from '@maravilla-labs/platform/events';

export const transcodeVideo = onStorage(
  { keyPattern: 'uploads/videos/*', op: 'put' },
  async ({ key }, ctx) => {
    await Promise.all([
      ctx.platform.media.transforms.transcode(key, { format: 'mp4' }),
      ctx.platform.media.transforms.thumbnail(key, { at: '1s', width: 640, format: 'jpg' }),
    ]);
  },
);
```

If you only need the transform — no custom logic around it — skip the handler entirely and use the declarative [`transforms` block](/docs/media-transforms#declarative-config) in `maravilla.config.ts` instead. It compiles into exactly the same `onStorage` shape at build time.

Omit `keyPattern` to match every key, omit `op` to match both put and delete.
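To make the `keyPattern` semantics concrete, here is a toy glob matcher. It is an illustration only, not the platform's implementation; in particular, it assumes `*` matches any run of characters, including `/`, so verify against the real matcher before relying on nested-key behavior:

```typescript
// Toy matcher for illustration only, not the platform's implementation.
// Assumption: '*' matches any run of characters, including '/'.
function globMatch(pattern: string, key: string): boolean {
  const escaped = pattern
    .split('*')
    .map((part) => part.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')) // escape regex metachars
    .join('.*'); // each '*' becomes "match anything"
  return new RegExp(`^${escaped}$`).test(key);
}

console.log(globMatch('uploads/photos/*', 'uploads/photos/cat.jpg')); // true
console.log(globMatch('uploads/photos/*', 'uploads/videos/cat.mp4')); // false
console.log(globMatch('*', 'any/key/at/all')); // true
```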
## `onQueue` + `platform.queue.send` A durable, at-least-once queue. Send a message from anywhere, process it in a handler. Messages survive restarts. **Enqueue:** ```typescript import { getPlatform } from '@maravilla-labs/platform'; const platform = getPlatform(); await platform.queue.send('emails', { to: 'jane@example.com', subject: 'Welcome!', }, { delayMs: 5000, // optional — delay before the message becomes visible maxAttempts: 5, // optional — move to 'dead' after this many failures }); ``` **Handle:** ```typescript // events/emails.ts import { onQueue } from '@maravilla-labs/platform/events'; export const sendEmails = onQueue<{ to: string; subject: string }>( 'emails', { batch: 1 }, async (messages, ctx) => { for (const msg of messages) { // msg.id — message ID // msg.payload — your payload // msg.attempt — retry count (starts at 1) // msg.enqueuedAt — unix ms await sendEmail(msg.payload); } }, ); ``` The handler receives an **array** of messages — even when `batch: 1`, you get a one-element array. Loop over them. **Delivery guarantees:** - At-least-once — your handler may run more than once for the same message. Make it idempotent. - Messages are leased while running. If the handler throws, the message is retried with exponential backoff. - After `maxAttempts` failures (default configured per queue), the message moves to a `dead` state so you can inspect it instead of retrying forever. - Successful handlers ack the message and it's removed from the queue. ## `onSchedule` Cron-driven, durable. If your server was down at the scheduled time, the tick fires as soon as it comes back — no missed runs. 
```typescript // events/cron.ts import { onSchedule } from '@maravilla-labs/platform/events'; // Every 10 seconds export const heartbeat = onSchedule('*/10 * * * * *', async (event) => { // event.cron — the cron expression // event.scheduledAt — when this tick was due (unix ms) // event.firedAt — when it actually fired console.log(`tick scheduled=${event.scheduledAt} fired=${event.firedAt}`); }); // Every day at 3am export const nightlyCleanup = onSchedule('0 0 3 * * *', async () => { await cleanupOldSessions(); }); ``` Cron expressions use 6 fields: `sec min hour day month weekday`. ## `onChannel` React to messages published to a [Realtime Channel](/docs/channels). ```typescript // events/chat.ts import { onChannel } from '@maravilla-labs/platform/events'; export const onChatMessage = onChannel( { channel: 'chat:*', type: 'message' }, async (event) => { // event.channel // event.type // event.data — the payload // event.uid — publisher's user ID, if any // event.ts console.log(`${event.type} on ${event.channel}`); }, ); ``` `type` is optional — omit it to match every message on the channel. ## `onDeploy` Runs once when a deployment transitions through a lifecycle phase. Use `ready` for warm-up tasks or one-shot migrations; `draining` and `stopped` for graceful shutdown. ```typescript // events/deploy.ts import { onDeploy } from '@maravilla-labs/platform/events'; export const warmup = onDeploy('ready', async (event) => { // event.phase — 'ready' | 'draining' | 'stopped' // event.ts console.log(`deployment ${event.phase}`); }); ``` ## `defineEvent` — Escape Hatch Everything above is a typed shortcut over the platform's internal event stream (`RenEvent`). When a trigger doesn't cover your case, `defineEvent` lets you match any event the platform produces by its resource (`r`), type (`t`), and/or namespace (`ns`). 
Example — react when a file upload finalizes in storage: ```typescript // events/storage.ts import { defineEvent } from '@maravilla-labs/platform/events'; export const onObjectFinalized = defineEvent( { match: { r: 'storage', t: 'object.finalized' } }, async (event) => { // event is the raw RenEvent — see /docs/realtime for the shape console.log(`storage finalized: ${event.k}`); }, ); ``` See [Real-Time Events (REN)](/docs/realtime) for the full list of resources and event types. ## Inspecting Handlers List handlers discovered in your build output: ```bash maravilla events list ``` Stream live events from a running server — handy during development to see what's actually firing: ```bash maravilla events tail --url http://localhost:3001 # Filter by resource kind maravilla events tail --url http://localhost:3001 --kind kv,db ``` ## Best Practices - **Keep handlers small.** A handler that does one thing is easier to retry and reason about than one that does five. - **Make queue handlers idempotent.** At-least-once delivery means you may see the same message twice. Check whether the work is already done before doing it again. - **Don't block.** Handlers run in your app's worker pool. Long-running work should either be chunked into queue messages or kicked off as a background task. - **Use stable export names.** The export name is the handler's identity. Renaming it is equivalent to deleting the old handler and creating a new one. - **Prefer queues for retriable work.** If something can fail transiently (network, third-party API), enqueue a message instead of calling the service directly from a synchronous handler. - **Propagate `ctx.traceId`.** Include it in your log lines so you can correlate a handler run with the request or event that triggered it. ## When to Reach for a Workflow Instead Event handlers are great for quick, synchronous reactions — send an email, update a cache, write an audit row. 
They're not the right fit when the reaction spans multiple steps, needs to sleep for hours or days, or has to survive a restart mid-way. For those cases, use a [Workflow](/docs/workflows). A workflow handler uses the same `ctx` services, but each `step.*` call is durably memoized — so the work resumes exactly where it left off if anything goes wrong. You can even start a workflow from an event handler for a clean hand-off: ```typescript import { onAuth } from '@maravilla-labs/platform/events'; export const startOnboarding = onAuth({ op: 'registered' }, async (event, ctx) => { await ctx.platform.workflows.start('onboarding', { userId: event.userId, email: event.data?.email, }); }); ``` ## Next Steps - [Workflows](/docs/workflows) — durable, multi-step business logic - [Real-Time Events (REN)](/docs/realtime) — push the same events to browsers via SSE - [KV Store](/docs/kv-store), [Database](/docs/database), [Authentication](/docs/auth) — the sources that feed the event bus - [Realtime Channels](/docs/channels) — pub/sub channels that `onChannel` listens to --- # Workflows Source: https://www.maravilla.cloud/docs/workflows Section: Platform Description: Durable, multi-step business logic that survives crashes, sleeps for days, and waits for events Workflows let you write multi-step business logic as a plain async function — and have the platform make every step durable for you. A workflow can sleep for a week, wait for a webhook, retry a flaky API call, kick off a child workflow, and resume exactly where it left off if the server restarts in the middle. Create a `workflows/` folder (or a single `workflows.ts` at your project root), declare a workflow with `defineWorkflow`, and use the `step.*` API for anything you want to be durable. 
```typescript // workflows/onboarding.ts import { defineWorkflow } from '@maravilla-labs/platform/workflows'; export const onboarding = defineWorkflow( { id: 'onboarding', options: { retries: 5, timeoutSecs: 86400 } }, async (input: { userId: string; email: string }, step, ctx) => { const user = await step.run('fetch-user', () => ctx.database.findOne('users', { _id: input.userId }), ); await step.run('send-welcome', () => ctx.queue.send('emails', { to: input.email, template: 'welcome' }), ); await step.sleep('grace-period', '24h'); const survey = await step.waitForEvent('survey-reply', { type: 'survey.submitted', match: { userId: input.userId }, timeout: '7d', }); return { completed: true, surveyed: survey !== null }; }, ); ``` ## Quick Start **1. Install the workflows builder.** Workflows ship as part of `@maravilla-labs/functions` — the same dev dependency used for event handlers. Without it, your `workflows/` folder builds silently as a no-op. ```bash npm install --save-dev @maravilla-labs/functions ``` > **Check your build output.** Run `npm run build` and look for a line like `✅ Workflows build completed: 1 workflow` in the output, and for a `workflows` section in `build/manifest.json`. If you don't see either, the dev dependency is missing. **2. Create a `workflows/` folder** (next to `package.json`), or drop a single `workflows.ts` at the project root. Split across as many files as you like — every `.ts`/`.js` file inside is picked up automatically. **3. Export named workflows** with `defineWorkflow`. The `id` inside the descriptor is the workflow's stable identity — keep it stable across deploys. **4. Deploy.** Workflows are discovered at build time and registered with the runtime. Start a run from anywhere using `platform.workflows.start()`, or let a trigger start them for you. 
## Starting a Workflow

Kick off a run from a request handler, event handler, or another workflow:

```typescript
import { getPlatform } from '@maravilla-labs/platform';
const platform = getPlatform();

const handle = await platform.workflows.start('onboarding', {
  userId: 'u_123',
  email: 'jane@example.com',
});

handle.runId; // unique run ID
await handle.status(); // WorkflowRun | null
await handle.history(); // full step-by-step ledger
await handle.result(); // waits for terminal state, returns the output
await handle.cancel(); // best-effort cancellation
```

`handle.result()` resolves with the workflow's return value when the run completes, and throws if it failed or was cancelled. For a fire-and-forget start, just don't await `result()`.

Need a handle to an existing run? `platform.workflows.handle(runId)` gives you one without starting anything.

## The Handler Signature

Every workflow handler receives three arguments: the input you passed to `start()`, a `step` API for durable operations, and a `ctx` object that mirrors the one in event handlers.

```typescript
async (input, step, ctx) => {
  // ── Metadata ──────────────────────────────
  ctx.runId; // this run's unique ID
  ctx.workflowId; // the workflow's declared id
  ctx.attempt; // which retry attempt this is (1 on first run)

  // ── Platform services ─────────────────────
  ctx.kv; // KV store
  ctx.database; // document DB
  ctx.storage; // object storage
  ctx.queue; // durable queue producer
  ctx.workflows; // start / signal other workflows
  ctx.platform; // full platform object — escape hatch
}
```

The handler runs inside a **replay model**: every time the workflow resumes (after a sleep, an event, or a crash), the platform re-invokes the handler with the full history of completed steps. The `step.*` API either returns the persisted result of a past step or runs it for the first time — you never see the difference.
> **Keep the code outside `step.*` calls deterministic.** Anything with a side effect, a random value, or a network call should live inside `step.run(...)`. Plain variable assignments and branching on step output are fine. ## The `step` API ### `step.run(name, fn)` Execute `fn` once and memoize its output under `name`. On replay, the stored value is returned synchronously — your `fn` doesn't run again. ```typescript const charge = await step.run('charge-card', async () => { return await stripe.charges.create({ amount: 2000, customer: input.customerId }); }); await step.run('record-charge', () => ctx.database.insertOne('charges', { id: charge.id, userId: input.userId }), ); ``` Step names must be unique within a run. If `fn` throws, the run is marked failed; if retries are configured, the whole workflow restarts from the first incomplete step on the next attempt. ### `step.sleep(name, duration)` / `step.sleepUntil(name, epochMs)` Durable sleep. The run is parked; the worker moves on. When the wake time arrives — even if the server was down when it was scheduled — the workflow resumes on the next line. ```typescript await step.sleep('grace-period', '24h'); await step.sleep('retry-delay', 30_000); // milliseconds await step.sleepUntil('midnight-utc', Date.UTC(2026, 0, 1)); ``` Durations accept a number of milliseconds or a shorthand string: `"500ms"`, `"30s"`, `"5m"`, `"1h"`, `"7d"`. ### `step.waitForEvent(name, options)` Pause until a matching event is signaled. Returns the event payload, or `null` if the timeout elapses first. 
```typescript
const approval = await step.waitForEvent('manager-approval', {
  type: 'expense.approved',
  match: { expenseId: input.expenseId },
  timeout: '3d',
});

if (approval === null) {
  await step.run('escalate', () => notifyFinanceTeam(input.expenseId));
}
```

Signal waiting runs from anywhere using `platform.workflows.sendEvent()`:

```typescript
// From an HTTP handler, event handler, or another workflow
await platform.workflows.sendEvent('expense.approved', {
  expenseId: 'exp_42',
  approvedBy: 'mgr_7',
});
```

Matching rules: the event type passed to `sendEvent()` must equal the waiter's `type`, and every key in `match` must appear in the payload with an equal value. `sendEvent()` returns the number of runs it resumed.

### `step.invoke(name, workflowId, input?, options?)`

Start a child workflow, wait for it to finish, and return its output. The child is recorded as a single step in the parent's history, so it doesn't re-spawn on replay.

```typescript
const invoice = await step.invoke('generate-invoice', 'invoice-builder', {
  orderId: input.orderId,
});

await step.run('email-invoice', () =>
  ctx.queue.send('emails', { to: input.email, invoiceId: invoice.id }),
);
```

`options.timeout` bounds the wait (default: 1 hour). A child that ends `failed` or `cancelled` throws in the parent.

## Triggers

Workflows can start automatically on a schedule or in response to a platform event. Add a `trigger` to the descriptor to opt in — omit it to require explicit `platform.workflows.start()` calls.

### Schedule Trigger

Run a workflow on a cron schedule. Standard 6-field cron (`sec min hour day month weekday`).
```typescript export const nightlyReconcile = defineWorkflow( { id: 'nightly-reconcile', trigger: { kind: 'schedule', cron: '0 0 3 * * *' }, // 3:00 every day }, async (_input, step, ctx) => { const orders = await step.run('fetch-pending', () => ctx.database.find('orders', { status: 'pending_reconcile' }), ); for (const order of orders) { await step.run(`reconcile-${order._id}`, () => reconcile(order)); } }, ); ``` Missed ticks (e.g. the server was down) catch up on the next tick cycle. ### Event Trigger Start a run for every matching event on the platform's event stream. Useful for long-running reactions that you don't want to run inline in an event handler. ```typescript export const onSignup = defineWorkflow( { id: 'onboarding-after-signup', trigger: { kind: 'event', t: 'auth.user.registered' }, }, async (event, step, ctx) => { await step.sleep('wait-before-nudge', '2h'); await step.run('send-tips', () => ctx.queue.send('emails', { to: event.data.email, template: 'tips' }), ); }, ); ``` The `event` argument is the matching event's payload. Filter further with `r` (resource) and `ns` (namespace) if you need a narrower match. > **Triggers vs. event handlers.** An `onX` handler is the right place for quick, synchronous reactions. Reach for a workflow when the reaction needs to survive restarts, span multiple steps, or wait for something. ### Starting a Workflow From an Event Handler The event trigger matches on `r` / `t` / `ns`, which is deliberately coarse. When you need precise filtering — a KV key glob, a specific collection + operation, a document-shape predicate — do it in an event handler and hand off to a workflow with `platform.workflows.start()`. You get full filter expressiveness on the entry side and full durability on the work side. 
**KV change → workflow.** Trigger a nudge workflow only when keys matching `inv:*` are written: ```typescript // events/invites.ts import { onKvChange } from '@maravilla-labs/platform/events'; export const nudgeOnInvite = onKvChange( { namespace: 'invites', keyPattern: 'inv:*', op: 'put' }, async (event, ctx) => { await ctx.platform.workflows.start('nudge-followup', { inviteKey: event.key, ts: event.ts, }); }, ); ``` **Database change → workflow.** Kick off onboarding the moment a new user document lands — filter by collection and operation, then pass the document into the workflow: ```typescript // events/users.ts import { onDbChange } from '@maravilla-labs/platform/events'; export const onboardNewUser = onDbChange( { collection: 'users', op: 'insert' }, async (event, ctx) => { if (!event.doc?.email) return; await ctx.platform.workflows.start('onboarding', { userId: event.id, email: event.doc.email, }); }, ); ``` The event handler stays a thin dispatcher; the workflow owns the multi-step, durable part (welcome email, 24h grace period, survey wait, follow-up). If the handler runs twice for the same event, make the workflow idempotent by including a stable id in the input and checking it on the first step. ## Options ```typescript defineWorkflow( { id: 'checkout', options: { retries: 3, // whole-run retry budget (default: 3) timeoutSecs: 600, // overall wall-clock deadline }, }, async (input, step, ctx) => { /* ... */ }, ); ``` - `retries` — how many times the whole run may restart when a step throws. Retries replay from the first incomplete step; completed steps are never re-executed. - `timeoutSecs` — hard wall-clock deadline for the whole run. Exceeding it marks the run `failed`. 
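The retry behavior described above (replay from the first incomplete step, completed steps never re-executed) can be modeled in a few lines. This is a toy ledger for intuition, not the platform's engine:

```typescript
// Toy model of replay memoization, illustrative only, not the platform's code.
// A ledger persists each named step's result; on a retry (replay), completed
// steps return their stored value instead of running again.
function makeStep(ledger: Map<string, unknown>) {
  return {
    run<T>(name: string, fn: () => T): T {
      if (ledger.has(name)) return ledger.get(name) as T; // replay: memoized
      const result = fn(); // first attempt: actually execute
      ledger.set(name, result); // persist before moving on
      return result;
    },
  };
}

const ledger = new Map<string, unknown>(); // survives "crashes" between attempts
let chargeCalls = 0;

function attempt() {
  const step = makeStep(ledger);
  const chargeId = step.run('charge-card', () => {
    chargeCalls += 1; // the side effect we must not repeat
    return 'ch_1';
  });
  step.run('record-charge', () => `recorded ${chargeId}`);
}

attempt(); // attempt 1: both steps execute
attempt(); // attempt 2: pure replay, nothing re-executes
console.log(chargeCalls); // 1
```

The ledger is why renaming a step orphans its prior result: the lookup key is the step name.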
## A Fuller Example — Order Fulfillment ```typescript // workflows/fulfillment.ts import { defineWorkflow } from '@maravilla-labs/platform/workflows'; interface FulfillmentInput { orderId: string; customerEmail: string; } export const fulfillOrder = defineWorkflow( { id: 'fulfill-order', options: { retries: 5, timeoutSecs: 7 * 24 * 3600 } }, async (input: FulfillmentInput, step, ctx) => { const order = await step.run('load-order', () => ctx.database.findOne('orders', { _id: input.orderId }), ); const charge = await step.run('charge-payment', async () => { return await chargeCard(order.customerId, order.total); }); const shipment = await step.invoke('arrange-shipment', 'shipping', { orderId: input.orderId, address: order.shippingAddress, }); await step.run('mark-shipped', () => ctx.database.updateOne( 'orders', { _id: input.orderId }, { $set: { status: 'shipped', trackingId: shipment.trackingId } }, ), ); await step.run('email-tracking', () => ctx.queue.send('emails', { to: input.customerEmail, template: 'shipped', trackingId: shipment.trackingId, }), ); const delivered = await step.waitForEvent('delivery-confirmation', { type: 'shipment.delivered', match: { trackingId: shipment.trackingId }, timeout: '14d', }); if (!delivered) { await step.run('open-support-ticket', () => openTicket(order.customerId, `No delivery after 14d for ${input.orderId}`), ); } return { chargeId: charge.id, trackingId: shipment.trackingId }; }, ); ``` Start it from a route handler: ```typescript const handle = await platform.workflows.start('fulfill-order', { orderId: 'ord_42', customerEmail: 'jane@example.com', }); return Response.json({ runId: handle.runId }); ``` The checkout endpoint returns immediately. The workflow charges the card, hands off to a child workflow, emails the customer, waits up to 14 days for the carrier event, and opens a support ticket if it times out — surviving any number of deploys along the way. 
## Best Practices - **Put side effects inside `step.run`.** Everything else (branches, loops, variable assignments) is safe at the top level. If you make a network call or read from a service outside `step.run`, it will run again on every replay. - **Use stable step names and workflow IDs.** The `id` on `defineWorkflow` and the `name` on each `step.*` call are the identity keys the platform uses to find prior work. Renaming either is equivalent to deleting the old and creating a new one. - **Keep `step.run` bodies short and idempotent where possible.** A retried run replays from the first incomplete step; a completed step's output is memoized. If `run` is long, break it up so you don't redo expensive work after a transient failure. - **Pass data through steps, not through closures.** Return values from `step.run` and use them downstream. Values captured in closures only survive within a single invocation. - **Reach for `step.waitForEvent` instead of polling.** A workflow parked on `waitForEvent` uses no resources. Polling with `step.sleep` in a loop works but wastes ledger space. - **Prefer `step.invoke` over deep nesting.** Large workflows become hard to reason about. Split them into a parent that orchestrates and children that do the work. ## Next Steps - [Event Handlers](/docs/event-handlers) — quick synchronous reactions to platform events - [KV Store](/docs/kv-store), [Database](/docs/database), [Storage](/docs/storage) — services available on `ctx` - [Realtime Channels](/docs/channels) — pub/sub used by `step.waitForEvent` signals --- # Realtime Channels Source: https://www.maravilla.cloud/docs/channels Section: Platform Description: Pub/sub channels, presence tracking, and WebSocket API for real-time applications Realtime Channels give your application bidirectional pub/sub messaging and presence tracking. Your server-side code publishes messages and manages presence; browsers connect via WebSocket and receive updates instantly. 
```typescript // Server-side: publish a message to a channel await platform.realtime.publish('chat:lobby', { text: 'Hello everyone!', }, { userId: 'alice' }); // Server-side: check who's online const members = await platform.realtime.presence.members('chat:lobby'); ``` ## How It Works 1. Your server-side code (SvelteKit, React Router, etc.) uses `platform.realtime` to publish messages and track presence 2. Browsers connect to `/_rt/ws` via WebSocket and subscribe to channels 3. When a message is published, all subscribed clients receive it instantly 4. Presence tracks which users are in which channels, with automatic heartbeat expiry Each project gets its own isolated set of channels. Messages are delivered reliably regardless of how many users are connected. ## Quick Start ### Server Side Publish messages from your server routes or API handlers: ```typescript import { getPlatform } from '@maravilla-labs/platform'; const platform = getPlatform(); // In your API route handler export async function POST({ request }) { const { text, roomId, userId } = await request.json(); // Publish to the channel — all connected browsers will receive this await platform.realtime.publish(`room:${roomId}`, { type: 'chat', text, userId, timestamp: Date.now(), }); return new Response(JSON.stringify({ ok: true })); } ``` ### Client Side Connect from the browser and subscribe to channels: ```typescript import { RealtimeClient } from '@maravilla-labs/platform'; const client = new RealtimeClient(); client.connect(); // Subscribe to a channel const unsubscribe = client.subscribe('room:general', (event) => { console.log(event.data.text); // "Hello everyone!" console.log(event.userId); // "alice" }); // Publish from the client (relayed through the server) client.publish('room:general', { text: 'Hi from browser!' }); // Later: clean up unsubscribe(); client.disconnect(); ``` ## Server-Side API Access realtime features through `platform.realtime` in your server-side code. 
### `publish(channel, data, options?)` Publishes a message to a channel. All connected clients subscribed to this channel will receive it. **Parameters:** - `channel` (string) -- the channel name (e.g. `chat:lobby`, `game:room-42`) - `data` (any) -- the message payload (serialized as JSON) - `options` (object, optional): - `userId` (string) -- identifies who sent the message ```typescript // Simple message await platform.realtime.publish('notifications', { title: 'New comment', body: 'Alice replied to your post', }); // Message with sender identity await platform.realtime.publish('chat:general', { text: 'Hello!', }, { userId: 'alice' }); ``` ### `channels()` Returns a list of currently active channel names for the current tenant. ```typescript const activeChannels = await platform.realtime.channels(); // ['chat:general', 'chat:support', 'game:room-1'] ``` ### `presence.join(channel, userId, metadata?)` Marks a user as present in a channel. Broadcasts a `presence.join` event to all subscribers. **Parameters:** - `channel` (string) -- the channel name - `userId` (string) -- unique user identifier - `metadata` (any, optional) -- arbitrary data attached to the user's presence (e.g. display name, avatar) **Returns:** `true` if this is a new join, `false` if the user was already present. ```typescript const isNew = await platform.realtime.presence.join('chat:lobby', 'alice', { name: 'Alice', avatar: '/avatars/alice.png', }); ``` ### `presence.update(channel, userId, metadata)` Updates the metadata for a user already present in a channel. Broadcasts a `presence.update` event without triggering a leave/join cycle. **Returns:** `true` if the user was present and updated, `false` if they weren't in the channel. ```typescript // Update a user's status or state await platform.realtime.presence.update('chat:lobby', 'alice', { name: 'Alice', status: 'away', }); ``` ### `presence.leave(channel, userId)` Removes a user from a channel's presence. 
Broadcasts a `presence.leave` event. **Returns:** `true` if the user was present, `false` if they weren't. ```typescript await platform.realtime.presence.leave('chat:lobby', 'alice'); ``` ### `presence.members(channel)` Returns all users currently present in a channel. Stale members (no heartbeat within 30 seconds) are automatically expired. **Returns:** Array of presence members. ```typescript const members = await platform.realtime.presence.members('chat:lobby'); // [ // { userId: 'alice', metadata: { name: 'Alice' }, lastSeen: 1710000000 }, // { userId: 'bob', metadata: null, lastSeen: 1710000005 } // ] ``` ## Client-Side API Import `RealtimeClient` from the platform package to connect via WebSocket. ### Creating a Client ```typescript import { RealtimeClient } from '@maravilla-labs/platform'; const client = new RealtimeClient({ debug: false, // enable console logging autoReconnect: true, // reconnect on disconnect (default: true) maxBackoffMs: 15000, // max reconnect delay (default: 15s) }); client.connect(); ``` The client auto-detects the correct WebSocket endpoint based on the runtime environment. ### `subscribe(channel, callback)` Subscribes to messages on a channel. Returns an unsubscribe function. ```typescript const unsubscribe = client.subscribe('chat:general', (event) => { console.log(event.event); // "realtime.message" console.log(event.channel); // "chat:general" console.log(event.data); // { text: "Hello!" } console.log(event.userId); // "alice" console.log(event.ts); // 1710000000000 }); // Stop listening unsubscribe(); ``` ### `publish(channel, data, options?)` Publishes a message through the WebSocket connection. ```typescript client.publish('chat:general', { text: 'Hello from browser!', }, { userId: 'bob' }); ``` ### `presence(channel)` Returns a presence handle for managing and observing channel presence. 
```typescript const presence = client.presence('chat:lobby'); // Join with metadata presence.join('alice', { name: 'Alice', status: 'online' }); // Update metadata in-place (no leave/join cycle) presence.update({ name: 'Alice', status: 'away' }); // Listen for others joining const offJoin = presence.onJoin((member) => { console.log(`${member.userId} joined`, member.metadata); }); // Listen for metadata updates const offUpdate = presence.onUpdate((member) => { console.log(`${member.userId} updated`, member.metadata); }); // Listen for others leaving const offLeave = presence.onLeave((member) => { console.log(`${member.userId} left`); }); // Leave when done presence.leave(); offJoin(); offUpdate(); offLeave(); ``` ### `onAny(callback)` Listens to all events across all subscribed channels: ```typescript client.onAny((event) => { console.log(`[${event.channel}] ${event.event}:`, event.data); }); ``` ### `isConnected()` Returns `true` if the WebSocket connection is currently open. ### `disconnect()` Closes the WebSocket connection and stops reconnecting. ## Presence Presence tracks which users are currently active in a channel. It works across both server and client: - **Server-side**: Call `presence.join()` when a user enters a page or starts an action. Call `presence.update()` to change their metadata (e.g. status, activity). Call `presence.leave()` when they leave. Query `presence.members()` to render an online users list. - **Client-side**: Use the `presence()` handle to join, update metadata, leave, and listen for join/update/leave events from others. ### Heartbeat Expiry Presence members are automatically expired if they haven't been seen within 30 seconds. This handles cases where a client disconnects without sending a leave message (e.g. browser crash, network loss). The WebSocket connection sends periodic pings that keep the presence alive. If the connection drops, the member is expired after the TTL. 
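The expiry rule can be summarized in a few lines. The 30-second TTL comes from the docs above, and the member shape mirrors the `presence.members()` examples, but the filter itself is illustrative, not the platform's implementation:

```typescript
// Toy sketch of heartbeat expiry: drop members whose last ping is older than the TTL.
const TTL_MS = 30_000; // 30-second presence TTL from the docs

interface Member {
  userId: string;
  metadata: unknown;
  lastSeen: number; // unix ms of the last heartbeat
}

function liveMembers(members: Member[], nowMs: number): Member[] {
  return members.filter((m) => nowMs - m.lastSeen <= TTL_MS);
}

const now = 1_710_000_060_000;
const roster: Member[] = [
  { userId: 'alice', metadata: { name: 'Alice' }, lastSeen: now - 5_000 }, // pinged 5s ago
  { userId: 'bob', metadata: null, lastSeen: now - 45_000 }, // silent for 45s, expired
];
console.log(liveMembers(roster, now).map((m) => m.userId)); // ['alice']
```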
## WebSocket Protocol Reference

### `GET /_rt/ws`

Upgrades to a WebSocket connection.

**Query Parameters:**

- `cid` -- client ID for correlation (auto-generated if omitted)
- `tenant` -- tenant identifier (auto-resolved in production)

### Client Messages

All messages are JSON objects with an `action` field:

```typescript
// Subscribe to a channel
{ "action": "subscribe", "channel": "chat:lobby" }

// Unsubscribe from a channel
{ "action": "unsubscribe", "channel": "chat:lobby" }

// Publish a message
{ "action": "publish", "channel": "chat:lobby", "data": { "text": "hello" }, "userId": "alice" }

// Join presence
{ "action": "presence:join", "channel": "chat:lobby", "userId": "alice", "metadata": { "name": "Alice" } }

// Update presence metadata
{ "action": "presence:update", "channel": "chat:lobby", "userId": "alice", "metadata": { "name": "Alice", "status": "away" } }

// Leave presence
{ "action": "presence:leave", "channel": "chat:lobby" }

// Keepalive
{ "action": "ping" }
```

### Server Messages

```typescript
// Connection established
{ "event": "connected", "data": { "clientId": "abc123" }, "ts": 1710000000000 }

// Subscription confirmed
{ "event": "subscribed", "channel": "chat:lobby", "ts": 1710000000000 }

// Channel message
{ "event": "realtime.message", "channel": "chat:lobby", "data": { "text": "hello" }, "userId": "alice", "ts": 1710000000000 }

// Presence events
{ "event": "presence.join", "channel": "chat:lobby", "userId": "alice", "metadata": { "name": "Alice" }, "ts": 1710000000000 }
{ "event": "presence.update", "channel": "chat:lobby", "userId": "alice", "metadata": { "name": "Alice", "status": "away" }, "ts": 1710000000000 }
{ "event": "presence.leave", "channel": "chat:lobby", "userId": "alice", "ts": 1710000000000 }

// Keepalive response
{ "event": "pong", "ts": 1710000000000 }
```

## Example: Live Chat

A complete chat feature using SvelteKit with server-side message handling and client-side real-time updates.
### Server Route

```typescript
// src/routes/api/chat/[room]/+server.ts
import { getPlatform } from '@maravilla-labs/platform';

const platform = getPlatform();

export async function POST({ params, request }) {
  const { text, userId, userName } = await request.json();

  // Store in database
  await platform.DB.messages.insertOne({
    room: params.room,
    text,
    userId,
    userName,
    createdAt: Date.now(),
  });

  // Broadcast to channel -- all connected clients see it instantly
  await platform.realtime.publish(`chat:${params.room}`, {
    type: 'message',
    text,
    userId,
    userName,
    createdAt: Date.now(),
  });

  return new Response(JSON.stringify({ ok: true }));
}

export async function GET({ params }) {
  const messages = await platform.DB.messages.find(
    { room: params.room },
    { limit: 50, sort: { createdAt: -1 } }
  );
  const members = await platform.realtime.presence.members(`chat:${params.room}`);
  return new Response(JSON.stringify({ messages, members }));
}
```

### Server Presence Hook

```typescript
// src/routes/api/chat/[room]/join/+server.ts
import { getPlatform } from '@maravilla-labs/platform';

const platform = getPlatform();

export async function POST({ params, request }) {
  const { userId, userName } = await request.json();
  await platform.realtime.presence.join(`chat:${params.room}`, userId, {
    name: userName,
  });
  return new Response(JSON.stringify({ ok: true }));
}
```

### Client Component

```typescript
// src/lib/Chat.svelte (script section)
import { RealtimeClient } from '@maravilla-labs/platform';
import { onMount, onDestroy } from 'svelte';

let messages = $state([]);
let onlineMembers = $state([]);
let input = $state('');

const roomId = 'general';
const client = new RealtimeClient();

onMount(async () => {
  // Load initial messages (GET returns newest first; display oldest first)
  const res = await fetch(`/api/chat/${roomId}`);
  const data = await res.json();
  messages = data.messages.reverse();
  onlineMembers = data.members;

  // Connect and subscribe
  client.connect();
  client.subscribe(`chat:${roomId}`, (event) => {
    if (event.data?.type === 'message') {
      messages = [...messages, event.data];
    }
  });

  // Track presence
  const presence = client.presence(`chat:${roomId}`);
  presence.onJoin((member) => {
    onlineMembers = [...onlineMembers, member];
  });
  presence.onLeave((member) => {
    onlineMembers = onlineMembers.filter(m => m.userId !== member.userId);
  });

  // Join presence via server
  await fetch(`/api/chat/${roomId}/join`, {
    method: 'POST',
    body: JSON.stringify({ userId: 'current-user', userName: 'You' }),
  });
});

onDestroy(() => {
  client.disconnect();
});

async function sendMessage() {
  if (!input.trim()) return;
  await fetch(`/api/chat/${roomId}`, {
    method: 'POST',
    body: JSON.stringify({ text: input, userId: 'current-user', userName: 'You' }),
  });
  input = '';
}
```

## Rich Messages

The `data` field in every message is arbitrary JSON -- you define your own message types. For images, files, or other binary content, upload via [Storage](/docs/storage) first, then publish a message with the URL reference.

```typescript
// Server-side: handle image upload and broadcast
export async function POST({ params, request }) {
  const platform = getPlatform();
  const formData = await request.formData();
  const file = formData.get('image');
  const userId = formData.get('userId');

  // Upload to storage
  const key = `chat/${params.room}/${Date.now()}.jpg`;
  await platform.storage.put(key, await file.arrayBuffer(), {
    contentType: file.type,
  });
  const url = await platform.storage.generateDownloadUrl(key);

  // Publish rich message -- any JSON structure you want
  await platform.realtime.publish(`chat:${params.room}`, {
    type: 'image',
    url,
    caption: formData.get('caption') || '',
    sender: userId,
  });

  return new Response(JSON.stringify({ ok: true }));
}
```

Common message type patterns:

```typescript
// Text
{ type: 'text', text: 'Hello!', sender: 'alice' }

// Image
{ type: 'image', url: 'https://...', caption: 'Check this out', sender: 'alice' }

// File attachment
{ type: 'file', url: 'https://...', name: 'report.pdf', size: 12345, sender: 'bob' }

// Reaction
{ type: 'reaction', emoji: '👍', targetId: 'msg-123', sender: 'alice' }

// Typing indicator
{ type: 'typing', sender: 'bob' }
```

## Private Channels

Channels with a `private-` prefix require a token to subscribe or publish. Public channels (any other name) are open to all.

### Server-side: generate a token

```typescript
const platform = getPlatform();

// Generate a token that lets alice subscribe and publish to this channel
const token = await platform.realtime.createChannelToken('private-room:vip', {
  userId: 'alice',
  permissions: ['subscribe', 'publish'],
  expiresIn: 3600, // 1 hour
});

// Return token to the client
return new Response(JSON.stringify({ token }));
```

### Client-side: pass the token

```typescript
const client = new RealtimeClient();
client.connect();

// Fetch token from your server
const { token } = await fetch('/api/chat/vip/token').then(r => r.json());

// Subscribe with the token
client.subscribe('private-room:vip', (event) => {
  console.log(event.data);
}, { token });
```

Without a valid token, subscribing to a `private-` channel returns an error event:

```json
{ "event": "error", "channel": "private-room:vip", "data": { "code": "unauthorized" } }
```

## Message History

When history is enabled, channel messages are persisted and can be replayed. Useful for chat catch-up, audit trails, or reconnecting clients.
### Server-side: query history

```typescript
const platform = getPlatform();

// Get the last 50 messages from a channel
const messages = await platform.realtime.history('chat:general', {
  limit: 50,
});

// Get messages after a specific sequence number (for pagination)
const older = await platform.realtime.history('chat:general', {
  limit: 20,
  after: lastSeenSeq,
});
```

### Client-side: catch-up on reconnect

When subscribing with a `lastEventId`, the server replays missed messages before delivering live events:

```typescript
// Store the last sequence number you received
let lastSeq = 0;

client.subscribe('chat:general', (event) => {
  if (event.data?.seq) lastSeq = event.data.seq;
  renderMessage(event.data);
}, { lastEventId: lastSeq });
```

History is stored in the platform database (same as `platform.DB`) and automatically trimmed to the most recent 1,000 messages per channel.

## Development vs. Production

|  | Development | Production |
|---|---|---|
| **WebSocket endpoint** | `ws://localhost:3001/_rt/ws` | `wss://yourapp.maravilla.page/_rt/ws` |
| **Detection** | Automatic (Vite port 5173 connects to dev server port 3001) | Automatic (relative URL) |
| **Setup** | Works out of the box with `maravilla dev` | Works out of the box on Maravilla Cloud |

The `RealtimeClient` auto-detects the correct endpoint. Your code works the same in both environments.

## Limits

These limits are enforced automatically. Exceeding them returns an error event to the client.
|  | Free | Builder | Pro | Enterprise |
|---|---|---|---|---|
| **Max channels** | 10 | 50 | 200 | Unlimited |
| **Max message size** | 64 KB | 128 KB | 512 KB | 1 MB |
| **Publishes per minute** | 500 | 5,000 | 50,000 | Unlimited |
| **Message history (messages)** | 100 | 1,000 | 1,000 | 1,000 |

## Next Steps

- [Real-Time Events (REN)](/docs/realtime) -- SSE-based resource change notifications (KV, DB, storage)
- [KV Store](/docs/kv-store) -- store chat messages, user profiles, or session data
- [Database](/docs/database) -- persist messages with full query support

---

# Media Rooms

Source: https://www.maravilla.cloud/docs/media
Section: Platform
Description: Video and audio room management and token generation API

Media Rooms let you add video and audio calling to your application. Your server creates rooms and generates access tokens; clients connect directly to the media server using any WebRTC-compatible library.

```typescript
import { getPlatform } from '@maravilla-labs/platform';

const platform = getPlatform();

// Create a room and generate a token for a participant
const room = await platform.media.createRoom('standup', { maxParticipants: 10 });
const { token, url } = await platform.media.generateToken('standup', {
  identity: 'alice',
  name: 'Alice',
});
```

## How It Works

1. Your server-side code uses `platform.media.generateToken()` to create a token -- this is where you check if the user is allowed to join
2. The browser receives the token from your server and connects directly to the SFU (Selective Forwarding Unit) -- no manual WebRTC signaling needed
3. The SFU handles all media routing: video, audio, and screen sharing between participants

Each project gets its own isolated set of rooms. Tokens are scoped to a specific room and participant identity. Token generation always happens server-side so you control who can join which rooms.
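Step 1 is where your access control lives. Beyond authentication, you may want to refuse tokens for rooms that are already full. A small pure helper over the `listRooms()` snapshot — a sketch of our own code, not platform API, with an illustrative capacity number:

```typescript
// Room shape as returned by platform.media.listRooms()
type RoomInfo = { roomId: string; numParticipants: number; createdAt: number };

// Decide whether one more participant fits, given a listRooms() snapshot.
// An unknown room is fine: createRoom() will create it on the first join.
function hasCapacity(rooms: RoomInfo[], roomId: string, max: number): boolean {
  const room = rooms.find((r) => r.roomId === roomId);
  if (!room) return true;
  return room.numParticipants < max;
}
```

In a join route, call `hasCapacity(await platform.media.listRooms(), roomId, 10)` before `generateToken()` and return a 403 when it is false. The SFU also enforces the room's `maxParticipants` setting; this check just fails fast with a friendlier error.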
| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/_rt/ws` | GET | WebSocket for realtime channels |
| `/_rt/rooms` | GET | List active media rooms |

## Server-Side API

Access media features through `platform.media` in your server-side code.

### `createRoom(roomId, settings?)`

Creates a new media room. If a room with the same ID already exists, returns the existing room.

**Parameters:**

- `roomId` (string) -- unique room identifier (e.g. `standup`, `interview-42`)
- `settings` (object, optional):
  - `maxParticipants` (number) -- maximum number of concurrent participants (default: 20)
  - `emptyTimeout` (number) -- seconds before an empty room is automatically deleted (default: 300)

**Returns:** `{ roomId, maxParticipants, emptyTimeout, createdAt }`

```typescript
// Create a room with defaults
const room = await platform.media.createRoom('team-standup');

// Create a room with limits
const room = await platform.media.createRoom('webinar', {
  maxParticipants: 100,
  emptyTimeout: 600,
});
```

### `deleteRoom(roomId)`

Deletes a room and disconnects all participants. No error is thrown if the room does not exist.

**Parameters:**

- `roomId` (string) -- the room to delete

```typescript
await platform.media.deleteRoom('team-standup');
```

### `listRooms()`

Returns all active rooms for the current project.

**Returns:** Array of room objects with `roomId`, `numParticipants`, and `createdAt`.

```typescript
const rooms = await platform.media.listRooms();
// [
//   { roomId: 'standup', numParticipants: 3, createdAt: 1710000000 },
//   { roomId: 'interview-42', numParticipants: 2, createdAt: 1710000500 }
// ]
```

### `generateToken(roomId, participant)`

Generates an access token that allows a participant to join a specific room. Tokens are short-lived and scoped to one room.

**Parameters:**

- `roomId` (string) -- the room the participant will join
- `participant` (object):
  - `identity` (string) -- unique participant identifier (e.g. user ID)
  - `name` (string) -- display name shown to other participants
  - `canPublish` (boolean, optional) -- whether the participant can publish audio/video (default: true)
  - `canSubscribe` (boolean, optional) -- whether the participant can receive others' media (default: true)

**Returns:** `{ token, url }`

- `token` -- the access token to pass to the client
- `url` -- the media server WebSocket URL

```typescript
// Full participant (can send and receive)
const { token, url } = await platform.media.generateToken('standup', {
  identity: 'alice',
  name: 'Alice',
});

// View-only participant (can only receive)
const { token, url } = await platform.media.generateToken('webinar', {
  identity: 'viewer-bob',
  name: 'Bob',
  canPublish: false,
});
```

### `mediaUrl()`

Returns the media server URL for the current environment. Useful when you need the URL separately from token generation.

**Returns:** `string` -- the media server WebSocket URL.

```typescript
const url = await platform.media.mediaUrl();
// Development: "ws://localhost:7880"
// Production: "wss://media.yourapp.maravilla.page"
```

## Client-Side Integration

Use `MediaRoom` from the platform package to connect to rooms in the browser:

```typescript
import { MediaRoom, MediaRoomEvent, attachTrack } from '@maravilla-labs/platform';

const room = new MediaRoom();

room.on(MediaRoomEvent.TrackSubscribed, (track, participant) => {
  // Attach remote audio/video to the DOM
  const el = document.createElement(track.kind === 'video' ? 'video' : 'audio');
  attachTrack(track, el);
  document.getElementById('videos').appendChild(el);
});

room.on(MediaRoomEvent.ParticipantLeft, (participant) => {
  console.log(`${participant.identity} left the room`);
});

// Connect with the token and URL from your server
await room.connect(url, token);

// Publish your camera and microphone
await room.localParticipant.enableCamera();
await room.localParticipant.enableMicrophone();
```

### Device Selection

Let users pick their camera, microphone, or speaker:

```typescript
// List available devices
const cameras = await MediaRoom.getCameras();
const mics = await MediaRoom.getMicrophones();
const speakers = await MediaRoom.getSpeakers();

// Switch to a specific device
await room.switchCamera(cameras[1].deviceId);
await room.switchMicrophone(mics[0].deviceId);
await room.switchSpeaker(speakers[0].deviceId);

// Enable camera with a specific device
await room.localParticipant.enableCamera({ deviceId: cameras[0].deviceId });

// Enable camera with custom resolution
await room.localParticipant.enableCamera({
  resolution: { width: 1280, height: 720, frameRate: 30 },
});
```

### Screen Sharing

```typescript
// Start screen share (optionally with audio)
await room.localParticipant.enableScreenShare({ audio: true });

// Stop screen share
await room.localParticipant.disableScreenShare();
```

### All Events

```typescript
room.on(MediaRoomEvent.Connected, () => { });
room.on(MediaRoomEvent.Reconnecting, () => { });
room.on(MediaRoomEvent.Reconnected, () => { });
room.on(MediaRoomEvent.Disconnected, (reason) => { });
room.on(MediaRoomEvent.ParticipantJoined, (participant) => { });
room.on(MediaRoomEvent.ParticipantLeft, (participant) => { });
room.on(MediaRoomEvent.TrackSubscribed, (track, participant) => { });
room.on(MediaRoomEvent.TrackUnsubscribed, (track, participant) => { });
room.on(MediaRoomEvent.TrackMuted, (participant) => { });
room.on(MediaRoomEvent.TrackUnmuted, (participant) => { });
room.on(MediaRoomEvent.ActiveSpeakersChanged, (speakers) => { });
room.on(MediaRoomEvent.DataReceived, (data, participant) => { });
room.on(MediaRoomEvent.RecordingStatusChanged, (isRecording) => { });
room.on(MediaRoomEvent.MediaDevicesChanged, () => { });
```

## Example: Video Call

A complete video call using SvelteKit. The server route handles auth and generates tokens; the client connects and manages media.

### Server Route

```typescript
// src/routes/api/rooms/[roomId]/join/+server.ts
import { json, error } from '@sveltejs/kit';
import { getPlatform } from '@maravilla-labs/platform';

const platform = getPlatform();

export async function POST({ params, request, locals }) {
  // Your auth logic -- control who can join
  if (!locals.user) throw error(401, 'Not authenticated');

  // Create room if needed + generate token
  await platform.media.createRoom(params.roomId, { maxParticipants: 10 });
  const { token, url } = await platform.media.generateToken(params.roomId, {
    identity: locals.user.id,
    name: locals.user.name,
  });

  return json({ token, url });
}
```

### Client Page

```typescript
// src/routes/call/[roomId]/+page.svelte (script section)
import { MediaRoom, MediaRoomEvent, attachTrack, detachTrack } from '@maravilla-labs/platform';
import { onMount, onDestroy } from 'svelte';
import { page } from '$app/stores';

const room = new MediaRoom();

onMount(async () => {
  const roomId = $page.params.roomId;

  // Get token from your server (which handles auth)
  const { token, url } = await fetch(`/api/rooms/${roomId}/join`, {
    method: 'POST',
  }).then(r => r.json());

  // Handle remote tracks
  room.on(MediaRoomEvent.TrackSubscribed, (track, participant) => {
    const el = document.createElement(track.kind === 'video' ? 'video' : 'audio');
    el.dataset.participantId = participant.identity;
    attachTrack(track, el);
    document.getElementById('remote-videos').appendChild(el);
  });

  room.on(MediaRoomEvent.TrackUnsubscribed, (track, participant) => {
    document
      .querySelectorAll(`[data-participant-id="${participant.identity}"]`)
      .forEach((el) => el.remove());
  });

  room.on(MediaRoomEvent.ParticipantLeft, (participant) => {
    document
      .querySelectorAll(`[data-participant-id="${participant.identity}"]`)
      .forEach((el) => el.remove());
  });

  // Connect and publish local media
  await room.connect(url, token);
  await room.localParticipant.enableCamera();
  await room.localParticipant.enableMicrophone();
});

onDestroy(() => {
  room.disconnect();
});
```

## Development vs. Production

|  | Development | Production |
|---|---|---|
| **Media server** | `ws://localhost:7880` | `wss://media.yourapp.maravilla.page` |
| **Detection** | Automatic (dev server routes to local media server) | Automatic (token includes production URL) |
| **Setup** | Works out of the box with `maravilla dev` | Works out of the box on Maravilla Cloud |

The `generateToken` response always includes the correct `url` for the current environment. Your code works the same in both.
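The track handlers in the video-call example above mutate the DOM directly. If you prefer testable state, the same bookkeeping works as pure data — our own helpers, not SDK API, driven from `TrackSubscribed`, `TrackUnsubscribed`, and `ParticipantLeft`:

```typescript
type TrackKind = 'video' | 'audio';
type Tile = { participantId: string; kind: TrackKind };

// TrackSubscribed: one new tile per remote track
function addTile(tiles: Tile[], participantId: string, kind: TrackKind): Tile[] {
  return [...tiles, { participantId, kind }];
}

// TrackUnsubscribed / ParticipantLeft: drop every tile for that participant,
// mirroring the querySelectorAll cleanup in the example above
function dropParticipant(tiles: Tile[], participantId: string): Tile[] {
  return tiles.filter((t) => t.participantId !== participantId);
}
```

In a Svelte component, keep `tiles` in `$state` and render elements from it instead of appending nodes by hand; the event handlers then shrink to one-line state updates.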
## Limits

|  | Free | Builder | Pro | Enterprise |
|---|---|---|---|---|
| **Max rooms** | 5 | 20 | 100 | Unlimited |
| **Max participants per room** | 10 | 25 | 50 | Unlimited |
| **Media feature** | -- | Included | Included | Included |

## Next Steps

- [Realtime Channels](/docs/channels) -- pub/sub messaging, presence tracking, and WebSocket API
- [KV Store](/docs/kv-store) -- store participant metadata, room state, or chat history
- [Database](/docs/database) -- persist call logs and room metadata

---

# Media Transforms

Source: https://www.maravilla.cloud/docs/media-transforms
Section: Platform
Description: Transcode video, extract thumbnails, resize images, OCR, convert and template documents from any uploaded file

Media Transforms turn any file the user uploads into the shape you actually want to serve. Normalize a wobbly phone-recorded webm into a clean mp4, pull a poster frame out of it, resize an oversized photo down to a thumbnail variant, extract text from a scanned PDF, convert a Word doc to PDF, render a single-file HTML page for an email, or fill a `.docx` template with the user's logo. Your code asks for the transform; the platform queues, runs, and stores the result.

In development, the CLI processes files locally. In production, Maravilla Cloud handles everything. Your code works identically in both environments.

```typescript
import { getPlatform } from '@maravilla-labs/platform';

const platform = getPlatform();

// Extract a poster frame one second into the video
const job = await platform.media.transforms.thumbnail('uploads/videos/clip.webm', {
  at: '1s',
  width: 640,
  format: 'jpg',
});

// Render the derived file at `/api/v/{job.outputKey}` -- no round-trip needed.
```

## How It Works

1. Your server (or a handler running in response to an upload) calls a transform method like `transcode` or `thumbnail`. The call returns immediately with a **JobHandle** -- no blocking on a multi-minute encode.
2. The job is queued and processed in the background. The derived file's storage key is deterministic -- computed from the source key and the options -- so you can render the final URL the moment you enqueue the job.
3. Clients subscribe to lifecycle events (`transform.queued`, `transform.started`, `transform.progress`, `transform.complete`, `transform.failed`) via [Real-Time Events](/docs/realtime) and update the UI as work progresses.
4. The output lands in Storage under the `__derived/` prefix, served by the same `/api/v/...` route as any other file.

There are two entry points: the explicit `platform.media.transforms.*` API in any server code, and a declarative [`transforms` block](#declarative-config) in your `maravilla.config.ts` that reacts to new uploads automatically.

## Transform Operations

### `transcode(srcKey, opts)`

Re-encode a video into a consistent container and codec. Use this to normalize the output of mobile browsers (which write whatever their OS supports) into something every device plays.

**Parameters:**

- `srcKey` (string) -- storage key of the source video
- `opts` (object):
  - `format` (`'mp4' | 'webm'`) -- target container
  - `codec` (string, optional) -- specific video codec (e.g. `'h264'`, `'vp9'`)
  - `maxWidth` (number, optional) -- cap on output width; aspect ratio preserved
  - `maxHeight` (number, optional) -- cap on output height
  - `audioCodec` (string, optional) -- `'aac'`, `'opus'`, etc.
  - `bitrateKbps` (number, optional) -- target bitrate

**Returns:** `JobHandle` -- `{ id, srcKey, outputKey, status }`

```typescript
const job = await platform.media.transforms.transcode('uploads/videos/raw.webm', {
  format: 'mp4',
  maxWidth: 1080,
});

// Serve the result at `/api/v/${job.outputKey}` as soon as `transform.complete` fires.
```

### `thumbnail(srcKey, opts)`

Extract a single frame from a video as an image. Perfect for poster frames in `