---
url: /reference/actions.md
description: >-
  Action class definition, transport behavior across HTTP/WebSocket/CLI/tasks,
  type helpers, and middleware.
---

# Action

Source: `backend/classes/Action.ts`

The `Action` class is the foundation of Keryx. Every controller — whether it handles HTTP, WebSocket, CLI, or background tasks — is an action. You write the logic once, and the framework handles the transport plumbing.

## Class Definition

```ts
abstract class Action {
  /** Unique identifier, e.g. "user:create" */
  name: string;

  /** Human-readable description — shows up in CLI --help and Swagger */
  description?: string;

  /** Zod schema for input validation */
  inputs?: z.ZodType;

  /** Middleware to run before/after this action */
  middleware?: ActionMiddleware[];

  /** HTTP routing — route can be a string with :params or a RegExp */
  web?: {
    route: RegExp | string;
    method: HTTP_METHOD;
  };

  /** Background task config — queue is required, frequency makes it recurring */
  task?: {
    frequency?: number;
    queue: string;
  };

  /**
   * The handler. Return data to send to the client.
   * Throw TypedError for error responses.
   */
  abstract run(
    params: ActionParams,
    connection?: Connection,
  ): Promise<any>;
}
```

## How Actions Work Across Transports

This is the core idea. You define an action once — its name, inputs, and `run()` method — and the framework routes it through every transport automatically. The same Zod validation, the same middleware chain, the same `run()` method, the same response shape. The only thing that changes is how the request arrives and how the response is delivered.

Here's what a single action looks like from each transport:

### HTTP

Add a `web` property to expose an action as an HTTP endpoint. The web server matches incoming requests by route and method, extracts params from the URL path, query string, and request body, validates them against the Zod schema, and calls `run()`.

```ts
export class UserCreate implements Action {
  name = "user:create";
  web = { route: "/user", method: HTTP_METHOD.PUT };
  inputs = z.object({
    name: z.string().min(3),
    email: z.string().email(),
    password: secret(z.string().min(8)),
  });

  async run(params: ActionParams) {
    // ...
    return { user: serializeUser(user) };
  }
}
```

```bash
curl -X PUT http://localhost:8080/api/user \
  -H "Content-Type: application/json" \
  -d '{"name":"Evan","email":"evan@example.com","password":"secret123"}'
# → { "user": { "id": 1, "name": "Evan", ... } }
```

Params are loaded in this order (later sources override earlier ones): path params → URL query params → request body. Routes support `:param` path parameters (`/user/:id`) and RegExp patterns.

### WebSocket

WebSocket clients send JSON messages with `messageType: "action"`, the action name, and params. The server finds the matching action, validates params through the same Zod schema, and sends the response back over the socket.

```json
// Client sends:
{
  "messageType": "action",
  "action": "user:create",
  "messageId": "abc-123",
  "params": {
    "name": "Evan",
    "email": "evan@example.com",
    "password": "secret123"
  }
}

// Server responds:
{
  "messageId": "abc-123",
  "response": { "user": { "id": 1, "name": "Evan" } }
}
```

The `messageId` is echoed back so the client can match responses to requests. WebSocket connections are long-lived — they maintain session state and can subscribe to [channels](/guide/channels) for real-time PubSub.

### CLI

Every action is automatically registered as a CLI command via [Commander](https://github.com/tj/commander.js).
The Zod schema's field names become `--flags`, descriptions become help text, and required vs optional fields are enforced.

```bash
./keryx.ts "user:create" \
  --name Evan \
  --email evan@example.com \
  --password secret123 \
  -q | jq
# → { "response": { "user": { "id": 1, "name": "Evan", ... } } }
```

The `-q` flag suppresses server logs so you get clean JSON output. Use `--help` on any action to see its params:

```bash
./keryx.ts "user:create" --help
```

The server boots in `CLI` mode — initializers that don't apply (like the web server) are skipped based on their `runModes` setting.

### Background Tasks

Add a `task` property to schedule an action as a background job. The Resque worker calls `run()` with the same params and validation — the action doesn't know or care whether it was triggered by HTTP, a cron schedule, or a fan-out parent.

```ts
export class MessagesCleanup implements Action {
  name = "messages:cleanup";
  task = { queue: "default", frequency: 1000 * 60 * 60 }; // hourly
  inputs = z.object({
    age: z.coerce.number().default(1000 * 60 * 60 * 24),
  });

  async run(params: ActionParams) {
    // same run() — called by the task worker, not by HTTP
    return { messagesDeleted: deleted.length };
  }
}
```

See [Tasks](/guide/tasks) for fan-out patterns and queue configuration.

## HTTP\_METHOD

```ts
enum HTTP_METHOD {
  GET = "GET",
  POST = "POST",
  PUT = "PUT",
  DELETE = "DELETE",
  PATCH = "PATCH",
  OPTIONS = "OPTIONS",
}
```

## Type Helpers

These two types are used throughout the codebase — in actions, tests, ops, and on the frontend:

```ts
/** Infers the validated input type from an action's Zod schema */
type ActionParams<A extends Action> = A["inputs"] extends z.ZodType
  ? z.infer<A["inputs"]>
  : Record<string, never>;

/** Infers the return type of an action's run() method */
type ActionResponse<A extends Action> = Awaited<ReturnType<A["run"]>> &
  Partial<{ error?: TypedError }>;
```

`ActionResponse` includes an optional `error` field because the framework catches `TypedError` throws and adds them to the response automatically. The frontend imports `ActionResponse` to get type-safe API responses without any code generation step.

## ActionMiddleware

Middleware intercepts action execution. Both methods are optional — you can have auth-only middleware (just `runBefore`) or logging-only middleware (just `runAfter`):

```ts
type ActionMiddleware = {
  runBefore?: (
    params: ActionParams,
    connection: Connection,
  ) => Promise<ActionMiddlewareResponse | void>;
  runAfter?: (
    params: ActionParams,
    connection: Connection,
  ) => Promise<ActionMiddlewareResponse | void>;
};

type ActionMiddlewareResponse = {
  /** Replace the params before the action runs */
  updatedParams?: ActionParams;
  /** Replace the response after the action runs */
  updatedResponse?: any;
};
```

Throw from `runBefore` to halt execution — the action's `run()` method won't be called. Return `updatedParams` or `updatedResponse` to modify the data flowing through the pipeline.

Middleware runs in the same order regardless of transport. HTTP, WebSocket, CLI, tasks — same middleware chain, same behavior.

---

---
url: /guide/actions.md
description: >-
  Actions are the universal controller — one class handles HTTP, WebSocket,
  CLI, background tasks, and MCP.
---

# Actions

If there's one idea that defines Keryx, it's this: **actions are the universal controller**.

In the original ActionHero, we had actions, tasks, and CLI commands as separate concepts. That always felt like unnecessary duplication — you'd write the same validation logic three times for three different entry points. So in this version, we've collapsed them all into one thing.
An action is a class with a `name`, a Zod schema for `inputs`, and a `run()` method that returns data. You add a `web` property to make it an HTTP endpoint. You add a `task` property to make it a background job. CLI support comes for free. MCP tool exposure comes for free. Same validation, same error handling, same response shape — everywhere. ## A Simple Example ```ts import { z } from "zod"; import { Action, api } from "../api"; import { HTTP_METHOD } from "../classes/Action"; export class Status implements Action { name = "status"; description = "Return the status of the server"; inputs = z.object({}); web = { route: "/status", method: HTTP_METHOD.GET }; async run() { return { name: api.process.name, uptime: new Date().getTime() - api.bootTime, }; } } ``` That's a fully functioning HTTP endpoint, CLI command, and WebSocket handler. Hit `GET /api/status` from a browser, run `./keryx.ts status -q | jq` from the terminal, or send `{ action: "status" }` over a WebSocket — same action, same response. ## Properties | Property | Type | What it does | | ------------- | ----------------------- | ------------------------------------------------------------------------------ | | `name` | `string` | Unique identifier (e.g., `"user:create"`) | | `description` | `string` | Human-readable description, shows up in CLI `--help` and Swagger | | `inputs` | `z.ZodType` | Zod schema — validation happens automatically | | `web` | `{ route, method }` | HTTP routing. Routes are strings with `:param` placeholders or RegExp patterns | | `task` | `{ queue, frequency? }` | Makes this action schedulable as a background job | | `middleware` | `ActionMiddleware[]` | Runs before/after the action (auth, logging, etc.) | | `mcp` | `McpActionConfig` | Controls MCP tool exposure (default: enabled) | ## Input Validation Inputs use [Zod](https://zod.dev) schemas. If validation fails, the client gets a `422` with the validation errors — you don't need to write any error handling for bad inputs. ```ts inputs = z.object({ name: z.string().min(3).max(256), email: z .string() .email() .transform((val) => val.toLowerCase()), password: secret(z.string().min(8)), }); ``` ### Secret Fields You can mark sensitive fields with the `secret()` wrapper so they're redacted as `[[secret]]` in logs. Don't log passwords — use this: ```ts import { secret } from "../util/zodMixins"; inputs = z.object({ password: secret(z.string().min(8)), }); ``` ### Type Helpers Two type helpers make your life easier: * `ActionParams` infers the validated input type from an action's Zod schema * `ActionResponse` infers the return type of an action's `run()` method ```ts async run(params: ActionParams) { // params.name, params.email, params.password — all typed } ``` The frontend uses `ActionResponse` to get type-safe API responses without any code generation. ## Web Routes Add a `web` property to expose an action as an HTTP endpoint: ```ts web = { route: "/user/:id", method: HTTP_METHOD.GET }; ``` Routes support `:param` path parameters (like Express) and can also be RegExp patterns. There's no separate `routes.ts` file — the route lives on the action itself, right next to the handler that serves it. Available methods: `GET`, `POST`, `PUT`, `DELETE`, `PATCH`, `OPTIONS`. ## CLI Commands Every action is automatically available as a CLI command. No extra configuration needed: ```bash ./keryx.ts "user:create" --name evan --email "evan@example.com" --password secret -q | jq ``` The `-q` flag suppresses server logs so you can pipe the JSON output cleanly. 
Use `--help` on any action to see its parameters.

## MCP Tools

When the MCP server is enabled, every action is automatically exposed as an [MCP](https://modelcontextprotocol.io) tool. AI agents can discover and call your actions through the Model Context Protocol — no extra configuration needed.

To exclude an action from MCP, set `mcp = { enabled: false }`. See the [MCP guide](/guide/mcp) for full details on authentication, schema conversion, and configuration.

## Task Scheduling

Add a `task` property to schedule an action as a recurring background job:

```ts
task = { queue: "default", frequency: 1000 * 60 * 60 }; // every hour
```

* `queue` — which Resque queue to use
* `frequency` — optional interval in ms for recurring execution

See [Tasks](/guide/tasks) for the full story on background processing and the fan-out pattern.

## Error Handling

Actions should throw `TypedError` for errors — not generic `Error`. Each error type maps to an HTTP status code:

```ts
import { ErrorType, TypedError } from "../classes/TypedError";

throw new TypedError({
  message: "User not found",
  type: ErrorType.CONNECTION_ACTION_RUN, // → 500
});
```

Some common mappings: `ACTION_VALIDATION` → 422, `CONNECTION_SESSION_NOT_FOUND` → 401, `CONNECTION_ACTION_NOT_FOUND` → 404.

## Registration

New actions need to be re-exported from `backend/actions/.index.ts`. This is how the frontend gets type information about your API — it imports from that barrel file to power `ActionResponse` on the client side.

---

---
url: /guide/tasks.md
description: >-
  Background tasks with Resque workers and the fan-out pattern for distributing
  work across child jobs.
---

# Background Tasks

One of the things I've always loved about ActionHero is that background tasks are a first-class citizen — not a plugin, not a separate service, just part of the framework. Keryx keeps that tradition, using [node-resque](https://github.com/actionhero/node-resque) for job processing backed by Redis.

The key difference from the original ActionHero: tasks and actions are the same thing now. Any action can be scheduled as a background job by adding a `task` property. Same inputs, same validation, same `run()` method.

## Defining a Task

```ts
export class MessagesCleanup implements Action {
  name = "messages:cleanup";
  description = "Cleanup messages older than 24 hours";
  task = { queue: "default", frequency: 1000 * 60 * 60 }; // every hour
  inputs = z.object({
    age: z.coerce
      .number()
      .int()
      .min(1000)
      .default(1000 * 60 * 60 * 24),
  });

  async run(params: ActionParams) {
    const deleted = await api.db.db
      .delete(messages)
      .where(lt(messages.createdAt, new Date(Date.now() - params.age)))
      .returning();

    return { messagesDeleted: deleted.length };
  }
}
```

* **`queue`** — which Resque queue to put this job on (required)
* **`frequency`** — how often to run it, in milliseconds (optional — omit for one-shot tasks)

You can also run this same action from the CLI (`./keryx.ts "messages:cleanup" --age 3600000 -q`) or hit it via HTTP if you add a `web` property. It's all the same code.

## Queue Priority

Workers drain queues left-to-right. This matters when you want some jobs to take priority:

```ts
// In config/tasks.ts
queues: ["worker", "scheduler"];
// Jobs on "worker" are processed before "scheduler"
```

Use `["*"]` to process all queues with equal priority. That said, for fan-out patterns (see below), you'll probably want to separate parent tasks from child tasks so the children get processed first.

## Fan-Out Pattern

This is one of my favorite features.
A parent task can distribute work across many child jobs for parallel processing using `api.actions.fanOut()`. Think "process all users" where you fan out to individual "process one user" jobs. ### Single Action Fan-Out The simple case — bulk-enqueue the same action with different inputs: ```ts export class ProcessAllUsers implements Action { name = "users:processAll"; task = { frequency: 1000 * 60 * 60, queue: "scheduler" }; async run() { const users = await getActiveUsers(); const result = await api.actions.fanOut( "users:processOne", users.map((u) => ({ userId: u.id })), "worker", ); return { fanOut: result }; } } // The child action — nothing special needed here export class ProcessOneUser implements Action { name = "users:processOne"; task = { queue: "worker" }; inputs = z.object({ userId: z.string() }); async run(params) { /* process one user */ } } ``` The child action doesn't know or care that it was spawned by a fan-out. It's just a regular action. ### Multi-Action Fan-Out You can also fan out to different action types in one batch: ```ts const result = await api.actions.fanOut([ { action: "users:processOne", inputs: { userId: "1" } }, { action: "users:processOne", inputs: { userId: "2" } }, { action: "emails:send", inputs: { to: "a@b.com" }, queue: "priority" }, ]); ``` ### Checking Results ```ts const status = await api.actions.fanOutStatus(result.fanOutId); // → { total: 3, completed: 3, failed: 0, results: [...], errors: [...] } ``` Results and metadata are stored in Redis with a configurable TTL (default 10 minutes). The TTL refreshes on each child job completion, so it's relative to the last activity — not the fan-out creation time. ### Options * **`batchSize`** — how many jobs to enqueue per batch (default: 100) * **`resultTtl`** — how long to keep results in Redis, in seconds (default: 600) --- --- url: /guide/channels.md description: >- Channels define PubSub topics for real-time WebSocket messaging with middleware-based authorization. --- # Channels Every project I've worked on eventually needs real-time messaging — chat, live dashboards, notifications, presence indicators. Channels are how Keryx handles this. They define PubSub topics that WebSocket clients can subscribe to, with middleware for controlling who gets access. Under the hood, channels use Redis PubSub, so messages are distributed across multiple server instances automatically. You don't need to think about sticky sessions or shared state. ## Defining a Channel ```ts import { Channel } from "../classes/Channel"; export class MessagesChannel extends Channel { constructor() { super({ name: "messages", description: "Public message stream", }); } } ``` That's a basic channel. Any WebSocket client can subscribe to `"messages"` and receive broadcasts. ## Pattern Matching Channel names can be exact strings or RegExp patterns: ```ts // Exact match — only "messages" name: "messages"; // Pattern match — "room:123", "room:abc", etc. name: /^room:.*$/; ``` This is useful when you have per-resource channels — chat rooms, user-specific feeds, document collaboration sessions. The `matches(channelName)` method handles the routing. 
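For intuition, here's a minimal sketch of what that matching boils down to, based on the behavior described above — it is not the framework's actual code (see `backend/classes/Channel.ts`), and `channelMatches` is just an illustrative helper name:

```ts
// Illustrative sketch only — RegExp names are pattern-matched,
// string names must match exactly.
function channelMatches(name: string | RegExp, requested: string): boolean {
  return name instanceof RegExp ? name.test(requested) : name === requested;
}

channelMatches(/^room:.*$/, "room:123"); // → true
channelMatches("messages", "room:123"); // → false
```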
## Middleware Channel middleware controls who can subscribe and handles cleanup on unsubscribe: ```ts import type { ChannelMiddleware } from "../classes/Channel"; const AuthMiddleware: ChannelMiddleware = { runBefore: async (channel, connection) => { if (!connection.session) { throw new TypedError({ message: "Must be logged in to subscribe", type: ErrorType.CONNECTION_SESSION_NOT_FOUND, }); } }, runAfter: async (channel, connection) => { // cleanup on unsubscribe — presence tracking, etc. }, }; ``` * **`runBefore`** runs before a connection subscribes. Throw a `TypedError` to deny the subscription. * **`runAfter`** runs after a connection unsubscribes. Useful for cleanup or presence tracking. ## Custom Authorization For more complex authorization logic, you can override the `authorize()` method directly on the channel class: ```ts export class RoomChannel extends Channel { constructor() { super({ name: /^room:.*$/ }); } async authorize(channelName: string, connection: Connection) { const roomId = channelName.split(":")[1]; // check if the user has access to this room... } } ``` This runs after middleware, so you can combine both approaches — middleware for common checks (is the user logged in?) and `authorize()` for channel-specific logic (does this user belong to this room?). ## Broadcasting Use `api.pubsub.broadcast()` to send messages to all subscribers: ```ts await api.pubsub.broadcast( "messages", // channel name { message: serializedMessage }, // payload `user:${userId}`, // sender identifier ); ``` Messages go through Redis PubSub, so they work across server instances. If you're running three backend processes behind a load balancer, a broadcast from one reaches subscribers on all three. ## WebSocket Security Channels and WebSocket connections have several built-in protections. See the [Security guide](/guide/security) for the full picture. ### Channel Name Validation Channel names must match `/^[a-zA-Z0-9:._-]{1,200}$/` — alphanumeric characters plus `:`, `.`, `_`, `-`, with a max length of 200. Invalid names are rejected before any subscription logic runs. ### Undefined Channels If a client tries to subscribe to a channel name that doesn't match any registered channel, the subscription is denied with a `CHANNEL_NOT_FOUND` error. You must define a channel (exact or pattern) for every topic clients can subscribe to. ### Origin Validation Before upgrading an HTTP connection to WebSocket, the server checks the `Origin` header against `config.server.web.allowedOrigins`. Requests from unrecognized origins are rejected, preventing Cross-Site WebSocket Hijacking (CSWSH). ### Connection Limits Each WebSocket connection is subject to: * **Message size** — messages larger than `websocketMaxPayloadSize` (default 64 KB) are rejected * **Message rate** — clients sending more than `websocketMaxMessagesPerSecond` (default 20/s) are disconnected * **Subscription count** — each connection can subscribe to at most `websocketMaxSubscriptions` (default 100) channels All of these are configurable via environment variables. See [Configuration](/guide/config) for details. --- --- url: /guide/config.md description: >- Modular configuration with per-environment overrides via environment variables. --- # Configuration Config in Keryx is statically defined at boot — there's no dynamic config reloading. That said, every config value supports per-environment overrides via environment variables, so you can set things differently in test, development, and production without touching code. 
## Structure Config is split into modules: ``` backend/config/ ├── index.ts # Aggregates everything into one `config` object ├── database.ts # Database connection string, auto-migrate flag ├── logger.ts # Log level, timestamps, colors ├── process.ts # Process name, shutdown timeout ├── rateLimit.ts # Rate limiting windows and thresholds ├── redis.ts # Redis connection string ├── session.ts # Session TTL, cookie security flags ├── tasks.ts # Task queue settings └── server/ ├── cli.ts # CLI error display, quiet mode ├── web.ts # Web server port, CORS, security headers, WS limits └── mcp.ts # MCP server toggle, route, OAuth TTLs ``` Everything rolls up into a single `config` object: ```ts import { config } from "../config"; config.database.connectionString; // Postgres URL config.server.web.port; // 8080 config.logger.level; // "info" ``` ## Environment Overrides The `loadFromEnvIfSet()` helper is where the magic happens: ```ts import { loadFromEnvIfSet } from "../util/config"; export const configDatabase = { connectionString: await loadFromEnvIfSet("DATABASE_URL", "x"), autoMigrate: await loadFromEnvIfSet("DATABASE_AUTO_MIGRATE", true), }; ``` The resolution order is: 1. `DATABASE_URL_TEST` (env var with `NODE_ENV` suffix — checked first) 2. `DATABASE_URL` (plain env var) 3. `"x"` (the default value) This means you can set `DATABASE_URL_TEST=postgres://localhost/bun-test` and it'll automatically be used when `NODE_ENV=test`, without any conditional logic in your config files. The helper is also type-aware — it parses `"true"`/`"false"` strings into booleans and numeric strings into numbers. So `DATABASE_AUTO_MIGRATE=false` does what you'd expect. ## Reference ### Database | Key | Env Var | Default | | ------------------ | ----------------------- | ------- | | `connectionString` | `DATABASE_URL` | `"x"` | | `autoMigrate` | `DATABASE_AUTO_MIGRATE` | `true` | ### Logger | Key | Env Var | Default | | ------------------- | ------------------------ | -------- | | `level` | `LOG_LEVEL` | `"info"` | | `includeTimestamps` | `LOG_INCLUDE_TIMESTAMPS` | `true` | | `colorize` | `LOG_COLORIZE` | `true` | ### Redis | Key | Env Var | Default | | ------------------ | ----------- | ---------------------------- | | `connectionString` | `REDIS_URL` | `"redis://localhost:6379/0"` | ### Session | Key | Env Var | Default | Description | | ---------------- | -------------------------- | -------------------------- | ----------------------------------------- | | `ttl` | `SESSION_TTL` | `86400` (1 day in seconds) | Session lifetime | | `cookieName` | `SESSION_COOKIE_NAME` | `"__session"` | Cookie name | | `cookieHttpOnly` | `SESSION_COOKIE_HTTP_ONLY` | `true` | Prevent JavaScript access | | `cookieSecure` | `SESSION_COOKIE_SECURE` | `false` | HTTPS-only cookies | | `cookieSameSite` | `SESSION_COOKIE_SAME_SITE` | `"Strict"` | CSRF protection (`Strict`, `Lax`, `None`) | ### Process | Key | Env Var | Default | | ----------------- | -------------------------- | ------------- | | `name` | `PROCESS_NAME` | `"server"` | | `shutdownTimeout` | `PROCESS_SHUTDOWN_TIMEOUT` | `30000` (30s) | ### Web Server | Key | Env Var | Default | | ------------------------------- | ------------------------------------ | ------------------------------------------------ | | `enabled` | `WEB_SERVER_ENABLED` | `true` | | `port` | `WEB_SERVER_PORT` | `8080` | | `host` | `WEB_SERVER_HOST` | `"localhost"` | | `applicationUrl` | `APPLICATION_URL` | `"http://localhost:8080"` | | `apiRoute` | `WEB_SERVER_API_ROUTE` | `"/api"` | | `allowedOrigins` | 
`WEB_SERVER_ALLOWED_ORIGINS` | `"*"` | | `allowedMethods` | `WEB_SERVER_ALLOWED_METHODS` | `"HEAD, GET, POST, PUT, PATCH, DELETE, OPTIONS"` | | `allowedHeaders` | `WEB_SERVER_ALLOWED_HEADERS` | `"Content-Type"` | | `staticFilesEnabled` | `WEB_SERVER_STATIC_ENABLED` | `true` | | `includeStackInErrors` | `WEB_SERVER_INCLUDE_STACK_IN_ERRORS` | `true` (dev) / `false` (prod) | | `websocketMaxPayloadSize` | `WS_MAX_PAYLOAD_SIZE` | `65536` (64 KB) | | `websocketMaxMessagesPerSecond` | `WS_MAX_MESSAGES_PER_SECOND` | `20` | | `websocketMaxSubscriptions` | `WS_MAX_SUBSCRIPTIONS` | `100` | #### Security Headers All HTTP responses include these headers. Each is configurable: | Header | Env Var | Default | | --------------------------- | ----------------------------------- | ------------------------------------- | | `Content-Security-Policy` | `WEB_SECURITY_CSP` | `default-src 'self'` | | `X-Content-Type-Options` | `WEB_SECURITY_CONTENT_TYPE_OPTIONS` | `nosniff` | | `X-Frame-Options` | `WEB_SECURITY_FRAME_OPTIONS` | `DENY` | | `Strict-Transport-Security` | `WEB_SECURITY_HSTS` | `max-age=31536000; includeSubDomains` | | `Referrer-Policy` | `WEB_SECURITY_REFERRER_POLICY` | `strict-origin-when-cross-origin` | ### Tasks | Key | Env Var | Default | | ---------------- | ----------------- | ------- | | `enabled` | `TASKS_ENABLED` | `true` | | `timeout` | `TASK_TIMEOUT` | `5000` | | `taskProcessors` | `TASK_PROCESSORS` | `1` | ### Rate Limiting See the [Security guide](/guide/security) for details on how rate limiting works. | Key | Env Var | Default | | ----------------------- | ------------------------------------- | ------------------------- | | `enabled` | `RATE_LIMIT_ENABLED` | `true` (disabled in test) | | `windowMs` | `RATE_LIMIT_WINDOW_MS` | `60000` (1 min) | | `unauthenticatedLimit` | `RATE_LIMIT_UNAUTH_LIMIT` | `20` | | `authenticatedLimit` | `RATE_LIMIT_AUTH_LIMIT` | `200` | | `keyPrefix` | `RATE_LIMIT_KEY_PREFIX` | `"ratelimit"` | | `oauthRegisterLimit` | `RATE_LIMIT_OAUTH_REGISTER_LIMIT` | `5` | | `oauthRegisterWindowMs` | `RATE_LIMIT_OAUTH_REGISTER_WINDOW_MS` | `3600000` (1 hour) | ### CLI | Key | Env Var | Default | | ---------------------- | ----------------------------- | ------- | | `includeStackInErrors` | `CLI_INCLUDE_STACK_IN_ERRORS` | `true` | | `quiet` | `CLI_QUIET` | `false` | ### MCP Server | Key | Env Var | Default | | ---------------- | ---------------------- | --------- | | `enabled` | `MCP_SERVER_ENABLED` | `false` | | `route` | `MCP_SERVER_ROUTE` | `"/mcp"` | | `oauthClientTtl` | `MCP_OAUTH_CLIENT_TTL` | `2592000` | | `oauthCodeTtl` | `MCP_OAUTH_CODE_TTL` | `300` | --- --- url: /reference/config.md description: >- Auto-generated configuration reference — every config key, its environment variable, and default value. --- # Configuration Reference Every config key in the backend, auto-generated from source. Each key can be overridden via environment variable — the system checks for `ENV_VAR_NODEENV` first (e.g., `DATABASE_URL_TEST` when `NODE_ENV=test`), then `ENV_VAR`, then falls back to the default. --- --- url: /guide/deployment.md description: >- Deploying Keryx — Docker, production builds, and running frontend and backend independently. --- # Deployment Keryx runs as two separate applications — a backend API server and a frontend Next.js app. This is intentional. You can deploy them together on the same box, or put the frontend on Vercel and the backend on a VPS, or containerize everything with Docker. Each app is independent.
## Production Build ```bash # compile both applications bun compile # set NODE_ENV=production in .env, then start bun start ``` ## Docker Each app has its own `Dockerfile`, and there's a `docker-compose.yml` to run everything together: ```bash docker compose up ``` This starts the backend, frontend, PostgreSQL, and Redis. You probably won't use this exact setup in production, but it shows how the pieces fit together and gives you a working reference for your own deployment config. ## Separate Applications Rather than bundling the frontend into the backend (like the original ActionHero did with plugins), the frontend and backend are separate Bun applications. This means you can: * Deploy them independently — frontend on Vercel, backend on Railway, whatever works * Scale them independently — maybe you need more API capacity but the frontend is fine * Develop them independently — `cd frontend && bun dev` works without the backend In development, `bun dev` from the root runs both concurrently with hot reload. ## Environment Variables Set production config through environment variables. The config system (see [Configuration](/guide/config)) handles the rest: ```bash NODE_ENV=production DATABASE_URL=postgres://user:pass@host:5432/dbname REDIS_URL=redis://host:6379/0 APPLICATION_URL=https://api.example.com WEB_SERVER_PORT=8080 ``` ## Production Security Keryx ships with secure defaults, but a few settings need adjustment for production. See the [Security guide](/guide/security) for full details. ```bash # Cookies — require HTTPS transport SESSION_COOKIE_SECURE=true # CORS — restrict to your domain (wildcard blocks credentials) WEB_SERVER_ALLOWED_ORIGINS=https://yourapp.com # Rate limiting — enabled by default, tune thresholds as needed RATE_LIMIT_UNAUTH_LIMIT=20 RATE_LIMIT_AUTH_LIMIT=200 # Error stack traces — auto-disabled when NODE_ENV=production NODE_ENV=production # Security headers — defaults are production-ready # Customize CSP if your backend serves HTML with external resources: # WEB_SECURITY_CSP="default-src 'self'; script-src 'self' https://cdn.example.com" # WebSocket limits — adjust for your expected traffic # WS_MAX_PAYLOAD_SIZE=65536 # WS_MAX_MESSAGES_PER_SECOND=20 ``` ## Database Migrations Migrations auto-apply on server start when `DATABASE_AUTO_MIGRATE=true` (the default). If you'd rather run them explicitly before deploying: ```bash cd backend && bun run migrations ``` This generates migration files from schema changes into `./drizzle/`. They'll be applied automatically the next time the server starts — or you can set `DATABASE_AUTO_MIGRATE=false` and handle it yourself. --- --- url: /guide.md description: >- Get up and running with Keryx — prerequisites, installation, and your first dev server. --- # Getting Started Keryx is a modern rewrite of [ActionHero](https://www.actionherojs.com), rebuilt from scratch on [Bun](https://bun.sh). I still believe in the core ideas behind ActionHero — transport-agnostic actions, built-in background tasks, strong typing between frontend and backend — but the original framework was showing its age. This project takes those ideas and pairs them with modern tooling: Bun for the runtime, Zod for validation, Drizzle for the ORM, and Next.js for the frontend. The result is a full-stack monorepo template where you write your controller logic once, and it works as an HTTP endpoint, WebSocket handler, CLI command, and background task… all at the same time. 
## Prerequisites

You'll need these running locally:

* [Bun](https://bun.sh) (latest)
* [PostgreSQL](https://www.postgresql.org/)
* [Redis](https://redis.io/)

## Installation (macOS)

```bash
# install bun
curl -fsSL https://bun.sh/install | bash

# install postgres and redis
brew install postgresql redis
brew services start postgresql
brew services start redis

# create a database
createdb bun
```

## Clone and Install

```bash
git clone https://github.com/evantahler/keryx.git
cd keryx
bun install
```

## Environment Variables

```bash
cp backend/.env.example backend/.env
cp frontend/.env.example frontend/.env
# update as needed
```

## Run the Dev Server

```bash
bun dev
```

That's it. Both the frontend and backend will start with hot reload — edit a file, save it, and see the change immediately.

## Project Structure

The repo is a monorepo with two workspaces:

```
keryx/
├── backend/          # The Keryx API server
│   ├── actions/      # Transport-agnostic controllers
│   ├── initializers/ # Lifecycle components (DB, Redis, etc.)
│   ├── config/       # Modular configuration
│   ├── classes/      # Core framework classes
│   ├── middleware/   # Action middleware (auth, etc.)
│   ├── ops/          # Business logic layer
│   ├── schema/       # Drizzle ORM table definitions
│   ├── servers/      # HTTP + WebSocket server
│   └── channels/     # PubSub channel definitions
├── frontend/         # Next.js application
└── docs/             # This documentation site
```

The `backend/` and `frontend/` are separate Bun applications. This is an intentional change from the original ActionHero — rather than bundling the frontend into the backend, each app does what it does best. You could host the frontend on Vercel and the backend on a VPS if you wanted to.

## What's Next

* [Actions](/guide/actions) — the core concept. Everything is an action.
* [Initializers](/guide/initializers) — how the server boots up and connects to services
* [Tasks](/guide/tasks) — background jobs and the fan-out pattern
* [Configuration](/guide/config) — environment-based config with per-env overrides

---

---
url: /reference/initializers.md
description: Initializer class definition and the module augmentation pattern.
---

# Initializer

Source: `backend/classes/Initializer.ts`

Initializers are the lifecycle components that boot up your server. They run in priority order during `initialize → start → stop`, and each one attaches its namespace to the global `api` singleton.

## Class Definition

```ts
abstract class Initializer {
  /** The name of the initializer — also used as the api namespace key */
  name: string;

  /** Order for initialize() phase. Lower = runs first. Default: 1000 */
  loadPriority: number;

  /** Order for start() phase. Lower = runs first. Default: 1000 */
  startPriority: number;

  /** Order for stop() phase. Lower = runs first. Default: 1000 */
  stopPriority: number;

  /** Which run modes this initializer activates in */
  runModes: RUN_MODE[];

  constructor(name: string);

  /** Set up namespace object and return it. Attaches to api[name]. */
  async initialize?(): Promise<any>;

  /** Connect to external services. All initializers are loaded by this point. */
  async start?(): Promise<void>;

  /** Clean up — close connections, flush buffers. */
  async stop?(): Promise<void>;
}
```

## RUN\_MODE

Initializers can be scoped to specific run modes. By default, they run in both:

```ts
enum RUN_MODE {
  CLI = "cli",
  SERVER = "server",
}
```

## Module Augmentation Pattern

This is how each initializer makes `api.myNamespace` fully typed.
You declare the type on the `API` interface, and TypeScript knows what's there:

```ts
const namespace = "db";

declare module "../classes/API" {
  export interface API {
    [namespace]: Awaited<ReturnType<DB["initialize"]>>; // DB = this initializer's class
  }
}
```

The return type of `initialize()` becomes `api[namespace]` — autocomplete, type checking, the works.

## Priority Reference

Core initializers use priorities below 1000 to ensure they run before application code:

| Priority | Initializers                                         |
| -------- | ---------------------------------------------------- |
| 100      | `actions`, `db`                                      |
| 150      | `pubsub`, `swagger`                                  |
| 250      | `resque`                                             |
| 1000     | `redis`, `application`, and your custom initializers |

---

---
url: /guide/initializers.md
description: >-
  Initializers are lifecycle components that set up services and attach them to
  the global API singleton.
---

# Initializers

Initializers are the backbone of the server's boot process. They're lifecycle components that set up services — connecting to databases, starting Redis, registering actions, configuring the task queue — in a controlled, priority-ordered sequence.

If you've worked with the original ActionHero, initializers will feel familiar. The big difference here is the TypeScript integration: each initializer uses module augmentation to extend the `API` interface with its namespace, so `api.db`, `api.redis`, `api.actions` are all fully typed throughout the codebase.

## Lifecycle

The server goes through three phases:

```
initialize() → start() → [running] → stop()
```

* **`initialize()`** — set up your namespace object and return it. This is where you define the shape of what gets attached to `api`.
* **`start()`** — connect to external services (databases, Redis, etc.). By this point, all initializers have been loaded, so you can reference other namespaces.
* **`stop()`** — clean up. Close connections, flush buffers, shut down gracefully.

## Priority Ordering

Each initializer has three priority values. Lower numbers run first:

| Initializer   | Load Priority | What it does                            |
| ------------- | ------------- | --------------------------------------- |
| `actions`     | 100           | Discovers and registers all actions     |
| `db`          | 100           | Sets up Drizzle ORM + connection pool   |
| `pubsub`      | 150           | Redis PubSub for real-time messaging    |
| `swagger`     | 150           | Parses source code for OpenAPI schemas  |
| `oauth`       | 175           | OAuth 2.1 provider for MCP auth         |
| `mcp`         | 200           | MCP server — exposes actions as tools   |
| `resque`      | 250           | Background task queue                   |
| `application` | 1000          | App-specific setup (default user, etc.) |

The defaults are `1000` for all three priorities (`loadPriority`, `startPriority`, `stopPriority`), so core framework initializers use lower values to ensure they run first.

## The Module Augmentation Pattern

This is the part that makes the type system work.
Each initializer extends the `API` interface so TypeScript knows what's available on the `api` singleton:

```ts
import { Initializer } from "../classes/Initializer";
import { api, logger } from "../api";

const namespace = "db";

// This is the magic — tells TypeScript that api.db exists and what type it is
declare module "../classes/API" {
  export interface API {
    [namespace]: Awaited<ReturnType<DB["initialize"]>>;
  }
}

export class DB extends Initializer {
  constructor() {
    super(namespace);
    this.loadPriority = 100;
    this.startPriority = 100;
    this.stopPriority = 910;
  }

  async initialize() {
    const dbContainer = {} as {
      db: ReturnType<typeof drizzle>;
      pool: Pool;
    };

    return Object.assign(
      {
        generateMigrations: this.generateMigrations,
        clearDatabase: this.clearDatabase,
      },
      dbContainer,
    );
  }

  async start() {
    api.db.pool = new Pool({
      connectionString: config.database.connectionString,
    });
    api.db.db = drizzle(api.db.pool);
    // migrations run here if configured...
  }

  async stop() {
    await api.db.pool.end();
  }
}
```

The return value of `initialize()` becomes `api.db` — and that type flows everywhere. You get autocomplete in your actions, your tests, your ops layer… everywhere.

## The `api` Singleton

The `api` object lives on `globalThis` and accumulates namespaces as initializers run:

```ts
api.db; // Drizzle ORM + Postgres pool
api.redis; // Redis client
api.actions; // Action registry + fan-out
api.session; // Session manager
api.pubsub; // Redis PubSub
api.swagger; // OpenAPI schema cache
api.oauth; // OAuth 2.1 provider
api.mcp; // MCP server
api.resque; // Background task queue
```

Every namespace is typed via module augmentation, so you never have to cast or guess at the shape of `api.db` or `api.redis`.

## Auto-Discovery

Initializers are auto-discovered. Drop a `.ts` file in `initializers/`, export a class that extends `Initializer`, and it'll get picked up on boot. Files prefixed with `.` are skipped — useful for temporarily disabling an initializer without deleting it.

---

---
url: /guide/mcp.md
description: >-
  Expose your actions as MCP tools for AI agents, with built-in OAuth 2.1
  authentication.
---

# MCP Server

[MCP (Model Context Protocol)](https://modelcontextprotocol.io) is an open standard for connecting AI agents to external tools and data sources. In Keryx, MCP is a natural extension of the transport-agnostic action model — just like an action can serve HTTP, WebSocket, CLI, and background tasks, it can also be exposed as an MCP tool for AI agents.

## Enabling the MCP Server

The MCP server is disabled by default. Enable it with an environment variable:

```bash
MCP_SERVER_ENABLED=true
```

Or set it directly in `backend/config/server/mcp.ts`:

```ts
export const configServerMcp = {
  enabled: true,
  route: "/mcp",
  // ...
};
```

Once enabled, the server listens at `http://localhost:8080/mcp` (or your configured `applicationUrl` + `route`).

## How Actions Become Tools

When the MCP server starts, it registers every action as an MCP tool automatically. No extra configuration needed — if an action exists, it becomes a tool.

For each action:

1. **Name** — The action name is converted to a valid MCP tool name by replacing `:` with `-` (e.g., `user:create` → `user-create`)
2. **Description** — The action's `description` property becomes the tool description
3. **Input schema** — The action's Zod `inputs` schema is converted to JSON Schema for tool parameter definitions

```ts
// This action...
export class UserView extends Action {
  name = "user:view";
  description = "View a user's profile";
  inputs = z.object({ userId: z.string() });
  // ...
}
// ...becomes MCP tool "user-view" with:
// - description: "View a user's profile"
// - inputSchema: { type: "object", properties: { userId: { type: "string" } } }
```

## Controlling Exposure

By default, all actions are exposed as MCP tools. To exclude an action:

```ts
export class InternalAction extends Action {
  name = "internal:cleanup";
  mcp = { enabled: false };
  // ...
}
```

The full `mcp` property is of type `McpActionConfig`:

| Property         | Type      | Default | Description                                  |
| ---------------- | --------- | ------- | -------------------------------------------- |
| `enabled`        | `boolean` | `true`  | Whether to expose this action as an MCP tool |
| `isLoginAction`  | `boolean` | —       | Tag as the login action for the OAuth flow   |
| `isSignupAction` | `boolean` | —       | Tag as the signup action for the OAuth flow  |

The `isLoginAction` and `isSignupAction` markers tell the OAuth system which actions to invoke when users authenticate through the MCP authorization page. These actions must return `OAuthActionResponse` (`{ user: { id: number } }`).

## Schema Sanitization

The MCP SDK's internal JSON Schema converter (`zod/v4-mini`'s `toJSONSchema`) doesn't support all Zod types (e.g., `z.date()`). The MCP initializer tests each field individually and replaces incompatible fields with `z.string()` as a fallback, so your tools always register successfully even if some input types need coercion.

## OAuth 2.1 Authentication

MCP clients authenticate using OAuth 2.1 with PKCE (Proof Key for Code Exchange). The flow is:

1. MCP client connects to `/mcp` and receives a `401` response
2. Client fetches `/.well-known/oauth-protected-resource` to discover the authorization server
3. Client fetches `/.well-known/oauth-authorization-server` for endpoints
4. Client registers dynamically via `POST /oauth/register`
5. Client opens a browser to `/oauth/authorize` with PKCE challenge
6. User logs in or signs up on the authorization page
7. Server issues an authorization code and redirects back
8. Client exchanges the code for an access token at `POST /oauth/token`
9. Client includes `Authorization: Bearer <token>` on subsequent MCP requests

### OAuth Endpoints

| Endpoint                                  | Method | Description                            |
| ----------------------------------------- | ------ | -------------------------------------- |
| `/.well-known/oauth-protected-resource`   | GET    | Resource metadata (RFC 9728)           |
| `/.well-known/oauth-authorization-server` | GET    | Authorization server metadata          |
| `/oauth/register`                         | POST   | Dynamic client registration            |
| `/oauth/authorize`                        | GET    | Authorization page (login/signup form) |
| `/oauth/authorize`                        | POST   | Process login/signup form submission   |
| `/oauth/token`                            | POST   | Exchange authorization code for token  |

### Security

The OAuth implementation includes several hardening measures:

* **Redirect URI validation** — URIs registered via `/oauth/register` must not contain fragments or userinfo, and must use HTTPS for non-localhost addresses. When exchanging authorization codes, the redirect URI must match the registered URI exactly (origin + pathname).
* **Registration rate limiting** — `POST /oauth/register` has a separate, stricter rate limit (default: 5 requests per hour per IP) to prevent abuse. See `RATE_LIMIT_OAUTH_REGISTER_LIMIT` and `RATE_LIMIT_OAUTH_REGISTER_WINDOW_MS` in [Configuration](/guide/config).
* **CORS** — OAuth and MCP endpoints respect the `allowedOrigins` configuration. When `allowedOrigins` is `"*"`, credentials headers are not sent, per the browser spec.
Set a specific origin in production for credentialed requests to work.

## Session Management

Each authenticated MCP connection creates its own `McpServer` instance. Sessions are tracked via the `mcp-session-id` header — the MCP SDK generates a UUID per session and includes it in all subsequent requests. When a session closes, the transport and server instance are cleaned up automatically.

## PubSub Notifications

When messages are broadcast through the PubSub system (e.g., chat messages sent via Redis PubSub), they are forwarded to all connected MCP clients as MCP logging messages. This allows AI agents to receive real-time notifications about events happening in your application.

## Configuration Reference

| Key              | Env Var                | Default   | Description                             |
| ---------------- | ---------------------- | --------- | --------------------------------------- |
| `enabled`        | `MCP_SERVER_ENABLED`   | `false`   | Enable the MCP server                   |
| `route`          | `MCP_SERVER_ROUTE`     | `"/mcp"`  | URL path for the MCP endpoint           |
| `oauthClientTtl` | `MCP_OAUTH_CLIENT_TTL` | `2592000` | OAuth client registration TTL (seconds) |
| `oauthCodeTtl`   | `MCP_OAUTH_CODE_TTL`   | `300`     | Authorization code TTL (seconds)        |

## Testing

You can test MCP actions using the `@modelcontextprotocol/sdk` client:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(
  new URL("http://localhost:8080/mcp"),
  {
    requestInit: {
      headers: {
        Authorization: `Bearer ${accessToken}`,
      },
    },
  },
);

const client = new Client({ name: "test-client", version: "1.0.0" });
await client.connect(transport);

const tools = await client.listTools();
const result = await client.callTool({
  name: "status",
  arguments: {},
});
```

---

---
url: /guide/middleware.md
description: >-
  Middleware intercepts action execution for authentication, authorization,
  logging, and response modification.
---

# Middleware

Middleware lets you run logic before and after an action executes — authentication checks, parameter normalization, response enrichment, logging, that sort of thing. If you've used Express middleware, the concept is similar, but scoped to individual actions rather than applied globally.

## The Basics

Here's the session middleware we use for authenticated endpoints. It's about as simple as middleware gets:

```ts
import type { ActionMiddleware } from "../classes/Action";
import { ErrorType, TypedError } from "../classes/TypedError";

export const SessionMiddleware: ActionMiddleware = {
  runBefore: async (_params, connection) => {
    if (!connection.session || !connection.session.data.userId) {
      throw new TypedError({
        message: "Session not found",
        type: ErrorType.CONNECTION_SESSION_NOT_FOUND,
      });
    }
  },
};
```

If `runBefore` throws, the action's `run()` method is skipped entirely and the error goes back to the client. That's the primary pattern for auth — check the session, throw if it's missing.

## Interface

```ts
type ActionMiddleware = {
  runBefore?: (
    params: ActionParams,
    connection: Connection,
  ) => Promise<ActionMiddlewareResponse | void>;
  runAfter?: (
    params: ActionParams,
    connection: Connection,
  ) => Promise<ActionMiddlewareResponse | void>;
};
```

Both methods are optional. You can have middleware that only runs before (auth), only runs after (logging), or both.
Middleware can also modify params and responses by returning an `ActionMiddlewareResponse`:

```ts
type ActionMiddlewareResponse = {
  updatedParams?: ActionParams;
  updatedResponse?: any;
};
```

## Applying Middleware

Add middleware to an action via the `middleware` array:

```ts
export class UserEdit implements Action {
  name = "user:edit";
  middleware = [SessionMiddleware];
  // ...
}
```

Middleware runs in array order. If you have `[AuthMiddleware, RateLimitMiddleware]`, auth runs first — if it throws, rate limiting never executes.

## Common Patterns

### Authentication

This is the most common use case. Check that a session exists and has the data you expect:

```ts
export const SessionMiddleware: ActionMiddleware = {
  runBefore: async (_params, connection) => {
    if (!connection.session?.data.userId) {
      throw new TypedError({
        message: "Session not found",
        type: ErrorType.CONNECTION_SESSION_NOT_FOUND,
      });
    }
  },
};
```

### Param Normalization

You can modify params before the action sees them — useful for things like lowercasing emails:

```ts
export const NormalizeMiddleware: ActionMiddleware = {
  runBefore: async (params) => {
    return {
      updatedParams: {
        ...params,
        email: params.email?.toLowerCase(),
      },
    };
  },
};
```

That said, you can also handle this in the Zod schema with `.transform()` — so use whichever approach makes more sense for your case.

### Rate Limiting

The built-in `RateLimitMiddleware` uses a Redis-backed sliding window to limit request rates per client. It identifies users by user ID (authenticated) or IP address (unauthenticated):

```ts
import { RateLimitMiddleware } from "../middleware/rateLimit";

export class ApiEndpoint implements Action {
  name = "api:endpoint";
  middleware = [SessionMiddleware, RateLimitMiddleware];
  // ...
}
```

When a client exceeds the limit, the middleware throws a `CONNECTION_RATE_LIMITED` error (HTTP 429). Rate limit info is attached to the connection and included in response headers automatically. See the [Security guide](/guide/security) for configuration options and custom limit overrides.

### Response Enrichment

`runAfter` can add data to the response. This runs after the action's `run()` method completes:

```ts
export const TimingMiddleware: ActionMiddleware = {
  runAfter: async (_params, connection) => {
    return {
      updatedResponse: {
        requestDuration: Date.now() - connection.startTime,
      },
    };
  },
};
```

---

---
url: /reference/classes.md
description: >-
  API singleton, Connection, Channel, Server, TypedError, and Logger class
  definitions.
---

# Other Classes

The remaining framework classes — the API singleton, connections, channels, servers, errors, and logging.

## API

Source: `backend/classes/API.ts`

The global singleton that manages the full server lifecycle. Stored on `globalThis` so it's accessible everywhere. Initializers attach their namespaces to it during boot.

```ts
class API {
  rootDir: string;
  initialized: boolean;
  started: boolean;
  stopped: boolean;
  bootTime: number;
  logger: Logger;
  runMode: RUN_MODE;
  initializers: Initializer[];

  /** Run all initializers in loadPriority order */
  async initialize(): Promise<void>;

  /** Start all initializers in startPriority order */
  async start(runMode?: RUN_MODE): Promise<void>;

  /** Stop all initializers in stopPriority order */
  async stop(): Promise<void>;

  /** Stop then start */
  async restart(): Promise<void>;

  // Initializer namespaces are added dynamically:
  // api.db, api.redis, api.actions, api.session, etc.
  [key: string]: any;
}
```

The lifecycle is `initialize() → start() → [running] → stop()`.
Calling `start()` automatically calls `initialize()` first if it hasn't been called yet.

## Connection

Source: `backend/classes/Connection.ts`

Represents a client connection — HTTP request, WebSocket, or CLI invocation. The connection handles action execution, session management, and channel subscriptions.

```ts
class Connection<T extends Record<string, any> = Record<string, any>> {
  /** Connection type: "web", "websocket", "cli" */
  type: string;

  /** Client identifier (IP, socket ID, etc.) */
  identifier: string;

  /** Unique connection ID (UUID) */
  id: string;

  /** Session data, typed with your session shape */
  session?: SessionData<T>;

  /** Channels this connection is subscribed to */
  subscriptions: Set<string>;

  /** The underlying transport object (Bun Request, WebSocket, etc.) */
  rawConnection?: any;

  /** Execute an action with the given params */
  async act(
    actionName: string | undefined,
    params: FormData,
    method?: string,
    url?: string,
  ): Promise<{ response: Object; error?: TypedError }>;

  /** Update session data (merges with existing) */
  async updateSession(data: Partial<T>): Promise<void>;

  /** Subscribe to a PubSub channel */
  subscribe(channel: string): void;

  /** Unsubscribe from a PubSub channel */
  unsubscribe(channel: string): void;

  /** Broadcast a message to a subscribed channel */
  async broadcast(channel: string, message: string): Promise<void>;

  /** Remove this connection from the connection pool */
  destroy(): void;
}
```

The generic `T` parameter types your session data. For example, `Connection<{ userId: number }>` gives you typed access to `connection.session.data.userId`.

## Channel

Source: `backend/classes/Channel.ts`

Defines a PubSub topic for WebSocket real-time messaging. Channels support exact-match names or RegExp patterns.

```ts
abstract class Channel {
  /** String for exact match, RegExp for pattern matching */
  name: string | RegExp;

  description?: string;

  /** Middleware for subscribe/unsubscribe lifecycle */
  middleware: ChannelMiddleware[];

  /** Check if this channel definition matches a requested channel name */
  matches(channelName: string): boolean;

  /** Override for custom authorization logic. Throw TypedError to deny. */
  async authorize(channelName: string, connection: Connection): Promise<void>;
}
```

### ChannelMiddleware

```ts
type ChannelMiddleware = {
  /** Runs before subscribe — throw TypedError to deny */
  runBefore?: (channel: string, connection: Connection) => Promise<void>;

  /** Runs after unsubscribe — cleanup, presence tracking, etc. */
  runAfter?: (channel: string, connection: Connection) => Promise<void>;
};
```

## Server

Source: `backend/classes/Server.ts`

Base class for transport servers. The framework ships with a web server (`Bun.serve` for HTTP + WebSocket), but you could add others.

```ts
abstract class Server<T> {
  name: string;

  /** The underlying server object (e.g., Bun.Server) */
  server?: T;

  abstract initialize(): Promise<void>;
  abstract start(): Promise<void>;
  abstract stop(): Promise<void>;
}
```

## TypedError

Source: `backend/classes/TypedError.ts`

All action errors should use `TypedError` instead of generic `Error`. Each error type maps to an HTTP status code, so the framework knows what status to return to the client.
```ts class TypedError extends Error { type: ErrorType; key?: string; // which param caused the error value?: any; // what value was invalid constructor(args: { message: string; type: ErrorType; originalError?: unknown; key?: string; value?: any; }); } ``` ### ErrorType → HTTP Status Mapping | ErrorType | Status | When | | ------------------------------------ | ------ | ------------------------------------------ | | `SERVER_INITIALIZATION` | 500 | Initializer failed to boot | | `SERVER_START` | 500 | Initializer failed to start | | `SERVER_STOP` | 500 | Initializer failed to stop | | `CONFIG_ERROR` | 500 | Invalid configuration | | `ACTION_VALIDATION` | 500 | Action class definition is invalid | | `CONNECTION_SESSION_NOT_FOUND` | 401 | No session / not authenticated | | `CONNECTION_ACTION_NOT_FOUND` | 404 | Unknown action name | | `CONNECTION_ACTION_PARAM_REQUIRED` | 406 | Missing required input | | `CONNECTION_ACTION_PARAM_VALIDATION` | 406 | Input failed Zod validation | | `CONNECTION_ACTION_RUN` | 500 | Action threw during `run()` | | `CONNECTION_NOT_SUBSCRIBED` | 406 | Tried to broadcast to unsubscribed channel | | `CONNECTION_CHANNEL_AUTHORIZATION` | 403 | Channel subscription denied | ## Logger Source: `backend/classes/Logger.ts` Simple logger that writes to stdout. No Winston, no Pino — just STDOUT and STDERR with optional colors and timestamps. ```ts class Logger { level: LogLevel; colorize: boolean; includeTimestamps: boolean; trace(message: string, object?: any): void; debug(message: string, object?: any): void; info(message: string, object?: any): void; warn(message: string, object?: any): void; error(message: string, object?: any): void; fatal(message: string, object?: any): void; } enum LogLevel { trace = "trace", debug = "debug", info = "info", warn = "warn", error = "error", fatal = "fatal", } ``` --- --- url: /guide/security.md description: >- Built-in security features — rate limiting, security headers, cookie hardening, CORS, WebSocket protections, and OAuth validation. --- # Security Keryx ships with security defaults that are sensible for development and tightenable for production. Most features are configured via environment variables — no code changes needed to go from development to a hardened production deployment. ## Rate Limiting Rate limiting uses a sliding window algorithm backed by Redis. It's implemented as action middleware, so you can apply it to specific actions or leave it off entirely. ### Setup Add `RateLimitMiddleware` to any action: ```ts import { RateLimitMiddleware } from "../middleware/rateLimit"; export class UserCreate implements Action { name = "user:create"; middleware = [RateLimitMiddleware]; // ... } ``` The middleware identifies clients by user ID (if authenticated) or IP address (if not), and applies different limits to each: | Config Key | Env Var | Default | Description | | ---------------------- | ------------------------- | ------------- | ------------------------------------ | | `enabled` | `RATE_LIMIT_ENABLED` | `true` | Master toggle (disabled in test) | | `windowMs` | `RATE_LIMIT_WINDOW_MS` | `60000` | Sliding window size (ms) | | `unauthenticatedLimit` | `RATE_LIMIT_UNAUTH_LIMIT` | `20` | Max requests per window (no session) | | `authenticatedLimit` | `RATE_LIMIT_AUTH_LIMIT` | `200` | Max requests per window (logged in) | | `keyPrefix` | `RATE_LIMIT_KEY_PREFIX` | `"ratelimit"` | Redis key prefix | When a client exceeds the limit, the action returns a `429` with a message indicating how many seconds until the window resets. 
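To make "sliding window backed by Redis" concrete, here's an illustrative sketch of the algorithm using a sorted set. This is not the framework's `RateLimitMiddleware` (that lives in `backend/middleware/rateLimit.ts`); the `ioredis` client, the `slidingWindowCheck` name, and the key format in the comment are all assumptions for the example:

```ts
import Redis from "ioredis";

const redis = new Redis();

// Count requests for `key` inside the trailing window; anything older falls out.
async function slidingWindowCheck(
  key: string, // e.g. a key like "ratelimit:user:123" — format is illustrative
  limit: number,
  windowMs: number,
): Promise<{ allowed: boolean; remaining: number }> {
  const now = Date.now();
  await redis.zremrangebyscore(key, 0, now - windowMs); // drop expired entries
  await redis.zadd(key, now, `${now}:${Math.random()}`); // record this request
  const count = await redis.zcard(key); // requests still inside the window
  await redis.pexpire(key, windowMs); // let the key expire if traffic stops
  return { allowed: count <= limit, remaining: Math.max(0, limit - count) };
}
```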
### Custom Limits

The `checkRateLimit()` function is exported for use outside of action middleware — for example, the OAuth registration endpoint uses it with a stricter limit:

```ts
import { checkRateLimit } from "../middleware/rateLimit";

const info = await checkRateLimit(`oauth-register:${ip}`, false, {
  limit: config.rateLimit.oauthRegisterLimit, // default: 5
  windowMs: config.rateLimit.oauthRegisterWindowMs, // default: 1 hour
});
```

## Security Headers

Every HTTP response includes security headers by default. Each is configurable via environment variable:

| Header | Env Var | Default |
| --- | --- | --- |
| `Content-Security-Policy` | `WEB_SECURITY_CSP` | `default-src 'self'` |
| `X-Content-Type-Options` | `WEB_SECURITY_CONTENT_TYPE_OPTIONS` | `nosniff` |
| `X-Frame-Options` | `WEB_SECURITY_FRAME_OPTIONS` | `DENY` |
| `Strict-Transport-Security` | `WEB_SECURITY_HSTS` | `max-age=31536000; includeSubDomains` |
| `Referrer-Policy` | `WEB_SECURITY_REFERRER_POLICY` | `strict-origin-when-cross-origin` |

These defaults are production-ready. The CSP may need loosening if your backend serves HTML with inline scripts or external resources — adjust via `WEB_SECURITY_CSP`.

## Cookie Security

Session cookies are configured with security flags:

| Config Key | Env Var | Default | Description |
| --- | --- | --- | --- |
| `cookieHttpOnly` | `SESSION_COOKIE_HTTP_ONLY` | `true` | Prevents JavaScript access to the cookie |
| `cookieSecure` | `SESSION_COOKIE_SECURE` | `false` | Only send cookie over HTTPS |
| `cookieSameSite` | `SESSION_COOKIE_SAME_SITE` | `"Strict"` | CSRF protection (`Strict`, `Lax`, or `None`) |

For production, set `SESSION_COOKIE_SECURE=true` so cookies are only transmitted over HTTPS. The `SameSite=Strict` default prevents CSRF attacks by ensuring cookies aren't sent on cross-origin requests.

## CORS

Cross-origin request handling is configured on the web server:

| Config Key | Env Var | Default |
| --- | --- | --- |
| `allowedOrigins` | `WEB_SERVER_ALLOWED_ORIGINS` | `"*"` |
| `allowedMethods` | `WEB_SERVER_ALLOWED_METHODS` | `"HEAD, GET, POST, PUT, PATCH, DELETE, OPTIONS"` |
| `allowedHeaders` | `WEB_SERVER_ALLOWED_HEADERS` | `"Content-Type"` |

**Important:** When `allowedOrigins` is `"*"` (the default), the server will not send `Access-Control-Allow-Credentials: true` — this follows the browser spec that forbids wildcard origins with credentials. For production, set `WEB_SERVER_ALLOWED_ORIGINS` to your specific domain(s) so that credentialed requests (cookies, auth headers) work correctly.

## WebSocket Protections

WebSocket connections have several layers of protection:

### Origin Validation

Before upgrading an HTTP connection to WebSocket, the server validates the `Origin` header against `config.server.web.allowedOrigins`. If the origin doesn't match, the upgrade is rejected. This prevents Cross-Site WebSocket Hijacking (CSWSH) attacks.
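For intuition, this is roughly what an origin check before a WebSocket upgrade looks like with `Bun.serve`. It is a sketch, not the framework's actual code; the allowed-origins list and the 403 response are illustrative:

```ts
// Sketch: reject the upgrade when the Origin header isn't allowed.
Bun.serve({
  port: 8080,
  fetch(req, server) {
    if (req.headers.get("upgrade")?.toLowerCase() === "websocket") {
      const origin = req.headers.get("origin");
      const allowed = ["https://yourapp.com"]; // e.g. config.server.web.allowedOrigins
      if (!origin || !allowed.includes(origin)) {
        return new Response("Forbidden", { status: 403 });
      }
      if (server.upgrade(req)) return; // handled by the websocket handlers below
    }
    return new Response("Not found", { status: 404 });
  },
  websocket: {
    message(ws, message) {
      // Parse JSON and dispatch by messageType ("action", "subscribe", ...)
    },
  },
});
```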
### Message Limits

| Config Key | Env Var | Default | Description |
| --- | --- | --- | --- |
| `websocketMaxPayloadSize` | `WS_MAX_PAYLOAD_SIZE` | `65536` | Max message size in bytes (64 KB) |
| `websocketMaxMessagesPerSecond` | `WS_MAX_MESSAGES_PER_SECOND` | `20` | Per-connection rate limit |
| `websocketMaxSubscriptions` | `WS_MAX_SUBSCRIPTIONS` | `100` | Max channel subscriptions per conn |

Messages exceeding the payload size are rejected. Clients sending more than the per-second limit are disconnected. These protect against resource exhaustion from misbehaving or malicious clients.

### Channel Validation

* **Channel names** must match the pattern `/^[a-zA-Z0-9:._-]{1,200}$/` — alphanumeric characters plus `:`, `.`, `_`, `-`, max 200 characters
* **Undefined channels** are rejected — if no registered channel matches the requested name, the subscription is denied with a `CHANNEL_NOT_FOUND` error

## OAuth Security

The MCP server's OAuth 2.1 implementation includes several hardening measures:

### Redirect URI Validation

When clients register via `/oauth/register`, redirect URIs are validated:

* Must be a valid URL
* Must not contain a fragment (`#`)
* Must not contain userinfo (username/password in the URL)
* Must use HTTPS for non-localhost URIs

When exchanging authorization codes, the redirect URI must match the registered URI exactly (origin + pathname comparison).

### Registration Rate Limiting

OAuth client registration (`POST /oauth/register`) has a separate, stricter rate limit to prevent abuse:

| Config Key | Env Var | Default | Description |
| --- | --- | --- | --- |
| `oauthRegisterLimit` | `RATE_LIMIT_OAUTH_REGISTER_LIMIT` | `5` | Max registrations per window |
| `oauthRegisterWindowMs` | `RATE_LIMIT_OAUTH_REGISTER_WINDOW_MS` | `3600000` | Window size (1 hour) |

## Error Stack Traces

By default, error responses include stack traces in development but omit them in production:

| Scope | Env Var | Default |
| --- | --- | --- |
| Web server | `WEB_SERVER_INCLUDE_STACK_IN_ERRORS` | `true` (dev), `false` (prod) |
| CLI | `CLI_INCLUDE_STACK_IN_ERRORS` | `true` |

The web server default is based on `NODE_ENV` — when `NODE_ENV=production`, stack traces are automatically hidden from HTTP responses to avoid leaking internal implementation details.

## Static File Path Traversal

Static file serving validates requested paths to prevent directory traversal attacks. Requests containing `..` segments that would escape the configured static files directory are rejected with a `403`.

## Production Checklist

When deploying to production, review these environment variables:

```bash
# Cookie security — require HTTPS
SESSION_COOKIE_SECURE=true

# CORS — restrict to your domain
WEB_SERVER_ALLOWED_ORIGINS=https://yourapp.com

# Rate limiting — tune for your traffic
RATE_LIMIT_ENABLED=true
RATE_LIMIT_UNAUTH_LIMIT=20
RATE_LIMIT_AUTH_LIMIT=200

# Error responses — hide internals
NODE_ENV=production
# (stack traces auto-disabled when NODE_ENV=production)

# Security headers — defaults are good, customize CSP if needed
WEB_SECURITY_CSP="default-src 'self'; script-src 'self'"
```
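As a closing illustration of the path-traversal protection described above, here is roughly what a resolve-then-compare check looks like. It is a sketch, not the framework's actual implementation; the function name and the null-means-403 convention are assumptions:

```ts
import { join, normalize, resolve, sep } from "node:path";

// Sketch: reject request paths that resolve outside the static directory.
function resolveStaticPath(staticDir: string, requestPath: string): string | null {
  const base = resolve(staticDir);
  const candidate = resolve(join(base, normalize(requestPath)));
  // "../../etc/passwd" resolves outside `base`, so it falls through to null.
  return candidate === base || candidate.startsWith(base + sep) ? candidate : null;
}

// resolveStaticPath("assets", "logo.png")       → ".../assets/logo.png"
// resolveStaticPath("assets", "../secrets.env") → null (answered with a 403)
```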
---

---
url: /reference/servers.md
description: >-
  Server class and the built-in transports — HTTP, WebSocket, CLI, and MCP via
  Bun.serve.
---

# Servers

Source: `backend/classes/Server.ts`, `backend/servers/web.ts`, `backend/initializers/mcp.ts`

Servers are the transport layer — they accept incoming connections and route them to actions. The framework ships with a web server (HTTP + WebSocket via `Bun.serve`), a CLI entry point, and an MCP server for AI agents. You could add others (gRPC, raw TCP, etc.) by extending the `Server` base class.

## Server Base Class

```ts
abstract class Server<T> {
  name: string;

  /** The underlying server object (e.g., Bun.Server) */
  server?: T;

  abstract initialize(): Promise<void>;
  abstract start(): Promise<void>;
  abstract stop(): Promise<void>;
}
```

Servers are auto-discovered from the `servers/` directory, just like actions and initializers.

## WebServer

The built-in web server uses `Bun.serve` to handle HTTP requests and WebSocket connections on the same port. It's configured via `config.server.web`.

### HTTP Request Flow

When an HTTP request comes in, the server:

1. Checks for a WebSocket upgrade — if the client is requesting a WebSocket connection, it upgrades transparently
2. Tries to serve a static file (if `staticFilesEnabled` is `true` and the path matches)
3. Matches the request path and method against registered action routes
4. Extracts params from path segments (`:param`), query string, and request body
5. Creates a `Connection`, calls `connection.act()` with the action name and params
6. Returns the JSON response with appropriate headers and status codes

Param loading order matters — later sources override earlier ones:

1. **Path params** (e.g., `/user/:id` → `{ id: "123" }`)
2. **Query params** (e.g., `?limit=10`)
3. **Body params** (JSON or FormData)

### WebSocket Message Flow

WebSocket connections are long-lived. After the initial HTTP upgrade, the client sends JSON messages with a `messageType` field:

| messageType | What it does |
| --- | --- |
| `"action"` | Execute an action — same validation and middleware as HTTP |
| `"subscribe"` | Subscribe to a PubSub channel (with middleware authorization) |
| `"unsubscribe"` | Unsubscribe from a channel |

Action messages include `action`, `params`, and an optional `messageId` that's echoed back in the response so the client can correlate requests.

### Static Files

The web server can serve static files from a configured directory (default: `assets/`). This is useful for serving the frontend build output or other static assets alongside the API.

### Configuration

All web server settings are in `config.server.web`:

| Key | Default | What it does |
| --- | --- | --- |
| `enabled` | `true` | Enable/disable the web server |
| `port` | `8080` | Listen port |
| `host` | `"localhost"` | Bind address |
| `apiRoute` | `"/api"` | URL prefix for action routes |
| `allowedOrigins` | `"*"` | CORS allowed origins |
| `staticFilesEnabled` | `true` | Serve static files |
| `staticFilesDirectory` | `"assets"` | Directory for static files |

## CLI "Server"

The CLI isn't technically a server — it's a separate entry point (`keryx.ts`) that uses [Commander](https://github.com/tj/commander.js) to register every action as a CLI command. But it goes through the same `Connection → act()` pipeline as HTTP and WebSocket. The server boots in `RUN_MODE.CLI`, which tells initializers to skip transport-specific setup (like binding to a port). After the action executes, the process exits with the appropriate exit code.
```bash
# List all available actions
./keryx.ts actions

# Run an action
./keryx.ts "user:create" --name Evan --email evan@example.com --password secret -q | jq

# Start the full server
./keryx.ts start
```

## MCP Server

Source: `backend/initializers/mcp.ts`, `backend/initializers/oauth.ts`

The [MCP (Model Context Protocol)](https://modelcontextprotocol.io) server exposes actions as tools for AI agents. Unlike the web server and CLI, MCP is implemented as an initializer rather than a `Server` subclass — but it follows the same pattern of accepting requests and routing them through `Connection → act()`.

When enabled (`MCP_SERVER_ENABLED=true`), the MCP initializer:

1. Registers every action (where `mcp.enabled !== false`) as an MCP tool
2. Converts action names from `:` to `-` format (e.g., `user:create` → `user-create`)
3. Converts Zod input schemas to JSON Schema for tool parameter definitions
4. Handles Streamable HTTP transport at the configured route (default `/mcp`)

Each authenticated client gets its own `McpServer` instance, tracked by the `mcp-session-id` header.

### Authentication

MCP uses OAuth 2.1 with PKCE for authentication. The OAuth initializer (`backend/initializers/oauth.ts`) provides the required endpoints:

| Endpoint | Method | Purpose |
| --- | --- | --- |
| `/.well-known/oauth-protected-resource` | GET | Resource metadata (RFC 9728) |
| `/.well-known/oauth-authorization-server` | GET | Authorization server metadata |
| `/oauth/register` | POST | Dynamic client registration |
| `/oauth/authorize` | GET | Authorization page (login/signup) |
| `/oauth/authorize` | POST | Process login/signup |
| `/oauth/token` | POST | Exchange code for access token |

The authorization page is rendered from Mustache templates in `backend/templates/`. Actions tagged with `mcp.isLoginAction` or `mcp.isSignupAction` handle the actual authentication during the OAuth flow.

### Request Flow

1. MCP client sends a POST to `/mcp` with `Authorization: Bearer <token>`
2. The initializer verifies the token against Redis (`oauth:token:{token}`)
3. A new `Connection` is created with type `"mcp"` and the authenticated user's session
4. Action params are extracted from the MCP tool call arguments
5. `connection.act()` executes the action through the standard middleware pipeline
6. The result is returned as an MCP tool response

### Configuration

| Key | Env Var | Default |
| --- | --- | --- |
| `enabled` | `MCP_SERVER_ENABLED` | `false` |
| `route` | `MCP_SERVER_ROUTE` | `"/mcp"` |
| `oauthClientTtl` | `MCP_OAUTH_CLIENT_TTL` | `2592000` |
| `oauthCodeTtl` | `MCP_OAUTH_CODE_TTL` | `300` |

See the [MCP guide](/guide/mcp) for full usage details.
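To make the per-action MCP flags mentioned above concrete, here is a sketch. The exact shape of the `mcp` property is assumed from the flags named in this section (`mcp.enabled`, `mcp.isLoginAction`), the action names are invented, and imports are omitted as in the other action examples:

```ts
// Sketch: opting actions in or out of MCP (assumed property shape).
export class SessionCreate implements Action {
  name = "session:create";
  web = { route: "/session", method: HTTP_METHOD.POST };
  mcp = { isLoginAction: true }; // handles login during the OAuth authorize step
  // ...
}

export class AdminPurge implements Action {
  name = "admin:purge";
  mcp = { enabled: false }; // never exposed as an MCP tool
  // ...
}
```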
---

---
url: /guide/testing.md
description: 'Testing with Bun''s built-in test runner — real HTTP requests, no mocking.'
---

# Testing

We don't mock the server. That's a deliberate choice — if you're testing an API, you should be making real HTTP requests against a real running server. Now that Bun includes `fetch` out of the box, this is trivially easy.

## Test Structure

Each test file boots and stops the full server in `beforeAll`/`afterAll`. Tests use dynamic port binding (`WEB_SERVER_PORT=0`) so each file gets a random available port — no conflicts when running multiple test files:

```ts
import { beforeAll, afterAll, test, expect } from "bun:test";
import { api } from "../../api";
import { serverUrl, HOOK_TIMEOUT } from "../setup";

let url: string;

beforeAll(async () => {
  await api.start();
  url = serverUrl();
}, HOOK_TIMEOUT);

afterAll(async () => {
  await api.stop();
}, HOOK_TIMEOUT);

test("status endpoint returns server info", async () => {
  const res = await fetch(url + "/api/status");
  const body = (await res.json()) as ActionResponse;

  expect(res.status).toBe(200);
  expect(body.name).toBe("server");
  expect(body.uptime).toBeGreaterThan(0);
});
```

Yes, this means each test file starts the entire server — database connections, Redis, the works. It's slower than unit testing with mocks, but you're testing what actually happens when a client hits your API. I'll take that tradeoff every time.

## Test Helpers

The `backend/__tests__/setup.ts` file provides helpers used across the test suite:

* **`serverUrl()`** — Returns the actual URL the web server bound to (with resolved port). Call after `api.start()`.
* **`HOOK_TIMEOUT`** — A generous timeout (15s) for `beforeAll`/`afterAll` hooks, since they connect to Redis, Postgres, run migrations, etc. Pass as the second argument to `beforeAll`/`afterAll`.
* **`waitFor(condition, { interval, timeout })`** — Polls a condition function until it returns `true`, or throws after a timeout. Use this instead of fixed `Bun.sleep()` calls when waiting for async side effects like background tasks:

```ts
await waitFor(
  async () => {
    const result = await db.query(
      "SELECT count(*) FROM jobs WHERE status = 'done'",
    );
    return result.count > 0;
  },
  { interval: 100, timeout: 5000 },
);
```

## Running Tests

```bash
# all backend tests
cd backend && bun test

# a single file
cd backend && bun test __tests__/actions/user.test.ts

# full CI — lint + test both frontend and backend
bun run ci
```

Tests run non-concurrently to avoid port conflicts. Each test file gets the server to itself.

## Making Requests

Just use `fetch`. Here's a typical test for creating a user:

```ts
test("create a user", async () => {
  const res = await fetch(url + "/api/user", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      name: "Test User",
      email: "test@example.com",
      password: "password123",
    }),
  });

  const body = await res.json();
  expect(res.status).toBe(200);
  expect(body.user.name).toBe("Test User");
});
```

Nothing special — it's the same `fetch` you'd use in a browser or a Bun script.

## Database Setup

Tests typically clear the database before running to ensure a clean slate:

```ts
beforeAll(async () => {
  await api.start();
  await api.db.clearDatabase();
});
```

`clearDatabase()` truncates all tables with `RESTART IDENTITY CASCADE`. It refuses to run when `NODE_ENV=production`, so you can't accidentally nuke your production data.

You'll need a separate test database:

```bash
createdb keryx-test
```

Set `DATABASE_URL_TEST` in your environment (or `backend/.env`) to point at it.

## Gotcha: Stale Processes

If you're changing code but your tests are still seeing old behavior… you probably have a stale server process running from a previous dev session. This has bitten me more than once:

```bash
ps aux | grep "bun keryx" | grep -v grep
kill -9 <pid>
```

Check for old processes whenever code changes aren't being reflected. It'll save you hours of debugging.