
# Testing

We don't mock the server. That's a deliberate choice — if you're testing an API, you should be making real HTTP requests against a real running server. Now that Bun includes fetch out of the box, this is trivially easy.

## Test Structure

Each test file boots and stops the full server in beforeAll/afterAll. Tests use dynamic port binding (WEB_SERVER_PORT=0) so each file gets a random available port — no conflicts when running multiple test files:

```ts
import { afterAll, beforeAll, expect, test } from "bun:test";
import { api } from "keryx";
import { serverUrl, HOOK_TIMEOUT } from "../setup";
// ActionResponse and Status are the project's response types;
// import them from wherever your project exports them.

let url: string;

beforeAll(async () => {
  await api.start();
  url = serverUrl();
}, HOOK_TIMEOUT);

afterAll(async () => {
  await api.stop();
}, HOOK_TIMEOUT);

test("status endpoint returns server info", async () => {
  const res = await fetch(url + "/api/status");
  const body = (await res.json()) as ActionResponse<Status>;

  expect(res.status).toBe(200);
  expect(body.name).toBe("server");
  expect(body.uptime).toBeGreaterThan(0);
});
```

Yes, this means each test file starts the entire server — database connections, Redis, the works. It's slower than unit testing with mocks, but you're testing what actually happens when a client hits your API. I'll take that tradeoff every time.

## Test Helpers

The `backend/__tests__/setup.ts` file provides helpers used across the test suite:

- `serverUrl()` — Returns the actual URL the web server bound to (with the resolved port). Call it after `api.start()`.
- `HOOK_TIMEOUT` — A generous timeout (15s) for `beforeAll`/`afterAll` hooks, since they connect to Redis and Postgres, run migrations, etc. Pass it as the second argument to `beforeAll`/`afterAll`.
- `waitFor(condition, { interval, timeout })` — Polls a condition function until it returns true, or throws after a timeout. Use it instead of fixed `Bun.sleep()` calls when waiting for async side effects like background tasks:

```ts
await waitFor(
  async () => {
    const result = await db.query(
      "SELECT count(*) FROM jobs WHERE status = 'done'",
    );
    return result.count > 0;
  },
  { interval: 100, timeout: 5000 },
);
```
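A helper like `waitFor` is only a few lines of polling; here is a minimal sketch of the idea (the actual implementation in `setup.ts` may differ):

```ts
// Illustrative sketch of a polling waitFor helper; the real helper in
// backend/__tests__/setup.ts may differ in details.
export async function waitFor(
  condition: () => boolean | Promise<boolean>,
  { interval = 100, timeout = 5000 } = {},
): Promise<void> {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    // Re-check the condition on every tick until it passes.
    if (await condition()) return;
    await new Promise((r) => setTimeout(r, interval));
  }
  throw new Error(`waitFor: condition not met within ${timeout}ms`);
}
```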

## Running Tests

```bash
# all backend tests
cd backend && bun test

# a single file
cd backend && bun test __tests__/actions/user.test.ts

# full CI — lint + test both frontend and backend
bun run ci
```

Tests run non-concurrently to avoid port conflicts. Each test file gets the server to itself.

## Making Requests

Just use fetch. Here's a typical test for creating a user:

```ts
test("create a user", async () => {
  const res = await fetch(url + "/api/user", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      name: "Test User",
      email: "test@example.com",
      password: "password123",
    }),
  });

  const body = await res.json();
  expect(res.status).toBe(200);
  expect(body.user.name).toBe("Test User");
});
```

Nothing special — it's the same fetch you'd use in a browser or a Bun script.

## Database Setup

Tests typically clear the database before running to ensure a clean slate:

```ts
beforeAll(async () => {
  await api.start();
  await api.db.clearDatabase();
});
```

`clearDatabase()` truncates all tables with `RESTART IDENTITY CASCADE`. It refuses to run when `NODE_ENV=production`, so you can't accidentally nuke your production data.

You'll need a separate test database:

```bash
createdb keryx-test
```

Set `DATABASE_URL_TEST` in your environment (or `backend/.env`) to point at it.
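For example, in `backend/.env` (the connection URL below is an assumption; adjust host, port, and credentials for your setup):

```bash
# backend/.env — example value only; adjust for your environment
DATABASE_URL_TEST=postgres://localhost:5432/keryx-test
```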

## Testing Authenticated Endpoints

Most endpoints require a session. The pattern is: create a user, log in, then pass the session cookie on subsequent requests:

```ts
import { config } from "../../config";

test("authenticated request", async () => {
  // Create a user
  await fetch(url + "/api/user", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      name: "Test User",
      email: "test@example.com",
      password: "password123",
    }),
  });

  // Log in
  const sessionRes = await fetch(url + "/api/session", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      email: "test@example.com",
      password: "password123",
    }),
  });
  const sessionBody =
    (await sessionRes.json()) as ActionResponse<SessionCreate>;
  const sessionId = sessionBody.session.id;

  // Make an authenticated request
  const res = await fetch(url + "/api/user", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Cookie: `${config.session.cookieName}=${sessionId}`,
    },
    body: JSON.stringify({ name: "New Name" }),
  });

  expect(res.status).toBe(200);
});
```

The session ID comes from the login response, and you pass it as a Cookie header. This is the same cookie the browser would send automatically.
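If many tests make authenticated requests, a small helper that merges the session cookie into fetch options keeps them tidy. This is a hypothetical helper, not part of keryx; `cookieName` mirrors `config.session.cookieName` from the example above:

```ts
// Hypothetical helper: merge a session cookie into a fetch RequestInit,
// preserving any headers and options the caller already set.
export function withSession(
  cookieName: string,
  sessionId: string,
  init: RequestInit = {},
): RequestInit {
  return {
    ...init,
    headers: {
      ...(init.headers as Record<string, string> | undefined),
      Cookie: `${cookieName}=${sessionId}`,
    },
  };
}
```

A call then reads `fetch(url + "/api/user", withSession(config.session.cookieName, sessionId, { method: "POST" }))`.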

## Testing WebSocket Connections

WebSocket tests connect to the same server and send JSON messages:

```ts
test("websocket action", async () => {
  const wsUrl = url.replace("http", "ws");
  const ws = new WebSocket(wsUrl);

  await new Promise<void>((resolve) => {
    ws.onopen = () => resolve();
  });

  const responsePromise = new Promise<any>((resolve) => {
    ws.onmessage = (event) => resolve(JSON.parse(event.data));
  });

  ws.send(
    JSON.stringify({
      messageType: "action",
      action: "status",
      messageId: "test-1",
    }),
  );

  const response = await responsePromise;
  expect(response.messageId).toBe("test-1");
  expect(response.name).toBe("server");

  ws.close();
});
```

For channel subscriptions, send a subscribe message and then listen for broadcasts:

```ts
ws.send(
  JSON.stringify({
    messageType: "subscribe",
    channel: "messages",
  }),
);
```
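Waiting for the broadcast is just another promise. A small helper (hypothetical, not part of keryx) makes that reusable across tests:

```ts
// Hypothetical helper: resolve with the next JSON-parsed message
// received on the given socket.
function nextMessage(ws: WebSocket): Promise<any> {
  return new Promise((resolve) => {
    ws.onmessage = (event) => resolve(JSON.parse(String(event.data)));
  });
}
```

After subscribing, `const broadcast = await nextMessage(ws);` and assert on whatever fields your broadcast payload carries.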

## Testing Background Tasks

Use waitFor() to poll for side effects from background tasks:

```ts
test("cleanup task removes old messages", async () => {
  // Insert test data...

  // Enqueue the task
  await api.actions.enqueue("messages:cleanup", { age: 1000 });

  // Wait for the side effect
  await waitFor(
    async () => {
      const remaining = await api.db.db.select().from(messages);
      return remaining.length === 0;
    },
    { interval: 100, timeout: 5000 },
  );
});
```

## Gotcha: Stale Processes

If you're changing code but your tests are still seeing old behavior… you probably have a stale server process running from a previous dev session. This has bitten me more than once:

```bash
ps aux | grep "bun keryx" | grep -v grep
kill -9 <PIDs>
```
Check for old processes whenever code changes aren't being reflected. It'll save you hours of debugging.

Released under the MIT License.