
Testing

We don't mock the server. That's a deliberate choice — if you're testing an API, you should be making real HTTP requests against a real running server. Since Bun ships fetch out of the box, this is trivial.

Test Structure

Each test file boots and stops the full server in beforeAll/afterAll. Tests use dynamic port binding (WEB_SERVER_PORT=0) so each file gets a random available port — no conflicts when running multiple test files:

ts
import { beforeAll, afterAll, expect, test } from "bun:test";

import { api } from "../../api";
import { serverUrl, HOOK_TIMEOUT } from "../setup";
// ActionResponse and Status are the project's shared response types

let url: string;

beforeAll(async () => {
  await api.start();
  url = serverUrl();
}, HOOK_TIMEOUT);

afterAll(async () => {
  await api.stop();
}, HOOK_TIMEOUT);

test("status endpoint returns server info", async () => {
  const res = await fetch(url + "/api/status");
  const body = (await res.json()) as ActionResponse<Status>;

  expect(res.status).toBe(200);
  expect(body.name).toBe("server");
  expect(body.uptime).toBeGreaterThan(0);
});

Yes, this means each test file starts the entire server — database connections, Redis, the works. It's slower than unit testing with mocks, but you're testing what actually happens when a client hits your API. I'll take that tradeoff every time.

Test Helpers

The backend/__tests__/setup.ts file provides helpers used across the test suite:

  • serverUrl() — Returns the actual URL the web server bound to (with resolved port). Call after api.start().
  • HOOK_TIMEOUT — A generous timeout (15s) for beforeAll/afterAll hooks, since they connect to Redis, Postgres, run migrations, etc. Pass as the second argument to beforeAll/afterAll.
  • waitFor(condition, { interval, timeout }) — Polls a condition function until it returns true, or throws after a timeout. Use this instead of fixed Bun.sleep() calls when waiting for async side effects like background tasks:
ts
await waitFor(
  async () => {
    const result = await api.db.query(
      "SELECT count(*) FROM jobs WHERE status = 'done'",
    );
    return result.count > 0;
  },
  { interval: 100, timeout: 5000 },
);
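
Under the hood, waitFor is just a polling loop. Here's a minimal sketch of it and HOOK_TIMEOUT, assuming Bun.sleep for the delay. The real implementations live in backend/__tests__/setup.ts, and serverUrl() is omitted since it depends on how the server exposes its bound port:

ts
// Illustrative sketch, not the actual setup.ts source.
export const HOOK_TIMEOUT = 15_000; // 15s: enough for Redis, Postgres, migrations

export async function waitFor(
  condition: () => boolean | Promise<boolean>,
  { interval = 100, timeout = 5000 } = {},
): Promise<void> {
  const deadline = Date.now() + timeout;
  do {
    // Check the condition at least once, even with a tiny timeout.
    if (await condition()) return;
    await Bun.sleep(interval);
  } while (Date.now() < deadline);
  throw new Error(`waitFor: condition not met within ${timeout}ms`);
}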

Running Tests

bash
# all backend tests
cd backend && bun test

# a single file
cd backend && bun test __tests__/actions/user.test.ts

# full CI — lint + test both frontend and backend
bun run ci

Tests run non-concurrently to avoid port conflicts. Each test file gets the server to itself.

Making Requests

Just use fetch. Here's a typical test for creating a user:

ts
test("create a user", async () => {
  const res = await fetch(url + "/api/user", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      name: "Test User",
      email: "test@example.com",
      password: "password123",
    }),
  });

  const body = await res.json();
  expect(res.status).toBe(200);
  expect(body.user.name).toBe("Test User");
});

Nothing special — it's the same fetch you'd use in a browser or a Bun script.

Database Setup

Tests typically clear the database before running to ensure a clean slate:

ts
beforeAll(async () => {
  await api.start();
  await api.db.clearDatabase();
}, HOOK_TIMEOUT);

clearDatabase() truncates all tables with RESTART IDENTITY CASCADE. It refuses to run when NODE_ENV=production, so you can't accidentally nuke your production data.
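
If you want something similar in your own project, a minimal sketch might look like the following. The query parameter is a stand-in for your database client, not the project's actual API:

ts
// Illustrative sketch of a clearDatabase() with the production guard
// described above; the real implementation may differ.
async function clearDatabase(query: (sql: string) => Promise<any[]>) {
  if (process.env.NODE_ENV === "production") {
    throw new Error("clearDatabase() refuses to run when NODE_ENV=production");
  }

  // Find every table in the public schema.
  const rows = await query(
    "SELECT tablename FROM pg_tables WHERE schemaname = 'public'",
  );

  // Truncate them all in one statement, resetting sequences and
  // cascading through foreign-key references.
  const tables = rows.map((r) => `"${r.tablename}"`).join(", ");
  if (tables) {
    await query(`TRUNCATE ${tables} RESTART IDENTITY CASCADE`);
  }
}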

You'll need a separate test database:

bash
createdb keryx-test

Set DATABASE_URL_TEST in your environment (or backend/.env) to point at it.
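
For example, assuming a local Postgres with default settings:

bash
# backend/.env (example value; adjust host and credentials to your setup)
DATABASE_URL_TEST=postgres://localhost:5432/keryx-test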

Gotcha: Stale Processes

If you're changing code but your tests are still seeing old behavior… you probably have a stale server process running from a previous dev session. This has bitten me more than once:

bash
ps aux | grep "bun keryx" | grep -v grep
kill -9 <PIDs>

Check for old processes whenever code changes aren't being reflected. It'll save you hours of debugging.
