# Caching
Keryx already boots a Redis connection for you. `api.redis.redis` is a fully configured ioredis client, available as soon as your initializers finish. You don't need a separate caching library: Redis is your caching layer.
## How Redis is Initialized
The redis initializer creates two ioredis connections at startup:
| Connection | Access | Purpose |
|---|---|---|
| `api.redis.redis` | General commands | Caching, ad-hoc queries, anything your app needs |
| `api.redis.subscription` | Pub/Sub subscriber | Reserved for channels; don't use this for caching |
Both connect to the URL in `config.redis.connectionString` (defaults to `redis://localhost:6379/0`, overridable via `REDIS_URL`). The initializer verifies connectivity at boot; if Redis is unreachable, the server won't start.
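For example, pointing a deployment at a non-default Redis is just an environment variable. (`npm start` here stands in for whatever command starts your server.)

```shell
# Use database 1 on a remote Redis host instead of the local default
REDIS_URL="redis://cache.internal:6379/1" npm start
```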
Because `api.redis.redis` is a standard ioredis instance, every Redis command is available: `GET`, `SET`, `DEL`, `HSET`, `LPUSH`, `EXPIRE`, `SETNX`, pipelines, Lua scripts, streams. It's the full Redis API, with no wrapper to learn.
## Cache-Aside Pattern
The most common approach: check Redis before hitting the database, populate the cache on a miss.
```typescript
import { api } from "keryx";
import { eq } from "drizzle-orm";
import { users } from "../schema";

const CACHE_TTL = 300; // 5 minutes

export async function getUserById(id: number) {
  const cacheKey = `cache:user:${id}`;

  // Try cache first
  const cached = await api.redis.redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // Miss: query the database
  const [user] = await api.db.db
    .select()
    .from(users)
    .where(eq(users.id, id));

  if (user) {
    await api.redis.redis.set(cacheKey, JSON.stringify(user), "EX", CACHE_TTL);
  }

  return user ?? null;
}
```

Invalidate when the data changes:
```typescript
export async function updateUser(id: number, data: Partial<User>) {
  const [updated] = await api.db.db
    .update(users)
    .set(data)
    .where(eq(users.id, id))
    .returning();

  await api.redis.redis.del(`cache:user:${id}`);
  return updated;
}
```

## Cache Helper
Wrap the cache-aside pattern into a reusable helper so your actions stay concise:
```typescript
import { api } from "keryx";

export async function cached<T>(
  key: string,
  ttl: number,
  fn: () => Promise<T>,
): Promise<T> {
  const hit = await api.redis.redis.get(key);
  if (hit) return JSON.parse(hit) as T;

  const result = await fn();
  await api.redis.redis.set(key, JSON.stringify(result), "EX", ttl);
  return result;
}
```

Use it in read-heavy actions:
```typescript
export class UserShow implements Action {
  name = "user:show";
  web = { route: "/user/:userId", method: HTTP_METHOD.GET };
  inputs = z.object({ userId: z.coerce.number() });

  async run(params: ActionParams<UserShow>) {
    const user = await cached(
      `cache:user:${params.userId}`,
      60,
      () => UserOps.findById(params.userId),
    );
    return { user };
  }
}
```

The helper checks Redis first, calls your function on a miss, and stores the result with a TTL, all in one call. Because the caching happens inside `run()`, the action controls exactly what gets cached and how the key is built.
## Invalidation

### By Key
When you know exactly which cache entry changed:
```typescript
await api.redis.redis.del(`cache:user:${userId}`);
```

### By Pattern
Need to bust all cached responses for a resource? Use a key prefix convention and `scanStream`:
```typescript
export async function invalidateUserCache(userId: number) {
  const stream = api.redis.redis.scanStream({
    match: `cache:*:*user*:${userId}*`,
    count: 100,
  });

  for await (const keys of stream) {
    if (keys.length > 0) await api.redis.redis.del(...keys);
  }
}
```

### Version-Based
For high-throughput invalidation, embed a version counter in your cache keys and bump it instead of scanning. A single `INCR` replaces an unbounded `SCAN`:
```typescript
const version = (await api.redis.redis.get(`version:user:${id}`)) ?? "0";
const cacheKey = `cache:user:${id}:v${version}`;

// To invalidate: bump the version. Old keys expire naturally via TTL.
await api.redis.redis.incr(`version:user:${id}`);
```

## Beyond GET/SET
Because you have the full ioredis client, you're not limited to string caching:
| Technique | Redis command | Use case |
|---|---|---|
| Atomic set-if-absent | `SETNX` / `SET ... NX` | Distributed locks, deduplication |
| Hash fields | `HSET` / `HGET` / `HGETALL` | Cache objects without serializing the whole thing |
| Sorted sets | `ZADD` / `ZRANGE` | Leaderboards, rate-limit sliding windows |
| Pipelines | `pipeline().get().set().exec()` | Batch multiple commands in one round trip |
| Lua scripts | `eval()` / `evalsha()` | Atomic read-modify-write operations |
| Expiry | `EXPIRE` / `PEXPIRE` | TTL on any key type, not just strings |
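As one sketch of the first row, a `SET ... NX` lock can be built directly on the client. This is an illustration, not a Keryx API: `withLock` and the `RedisLike` type are names invented here, and in your app you would pass `api.redis.redis` as the client.

```typescript
import { randomUUID } from "node:crypto";

// Minimal slice of the ioredis surface this sketch needs;
// api.redis.redis satisfies it.
export type RedisLike = {
  set(key: string, value: string, ex: "EX", ttl: number, nx: "NX"): Promise<"OK" | null>;
  eval(script: string, numKeys: number, ...args: (string | number)[]): Promise<unknown>;
};

// Run fn only if we win the lock; return null if another holder has it.
export async function withLock<T>(
  client: RedisLike,
  key: string,
  ttlSeconds: number,
  fn: () => Promise<T>,
): Promise<T | null> {
  const token = randomUUID(); // unique owner token
  // SET key token EX ttl NX: succeeds only if the key does not exist yet
  const acquired = await client.set(key, token, "EX", ttlSeconds, "NX");
  if (acquired !== "OK") return null;
  try {
    return await fn();
  } finally {
    // Release only if we still own it: atomic check-and-delete via Lua
    await client.eval(
      'if redis.call("get", KEYS[1]) == ARGV[1] then return redis.call("del", KEYS[1]) else return 0 end',
      1,
      key,
      token,
    );
  }
}
```

Calling `withLock(api.redis.redis, "lock:report:42", 30, buildReport)` from two processes runs `buildReport` in at most one of them, and the TTL clears the lock even if the holder crashes mid-work.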
## Why Not a Cache Abstraction?
You might expect a `cache.get()` / `cache.set()` wrapper. We intentionally skip that layer: ioredis already has a clean API, and a thin wrapper just hides the Redis features you'll eventually need. If you know Redis, you already know the Keryx caching API.