The other day, I was trying to think of something to build to practice coding in Go. I'd recently discovered that Lambda supports the Go runtime, and after migrating one of my Node Lambdas to Go, I saw nearly a 50% reduction in memory usage. So, I thought I'd create another utility Lambda in Go just for practice. I looked at the free course of the month on CodeCrafters for inspiration. It was "Build your own shell", which I wasn't really interested in. There was another course behind a paywall, though: "Build your own Redis". I realized I'd already done something similar a few years ago in TypeScript, where I implemented a RESP parser for a very tiny subset of Redis commands.
I started thinking along the lines of Lambda + Redis... and wondered if Lambdas could ever be stateful. That's when I discovered Lambda durable functions. I read the documentation and realized this wasn't the kind of statefulness I was looking for. Then... durable... Cloudflare Durable Objects? I'd heard a lot about them before but only had a faint understanding of how the service worked. I looked it up and learned that your Cloudflare worker basically gets a SQLite database to work with, which is where the "state" would live. Where does Redis fit in, then? I've always been fascinated by Redis and loved working with it (I even made a contribution to its documentation—see my GitHub!).
So I put two and two together. I could write a compute service in Go that takes in Redis commands, does the parsing and translation into SQL queries, and then forwards them to a Durable Object instance, which executes the commands (SQL queries), returns the response back to the compute service, which does the final post-processing before responding to the client. Now we have a serverless, self-hostable, and (basically) free (on the Cloudflare free plan) Redis clone!
But why? When all is said and done, would this project even matter? Or would it just land in my graveyard of repositories that never see the light of day after my last commit? That's when I had my fourth or fifth light bulb moment in a row.
I remembered when I was building tracker, I really wanted to try Redis out, but I didn't want to pay for a real Redis instance. Plus, I was hosting tracker on Vercel, so even if I did get a Redis instance, my serverless Next.js deployment wouldn't be able to hold a connection to it. That's how I found out about Upstash Redis -- basically a serverless Redis accessible over HTTP, and they have a free tier as well. I provisioned an instance there and wedged Redis into a project that literally had no need for it. I ended up removing it a few months later.
Back to the present. What if I made my compute layer's API compatible with Upstash? Then, my Redis becomes (kind of) a drop-in replacement for Upstash Redis, and we could piggyback off Upstash's SDKs!
And so was born... meowdis.
High-level architecture
compute-go / compute-node (AWS Lambda / Cloudflare Worker)
- Authenticated POST endpoints accept Redis commands
- Commands are parsed and transformed into a SQLite query
- SQLite query is passed on to the storage service
- Results are transformed back into Redis response format
- Responses are returned to the client
- Bearer key authorization (static keys stored in env vars)
storage-sqlite (Cloudflare Durable Objects)
- Only accessible to compute services
- Receives SQLite queries from them
- Executes queries against built-in SQLite database
- Returns results back to the compute service
Design decisions
Tables store data only; a central keys table owns each key's metadata. This enforces the Redis rule that a key can hold only one type at a time.
Data models
A central keys table owns the type and expiry for every key, and the type-specific tables store only data. This enforces the Redis rule that a key can hold only one type at a time: writing to a key with a command of a different type returns a WRONGTYPE error. Expired keys are deleted lazily on the next read.
```sql
CREATE TABLE keys (
    key TEXT PRIMARY KEY,
    type TEXT NOT NULL CHECK(type IN ('string', 'hash', 'list', 'set')),
    expires_at INTEGER -- unix timestamp, NULL = no expiry
);

CREATE TABLE strings (
    key TEXT PRIMARY KEY REFERENCES keys(key) ON DELETE CASCADE,
    value TEXT NOT NULL
);

CREATE TABLE hashes (
    key TEXT NOT NULL REFERENCES keys(key) ON DELETE CASCADE,
    field TEXT NOT NULL,
    value TEXT NOT NULL,
    PRIMARY KEY (key, field)
);

CREATE TABLE lists (
    key TEXT NOT NULL REFERENCES keys(key) ON DELETE CASCADE,
    "index" REAL NOT NULL, -- float for O(1) prepend/append tricks; quoted since INDEX is an SQL keyword
    value TEXT NOT NULL,
    PRIMARY KEY (key, "index")
);

CREATE TABLE sets (
    key TEXT NOT NULL REFERENCES keys(key) ON DELETE CASCADE,
    member TEXT NOT NULL,
    PRIMARY KEY (key, member)
);
```
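The keys table drives both the lazy expiry and the WRONGTYPE check. As a rough sketch of what the compute layer's read path could emit first (the Statement type and helper below are illustrative, not meowdis's actual code):

```go
package main

import "fmt"

// Statement is a parameterized SQL statement sent to the storage layer.
// (Illustrative type -- the real wire format may differ.)
type Statement struct {
	SQL    string
	Params []any
}

// readPreamble returns the statements a read command could run first:
// delete the key if its expiry has passed (lazy expiry), then fetch its
// type so the compute layer can raise WRONGTYPE before touching data.
func readPreamble(key string, now int64) []Statement {
	return []Statement{
		{
			SQL:    "DELETE FROM keys WHERE key = ? AND expires_at IS NOT NULL AND expires_at <= ?",
			Params: []any{key, now},
		},
		{
			SQL:    "SELECT type FROM keys WHERE key = ?",
			Params: []any{key},
		},
	}
}

func main() {
	for _, s := range readPreamble("foo", 1700000000) {
		fmt.Println(s.SQL)
	}
}
```

Because the DELETE cascades into the data tables, an expired key is fully gone before the type check even runs.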
Storage API
The compute layer sends a batch of SQL statements to the storage durable object. The request uses statements for a single command or pipeline for multiple independent commands.
Single command request:

```json
{
  "statements": [
    {
      "sql": "SELECT type, expires_at FROM keys WHERE key = ?",
      "params": ["foo"]
    },
    {
      "sql": "INSERT OR REPLACE INTO keys (key, type) VALUES (?, ?)",
      "params": ["foo", "string"]
    },
    {
      "sql": "INSERT OR REPLACE INTO strings (key, value) VALUES (?, ?)",
      "params": ["foo", "bar"]
    }
  ]
}
```

Pipeline request:

```json
{
  "pipeline": [
    {
      "statements": [
        {
          "sql": "INSERT OR REPLACE INTO keys (key, type) VALUES (?, ?)",
          "params": ["foo", "string"]
        },
        {
          "sql": "INSERT OR REPLACE INTO strings (key, value) VALUES (?, ?)",
          "params": ["foo", "bar"]
        }
      ]
    },
    {
      "statements": [
        {
          "sql": "UPDATE strings SET value = CAST(value AS INTEGER) + 1 WHERE key = ? RETURNING value",
          "params": ["counter"]
        }
      ]
    }
  ]
}
```
The storage layer executes each batch in its own transactionSync call -- pipeline items run in separate transactions, so they are independent and can fail separately:

```js
// both snippets live inside the Durable Object class, so `this.ctx` is available
execBatch(statements) {
  const results = [];
  this.ctx.storage.transactionSync(() => {
    for (const { sql, params } of statements) {
      results.push([...this.ctx.storage.sql.exec(sql, ...params)]);
    }
  });
  return results;
}

// in the request handler:
// single command
if (body.statements) return { results: this.execBatch(body.statements) };
// pipeline: one transaction per item
if (body.pipeline)
  return {
    results: body.pipeline.map(({ statements }) => this.execBatch(statements)),
  };
```
Single command response:

```json
{
  "results": [[{ "type": "string", "expires_at": null }], [], []]
}
```

Pipeline response:

```json
{
  "results": [[[], []], [[{ "value": "1" }]]]
}
```
The compute layer picks whichever result set it needs. DEL only touches the keys table; the ON DELETE CASCADE constraints propagate the deletion to all data tables.
Translator
The translator is a pure function: (command, args) → []Statement. Each command has its own handler that parses args and returns the appropriate SQL statements.
Options like NX and EX are parsed from the args and select different SQL templates. changes() is used to chain dependent statements—the second statement is a no-op if the first affected zero rows.
Supported SET options:
| option | description |
|---|---|
| NX | only set if key does not already exist |
| XX | only set if key already exists |
| GET | return the old value before setting |
| EX seconds | expire after n seconds |
| PX milliseconds | expire after n milliseconds |
| EXAT timestamp | expire at unix timestamp (seconds) |
| PXAT timestamp | expire at unix timestamp (milliseconds) |
| KEEPTTL | retain the existing expiry |
Not supported: IFEQ, IFNE, IFDEQ, IFDNE (require hash digest computation outside SQLite).
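To make the changes() chaining concrete, here's a sketch of what SET key value NX might translate to. The SQL templates are my assumptions based on the schema and the chaining technique described above, not meowdis's actual output:

```go
package main

import "fmt"

// Statement is a parameterized SQL statement (illustrative type).
type Statement struct {
	SQL    string
	Params []any
}

// translateSetNX sketches SET key value NX: the first INSERT only
// succeeds when the key is absent, and the second statement guards on
// SQLite's changes(), so it becomes a no-op if the first statement
// inserted zero rows.
func translateSetNX(key, value string) []Statement {
	return []Statement{
		{
			SQL:    "INSERT OR IGNORE INTO keys (key, type) VALUES (?, 'string')",
			Params: []any{key},
		},
		{
			SQL:    "INSERT OR REPLACE INTO strings (key, value) SELECT ?, ? WHERE changes() > 0",
			Params: []any{key, value},
		},
	}
}

func main() {
	for _, s := range translateSetNX("foo", "bar") {
		fmt.Printf("%s %v\n", s.SQL, s.Params)
	}
}
```

When the key already exists, changes() reports 0 for the ignored INSERT, the second statement's SELECT produces no rows, and nothing is written -- exactly the NX semantics.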
Supported EXPIRE / EXPIREAT options:
| option | description |
|---|---|
| NX | only set expiry if key has no expiry |
| XX | only set expiry if key already has an expiry |
| GT | only set expiry if new expiry is greater than current |
| LT | only set expiry if new expiry is less than current |
Supported LPOP / RPOP options:
| option | description |
|---|---|
| count | number of elements to pop (defaults to 1) |
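The float index in the lists table is what makes pushes cheap: prepending writes index = MIN(index) - 1, so no existing rows need renumbering. A sketch of how LPUSH could exploit this, assumed from the schema comment rather than taken from the actual implementation:

```go
package main

import "fmt"

// Statement is a parameterized SQL statement (illustrative type).
type Statement struct {
	SQL    string
	Params []any
}

// translateLPush sketches LPUSH key value: register the key as a list,
// then insert the element one below the current minimum index. COALESCE
// handles the empty-list case, and "index" is quoted because INDEX is
// an SQL keyword.
func translateLPush(key, value string) []Statement {
	return []Statement{
		{
			SQL:    "INSERT OR IGNORE INTO keys (key, type) VALUES (?, 'list')",
			Params: []any{key},
		},
		{
			SQL:    `INSERT INTO lists (key, "index", value) SELECT ?, COALESCE(MIN("index"), 0) - 1, ? FROM lists WHERE key = ?`,
			Params: []any{key, value, key},
		},
	}
}

func main() {
	for _, s := range translateLPush("queue", "job-1") {
		fmt.Printf("%s %v\n", s.SQL, s.Params)
	}
}
```

RPUSH is the mirror image with MAX(index) + 1, and a REAL index leaves room to insert between two elements without shifting anything.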
Implementation
I first wrote the compute service in Go, compute-go, which made HTTP calls over the internet to the storage service bound to the Durable Object instance, durable-object. But it could only be hosted on AWS Lambda, since Cloudflare Workers did not support the Go runtime, so I ported it to TypeScript and created compute-node. Because durable-object was a swappable storage layer, it worked with both compute-go and compute-node. I then modified compute-node to use RPC: with the Durable Object instance added as a binding, messages between the compute and storage services no longer had to travel over the public internet.
Next, I thought I could meld the two layers into a single, standalone service for better latency. The compute service on AWS talked to the DO over HTTP, so its latency was terrible from the get-go. The Cloudflare one fared better but still wasn't great: even though it reached the DO over Cloudflare's private infrastructure through the binding, the Worker executed in an edge location near me while the DO lived wherever it was first created (in my case, wherever my GitHub Actions workflow first ran -- probably Washington, D.C., and very far from me). So I ended up making meowdis/meowdis.
Numbers
Here's what the latency looked like across the three approaches, measured locally as a rough test:
compute-go (AWS Lambda → Durable Object over HTTP)

```
ping() = 'PONG' [166.9ms]
get('zion') = 'clairo' [159.8ms]
```

compute-node (Cloudflare Worker → Durable Object over RPC)

```
ping() = 'PONG' [165.6ms]
get('zion') = 'clairo' [161.2ms]
```

meowdis (unified — compute and storage in the same Worker)

```
ping() = 'PONG' [25.8ms]
get('zion') = 'clairo' [62.8ms]
```
The first two are essentially identical — the RPC binding between compute-node and the Durable Object is faster than HTTP, but the bottleneck is still the Worker running near me while the DO lives somewhere in the US. The unified meowdis drops that extra hop entirely, and the improvement is dramatic.
Finally...
You can check out meowdis on GitHub -- and deploy your own with one click!