Cloudflare D1 and Durable Objects: A Framework for Designing Strong Consistency Boundaries at the Edge with Distributed SQLite
Honestly, when I first seriously worked with edge computing, the data layer was what tripped me up first. Coming from Lambda + RDS, the idea of "code running close to the user" sounded appealing — but the irony was that DB queries still had to round-trip back to a Virginia region. No matter how fast the edge responded, adding hundreds of milliseconds of DB round-trip latency cut the value in half.
More concretely, I ran into bugs like these: items added to a shopping cart would disappear on refresh, and a rate limiter would occasionally and silently exceed its configured limit. I later realized these weren't simple implementation mistakes — they were symptoms of poorly designed data consistency boundaries. With Cloudflare's introduction of D1 and Durable Objects, there's now a structural way to approach this problem, and much of the Lambda + RDS irony has been resolved.
The core of this post is the criteria for deciding what data belongs in D1 versus what belongs in Durable Objects. I'll use an e-commerce product catalog, a real-time collaborative document, and a multi-tenant SaaS rate limiter as examples — feel free to map these to your own service architecture as you read.
Core Concepts
D1: Serverless SQLite Running at the Edge
D1 is a serverless SQL database that runs on Cloudflare Workers. Because it is built on SQLite internally, existing SQL knowledge from PostgreSQL or MySQL carries over almost directly, with little additional learning curve.
Writes are handled by a single Primary instance, and the Global Read Replication feature — which entered public beta in April 2025 — automatically provisions read replicas at PoPs around the world. Users in Korea read from the Tokyo replica, users in Europe from the Amsterdam replica, dramatically reducing read latency globally.
// D1 basic query — reads from the nearest replica
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const products = await env.DB.prepare(
      "SELECT id, name, price FROM products WHERE active = 1"
    ).all();
    return Response.json(products.results);
  },
};

There's an important caveat with replica-based reads. The "I just added something to my cart — why isn't it showing up?" bug comes from exactly here. Writes go to the Primary, but reads may hit a replica that hasn't yet synced. D1 addresses this structurally with Session Consistency.
Session Consistency: A guarantee that data you write within a session will always be readable within that same session. It is implemented with Lamport-style timestamps carried in a `bookmark` parameter. Passing `"first-primary"` to `withSession("first-primary")` routes the session's first query to the Primary, so the session starts from the latest committed state and maintains that consistency for subsequent reads. Use this mode — rather than the default session — for any flow that needs to read immediately after writing.
// When session consistency is needed — reading immediately after writing
const session = env.DB.withSession("first-primary");
await session
  .prepare("INSERT INTO cart_items (user_id, product_id) VALUES (?, ?)")
  .bind(userId, productId)
  .run();
// Same session → the data just written is guaranteed to be readable
const cart = await session
  .prepare("SELECT * FROM cart_items WHERE user_id = ?")
  .bind(userId)
  .all();

Durable Objects: The Isolation Unit for State Requiring Strong Consistency
Durable Objects (DO) are stateful serverless compute units. Each instance runs single-threaded and uses SQLite as its internal storage. "One instance = one logical entity" is the core design principle.
I was confused by this at first too, but the most important property of DOs is Strict Serializability. Even when multiple requests arrive simultaneously, they are automatically processed in order within the instance. This means race conditions are structurally impossible — and that makes a decisive difference in scenarios like rate limiting or real-time collaboration.
Strict Serializability: The strongest consistency guarantee, where all operations have a single global ordering that also respects real-time order. DOs implement this via the single-threaded model. If your data is subject to concurrent write conflicts — counters, shared documents, real-time session state — DO is the right choice over D1.
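To make that guarantee concrete, here is a runtime-independent sketch in plain TypeScript, with no Cloudflare APIs. The `Storage` and `Serialized` classes are illustrative stand-ins: the first simulates an async data store, the second mimics the per-instance serialization a DO enforces.

```typescript
// Stand-in async storage (illustrative, not a Cloudflare API): reads and
// writes each take one timer tick, like a network round-trip.
class Storage {
  private value = 0;
  async get(): Promise<number> {
    await new Promise((r) => setTimeout(r, 10));
    return this.value;
  }
  async put(v: number): Promise<void> {
    await new Promise((r) => setTimeout(r, 10));
    this.value = v;
  }
}

// Unserialized read-modify-write: concurrent increments all read the same
// old value and overwrite each other (the classic lost update).
async function naiveIncrements(n: number): Promise<number> {
  const s = new Storage();
  await Promise.all(
    Array.from({ length: n }, async () => {
      const v = await s.get();
      await s.put(v + 1);
    })
  );
  return s.get();
}

// A promise chain that runs operations strictly one after another,
// mimicking the single-threaded processing inside a DO instance.
class Serialized {
  private tail: Promise<unknown> = Promise.resolve();
  run<T>(fn: () => Promise<T>): Promise<T> {
    const next = this.tail.then(fn);
    this.tail = next.catch(() => {});
    return next;
  }
}

async function serializedIncrements(n: number): Promise<number> {
  const s = new Storage();
  const gate = new Serialized();
  await Promise.all(
    Array.from({ length: n }, () =>
      gate.run(async () => s.put((await s.get()) + 1))
    )
  );
  return s.get();
}
```

Run five concurrent increments through each: the naive version loses updates (everyone reads 0 and writes 1), while the serialized version counts all five. A DO gives you the serialized behavior per instance without any explicit locking.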
The Fundamental Difference Between the Two
| Criterion | D1 | Durable Objects |
|---|---|---|
| Consistency level | Session-based sequential consistency | Strict serializability |
| Write structure | Single Primary, centralized | Isolated per instance |
| Read structure | Globally distributed replicas | Instance-local |
| Scale unit | Per database | Per entity |
| Operations tooling | Migrations & insights built-in | Must build yourself |
| Best fit | Shared reads, low write contention | Coordinating concurrent writes to a single entity |
This table naturally surfaces the decision criteria. Data with many reads and low write contention belongs in D1; data requiring coordination of concurrent writes to a single entity belongs in DO. Let's validate these criteria with actual code.
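The criteria reduce to a single question per piece of data. As a toy restatement (the `place` helper is purely illustrative, not any Cloudflare API):

```typescript
// Placement per the table's decision rule: concurrent write conflicts on a
// single entity call for a Durable Object; shared, read-heavy data with low
// write contention belongs in D1.
type Placement = "D1" | "Durable Object";

function place(concurrentWriteConflicts: boolean): Placement {
  return concurrentWriteConflicts ? "Durable Object" : "D1";
}

// The running examples of this post, classified by that single question
const examples: Array<[name: string, conflicts: boolean]> = [
  ["product catalog", false],
  ["collaborative document", true],
  ["rate limit counter", true],
  ["tenant billing plan", false],
];

for (const [name, conflicts] of examples) {
  console.log(`${name} → ${place(conflicts)}`);
}
```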
Practical Application
Example 1: E-commerce Global Product Catalog — Leveraging D1 Read Replication
This is a situation that comes up often in practice. Product lookups in e-commerce are overwhelmingly read-heavy, while writes like inventory changes or price updates are relatively rare. D1's global read replication is a perfect fit. Product listings are read quickly from the nearest replica, and session consistency is applied only for flows that require "reading immediately after writing" — like adding to a cart.
The examples below use Hono (a lightweight web framework similar to Express for Node.js) and Drizzle ORM (a type-safe ORM that supports both D1 and DO SQLite).
// src/routes/products.ts
import { Hono } from "hono";
import { drizzle } from "drizzle-orm/d1";
import { products, inventory, cartItems } from "../schema";
import { eq } from "drizzle-orm";

const app = new Hono<{ Bindings: Env }>();

// Product listing — read from nearest replica (fast)
app.get("/products", async (c) => {
  const db = drizzle(c.env.DB);
  const items = await db
    .select()
    .from(products)
    .where(eq(products.active, true));
  return c.json(items);
});

// Add to cart and immediately verify — session consistency required
app.post("/cart", async (c) => {
  const { productId, userId } = await c.req.json();
  // "first-primary": guarantees same read consistency as writes on Primary
  const session = c.env.DB.withSession("first-primary");
  const db = drizzle(session);
  // Check latest inventory (replica lag not acceptable here)
  const [item] = await db
    .select()
    .from(inventory)
    .where(eq(inventory.productId, productId));
  if (!item || item.stock < 1) {
    return c.json({ error: "Out of stock" }, 409);
  }
  // Atomically decrement inventory + add to cart
  await db.batch([
    db
      .update(inventory)
      .set({ stock: item.stock - 1 })
      .where(eq(inventory.productId, productId)),
    db.insert(cartItems).values({ userId, productId }),
  ]);
  // Same session → the item just added is guaranteed to appear
  const cart = await db
    .select()
    .from(cartItems)
    .where(eq(cartItems.userId, userId));
  return c.json(cart);
});

export default app;

| Code point | Description |
|---|---|
| `drizzle(c.env.DB)` | Default session — reads from nearest replica |
| `DB.withSession("first-primary")` | Guarantees same read consistency as Primary |
| `db.batch([...])` | D1 batch transaction — atomic execution |
Example 2: Real-time Collaborative Document — Using Durable Objects Exclusively
Document editing is different. When multiple people type in the same document simultaneously, conflict resolution is required. Designing "one document = one DO instance" means all edit operations are serialized on a single thread, making conflicts structurally impossible.
You'll often see two approaches to WebSocket handling: tracking connections manually in a `private sessions: Set<WebSocket>` field, or using `this.ctx.getWebSockets()`. In production, `getWebSockets()` — which works with the DO Hibernation API — is the correct pattern. With Hibernation, idle connections are evicted from memory and reactivated later, which is much more cost-efficient.
// src/durable-objects/document.ts
import { DurableObject } from "cloudflare:workers";

interface EditOperation {
  userId: string;
  position: number;
  text: string;
  type: "insert" | "delete";
  timestamp: number;
}

interface Operation {
  op_type: string;
  position: number;
  content: string | null;
}

export class DocumentDO extends DurableObject {
  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    this.ctx.blockConcurrencyWhile(async () => {
      this.ctx.storage.sql.exec(`
        CREATE TABLE IF NOT EXISTS operations (
          id INTEGER PRIMARY KEY AUTOINCREMENT,
          user_id TEXT NOT NULL,
          op_type TEXT NOT NULL,
          position INTEGER NOT NULL,
          content TEXT,
          applied_at INTEGER NOT NULL
        )
      `);
    });
  }

  async fetch(request: Request): Promise<Response> {
    if (request.headers.get("Upgrade") === "websocket") {
      const [client, server] = Object.values(new WebSocketPair());
      // acceptWebSocket + getWebSockets() → Hibernation API pattern
      this.ctx.acceptWebSocket(server);
      return new Response(null, { status: 101, webSocket: client });
    }
    return new Response("Not found", { status: 404 });
  }

  // Hibernation API: WebSocket messages are routed to this method
  async webSocketMessage(ws: WebSocket, message: string): Promise<void> {
    const op: EditOperation = JSON.parse(message);
    // Single-threaded — even concurrent edit requests are processed in order
    this.ctx.storage.sql.exec(
      `INSERT INTO operations (user_id, op_type, position, content, applied_at)
       VALUES (?, ?, ?, ?, ?)`,
      op.userId,
      op.type,
      op.position,
      op.text,
      op.timestamp
    );
    const ops = [
      ...this.ctx.storage.sql.exec<Operation>(
        `SELECT op_type, position, content FROM operations ORDER BY applied_at ASC`
      ),
    ];
    // ⚠️ Simplified demo implementation (O(n) reapplication)
    // Real production use requires OT (Operational Transformation) or CRDT
    const content = this.reconstructDocument(ops);
    for (const socket of this.ctx.getWebSockets()) {
      if (socket !== ws) {
        socket.send(JSON.stringify({ type: "op_applied", op, content }));
      }
    }
  }

  private reconstructDocument(ops: Operation[]): string {
    let doc = "";
    for (const op of ops) {
      if (op.op_type === "insert") {
        doc = doc.slice(0, op.position) + op.content + doc.slice(op.position);
      } else {
        doc =
          doc.slice(0, op.position) +
          doc.slice(op.position + (op.content?.length ?? 0));
      }
    }
    return doc;
  }
}

// src/index.ts — DO routing in the Worker
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const docId = url.searchParams.get("docId");
    if (!docId) return new Response("docId required", { status: 400 });
    // Same docId → always routed to the same DO instance
    const id = env.DOCUMENT_DO.idFromName(`doc:${docId}`);
    return env.DOCUMENT_DO.get(id).fetch(request);
  },
};

Example 3: Multi-tenant SaaS — Combined D1 + DO Design
This is the most interesting pattern in practice. D1 and DO each take on roles that play to their strengths — and once you understand the criteria, you can apply this pattern in many places.
┌─────────────────────────────────────────────────────┐
│ Cloudflare Workers │
│ │
│ Request Routing │
│ │ │
│ ├─── Shared Data ────────▶ D1 Global DB │
│ │ (users, plans, billing) (distributed replicas) │
│ │ │
│ ├─── Tenant-Isolated Data ▶ DO per Tenant │
│ │ (per-tenant SQLite) (independent instances) │
│ │ │
│ └─── API Rate Limiting ──▶ DO per API Key │
│ (sliding window) (single-threaded counter) │
└─────────────────────────────────────────────────────┘

Because each rate limiter DO instance is already isolated per API key (`idFromName("key:${apiKey}")`), there's no need to pass the API key as a parameter to methods inside the DO. The instance itself serves as the identifier.
// src/durable-objects/rate-limiter.ts
import { DurableObject } from "cloudflare:workers";

export class RateLimiterDO extends DurableObject {
  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    this.ctx.blockConcurrencyWhile(async () => {
      this.ctx.storage.sql.exec(`
        CREATE TABLE IF NOT EXISTS requests (
          id INTEGER PRIMARY KEY AUTOINCREMENT,
          timestamp INTEGER NOT NULL
        )
      `);
    });
  }

  // The DO instance is isolated per API key, so no need to receive it as a parameter
  async checkLimit(limit: number, windowMs: number): Promise<boolean> {
    const now = Date.now();
    const windowStart = now - windowMs;
    this.ctx.storage.sql.exec(
      `DELETE FROM requests WHERE timestamp < ?`,
      windowStart
    );
    const result = this.ctx.storage.sql
      .exec<{ cnt: number }>(`SELECT COUNT(*) as cnt FROM requests`)
      .one();
    if (result.cnt >= limit) return false;
    this.ctx.storage.sql.exec(
      `INSERT INTO requests (timestamp) VALUES (?)`,
      now
    );
    return true;
  }
}

// src/middleware/rate-limiter.ts
import { Hono } from "hono";
import { drizzle } from "drizzle-orm/d1";
import { tenants } from "../schema";
import { eq } from "drizzle-orm";

const app = new Hono<{ Bindings: Env }>();

app.use("*", async (c, next) => {
  const apiKey = c.req.header("X-API-Key");
  if (!apiKey) return c.json({ error: "API key required" }, 401);
  // One DO instance per API key — strong consistency for accurate counting
  const id = c.env.RATE_LIMITER.idFromName(`key:${apiKey}`);
  const limiter = c.env.RATE_LIMITER.get(id);
  const allowed = await limiter.checkLimit(100, 60_000);
  if (!allowed) return c.json({ error: "Rate limit exceeded" }, 429);
  // Tenant info read from D1 (replicas OK — no write contention)
  const db = drizzle(c.env.DB);
  const [tenant] = await db
    .select()
    .from(tenants)
    .where(eq(tenants.apiKey, apiKey));
  c.set("tenant", tenant);
  await next();
});

export default app;

Pros and Cons
Advantages
| Item | Details |
|---|---|
| SQLite compatibility | Reuse existing SQL knowledge, no additional learning curve |
| Global read performance | Low latency for users worldwide via D1 read replication |
| Race condition elimination | DO's single-threaded model structurally prevents concurrency bugs |
| Co-located compute and storage | DO's internal SQLite is accessible in microseconds with no network hop |
| Built-in management tooling | D1 includes migrations, query insights, and an HTTP API |
| Free tier support | Both D1 and DO are available on the Workers Free plan |
Disadvantages and Caveats
| Item | Details | Mitigation |
|---|---|---|
| D1 10 GB limit | Maximum 10 GB per database | Horizontal sharding per user or per tenant |
| D1 write QPS ceiling | Roughly 1,000 QPS max, assuming ~1 ms average query duration (writes are serialized through the Primary) | Move high-frequency write data to DO |
| D1 write latency | All writes route through the Primary region | Use D1 only for low-frequency write data |
| DO lacks operations tooling | No built-in migration or schema management | Use external tools like D1 Manager or build your own |
| DO location is fixed | Region determined at instance creation, cannot be moved afterward | Create initial instances based on user location |
| DO cannot parallelize | Single-threaded; not suitable for CPU-intensive work | Process heavy computation in Workers, store only results in DO |
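As a sketch of the sharding mitigation in the first row, one common approach is to route each tenant to one of several fixed D1 databases by hashing its ID. The binding names (`DB_0` through `DB_3`) and the FNV-1a hash here are illustrative assumptions, not a Cloudflare API:

```typescript
// FNV-1a hash → stable shard index for a tenant ID. Any stable hash works;
// what matters is that the same tenant always lands on the same database.
function shardIndex(tenantId: string, shardCount: number): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < tenantId.length; i++) {
    h ^= tenantId.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h % shardCount;
}

// Hypothetical: pick one of several D1 bindings (DB_0..DB_3) by tenant.
// In a Worker you would then read env[bindingNameFor(tenantId)].
function bindingNameFor(tenantId: string, shardCount = 4): string {
  return `DB_${shardIndex(tenantId, shardCount)}`;
}
```

The trade-off: cross-shard queries become application-level work, so this fits data that is naturally partitioned per tenant anyway.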
The Most Common Mistakes in Practice
Talking about mistakes reminds me of myself. I made the first one firsthand early on — I only realized it was the bottleneck when I saw latency spikes in monitoring as global traffic funneled into a single DO instance.
- Designing DOs at a global granularity — When all requests funnel into a single DO instance like `getByName("global-counter")`, global traffic becomes bottlenecked on a single thread. The "one instance per logical entity" principle matters. Split instances by identifiers that distinguish entities: user IDs, document IDs, API keys, and so on.

- Putting all high-frequency write data in D1 — Data that writes hundreds of times per second — rate limiters, real-time counters, session state — will quickly exceed D1's write QPS ceiling. It's easy to overlook until you actually encounter an `overloaded` error. This kind of data naturally belongs in DO's internal SQLite.

- Using the default session where session consistency is needed — The "data I just added isn't showing up" bug comes from here. It's tricky to debug because it reproduces intermittently. For any flow that reads immediately after writing, make sure to use `withSession("first-primary")` or supply a bookmark.
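To experiment with the sliding-window behavior from Example 3 without deploying a DO, here is an in-memory model of the same check: prune timestamps outside the window, count, then admit or reject. The `SlidingWindow` class is an illustrative stand-in for the SQL inside `RateLimiterDO`, not production code.

```typescript
// In-memory model of the sliding-window check RateLimiterDO performs with
// SQL: DELETE old timestamps, COUNT the rest, INSERT on admit.
class SlidingWindow {
  private timestamps: number[] = [];

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request arriving at time `now` is allowed
  check(now: number): boolean {
    const windowStart = now - this.windowMs;
    // Drop requests that have slid out of the window
    this.timestamps = this.timestamps.filter((t) => t >= windowStart);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

With a limit of 3 per 1,000 ms, the fourth request inside the window is rejected, and requests are admitted again once earlier timestamps slide out. Inside a DO this logic is safe under concurrency for free, because the instance processes requests one at a time.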
Closing Thoughts
The irony of the Lambda + RDS era was ultimately the structure of "runs at the edge, but data goes to the center." D1 and Durable Objects change that structure itself. They are not competing technologies — they are two layers that divide responsibilities according to the level of consistency required. The core decision criterion: choose D1 for globally shared, read-heavy data; choose DO when you need to coordinate concurrent writes to a single entity.
Three steps you can try right now:
- Compare D1 session consistency directly. After the same write, read with both a default session and a `"first-primary"` session and observe the difference in responses. This will give you a tangible feel for how session consistency works.

- Implement a simple counter API with DO yourself. Just extend the `DurableObject` class and implement a single `fetch()` method. Send multiple concurrent requests using the same name (`idFromName`) and verify that the count increments accurately — that's the DO serialization guarantee made concrete.

- Classify your service's data model using the D1 vs. DO criteria. For each piece of data, ask: "Can concurrent write conflicts occur?" The answer points directly to where that data belongs.
References
- Cloudflare D1 Official Documentation | Cloudflare
- Sequential consistency without borders: how D1 implements global read replication | Cloudflare Blog
- D1 Global Read Replication Best Practices | Cloudflare
- Cloudflare Durable Objects Official Documentation | Cloudflare
- Rules of Durable Objects | Cloudflare
- Durable Objects Storage Access Guide | Cloudflare
- Choosing a Workers Storage Option | Cloudflare
- Building D1: a Global Database | Cloudflare Blog
- D1 Read Replication Public Beta Changelog | Cloudflare
- Cloudflare D1 Read Replication for e-commerce Tutorial | Cloudflare
- One Database Per User with Durable Objects and Drizzle ORM | Boris Tane
- Using Cloudflare Durable Objects with SQL Storage, D1, and Drizzle ORM | Flashblaze
- Drizzle ORM - Cloudflare Durable Objects Connection Official Guide | Drizzle
- Hono - Cloudflare Durable Objects Example | Hono
- Cloudflare Fullstack Reference Architecture | Cloudflare
- D1 Limits Official Documentation | Cloudflare
- Scaling Cloudflare D1 from 10 GB to 500 GB with Manual Database Sharding | Medium
- Cloudflare Upgrades D1 Database with Global Read Replication | InfoQ
- Chapter 12: D1 SQLite at the Edge | Architecting on Cloudflare
- fullstack-next-cloudflare template | GitHub