One of Durable Streams’ core features is offset-based resumability. Your application can resume reading from exactly where it left off, even after page refreshes, network failures, or crashes.
Every read operation returns an offset representing your position in the stream:
const res = await stream.stream({ offset: "-1", // Start from beginning live: false,})const items = await res.json()// Save this offset to resume laterconst currentOffset = res.offsetconsole.log("Current position:", currentOffset)
Offsets are opaque strings (e.g., "42_1024") containing internal state. Always save and use the exact offset value returned by the server—never construct or modify offsets manually.
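Because the format is internal, you can make that opacity explicit in your own code. Here is a minimal TypeScript sketch; the branded `Offset` type and the helper functions are illustrative, not part of the client library:

```ts
// Hypothetical branded type: an Offset can only come from a server
// response, never be assembled by hand.
type Offset = string & { readonly __brand: "Offset" }

// The only way to obtain an Offset is from a read response.
function offsetFromResponse(res: { offset: string }): Offset {
  return res.offset as Offset
}

// Persistence helpers round-trip the exact server value unchanged.
function saveOffset(key: string, offset: Offset): void {
  localStorage.setItem(key, offset)
}

function loadOffset(key: string): Offset | null {
  return localStorage.getItem(key) as Offset | null
}
```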
**1. Save the offset after each batch**

Store the offset after successfully processing each batch:
```ts
const res = await stream.stream({ live: false })
const items = await res.json()

// Process items
for (const item of items) {
  await processItem(item)
}

// Save offset for next read
localStorage.setItem("stream-offset", res.offset)
```
**2. Resume from saved offset**
On restart, load the saved offset and continue:
```ts
const savedOffset = localStorage.getItem("stream-offset") ?? "-1"
const res = await stream.stream({
  offset: savedOffset,
  live: false,
})

// Only new data since saved offset
const newItems = await res.json()
```
**3. Update offset incrementally**
For live streams, update the offset continuously:
```ts
const res = await stream.stream({
  offset: savedOffset,
  live: "long-poll",
})

res.subscribeJson(async (batch) => {
  for (const item of batch.items) {
    await processItem(item)
  }
  // Update offset after each batch
  localStorage.setItem("stream-offset", batch.offset)
})
```
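Putting the three steps together, a resumable live consumer might look like the sketch below. `processItem` and the storage key are placeholders, and `stream` is assumed to be the `DurableStream` handle from earlier:

```ts
const OFFSET_KEY = "stream-offset"

async function runResumableConsumer() {
  // 1. Resume from the last saved position, or the beginning.
  const savedOffset = localStorage.getItem(OFFSET_KEY) ?? "-1"

  // 2. Read live, long-polling for new data.
  const res = await stream.stream({
    offset: savedOffset,
    live: "long-poll",
  })

  // 3. Persist the offset only after a batch is fully processed,
  //    so a crash mid-batch replays the batch instead of losing it.
  res.subscribeJson(async (batch) => {
    for (const item of batch.items) {
      await processItem(item)
    }
    localStorage.setItem(OFFSET_KEY, batch.offset)
  })
}
```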
For write-side resumability, use IdempotentProducer to guarantee exactly-once delivery:
```ts
import { DurableStream, IdempotentProducer } from "@durable-streams/client"

const stream = await DurableStream.create({
  url: "https://streams.example.com/v1/stream/orders",
  contentType: "application/json",
})

const producer = new IdempotentProducer(stream, "order-service-1", {
  epoch: 0,
  autoClaim: true, // Automatically handle epoch conflicts
  onError: (err) => console.error("Write failed:", err),
})

// Fire-and-forget writes (synchronous, returns immediately)
for (const order of orders) {
  producer.append(JSON.stringify(order))
}

// Ensure all messages are delivered
await producer.flush()
```
The producer uses (producerId, epoch, seq) headers to deduplicate writes. If the producer crashes and restarts, retried writes are automatically detected and ignored by the server.
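Conceptually, the server only needs a small amount of per-producer state to do this. The following is an illustrative in-memory sketch of the documented semantics, not the actual server implementation:

```ts
type ProducerState = { epoch: number; lastSeq: number }
const producers = new Map<string, ProducerState>()

// Returns the HTTP status the server would respond with.
function handleWrite(producerId: string, epoch: number, seq: number): number {
  const state = producers.get(producerId)

  // A lower epoch means an old, fenced instance of this producer.
  if (state && epoch < state.epoch) return 403 // Forbidden

  // Same epoch, already-seen sequence number: a retry. Deduplicate.
  if (state && epoch === state.epoch && seq <= state.lastSeq) return 204

  // Newer epoch or unseen sequence number: accept and record it.
  producers.set(producerId, { epoch, lastSeq: seq })
  // ...append the batch to the stream here...
  return 200
}
```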
**1. Producers identify themselves with a stable ID**

Each producer has a unique ID (e.g., "order-service-1"):
```ts
const producer = new IdempotentProducer(
  stream,
  "order-service-1", // Stable producer ID
  { epoch: 0 }
)
```
**2. Sequence numbers ensure order**
Each batch gets a monotonic sequence number:
```http
POST /v1/stream/orders
Producer-Id: order-service-1
Producer-Epoch: 0
Producer-Seq: 42
```
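For illustration, a raw request carrying these headers could be sent with `fetch` as sketched below. In practice the client library manages the headers and sequence numbers for you, and the payload shape here is assumed:

```ts
const batch = [{ orderId: "A-1001", total: 42 }] // Example payload (assumed shape)

const response = await fetch("https://streams.example.com/v1/stream/orders", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Producer-Id": "order-service-1",
    "Producer-Epoch": "0",
    "Producer-Seq": "42", // Monotonic per producer, incremented per batch
  },
  body: JSON.stringify(batch),
})
```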
**3. Server deduplicates retries**
If the same (producerId, epoch, seq) arrives twice, the server returns 204 No Content without duplicating data:
```ts
// First attempt: 200 OK - data written
// Retry after crash: 204 No Content - deduplicated
```
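Because both statuses signal that the data is durable, a client can blindly retry the identical request until one of them comes back. Here is a hedged sketch of such a retry loop over the raw protocol; `IdempotentProducer` handles this internally:

```ts
async function appendWithRetry(
  url: string,
  headers: Record<string, string>,
  body: string
): Promise<void> {
  for (let attempt = 0; attempt < 5; attempt++) {
    let res: Response
    try {
      res = await fetch(url, { method: "POST", headers, body })
    } catch {
      // Network failure: back off, then retry with identical headers.
      await new Promise((r) => setTimeout(r, 2 ** attempt * 100))
      continue
    }
    // 200 = written, 204 = deduplicated retry; both mean success.
    if (res.status === 200 || res.status === 204) return
    // 403 = fenced by a newer epoch; retrying will never succeed.
    if (res.status === 403) throw new Error("Producer fenced by newer epoch")
    // Other errors: back off and try again.
    await new Promise((r) => setTimeout(r, 2 ** attempt * 100))
  }
  throw new Error("Append failed after 5 attempts")
}
```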
**4. Epochs fence zombies**
When restarting a producer, increment the epoch to fence old instances:
```ts
// Old producer (crashed)
const oldProducer = new IdempotentProducer(stream, "worker-1", { epoch: 0 })

// New producer (after restart)
const newProducer = new IdempotentProducer(stream, "worker-1", { epoch: 1 })

// Old producer's retries now fail with 403 Forbidden
```
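A common pattern (illustrative, not a library feature) is to persist the epoch alongside the producer ID and bump it on every startup, so each new instance automatically fences its predecessor:

```ts
// Bump the persisted epoch on startup so any surviving old instance
// of this producer is fenced by the server.
function nextEpoch(producerId: string): number {
  const key = `producer-epoch:${producerId}` // Hypothetical storage key
  const epoch = Number(localStorage.getItem(key) ?? "-1") + 1
  localStorage.setItem(key, String(epoch))
  return epoch
}

const producer = new IdempotentProducer(stream, "worker-1", {
  epoch: nextEpoch("worker-1"),
})
```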