
The Durable Streams Protocol is a minimal HTTP-based protocol for creating, appending to, and reading from durable, append-only byte streams. It provides a simple, web-native primitive for applications requiring ordered, replayable data streams with support for catch-up reads, live tailing, and explicit stream closure.

What is a stream?

A stream is a URL-addressable, append-only byte stream that you can read from and write to using standard HTTP methods. Streams are durable and immutable by position: new data can only be appended to the end.

```typescript
import { DurableStream } from '@durable-streams/client'

// Create a new stream
const stream = await DurableStream.create({
  url: 'https://streams.example.com/my-stream',
  headers: { Authorization: 'Bearer my-token' },
  contentType: 'application/json'
})
```

Core operations

The protocol defines six primary operations that work with stream URLs:
1. Create

   Establish a new stream at a URL with optional initial content using PUT.

   ```http
   PUT https://streams.example.com/my-stream
   Content-Type: application/json

   {"message": "initial data"}
   ```

2. Append

   Add bytes to the end of an existing stream using POST.

   ```http
   POST https://streams.example.com/my-stream
   Content-Type: application/json

   {"message": "new data"}
   ```

3. Read

   Retrieve bytes starting from a given offset using GET with query parameters.

   ```http
   GET https://streams.example.com/my-stream?offset=-1
   ```

4. Close

   Transition a stream to the closed state, optionally with a final append.

   ```http
   POST https://streams.example.com/my-stream
   Stream-Closed: true

   {"message": "final data"}
   ```

5. Metadata

   Query stream metadata without transferring data using HEAD.

   ```http
   HEAD https://streams.example.com/my-stream
   ```

6. Delete

   Remove a stream and all its data using DELETE.

   ```http
   DELETE https://streams.example.com/my-stream
   ```
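
For readers not using the client library, the six operations above map directly onto the Fetch API. The sketch below only constructs the requests (nothing is sent over the network), so the methods and headers can be inspected; the URL and bodies are the illustrative values from the examples above:

```typescript
// Building the six protocol requests with the standard Request object.
// URLs, header names, and bodies follow the examples above; nothing is sent.
const base = 'https://streams.example.com/my-stream'

// 1. Create: PUT with optional initial content
const create = new Request(base, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'initial data' }),
})

// 2. Append: POST bytes to the end of the stream
const append = new Request(base, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'new data' }),
})

// 3. Read: GET from a given offset via a query parameter
const read = new Request(`${base}?offset=-1`)

// 4. Close: POST with the Stream-Closed header, optionally carrying final bytes
const close = new Request(base, {
  method: 'POST',
  headers: { 'Stream-Closed': 'true', 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'final data' }),
})

// 5. Metadata: HEAD transfers headers only, no body
const metadata = new Request(base, { method: 'HEAD' })

// 6. Delete: DELETE removes the stream and all its data
const del = new Request(base, { method: 'DELETE' })

console.log([create, append, read, close, metadata, del].map((r) => r.method))
```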

Stream properties

Every stream has the following characteristics:
  • Durability: Once written and acknowledged, bytes persist until the stream is deleted or expired.
  • Immutability by position: Bytes at a given offset never change; new data is only appended.
  • Ordering: Bytes are strictly ordered by offset.
  • Content type: Each stream has a MIME content type set at creation (e.g., application/json, text/plain, application/octet-stream).
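
These properties can be made concrete with a toy in-memory model (illustrative only, not part of the protocol or any server implementation): an append always lands at the current end, and the offset of previously written bytes never moves.

```typescript
// Toy in-memory stream illustrating the properties above. Offsets are
// byte positions; an append never changes bytes at earlier offsets.
class ToyStream {
  private data = new Uint8Array(0)
  readonly contentType: string

  constructor(contentType: string) {
    this.contentType = contentType
  }

  // Append bytes; returns the offset at which they were written.
  append(bytes: Uint8Array): number {
    const at = this.data.length
    const next = new Uint8Array(at + bytes.length)
    next.set(this.data)
    next.set(bytes, at)
    this.data = next
    return at
  }

  // Read from a byte offset to the current end of the stream.
  read(offset: number): Uint8Array {
    return this.data.slice(offset)
  }
}

const s = new ToyStream('text/plain')
const enc = new TextEncoder()
const first = s.append(enc.encode('hello '))  // written at offset 0
const second = s.append(enc.encode('world')) // written at offset 6
console.log(new TextDecoder().decode(s.read(0))) // "hello world"
```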

Independent read/write implementation

Servers may implement the read and write paths independently. This flexibility enables different use cases:

Read-only server

A database synchronization server may implement only the read path and use its own internal ingestion system for writes.

Full read/write server

A collaborative editing service implements both paths to allow clients to append changes and read the stream.

URL structure

The protocol does not prescribe a specific URL structure. Servers may organize streams using any URL scheme they choose:
  • /v1/stream/{path}
  • /streams/{id}
  • Domain-specific paths like /chat/room-1 or /db/users/changes
The protocol is defined by the HTTP methods, query parameters, and headers applied to any stream URL, not by the URL format itself.
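
Because of this, a client can treat the stream URL as an opaque parameter. The helper below is a hypothetical sketch (not part of the official client library) showing that the same append call works for any path layout the server chooses:

```typescript
// Hypothetical helper: appends a JSON payload to any stream URL.
// The URL's internal structure is irrelevant to the protocol.
async function appendTo(streamUrl: string, body: string): Promise<Response> {
  return fetch(streamUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body,
  })
}

// Works identically regardless of the server's URL scheme:
//   appendTo('https://example.com/v1/stream/abc123', data)
//   appendTo('https://example.com/chat/room-1', data)
//   appendTo('https://example.com/db/users/changes', data)
```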

Example: Complete workflow

Here’s a complete example showing create, write, and read operations:

```typescript
import { DurableStream } from '@durable-streams/client'

// Create a stream
const stream = await DurableStream.create({
  url: 'https://streams.example.com/chat/room-1',
  headers: { Authorization: 'Bearer token' },
  contentType: 'application/json'
})

// Write data
await stream.append(JSON.stringify({ user: 'alice', message: 'hello' }))
await stream.append(JSON.stringify({ user: 'bob', message: 'hi there' }))

// Read from the beginning
const response = await stream.stream<{ user: string; message: string }>()
const messages = await response.json()
console.log(messages)
// [{ user: 'alice', message: 'hello' }, { user: 'bob', message: 'hi there' }]
```

Why HTTP?

Durable Streams uses HTTP because it:
  • Works universally across web browsers, mobile apps, native clients, IoT devices, and edge workers
  • Leverages existing infrastructure (CDNs, proxies, load balancers)
  • Supports standard authentication and authorization patterns
  • Enables caching and request collapsing for efficient scaling
  • Requires no custom protocols or special network configuration
The protocol is designed to be CDN-friendly, meaning a single origin server can efficiently serve millions of concurrent readers through CDN caching and request collapsing.

Next steps

Streams and Offsets

Learn how offset-based positioning enables resumable reads

Message Framing

Understand how different content types handle message boundaries

Live Modes

Explore long-polling and SSE for real-time updates

Idempotent Producers

Implement exactly-once write semantics