Getting Started
This guide reflects the current contract implemented in this repository:
- PostgreSQL business tables are authoritative
- clients push one logical dirty set through staged push sessions
- clients replay the authoritative committed bundle after push commit
- clients pull complete committed bundles only
- fresh installs and prune recovery rebuild through snapshot sessions
Supported Envelope
The supported envelope is intentionally narrow:
- exactly one visible sync key column per registered PostgreSQL table
- visible sync key type `uuid` or `text` on registered PostgreSQL tables
- scope-bound registered PostgreSQL identity through `_sync_scope_id TEXT NOT NULL`
- scope-inclusive foreign keys between registered PostgreSQL tables
- FK-closed server and client table sets
- fail-closed bootstrap if schema or config falls outside that envelope
PostgreSQL Table Requirements
Registered PostgreSQL tables must satisfy these rules:
- each registered table must have exactly one visible sync key column
- visible sync key type must be `uuid` or `text`
- each registered table must define `_sync_scope_id TEXT NOT NULL`
- `(_sync_scope_id, sync_key)` must be unique
- every unique constraint and unique index on a registered table must include `_sync_scope_id`
- every registered foreign key must point to another registered table or to the same registered table for self-references
- every registered foreign key must include `_sync_scope_id`
- every supported foreign key on a registered table must be `DEFERRABLE`
- supported `ON DELETE`/`ON UPDATE` actions are `NO ACTION`, `RESTRICT`, `CASCADE`, `SET NULL`, or `SET DEFAULT`
- supported `MATCH` options are empty, `NONE`, or `SIMPLE`
- `DEFERRABLE INITIALLY DEFERRED` is recommended; `DEFERRABLE INITIALLY IMMEDIATE` is accepted because the runtime defers constraints inside sync transactions
- partial, predicate, and expression unique indexes are unsupported on registered tables
If a table violates these rules, `Bootstrap()` fails with an `UnsupportedSchemaError`.
Recommended pattern:
```sql
CREATE TABLE business.users (
    _sync_scope_id TEXT NOT NULL,
    id UUID NOT NULL,
    name TEXT NOT NULL,
    PRIMARY KEY (_sync_scope_id, id)
);

CREATE TABLE business.posts (
    _sync_scope_id TEXT NOT NULL,
    id UUID NOT NULL,
    author_id UUID NOT NULL,
    title TEXT NOT NULL,
    PRIMARY KEY (_sync_scope_id, id),
    CONSTRAINT posts_author_id_fkey
        FOREIGN KEY (_sync_scope_id, author_id) REFERENCES business.users(_sync_scope_id, id)
        ON DELETE CASCADE
        DEFERRABLE INITIALLY DEFERRED
);
```
Core Terms
- `user_id`: one isolated sync stream
- `current_source_id`: the internally managed current sync source identity
- `source_bundle_id`: per-source monotonically increasing push id
- `bundle_seq`: server-side committed bundle sequence
- `row_version`: authoritative row version used for optimistic concurrency
- `snapshot_bundle_seq`: the frozen bundle ceiling attached to a snapshot rebuild
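How `source_bundle_id` and `bundle_seq` relate can be shown with a small, self-contained model. This is purely illustrative: the `server` and `source` types and the `push` method here are stand-ins for this sketch, not library types. Each source numbers its own pushes independently, while the server assigns one global, monotonically increasing sequence to every committed bundle.

```go
// Conceptual sketch only -- not the library's implementation.
package main

import "fmt"

type server struct {
	nextBundleSeq int64 // server-side committed bundle sequence
}

type source struct {
	id                 string
	nextSourceBundleID int64 // per-source monotonically increasing push id
}

// push simulates one committed push: the source consumes its next
// source_bundle_id, and the server assigns the next global bundle_seq.
func (s *server) push(src *source) (sourceBundleID, bundleSeq int64) {
	sourceBundleID = src.nextSourceBundleID
	src.nextSourceBundleID++
	bundleSeq = s.nextBundleSeq
	s.nextBundleSeq++
	return sourceBundleID, bundleSeq
}

func main() {
	srv := &server{nextBundleSeq: 1}
	a := &source{id: "source-a", nextSourceBundleID: 1}
	b := &source{id: "source-b", nextSourceBundleID: 1}

	// Two sources interleave; each keeps its own push numbering,
	// but the committed bundle sequence is globally ordered.
	for _, src := range []*source{a, b, a} {
		sbid, seq := srv.push(src)
		fmt.Printf("%s source_bundle_id=%d bundle_seq=%d\n", src.id, sbid, seq)
	}
}
```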
Server Metadata
The server keeps sync metadata in the `sync` schema. The main runtime tables are:
- `sync.user_state`
- `sync.row_state`
- `sync.bundle_log`
- `sync.bundle_rows`
- `sync.applied_pushes`
- `sync.push_sessions`
- `sync.push_session_rows`
- `sync.snapshot_sessions`
- `sync.snapshot_session_rows`
Step 1: Start PostgreSQL
```shell
docker run --name oversync-pg \
  -e POSTGRES_PASSWORD=postgres \
  -p 5432:5432 \
  -d postgres:16

docker exec oversync-pg createdb -U postgres my_sync_app
```
Step 2: Create Your Business Tables
```sql
CREATE SCHEMA IF NOT EXISTS business;

CREATE TABLE IF NOT EXISTS business.users (
    _sync_scope_id TEXT NOT NULL,
    id UUID NOT NULL,
    name TEXT NOT NULL,
    email TEXT NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (_sync_scope_id, id),
    UNIQUE (_sync_scope_id, email)
);

CREATE TABLE IF NOT EXISTS business.posts (
    _sync_scope_id TEXT NOT NULL,
    id UUID NOT NULL,
    author_id UUID NOT NULL,
    title TEXT NOT NULL,
    content TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (_sync_scope_id, id),
    CONSTRAINT posts_author_id_fkey
        FOREIGN KEY (_sync_scope_id, author_id) REFERENCES business.users(_sync_scope_id, id)
        ON DELETE CASCADE
        DEFERRABLE INITIALLY DEFERRED
);
```
Step 3: Create The Server
```go
package main

import (
	"context"
	"log"
	"log/slog"
	"net/http"
	"os"

	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/mobiletoly/go-oversync/oversync"
)

func main() {
	ctx := context.Background()
	logger := slog.New(slog.NewTextHandler(os.Stdout, nil))

	pool, err := pgxpool.New(ctx, "postgres://postgres:postgres@localhost:5432/my_sync_app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	cfg := &oversync.ServiceConfig{
		MaxSupportedSchemaVersion: 1,
		AppName:                   "my-sync-app",
		RegisteredTables: []oversync.RegisteredTable{
			{Schema: "business", Table: "users", SyncKeyColumns: []string{"id"}},
			{Schema: "business", Table: "posts", SyncKeyColumns: []string{"id"}},
		},
	}

	svc, err := oversync.NewRuntimeService(pool, cfg, logger)
	if err != nil {
		log.Fatal(err)
	}
	if err := svc.Bootstrap(ctx); err != nil {
		log.Fatal(err)
	}

	handlers := oversync.NewHTTPSyncHandlers(svc, logger)

	syncActorMiddleware := oversync.ActorMiddleware(oversync.ActorMiddlewareConfig{
		UserIDFromContext: func(ctx context.Context) (string, error) {
			return yourUserIDFromContext(ctx)
		},
	})
	withSyncActor := func(next http.Handler) http.Handler {
		return yourAuthMiddleware(syncActorMiddleware(next))
	}

	mux := http.NewServeMux()
	mux.Handle("POST /sync/connect", withSyncActor(http.HandlerFunc(handlers.HandleConnect)))
	mux.Handle("POST /sync/push-sessions", withSyncActor(http.HandlerFunc(handlers.HandleCreatePushSession)))
	mux.Handle("POST /sync/push-sessions/{push_id}/chunks", withSyncActor(http.HandlerFunc(handlers.HandlePushSessionChunk)))
	mux.Handle("POST /sync/push-sessions/{push_id}/commit", withSyncActor(http.HandlerFunc(handlers.HandleCommitPushSession)))
	mux.Handle("DELETE /sync/push-sessions/{push_id}", withSyncActor(http.HandlerFunc(handlers.HandleDeletePushSession)))
	mux.Handle("GET /sync/committed-bundles/{bundle_seq}/rows", withSyncActor(http.HandlerFunc(handlers.HandleGetCommittedBundleRows)))
	mux.Handle("GET /sync/pull", withSyncActor(http.HandlerFunc(handlers.HandlePull)))
	mux.Handle("POST /sync/snapshot-sessions", withSyncActor(http.HandlerFunc(handlers.HandleCreateSnapshotSession)))
	mux.Handle("GET /sync/snapshot-sessions/{snapshot_id}", withSyncActor(http.HandlerFunc(handlers.HandleGetSnapshotChunk)))
	mux.Handle("DELETE /sync/snapshot-sessions/{snapshot_id}", withSyncActor(http.HandlerFunc(handlers.HandleDeleteSnapshotSession)))
	mux.Handle("GET /sync/capabilities", withSyncActor(http.HandlerFunc(handlers.HandleCapabilities)))
	mux.HandleFunc("GET /health", handlers.HandleHealth)
	mux.HandleFunc("GET /status", handlers.HandleStatus)

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```
`yourAuthMiddleware` must authenticate the request and expose a trusted `user_id` in the request
context. `oversync.ActorMiddleware(...)` reads `Oversync-Source-ID` and combines it with that
trusted user identity into `oversync.Actor{UserID, SourceID}`. The server derives
`_sync_scope_id` from `Actor.UserID`, so clients must not send `_sync_scope_id` in push payloads.
Step 4: Create The SQLite Client
Local SQLite managed tables must declare exactly one visible sync key column, and that column
must also be the local SQLite `PRIMARY KEY` in the current runtime. Supported local key shapes are
`TEXT PRIMARY KEY` and UUID-backed `BLOB PRIMARY KEY`.
```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
	"github.com/mobiletoly/go-oversync/oversqlite"
)

func main() {
	ctx := context.Background()

	db, err := sql.Open("sqlite3", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	_, err = db.Exec(`
CREATE TABLE IF NOT EXISTS users (
  id TEXT PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT UNIQUE NOT NULL
);
CREATE TABLE IF NOT EXISTS posts (
  id TEXT PRIMARY KEY,
  author_id TEXT NOT NULL,
  title TEXT NOT NULL,
  content TEXT,
  FOREIGN KEY (author_id) REFERENCES users(id) ON DELETE CASCADE
);
`)
	if err != nil {
		log.Fatal(err)
	}

	cfg := oversqlite.DefaultConfig("business", []oversqlite.SyncTable{
		{TableName: "users", SyncKeyColumnName: "id"},
		{TableName: "posts", SyncKeyColumnName: "id"},
	})

	tokenProvider := func(ctx context.Context) (string, error) {
		return "<jwt>", nil
	}

	client, err := oversqlite.NewClient(db, "http://localhost:8080", tokenProvider, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	if err := client.Open(ctx); err != nil {
		log.Fatal(err)
	}

	connectResult, err := client.Attach(ctx, "user-123")
	if err != nil {
		log.Fatal(err)
	}
	if connectResult.Status == oversqlite.AttachStatusRetryLater {
		log.Printf("connect pending, retry after %s", connectResult.RetryAfter)
		return
	}
}
```
Step 5: Sync
The supported high-level operations are:
```go
ctx := context.Background()

pushReport, err := client.PushPending(ctx)
if err != nil {
	log.Fatal(err)
}
log.Printf("push outcome: %s", pushReport.Outcome)

pullReport, err := client.PullToStable(ctx)
if err != nil {
	log.Fatal(err)
}
log.Printf("pull outcome: %s", pullReport.Outcome)

syncReport, err := client.Sync(ctx)
if err != nil {
	log.Fatal(err)
}
log.Printf("sync outcomes: push=%s remote=%s", syncReport.PushOutcome, syncReport.RemoteOutcome)

detachResult, err := client.Detach(ctx)
if err != nil {
	log.Fatal(err)
}
if detachResult.Outcome == oversqlite.DetachOutcomeBlockedUnsyncedData {
	log.Printf("detach blocked by %d pending rows", detachResult.PendingRowCount)
}
```
Rebuild operation:
```go
rebuildReport, err := client.Rebuild(ctx)
if err != nil {
	log.Fatal(err)
}
log.Printf("rebuild outcome: %s", rebuildReport.Outcome)
```
Behavior to expect:
- `Attach()` resolves first-account lifecycle through `POST /sync/connect`.
- `Open()`, `PushPending()`, `PullToStable()`, `Sync()`, `Detach()`, and `Rebuild()` now return structured results in addition to `error`.
- `PushPending()` freezes one outbound snapshot, uploads it through push sessions, fetches the committed authoritative rows, and replays them locally.
- `Attach()` may return `retry_later` as a normal retriable lifecycle outcome before the client becomes attached.
- `PullToStable()` drains complete bundles until the frozen `stable_bundle_seq` is reached.
- `PullToStable()` rebuilds automatically through snapshot sessions if the server returns `history_pruned`.
- `Rebuild(ctx)` rebuilds through chunked snapshot sessions.
- if durable source recovery is active, `Rebuild(ctx)` internally chooses the rebuild-plus-rotate path.
- `Detach()` clears synced local cache/state after a successful attached detach, restores dirty-row capture before returning, and reports blocked detach as a normal `DetachResult`.
- local writes made after `Detach()` are captured immediately as anonymous pending rows and can be synced after a later `Attach()`.
- Go intentionally does not expose reactive progress streams; inspect result structs for final lifecycle and sync outcomes instead.
Important Client Rules
- every managed table must declare its sync key explicitly
- managed tables must be FK-closed
- `PullToStable()` and `Rebuild(ctx)` fail closed while `_sync_outbox_*` exists
- `Sync()` fails closed while `_sync_attachment_state.rebuild_required = 1`
- the durable read checkpoint is `_sync_attachment_state.last_bundle_seq_seen`
- the next outgoing client bundle id is `_sync_source_state.next_source_bundle_id`
- sync-visible absolute timestamps should use RFC3339 or RFC3339Nano text with an explicit zone, for example `2026-03-24T18:02:00Z`
Endpoints
The server exposes:
- `POST /sync/connect`
- `POST /sync/push-sessions`
- `POST /sync/push-sessions/{push_id}/chunks`
- `POST /sync/push-sessions/{push_id}/commit`
- `DELETE /sync/push-sessions/{push_id}`
- `GET /sync/committed-bundles/{bundle_seq}/rows`
- `GET /sync/pull`
- `POST /sync/snapshot-sessions`
- `GET /sync/snapshot-sessions/{snapshot_id}`
- `DELETE /sync/snapshot-sessions/{snapshot_id}`
- `GET /sync/capabilities`
- `GET /health`
- `GET /status`
All authenticated `/sync/*` requests must send `Oversync-Source-ID: <current-source-id>`.
Echo Integration
If your server uses Echo, you can still use the standard net/http middleware:
```go
func WrapHTTPMiddleware(mw func(http.Handler) http.Handler) echo.MiddlewareFunc {
	return func(next echo.HandlerFunc) echo.HandlerFunc {
		return func(c echo.Context) error {
			var handlerErr error
			h := mw(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				c.SetRequest(r)
				handlerErr = next(c)
			}))
			h.ServeHTTP(c.Response(), c.Request())
			return handlerErr
		}
	}
}
```
Lifecycle-specific sync failures to expect on the HTTP surface:
- `scope_uninitialized`
- `scope_initializing`
- `initialization_stale`
- `initialization_expired`
Binary Payload Contract
- non-key binary payload fields use standard base64 on the wire
- UUID-valued keys and UUID-valued key columns use dashed UUID text on the wire
- local trigger capture may use different internal encodings; those are not the HTTP contract
Next Steps
- Run the reference server in `examples/nethttp_server/`.
- Run the simulator in `examples/mobile_flow/`.
- Inspect the API contract in `docs/documentation/api.md`.