File Sync

Share state across multiple instances of an agent-native app through a remote database.

Overview

File sync lets multiple instances of an agent-native app share state through a remote database. Files remain the primary source of truth — the database is a sync target, not a replacement. The sync engine watches the local file system for changes, pushes them to the database, and pulls remote updates back to disk.

Three backends ship with @agent-native/core: Firestore, Supabase, and Convex. You can also build your own adapter for any backend.

The sync engine handles file watching, pattern matching, conflict resolution (three-way merge), deduplication, and retry queues. Your backend adapter only needs to implement a small interface — the engine does everything else.

Quick Start

1. Install the peer dependency for your backend

# Pick one:
pnpm add firebase-admin       # Firestore
pnpm add @supabase/supabase-js  # Supabase
pnpm add convex                # Convex

2. Set environment variables

# .env
FILE_SYNC_ENABLED=true
FILE_SYNC_BACKEND=firestore   # or "supabase" or "convex"

# Backend-specific (see sections below)
GOOGLE_APPLICATION_CREDENTIALS=./service-account.json  # Firestore
# SUPABASE_URL=https://xyz.supabase.co                 # Supabase
# SUPABASE_PUBLISHABLE_KEY=sb_publishable_...          # Supabase
# CONVEX_URL=https://xyz.convex.cloud                  # Convex

3. Restart your app

The default template server already calls createFileSync(). Once the env vars are set, sync starts automatically on the next server boot.

Configuration

Control which files sync with a sync-config.json file in your content root (typically data/sync-config.json or content/sync-config.json):

{
  "syncFilePatterns": [
    "data/**/*.json",
    "data/**/*.md",
    "!data/local-only/**"
  ],
  "privateSyncFilePatterns": [
    "data/private/**"
  ]
}
  • syncFilePatterns: Glob patterns for files to sync. Supports negation with the ! prefix.
  • privateSyncFilePatterns: Patterns for files that sync to a per-user channel instead of the shared channel.

Denylist

Regardless of your patterns, the sync engine always blocks sensitive and infrastructure files. These are never synced:

  • Secrets: .env*, *.key, *.pem, credentials.json, service-account*.json, .ssh/, .aws/
  • Infrastructure: .git/, node_modules/, *.sqlite, *.db, *.tfstate
  • Sync meta-files: sync-config.json, .sync-status.json, .sync-failures.json
  • Scratch files: _tmp-* (agent scratch space)
  • Editor/OS junk: *.swp, .DS_Store, Thumbs.db

The denylist is hardcoded and cannot be overridden by user patterns. This prevents accidental credential leaks.
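As an illustration only (this is not the engine's actual implementation, which is internal and cannot be overridden), the denylist behaves like a set of patterns checked before any user pattern is consulted:

```typescript
// Illustrative sketch of the denylist semantics described above.
// The real check lives inside the sync engine; pattern coverage here
// mirrors the bullet list but is not the engine's source code.
const DENYLIST: RegExp[] = [
  /(^|\/)\.env[^/]*$/,                       // .env, .env.local, ...
  /\.(key|pem)$/,                            // private keys
  /(^|\/)credentials\.json$/,
  /(^|\/)service-account[^/]*\.json$/,
  /(^|\/)(\.ssh|\.aws|\.git|node_modules)(\/|$)/,
  /\.(sqlite|db|tfstate)$/,
  /(^|\/)(sync-config\.json|\.sync-status\.json|\.sync-failures\.json)$/,
  /(^|\/)_tmp-/,                             // agent scratch files
  /\.swp$|(^|\/)\.DS_Store$|(^|\/)Thumbs\.db$/,
];

export function isDenied(path: string): boolean {
  return DENYLIST.some((re) => re.test(path));
}
```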

Backend: Firestore

Firestore provides real-time listeners out of the box, so remote changes arrive with minimal latency.

Setup

  1. Install the peer dependency:
    pnpm add firebase-admin
  2. Create a service account in the Firebase console and download the JSON key file.
  3. Set the environment variables:
    FILE_SYNC_ENABLED=true
    FILE_SYNC_BACKEND=firestore
    GOOGLE_APPLICATION_CREDENTIALS=./service-account.json

The adapter stores documents in a files collection. No additional Firestore setup is required — the collection is created automatically on first write.

Backend: Supabase

Setup

  1. Install the peer dependency:
    pnpm add @supabase/supabase-js
  2. Create the files table in your Supabase project:
    CREATE TABLE files (
      id TEXT PRIMARY KEY,
      path TEXT,
      content TEXT,
      app TEXT,
      owner_id TEXT,
      last_updated BIGINT,
      created_at BIGINT
    );
    
    CREATE INDEX idx_files_app_owner ON files(app, owner_id);
  3. Set the environment variables:
    FILE_SYNC_ENABLED=true
    FILE_SYNC_BACKEND=supabase
    SUPABASE_URL=https://xyz.supabase.co
    SUPABASE_PUBLISHABLE_KEY=sb_publishable_...

    Legacy env vars (SUPABASE_ANON_KEY, SUPABASE_SERVICE_ROLE_KEY) are also supported.

Row Level Security

The publishable key respects Supabase RLS policies. If your files table has no RLS policies, reads and writes will be blocked. Either add appropriate policies or disable RLS on the files table for development.
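For development, a permissive policy (an example only; scope this to your auth model before production) might look like:

```sql
-- Development-only: allow full access to the files table through RLS.
-- Replace with owner-scoped policies before going to production.
ALTER TABLE files ENABLE ROW LEVEL SECURITY;

CREATE POLICY "files_dev_all" ON files
  FOR ALL
  USING (true)
  WITH CHECK (true);
```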

Backend: Convex

Convex provides real-time reactivity built in — the adapter subscribes to query results and receives changes automatically when data updates.

Setup

  1. Install the peer dependency:
    pnpm add convex
  2. Initialize Convex in your project:
    npx convex init
  3. Create the schema at convex/schema.ts:
    import { defineSchema, defineTable } from "convex/server";
    import { v } from "convex/values";
    
    export default defineSchema({
      files: defineTable({
        id: v.string(),
        path: v.string(),
        content: v.string(),
        app: v.string(),
        ownerId: v.string(),
        lastUpdated: v.number(),
        createdAt: v.optional(v.number()),
      })
        .index("by_sync_id", ["id"])
        .index("by_app_owner", ["app", "ownerId"]),
    });
  4. Create the functions at convex/files.ts:
    import { query, mutation } from "./_generated/server";
    import { v } from "convex/values";
    
    export const list = query({
      args: { app: v.string(), ownerId: v.string() },
      handler: async (ctx, { app, ownerId }) => {
        return await ctx.db.query("files")
          .withIndex("by_app_owner", (q) => q.eq("app", app).eq("ownerId", ownerId))
          .collect();
      },
    });
    
    export const get = query({
      args: { id: v.string(), app: v.string(), ownerId: v.string() },
      handler: async (ctx, { id, app, ownerId }) => {
        const doc = await ctx.db.query("files")
          .withIndex("by_sync_id", (q) => q.eq("id", id))
          .unique();
        if (doc && (doc.app !== app || doc.ownerId !== ownerId)) return null;
        return doc;
      },
    });
    
    export const upsert = mutation({
      args: {
        id: v.string(),
        path: v.optional(v.string()),
        content: v.optional(v.string()),
        app: v.optional(v.string()),
        ownerId: v.optional(v.string()),
        lastUpdated: v.optional(v.number()),
        createdAt: v.optional(v.number()),
      },
      handler: async (ctx, args) => {
        const existing = await ctx.db.query("files")
          .withIndex("by_sync_id", (q) => q.eq("id", args.id))
          .unique();
        if (existing) {
          const updates: Record<string, unknown> = {};
          if (args.path !== undefined) updates.path = args.path;
          if (args.content !== undefined) updates.content = args.content;
          if (args.lastUpdated !== undefined) updates.lastUpdated = args.lastUpdated;
          await ctx.db.patch(existing._id, updates);
        } else {
          await ctx.db.insert("files", {
            id: args.id,
            path: args.path!,
            content: args.content!,
            app: args.app!,
            ownerId: args.ownerId!,
            lastUpdated: args.lastUpdated!,
            createdAt: args.createdAt,
          });
        }
      },
    });
    
    export const remove = mutation({
      args: { id: v.string(), app: v.string(), ownerId: v.string() },
      handler: async (ctx, { id, app, ownerId }) => {
        const doc = await ctx.db.query("files")
          .withIndex("by_sync_id", (q) => q.eq("id", id))
          .unique();
        if (doc && doc.app === app && doc.ownerId === ownerId) {
          await ctx.db.delete(doc._id);
        }
      },
    });
  5. Deploy to Convex:
    npx convex deploy
  6. Set the environment variables:
    FILE_SYNC_ENABLED=true
    FILE_SYNC_BACKEND=convex
    CONVEX_URL=https://your-project.convex.cloud

Security

The functions above have no authentication checks. In production, add Convex auth and validate the caller's identity in each function handler. Without auth, anyone with your deployment URL can read and write files.
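One pattern is to resolve the caller's identity in each handler and reject mismatches. The guard below is a plain-TypeScript sketch (the helper name and the choice to use the identity's subject as ownerId are assumptions, not part of the library):

```typescript
// Ownership guard sketch. In a Convex handler you would obtain the identity
// via `await ctx.auth.getUserIdentity()` and pass its `subject` here; the
// helper itself is plain TypeScript so the check is easy to test in isolation.
export function assertOwner(
  identitySubject: string | null,
  ownerId: string,
): void {
  if (identitySubject === null) {
    throw new Error("Unauthenticated");
  }
  if (identitySubject !== ownerId) {
    throw new Error("Forbidden: caller does not own this channel");
  }
}
```

Inside get, upsert, and remove, you would call it before touching the database, e.g. `assertOwner((await ctx.auth.getUserIdentity())?.subject ?? null, ownerId)`.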

Document size limit

Convex documents have a 1 MiB size limit. Files larger than this will fail to sync. If you need to sync large files, consider Firestore or Supabase instead.
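If you want to flag oversized content before it lands in a synced directory, a standalone size check is straightforward (illustrative only; the sync engine does not call this, it simply records oversized failures in its failures log):

```typescript
// Convex documents are capped at 1 MiB. Note the check is on byte length,
// not character count -- multi-byte UTF-8 characters matter here.
const CONVEX_DOC_LIMIT = 1024 * 1024; // 1 MiB in bytes

export function fitsInConvexDoc(content: string): boolean {
  return Buffer.byteLength(content, "utf8") <= CONVEX_DOC_LIMIT;
}
```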

The createFileSync() factory

The factory reads environment variables, creates the correct adapter, initializes sync, and returns a discriminated union so you can handle all three states cleanly:

import { createFileSync } from "@agent-native/core/adapters/sync";

const syncResult = await createFileSync({ contentRoot: "./data" });

// syncResult is one of:
// { status: "disabled" }               — FILE_SYNC_ENABLED !== "true"
// { status: "error", reason: string }  — misconfiguration or init failure
// { status: "ready", fileSync, sseEmitter, shutdown }

FileSyncResult

  • "disabled": no extra fields. Returned when FILE_SYNC_ENABLED is not "true".
  • "error": carries reason: string. Returned on missing env vars, an invalid backend, or adapter init failure.
  • "ready": carries fileSync, sseEmitter, and shutdown. Returned while sync is running.

Wiring SSE

Pass sseEmitter to createSSEHandler so clients receive real-time sync events alongside file-watcher events:

const extraEmitters =
  syncResult.status === "ready" ? [syncResult.sseEmitter] : [];

app.get(
  "/api/events",
  createSSEHandler(watcher, { extraEmitters, contentRoot: "./data" }),
);

Shutdown

Call shutdown() on SIGTERM to flush the retry queue and close database connections:

process.on("SIGTERM", async () => {
  if (syncResult.status === "ready") await syncResult.shutdown();
  process.exit(0);
});

Template server example

The default template wires everything together. Here is the full server setup:

import "dotenv/config";
import fs from "fs";
import {
  createServer,
  createFileWatcher,
  createSSEHandler,
} from "@agent-native/core";
import { createFileSync } from "@agent-native/core/adapters/sync";

export async function createAppServer() {
  const app = createServer();
  const watcher = createFileWatcher("./data");

  // --- File sync (opt-in via FILE_SYNC_ENABLED=true) ---
  const syncResult = await createFileSync({ contentRoot: "./data" });

  if (syncResult.status === "error") {
    console.warn(`[app] File sync failed: ${syncResult.reason}`);
  }

  const extraEmitters =
    syncResult.status === "ready" ? [syncResult.sseEmitter] : [];

  // --- Your API routes ---
  app.get("/api/hello", (_req, res) => {
    res.json({ message: "Hello from your @agent-native/core app!" });
  });

  // File sync status (diagnostic endpoint)
  app.get("/api/file-sync/status", (_req, res) => {
    if (syncResult.status !== "ready") {
      return res.json({ enabled: false, conflicts: 0 });
    }
    res.json({
      enabled: true,
      connected: true,
      conflicts: syncResult.fileSync.conflictCount,
    });
  });

  // SSE events (keep this last)
  app.get(
    "/api/events",
    createSSEHandler(watcher, { extraEmitters, contentRoot: "./data" }),
  );

  // Graceful shutdown
  process.on("SIGTERM", async () => {
    if (syncResult.status === "ready") await syncResult.shutdown();
    process.exit(0);
  });

  return app;
}

Sync Status & Diagnostics

data/.sync-status.json

The sync engine writes a status file to disk on every state change. This file is readable by both the UI and the agent:

{
  "enabled": true,
  "connected": true,
  "conflicts": ["data/projects/draft.json"],
  "lastSyncedAt": 1710849600000,
  "retryQueueSize": 0,
  "failedPaths": []
}

data/.sync-failures.json

When a file fails to sync after all retries (evicted from the retry queue or dropped at shutdown), an entry is appended to the failures log:

[
  {
    "path": "data/large-file.json",
    "reason": "evicted",
    "timestamp": 1710849600000
  }
]

GET /api/file-sync/status

The template server exposes a diagnostic endpoint. Returns the current sync state as JSON:

// Response when sync is active
{ "enabled": true, "connected": true, "conflicts": 0 }

// Response when sync is off or failed
{ "enabled": false, "conflicts": 0 }

useFileSyncStatus() hook

On the client, use the React hook to track sync state. It fetches initial status from the endpoint, then listens to SSE for real-time updates:

import { useFileSyncStatus } from "@agent-native/core/client";

function SyncIndicator() {
  const { enabled, connected, conflicts, lastSyncedAt } = useFileSyncStatus({
    onEvent: (event) => console.log("Sync event:", event),
  });

  if (!enabled) return null;

  return (
    <div>
      {connected ? "Synced" : "Disconnected"}
      {conflicts.length > 0 && <span> ({conflicts.length} conflicts)</span>}
    </div>
  );
}

Agent-Native Parity

File sync follows agent-native's core rule: files are the database. The agent reads and writes files — sync is transparent. A few conventions keep the agent in the loop:

Conflict notification

When a conflict cannot be auto-merged, the sync engine writes the details to application-state/sync-conflict.json. The agent can read this file and help resolve the conflict through chat:

// application-state/sync-conflict.json
{
  "type": "conflict-needs-llm",
  "path": "data/projects/draft.json",
  "localSnippet": "...(first 500 chars of local version)...",
  "remoteSnippet": "...(first 500 chars of remote version)..."
}

Scratch files

Files prefixed with _tmp- are excluded from sync by the denylist. Use this prefix for agent scratch work that should not leave the local machine — draft outputs, intermediate computations, or temporary state.

Sync status for agents

The agent can read data/.sync-status.json to check whether sync is healthy, see active conflicts, or inspect the retry queue. This is the same file the UI reads — no separate agent API is needed.
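A minimal sketch of an agent-side reader for that file (field names are taken from the example status document above; error handling is deliberately simple):

```typescript
import { readFileSync } from "node:fs";

interface SyncStatus {
  enabled: boolean;
  connected: boolean;
  conflicts: string[];
  lastSyncedAt: number;
  retryQueueSize: number;
  failedPaths: string[];
}

// Returns null when the status file is absent (sync has never run)
// or cannot be parsed.
export function readSyncStatus(
  path = "data/.sync-status.json",
): SyncStatus | null {
  try {
    return JSON.parse(readFileSync(path, "utf8")) as SyncStatus;
  } catch {
    return null;
  }
}
```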

Building a Custom Adapter

If Firestore, Supabase, and Convex don't fit your stack, you can build an adapter for any backend — DynamoDB, MongoDB, Turso, a REST API, whatever you need. An adapter is a single class that implements five methods.

The interface

Every adapter implements FileSyncAdapter:

import type {
  FileSyncAdapter,
  FileRecord,
  FileChange,
  Unsubscribe,
} from "@agent-native/core/adapters/sync";

interface FileSyncAdapter {
  query(appId: string, ownerId: string): Promise<{ id: string; data: FileRecord }[]>;
  get(id: string): Promise<{ id: string; data: FileRecord } | null>;
  set(id: string, record: Partial<FileRecord>): Promise<void>;
  delete(id: string): Promise<void>;
  subscribe(
    appId: string,
    ownerId: string,
    onChange: (changes: FileChange[]) => void,
    onError: (error: any) => void,
  ): Unsubscribe;
}

The types it works with:

interface FileRecord {
  path: string;        // File path relative to project root
  content: string;     // File contents
  app: string;         // Application identifier
  ownerId: string;     // Owner/user ID
  lastUpdated: number; // Unix timestamp (ms)
  createdAt?: number;  // Optional creation timestamp
}

interface FileChange {
  type: "added" | "modified" | "removed";
  id: string;          // Document ID
  data: FileRecord;
}

type Unsubscribe = () => void;

Implementing methods

query(appId, ownerId)

Return all file records for a given app and owner. Called at startup to load the initial state.

async query(appId: string, ownerId: string) {
  const rows = await db.select("files", { app: appId, owner_id: ownerId });
  return rows.map(row => ({ id: row.id, data: toFileRecord(row) }));
}

get(id)

Fetch a single record by its document ID. Return null if not found.

async get(id: string) {
  const row = await db.findOne("files", { id });
  if (!row) return null;
  return { id: row.id, data: toFileRecord(row) };
}

set(id, record)

Upsert a file record. The record argument is Partial<FileRecord> — on updates, only changed fields are passed. Your implementation should merge with existing data, not overwrite.

async set(id: string, record: Partial<FileRecord>) {
  // On updates only changed fields are present, so undefined must mean
  // "keep the existing value". Your db layer should skip or COALESCE
  // undefined fields rather than writing them as null.
  await db.upsert("files", {
    id,
    path: record.path,
    content: record.content,
    app: record.app,
    owner_id: record.ownerId,
    last_updated: record.lastUpdated,
    created_at: record.createdAt,
  });
}

delete(id)

Delete a file record by ID.

async delete(id: string) {
  await db.remove("files", { id });
}

subscribe(appId, ownerId, onChange, onError)

Listen for remote changes and call onChange with an array of FileChange objects. Return an unsubscribe function.

Real-time listener

If your database supports change streams (Firestore onSnapshot, Supabase Realtime, MongoDB Change Streams), use them. Lower latency, no wasted queries.

Polling

If your database doesn't support real-time, poll on an interval. Keep an in-memory snapshot and diff against it.

Polling approach (works with any database):

subscribe(appId, ownerId, onChange, onError): Unsubscribe {
  const snapshot = new Map<string, { content: string; lastUpdated: number }>();
  let stopped = false;

  const poll = async () => {
    if (stopped) return;
    try {
      const records = await this.query(appId, ownerId);
      const currentIds = new Set<string>();
      const changes: FileChange[] = [];

      for (const { id, data } of records) {
        currentIds.add(id);
        const prev = snapshot.get(id);

        if (!prev) {
          changes.push({ type: "added", id, data });
        } else if (prev.content !== data.content || prev.lastUpdated !== data.lastUpdated) {
          changes.push({ type: "modified", id, data });
        }
        snapshot.set(id, { content: data.content, lastUpdated: data.lastUpdated });
      }

      for (const [id] of snapshot) {
        if (!currentIds.has(id)) {
          changes.push({
            type: "removed", id,
            data: { path: "", content: "", app: appId, ownerId, lastUpdated: 0 },
          });
          snapshot.delete(id);
        }
      }

      if (changes.length > 0) onChange(changes);
    } catch (err) {
      onError(err);
    }
    if (!stopped) setTimeout(poll, 2000);
  };

  poll();
  return () => { stopped = true; };
}

Full example

A complete adapter for a generic SQL database (e.g., Turso, PlanetScale, or any driver that supports parameterized queries):

import type {
  FileSyncAdapter, FileRecord, FileChange, Unsubscribe
} from "@agent-native/core/adapters/sync";

function rowToRecord(row: any): FileRecord {
  return {
    path: row.path,
    content: row.content,
    app: row.app,
    ownerId: row.owner_id,
    lastUpdated: Number(row.last_updated),
    createdAt: row.created_at != null ? Number(row.created_at) : undefined,
  };
}

export class MyDatabaseAdapter implements FileSyncAdapter {
  constructor(private db: any) {}

  async query(appId: string, ownerId: string) {
    const { rows } = await this.db.execute(
      "SELECT * FROM files WHERE app = ? AND owner_id = ?",
      [appId, ownerId]
    );
    return rows.map((row: any) => ({ id: row.id, data: rowToRecord(row) }));
  }

  async get(id: string) {
    const { rows } = await this.db.execute(
      "SELECT * FROM files WHERE id = ? LIMIT 1", [id]
    );
    if (rows.length === 0) return null;
    return { id: rows[0].id, data: rowToRecord(rows[0]) };
  }

  async set(id: string, record: Partial<FileRecord>) {
    await this.db.execute(
      `INSERT INTO files (id, path, content, app, owner_id, last_updated, created_at)
       VALUES (?, ?, ?, ?, ?, ?, ?)
       ON CONFLICT (id) DO UPDATE SET
         path = COALESCE(excluded.path, files.path),
         content = COALESCE(excluded.content, files.content),
         app = COALESCE(excluded.app, files.app),
         owner_id = COALESCE(excluded.owner_id, files.owner_id),
         last_updated = COALESCE(excluded.last_updated, files.last_updated),
         created_at = COALESCE(excluded.created_at, files.created_at)`,
      [id, record.path ?? "", record.content ?? "", record.app ?? "",
       record.ownerId ?? "", record.lastUpdated ?? 0, record.createdAt ?? null]
    );
  }

  async delete(id: string) {
    await this.db.execute("DELETE FROM files WHERE id = ?", [id]);
  }

  subscribe(appId: string, ownerId: string, onChange: (c: FileChange[]) => void, onError: (e: any) => void): Unsubscribe {
    const snapshot = new Map<string, { content: string; lastUpdated: number }>();
    let stopped = false;

    const poll = async () => {
      if (stopped) return;
      try {
        const records = await this.query(appId, ownerId);
        const currentIds = new Set<string>();
        const changes: FileChange[] = [];

        for (const { id, data } of records) {
          currentIds.add(id);
          const prev = snapshot.get(id);
          if (!prev) {
            changes.push({ type: "added", id, data });
          } else if (prev.content !== data.content || prev.lastUpdated !== data.lastUpdated) {
            changes.push({ type: "modified", id, data });
          }
          snapshot.set(id, { content: data.content, lastUpdated: data.lastUpdated });
        }

        for (const [id] of snapshot) {
          if (!currentIds.has(id)) {
            changes.push({
              type: "removed", id,
              data: { path: "", content: "", app: appId, ownerId, lastUpdated: 0 },
            });
            snapshot.delete(id);
          }
        }

        if (changes.length > 0) onChange(changes);
      } catch (err) { onError(err); }
      if (!stopped) setTimeout(poll, 2000);
    };

    poll();
    return () => { stopped = true; };
  }
}

Wiring it up

Pass your adapter to FileSync in your server setup:

import { FileSync } from "@agent-native/core/adapters/sync";
import { MyDatabaseAdapter } from "./my-adapter";

const adapter = new MyDatabaseAdapter(dbClient);

const sync = new FileSync({
  appId: "my-app",
  ownerId: "shared",
  contentRoot: "./data",
  adapter,
});

// Start syncing
await sync.initFileSync();

The sync engine handles the rest — watching files, pushing changes, pulling remote updates, and resolving conflicts via three-way merge.

Table schema

All SQL-based adapters use the same table schema:

CREATE TABLE files (
  id TEXT PRIMARY KEY,
  path TEXT,
  content TEXT,
  app TEXT,
  owner_id TEXT,
  last_updated BIGINT,
  created_at BIGINT
);

CREATE INDEX idx_files_app_owner ON files(app, owner_id);

Testing

Test your adapter against the five methods:

import { describe, it, expect } from "vitest";
import { MyDatabaseAdapter } from "./my-adapter";

describe("MyDatabaseAdapter", () => {
  const adapter = new MyDatabaseAdapter(testDb);

  it("set and get", async () => {
    await adapter.set("test-1", {
      path: "data/test.json",
      content: '{"hello":"world"}',
      app: "test-app",
      ownerId: "user-1",
      lastUpdated: Date.now(),
    });

    const result = await adapter.get("test-1");
    expect(result).not.toBeNull();
    expect(result!.data.path).toBe("data/test.json");
    expect(result!.data.content).toBe('{"hello":"world"}');
  });

  it("query filters by app and owner", async () => {
    const results = await adapter.query("test-app", "user-1");
    expect(results.length).toBeGreaterThan(0);
    expect(results.every(r => r.data.app === "test-app")).toBe(true);
  });

  it("delete removes the record", async () => {
    await adapter.delete("test-1");
    const result = await adapter.get("test-1");
    expect(result).toBeNull();
  });

  it("subscribe detects changes", async () => {
    const changes = await new Promise<any[]>((resolve) => {
      const unsub = adapter.subscribe("test-app", "user-1", (c) => {
        unsub();
        resolve(c);
      }, console.error);

      // Trigger a change
      adapter.set("test-2", {
        path: "data/new.json", content: "{}", app: "test-app",
        ownerId: "user-1", lastUpdated: Date.now(),
      });
    });

    expect(changes.some(c => c.type === "added")).toBe(true);
  });
});

Publishing

You can publish your adapter as a standalone npm package. Export the adapter class and any config types:

// package.json
{
  "name": "agent-native-adapter-turso",
  "peerDependencies": {
    "@agent-native/core": ">=0.2"
  }
}

Users install your package and pass it to FileSync — that's it. The adapter interface is stable and versioned with @agent-native/core.