Guides

Mar 10, 2026

Build a Multi-Tenant SaaS Backend in 10 Minutes

From zero to isolated tenant databases in 10 minutes. A step-by-step walkthrough using the TenantsDB CLI with no infrastructure to manage.

What you will build

By the end of this walkthrough, you will have:

  • A control database for your application logic (users, billing, config)

  • A tenant workspace with a versioned schema

  • Three isolated tenant databases, each with its own connection string

  • Full-text search across all tenants

  • Everything running with TLS, query logging, and automated backups

No Docker. No Kubernetes. No infrastructure to manage. Just the CLI and 10 minutes.

Minute 0: Install and sign up

Install the CLI. One command; it works on macOS, Linux, and Windows.

Create your account.


Your API key is saved automatically. Every command from here uses it.

Minute 1: Create your control database

Your application needs a database for its own data: users, billing, configuration. This is not tenant data. It is your application's data.

Create a control-mode workspace. Control mode gives you a managed database with full DDL access. No blueprints, no versioning, no tenant deployment. Just a database.


Connect with any PostgreSQL client and create your application tables.

CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email TEXT UNIQUE NOT NULL,
    tenant_id TEXT NOT NULL,
    plan TEXT DEFAULT 'free'
);

CREATE TABLE subscriptions (
    id SERIAL PRIMARY KEY,
    user_id INT REFERENCES users(id),
    plan TEXT NOT NULL,
    status TEXT DEFAULT 'active'
);

Your control database is ready. Your application uses this connection string directly.
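In application code, the control database is typically where you resolve which tenant a request belongs to, since the users table above carries a tenant_id. A minimal sketch of that lookup, run against an in-memory SQLite stand-in so it works without live credentials (against the real control database you would use psycopg2 and `%s` placeholders instead of `?`); `tenant_for_user` is an illustrative helper name, not part of TenantsDB:

```python
import sqlite3

def tenant_for_user(cursor, email):
    """Look up which tenant a user belongs to in the control database."""
    cursor.execute("SELECT tenant_id FROM users WHERE email = ?", (email,))
    row = cursor.fetchone()
    return row[0] if row else None

# Demo against an in-memory SQLite stand-in for the control database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        email TEXT UNIQUE NOT NULL,
        tenant_id TEXT NOT NULL,
        plan TEXT DEFAULT 'free'
    )
""")
cur.execute("INSERT INTO users (email, tenant_id) VALUES (?, ?)",
            ("ada@acme.com", "acme"))

print(tenant_for_user(cur, "ada@acme.com"))  # acme
```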

Minute 3: Create your tenant workspace

Now create the schema that every customer's database will use. This is a tenant-mode workspace. Every schema change you make here is tracked as a versioned blueprint.


Connect to the workspace and design your schema.

CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    price DECIMAL(10,2) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    product_id INT REFERENCES products(id),
    quantity INT NOT NULL,
    total DECIMAL(10,2) NOT NULL,
    status TEXT DEFAULT 'pending',
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_orders_status ON orders(status);

Every DDL statement is captured automatically as a blueprint version. You do not need to write migration files.

Minute 5: Create your first tenants

Each tenant gets its own isolated database with the schema you just designed.


Three tenants. Three isolated databases. Each has the same schema (products and orders tables with the index). Each has its own connection string. Data in one database is not accessible from another.

Minute 6: Insert data

Insert some data into each tenant. You can use the CLI, the API, or connect directly with any PostgreSQL client using the connection string.


Each INSERT goes to the correct tenant's database automatically. The proxy handles routing based on the tenant name.
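The routing key is just the database name: as the connection example later in this guide shows, the proxy expects the blueprint name and tenant name joined by a double underscore. A tiny sketch of building that name per tenant (using the two tenant names that appear in this walkthrough):

```python
def tenant_dbname(blueprint: str, tenant: str) -> str:
    """Build the database name the proxy routes on:
    blueprint name and tenant name joined by a double underscore."""
    return f"{blueprint}__{tenant}"

# One connection shape per tenant; only the dbname differs.
for tenant in ["acme", "globex"]:
    print(tenant_dbname("orders", tenant))
```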

Minute 7: Query across tenants

Query a single tenant:


Query all tenants at once:


One query, executed against every tenant's database in parallel, with results returned per tenant. Acme sees only acme's data. Globex sees only globex's data. The isolation is structural.
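If you wanted to reproduce the fan-out pattern in your own application code, it amounts to running the same SQL against each tenant's connection concurrently and keeping the results keyed by tenant. A sketch, assuming a `run_query` helper that opens a connection to one tenant's database and executes the statement; it is stubbed here so the example runs without live credentials:

```python
from concurrent.futures import ThreadPoolExecutor

TENANTS = ["acme", "globex"]  # tenant names from the walkthrough

def run_query(tenant, sql):
    """Hypothetical helper: connect to this tenant's database
    (dbname f"orders__{tenant}") and execute sql.
    Stubbed with a canned row for illustration."""
    return [(tenant, "pending")]

def query_all_tenants(sql):
    """Fan the same query out to every tenant's database in parallel,
    collecting results keyed by tenant. Rows are never merged across
    databases, so each tenant's data stays separate."""
    with ThreadPoolExecutor(max_workers=len(TENANTS)) as pool:
        futures = {t: pool.submit(run_query, t, sql) for t in TENANTS}
        return {t: f.result() for t, f in futures.items()}

results = query_all_tenants("SELECT * FROM orders WHERE status = 'pending'")
```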

Minute 8: Search across databases

Every write that passes through the proxy is automatically indexed. Search is available immediately.


No search engine to deploy. No indexing pipeline to configure. No sync jobs to maintain. Data is searchable the moment it is written.

Minute 9: Evolve your schema

Add a column to the orders table in your workspace.


The change is tracked as a new blueprint version. Deploy it to all tenants:


All three tenants now have the discount_code column. One command, every tenant updated, no migration files written.
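The schema change itself is ordinary DDL: adding discount_code is a single ALTER TABLE. Sketched below against an in-memory SQLite stand-in (with an abridged orders table and an invented "SPRING10" value) so the behavior is visible without live credentials; existing rows get NULL, and new rows can populate the column immediately:

```python
import sqlite3

# In-memory SQLite stand-in for one tenant's database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        status TEXT DEFAULT 'pending'
    )
""")

# The schema change is plain DDL: add the new column.
cur.execute("ALTER TABLE orders ADD COLUMN discount_code TEXT")

# New rows can set the column right away.
cur.execute("INSERT INTO orders (status, discount_code) VALUES (?, ?)",
            ("pending", "SPRING10"))
cols = [d[0] for d in cur.execute("SELECT * FROM orders").description]
print(cols)  # ['id', 'status', 'discount_code']
```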

Minute 10: Connect your application

Your application connects to the proxy using standard database drivers. No SDK required. No special client library. Any PostgreSQL driver works.

import psycopg2

# Your app knows which tenant the request is for (from auth/session)
tenant = "acme"
blueprint = "orders"

# Connect through the proxy
conn = psycopg2.connect(
    host="pg.tenantsdb.com",
    port=5432,
    dbname=f"{blueprint}__{tenant}",
    user="tdb_2abf90d3",
    password="tdb_****",
    sslmode="require"
)

cursor = conn.cursor()
cursor.execute("SELECT * FROM orders WHERE status = 'completed'")
rows = cursor.fetchall()

The same pattern works in any language: Node.js with pg, Go with pgx, Java with JDBC, Ruby with the pg gem. If it connects to PostgreSQL, it works with TenantsDB.

For MySQL, MongoDB, and Redis, the same approach applies with their respective drivers and proxy endpoints.

What you have now

In 10 minutes, without managing any infrastructure:

  • A control database for your application logic, accessible at a standard PostgreSQL connection string.

  • A versioned schema that tracks every DDL change and deploys to all tenants in one command.

  • Three isolated tenant databases, each with its own connection string, its own data, and its own query logs.

  • Full-text search across all tenants and all database engines, with tenant isolation enforced at the index level.

  • Automated backups running on schedule to encrypted S3 storage.

  • TLS encryption on every connection. Query logging per tenant. Rate limiting and abuse detection at the proxy level.

What comes next

This walkthrough used PostgreSQL. TenantsDB also supports MySQL, MongoDB, and Redis. You can add additional workspaces with different engines and attach multiple databases to the same tenant.


Tenant "acme" now has both a PostgreSQL database (orders) and a MongoDB database (documents), both isolated, both searchable, both backed up.

When a tenant needs dedicated infrastructure, promote them with one command:

Zero downtime. Connection strings unchanged. Your application does not know it happened.

Start free with up to 5 tenants at docs.tenantsdb.com.