How to Migrate a Shared Database to Isolated Tenants
You have a shared database with a tenant_id column. Here is how to split it into isolated per-tenant databases without downtime and without rewriting your application.
You already have the data. You just need to split it.
Most SaaS applications start with a single shared database. Every customer's data lives in the same tables, separated by a tenant_id column or something similar. A company_id field. An org_id. A workspace_id. The name varies, the pattern is the same.
At some point, you decide that isolated databases per tenant would be better. Stronger security, simpler compliance, no noisy neighbors, cleaner operations. But the data is already in a shared database with hundreds of thousands or millions of rows. Migrating feels like a project that will take months.
It does not have to. If your data already has a routing field that identifies which rows belong to which tenant, the migration is mechanical. Read the rows, group by tenant, write each group into its own database. The challenge is not the concept. It is doing it reliably across tables, foreign keys, and multiple database engines without breaking your application.
Step 1: Understand what you have
Before migrating anything, you need to know the shape of your data. How many tenants are in the database? How many rows per tenant? Do all tables have the routing field, or do some tables reference it indirectly through foreign keys?
The analysis step answers all of these questions without modifying your source database.
The analysis scans your source database, finds all distinct values in the routing field, counts rows per tenant per table, and verifies that the routing field exists in every table. If some tables do not have the routing field directly, the analyzer traces foreign key relationships to determine which tenant each row belongs to.
Your source database is never modified during analysis. It is a read-only operation.
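As a minimal sketch of what such an analysis does, the following uses Python's built-in sqlite3 module against a toy shared database. This is an illustration, not the tool itself: it counts rows per tenant per table and flags tables that lack the routing field, but omits the foreign-key tracing described above.

```python
import sqlite3

def analyze(conn, routing_field="tenant_id"):
    """Read-only scan: rows per tenant per table, plus tables missing the routing field."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    counts = {}   # {table: {tenant_value: row_count}}
    missing = []  # tables without the routing field
    for table in tables:
        cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
        if routing_field not in cols:
            missing.append(table)
            continue
        counts[table] = dict(conn.execute(
            f"SELECT {routing_field}, COUNT(*) FROM {table} GROUP BY {routing_field}"))
    return counts, missing

# Usage against a small in-memory example
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, tenant_id TEXT);
    CREATE TABLE currencies (code TEXT);
    INSERT INTO orders VALUES (1, 'acme'), (2, 'acme'), (3, 'globex');
""")
counts, missing = analyze(conn)
# counts -> {'orders': {'acme': 2, 'globex': 1}}; missing -> ['currencies']
```

Because the sketch issues only SELECT and PRAGMA statements, it has the same property as the real analysis: the source database is never written to.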
Step 2: Create a workspace for the schema
Before splitting the data, you need a workspace that holds the target schema. This is the blueprint that every tenant database will be created from.
You can import the schema directly from your source database.
The schema import reads the structure of your source database: tables, columns, indexes, constraints. It creates matching objects in the workspace. No data is copied at this step, only the structure.
One important detail: the routing field (tenant_id) exists in your source schema but will not exist in the tenant databases. Each tenant has their own database, so there is no need for a column that identifies which tenant a row belongs to. The import process can optionally drop the routing column from the target schema.
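A schema copy that drops the routing column can be sketched as follows, again with Python's sqlite3 module. This is a simplified illustration: it carries over only column names and types, whereas the real schema import also brings indexes and constraints.

```python
import sqlite3

def import_schema(source, target, routing_field="tenant_id", drop_routing=True):
    """Recreate each source table in the target, optionally without the routing
    column. Sketch only: indexes and constraints are not carried over here."""
    tables = [r[0] for r in source.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        cols = [(r[1], r[2]) for r in source.execute(f"PRAGMA table_info({table})")]
        if drop_routing:
            cols = [(name, ctype) for name, ctype in cols if name != routing_field]
        col_defs = ", ".join(f"{name} {ctype}" for name, ctype in cols)
        target.execute(f"CREATE TABLE {table} ({col_defs})")

source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE orders (id INTEGER, total REAL, tenant_id TEXT)")
target = sqlite3.connect(":memory:")
import_schema(source, target)
cols = [r[1] for r in target.execute("PRAGMA table_info(orders)")]
# cols -> ['id', 'total']  (tenant_id was dropped)
```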
Step 3: Split and import the data
This is the actual migration. The import reads your source database, groups rows by the routing field, creates a tenant for each distinct value, and writes each tenant's rows into their own isolated database.
The import creates one tenant per distinct routing field value. If the routing field contains "acme", "globex", and "wayne", three tenants are created, each with their own database containing only their rows.
The --create-tenants flag automatically provisions a tenant for each value found. The --drop-routing flag removes the routing column from the imported data since it is no longer needed when each tenant has their own database.
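The core of the split is simple enough to sketch. The following illustration, assuming SQLite and using in-memory databases to stand in for provisioned tenant databases, mirrors what the flags describe: one database per distinct routing value, with the routing column dropped from the copied rows.

```python
import sqlite3

def split_table(source, table, routing_field="tenant_id"):
    """Group a shared table's rows by the routing field and write each group
    into its own per-tenant database, dropping the routing column."""
    cols = [r[1] for r in source.execute(f"PRAGMA table_info({table})")]
    kept = [c for c in cols if c != routing_field]
    tenants = {}
    for (value,) in source.execute(
            f"SELECT DISTINCT {routing_field} FROM {table}"):
        db = sqlite3.connect(":memory:")  # one isolated database per tenant
        db.execute(f"CREATE TABLE {table} ({', '.join(kept)})")
        rows = source.execute(
            f"SELECT {', '.join(kept)} FROM {table} WHERE {routing_field} = ?",
            (value,)).fetchall()
        db.executemany(
            f"INSERT INTO {table} VALUES ({', '.join('?' * len(kept))})", rows)
        tenants[value] = db
    return tenants

source = sqlite3.connect(":memory:")
source.executescript("""
    CREATE TABLE orders (id INTEGER, total REAL, tenant_id TEXT);
    INSERT INTO orders VALUES (1, 9.5, 'acme'), (2, 3.0, 'globex'), (3, 1.2, 'acme');
""")
tenants = split_table(source, "orders")
acme_rows = tenants["acme"].execute("SELECT id, total FROM orders").fetchall()
# acme_rows -> [(1, 9.5), (3, 1.2)] -- only acme's rows, no tenant_id column
```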
Step 4: Monitor the progress
For large databases, the import runs asynchronously. You can check progress at any time.
Each tenant is imported independently. If one tenant's import fails, the others are not affected. Failed imports can be retried without re-importing tenants that already completed.
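The per-tenant independence can be sketched as a loop that isolates failures and retries only what failed. The import_one function here is a hypothetical stand-in for one tenant's import job:

```python
def run_imports(tenants, import_one):
    """Import each tenant independently; a failure in one does not stop the
    others. Returns completed and failed sets so only failures are retried."""
    completed, failed = set(), {}
    for tenant in tenants:
        try:
            import_one(tenant)
            completed.add(tenant)
        except Exception as err:
            failed[tenant] = err
    return completed, failed

# First pass: 'globex' fails, the others complete.
flaky = {"globex"}
def import_one(tenant):
    if tenant in flaky:
        raise RuntimeError(f"import failed for {tenant}")

done, failed = run_imports(["acme", "globex", "wayne"], import_one)
# done -> {'acme', 'wayne'}; failed keys -> {'globex'}

# Retry pass touches only the failed tenant.
flaky.clear()
retried, still_failed = run_imports(failed, import_one)
# retried -> {'globex'}; still_failed -> {}
```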
What happens to the routing column
In your source database, every table has a tenant_id column (or equivalent) that identifies which rows belong to which customer. In the isolated model, this column is redundant. The database itself is the isolation boundary.
When you import with --drop-routing, the column is removed from the target schema and not included in the imported data. Your application code that previously included WHERE tenant_id = ? in every query no longer needs that clause. The query runs against a database that contains only one tenant's data.
Before: SELECT * FROM orders WHERE tenant_id = ?
After: SELECT * FROM orders
Your application still needs to know which tenant it is serving so it connects to the correct database. But the query logic is simpler because the tenant filter is no longer your application's responsibility.
Tables without the routing field
Not every table in a shared database has a direct tenant_id column. Some tables reference it indirectly through foreign keys. A line_items table might not have tenant_id, but it has an order_id that references the orders table, which does have tenant_id.
The import analyzer traces these foreign key paths. When it encounters a table without the routing field, it follows the foreign key chain until it finds a table that has it. The rows are then assigned to the correct tenant based on the resolved relationship.
If a table has no routing field and no foreign key path to one, the analyzer reports it. You can then decide whether that table is shared reference data (like a list of countries or currencies) that should be copied to every tenant, or whether it needs a routing field added before import.
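A single hop of that foreign-key resolution can be sketched as a join. This illustration (Python's sqlite3 module, with the line_items/orders tables from the example above) resolves each child row to its tenant; the real analyzer generalizes this to chains of several hops.

```python
import sqlite3

def resolve_tenants(conn, table, fk_col, parent, parent_key, routing_field="tenant_id"):
    """Assign rows of a table that lacks the routing field to tenants by
    following one foreign-key hop into a parent table that has it."""
    sql = (f"SELECT child.rowid, parent.{routing_field} "
           f"FROM {table} AS child JOIN {parent} AS parent "
           f"ON child.{fk_col} = parent.{parent_key}")
    return dict(conn.execute(sql))

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, tenant_id TEXT);
    CREATE TABLE line_items (id INTEGER PRIMARY KEY, order_id INTEGER);
    INSERT INTO orders VALUES (1, 'acme'), (2, 'globex');
    INSERT INTO line_items VALUES (10, 1), (11, 1), (12, 2);
""")
owners = resolve_tenants(conn, "line_items", "order_id", "orders", "id")
# owners -> {10: 'acme', 11: 'acme', 12: 'globex'}
```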
Multi-engine migration
If your application uses multiple database engines, you can run separate imports for each one. Create a workspace per engine, import the schema, then import the data.
Each engine's data is imported independently. Tenants that exist across multiple engines get multiple databases, all routed through the same proxy, all isolated by the same mechanism.
The full import shortcut
If you want to import both schema and data in a single operation, use import-full. This combines the schema import and data import into one step.
This is the fastest path from a shared database to isolated tenants.
What about the source database?
The import process reads from your source database but never modifies it. Your existing application continues running against the original database throughout the entire migration. There is no downtime, no cut-over, no moment where both systems are partially running.
Once the import is complete and you have verified the data in the new tenant databases, you switch your application to connect through the TenantsDB proxy instead of directly to your source database. The switch is a connection string change, not a code change.
Your source database remains intact. Keep it as a backup until you are confident the migration is complete. Then decommission it on your own schedule.
After migration
Once your data is split into isolated tenant databases, the platform handles everything that follows. Schema changes are deployed through versioned blueprints. New tenants are provisioned automatically. Search indexing begins as soon as data flows through the proxy. Backups run on schedule per tenant.
If a specific tenant later needs dedicated infrastructure, a different region, or elevated compliance controls, they can be promoted from shared to dedicated with zero downtime and no data re-migration.
The migration from shared tables to isolated databases is a one-time operation. Everything after it is simpler.
Start free with up to 5 tenants at docs.tenantsdb.com. Full migration documentation is available at docs.tenantsdb.com/quickstart.