Architecture

Mar 16, 2026

Multi-Tenant Search: Querying Across Isolated Databases

When every tenant has their own database, search becomes an engineering challenge. Here is how cross-database, tenant-isolated search works without building a custom pipeline.

The search problem nobody talks about

Multi-tenant database isolation is a well-understood pattern. Give each tenant their own database, and data separation is guaranteed at the infrastructure level. No shared tables, no WHERE clauses, no risk of cross-tenant data leaks.

But there is a problem that surfaces immediately after you implement this pattern: how do you search?

When all your data lived in one database, search was straightforward. Run a query against the table, maybe add a full-text index, and return results. With isolated databases, that same search now needs to hit hundreds or thousands of separate databases, merge the results, rank them, and return a response in under 500 milliseconds.

This is the search problem that comes with multi-tenant isolation, and most teams underestimate how much engineering it requires.

Why search is hard across isolated databases

Consider a SaaS application with 200 tenants. Each tenant has their own PostgreSQL database, their own MongoDB database for documents, and their own Redis instance for sessions.

A support agent needs to find a customer named "Alice" across the entire platform. Without a search layer, the application would need to:

  1. Connect to all 200 PostgreSQL databases and run SELECT queries.

  2. Connect to all 200 MongoDB instances and run find queries.

  3. Aggregate results from 400 separate queries.

  4. Rank and deduplicate the results.

  5. Return the response before the support agent's browser times out.

This is not a practical approach. Even with connection pooling and parallel execution, 400 database queries per search request does not scale.
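The fan-out above can be sketched as follows. This is a minimal illustration with stubbed per-tenant queries standing in for real database connections; the function names are made up for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub standing in for a real connection to one tenant's database.
# In production this would be a SELECT against PostgreSQL or a find()
# against MongoDB, opened against that tenant's own instance.
def query_tenant(tenant_id: int, term: str) -> list[dict]:
    fake_data = {7: [{"name": "Alice", "tenant": 7}],
                 42: [{"name": "Alicia", "tenant": 42}]}
    return [r for r in fake_data.get(tenant_id, []) if term in r["name"]]

def naive_search(term: str, tenant_count: int = 200) -> list[dict]:
    # One query per tenant per engine: with 200 tenants on two engines,
    # that is 400 round trips for a single search request.
    with ThreadPoolExecutor(max_workers=32) as pool:
        per_tenant = pool.map(lambda t: query_tenant(t, term),
                              range(tenant_count))
    # Aggregate, then rank (here: a trivial sort by tenant id).
    merged = [row for rows in per_tenant for row in rows]
    return sorted(merged, key=lambda r: r["tenant"])
```

Even with the thread pool hiding some latency, every search pays the cost of the slowest tenant database, and connection limits become the bottleneck long before 200 tenants.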

The standard solution is a search index. But building one for multi-tenant isolated databases introduces its own set of challenges.

Challenge 1: Indexing across multiple database engines

Your tenants might use PostgreSQL for relational data, MongoDB for documents, and Redis for session state. Each engine has a different data model, a different change detection mechanism, and a different way of representing the same data.

A search indexing pipeline needs to understand all of them. PostgreSQL rows need to be flattened into searchable documents. MongoDB documents need nested fields extracted. Redis hashes need key-value pairs normalized.
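As a sketch of that normalization step (the function names and document shape are illustrative, not a real pipeline), each engine's native shape can be reduced to one flat searchable document:

```python
def flatten(value: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into dotted keys, stringifying leaf values."""
    out = {}
    for k, v in value.items():
        key = f"{prefix}{k}"
        if isinstance(v, dict):
            out.update(flatten(v, key + "."))
        else:
            out[key] = str(v)
    return out

# Three source shapes, one searchable-document shape.
def from_pg_row(table: str, row: dict) -> dict:
    return {"_source": f"postgres:{table}", **flatten(row)}

def from_mongo_doc(coll: str, doc: dict) -> dict:
    # Nested fields become dotted keys, e.g. "address.city".
    return {"_source": f"mongo:{coll}", **flatten(doc)}

def from_redis_hash(key: str, fields: dict) -> dict:
    # Hash fields normalized to the same flat key-value form.
    return {"_source": f"redis:{key}", **flatten(fields)}
```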

Most search solutions assume a single data source. Elasticsearch, Typesense, and Meilisearch all work well when connected to one database. Connecting them to three different database engines across hundreds of tenants requires a custom indexing pipeline per engine.

Challenge 2: Maintaining tenant isolation in the index

The entire point of database-per-tenant isolation is that one tenant's data is physically separate from another's. If you dump all tenant data into a single search index with a tenant_id field, you have recreated the exact problem you were trying to avoid.

A search query with a missing tenant filter now exposes every customer's data through the search layer. The isolation you enforced at the database level is bypassed at the search level.

The proper approach is to partition the search index by tenant at write time. Each tenant's data goes into its own namespace. When a search query arrives, only the relevant namespace is queried. There is no filter to misconfigure because other namespaces are never touched.
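A minimal in-memory sketch of write-time partitioning (illustrative only, not TenantsDB's actual implementation):

```python
class TenantPartitionedIndex:
    def __init__(self) -> None:
        self._namespaces: dict[str, list[dict]] = {}

    def index(self, tenant_id: str, doc: dict) -> None:
        # Partitioning happens at write time: the document lands in the
        # tenant's own namespace, not a shared index with a tenant_id field.
        self._namespaces.setdefault(f"tenant:{tenant_id}", []).append(doc)

    def search(self, tenant_id: str, term: str) -> list[dict]:
        # Only this tenant's namespace is read. There is no WHERE-style
        # filter that could be omitted; other namespaces are never touched.
        docs = self._namespaces.get(f"tenant:{tenant_id}", [])
        return [d for d in docs if any(term in str(v) for v in d.values())]
```

The design choice worth noting: isolation lives in the data layout, so a buggy query can at worst return the wrong results within one tenant, never another tenant's data.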

Challenge 3: Keeping the index in sync

Schema changes happen. Tables are added, columns are renamed, new collections appear. Every change in the source database needs to be reflected in the search index. If the index falls behind, search returns stale or incomplete results.

With one database, keeping an index in sync is manageable. With hundreds of databases across multiple engines, it is a synchronization problem that requires event-driven architecture, reliable message queues, and retry logic for every database type.
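The moving parts — change events, a queue, retries, a dead-letter list — can be sketched like this (all names are illustrative):

```python
import queue

DEAD_LETTER: list[dict] = []  # events that exhausted their retries

def apply_event(index: dict, event: dict, max_retries: int = 3) -> None:
    """Apply one change event from a source database to the search index."""
    for _ in range(max_retries):
        try:
            if event["op"] == "upsert":    # row inserted/updated, doc written
                index[event["id"]] = event["doc"]
            elif event["op"] == "delete":  # row or document removed
                index.pop(event["id"], None)
            return
        except Exception:
            continue  # transient failure: retry
    DEAD_LETTER.append(event)  # give up; surface for manual replay

def drain(index: dict, events: "queue.Queue[dict]") -> None:
    while not events.empty():
        apply_event(index, events.get())
```

Multiply this by every database engine's change-detection mechanism (WAL replication slots, change streams, keyspace notifications) and the maintenance burden becomes clear.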

Challenge 4: Cross-tenant queries for your own operations

Sometimes you need to search across all tenants. Not for your customers, but for your own team. Find all tenants with overdue invoices. Identify accounts that match a compliance keyword. Locate a specific record when a customer contacts support but you do not know which tenant they belong to.

This requires a search endpoint that queries all tenant namespaces in parallel, returns results tagged with the tenant ID, and still respects the isolation boundaries so that per-tenant search results are never mixed.
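Building on a namespace-per-tenant layout, such an endpoint fans out over the namespaces in parallel and tags each hit — a sketch with illustrative names:

```python
from concurrent.futures import ThreadPoolExecutor

def cross_tenant_search(namespaces: dict[str, list[dict]],
                        term: str) -> list[dict]:
    """Query every tenant namespace in parallel; tag hits with the tenant."""
    def search_one(item: tuple[str, list[dict]]) -> list[dict]:
        tenant_id, docs = item
        # Each worker only ever reads a single tenant's namespace, so
        # per-tenant results are never mixed at the storage level.
        return [{**d, "tenant_id": tenant_id} for d in docs
                if any(term in str(v) for v in d.values())]

    with ThreadPoolExecutor() as pool:
        batches = pool.map(search_one, namespaces.items())
    return [hit for batch in batches for hit in batch]
```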

How TenantsDB Search solves this

TenantsDB sits between your application and every database. Because every write passes through the proxy, the platform sees every INSERT, UPDATE, and document write across every tenant and every database engine.

The indexer subscribes to these write events and automatically indexes the affected data into a tenant-partitioned search namespace. By the time your application issues a search, the data is already indexed.
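Conceptually, the write path looks like the sketch below. The class and method names are hypothetical, not TenantsDB's API; the point is the ordering: the backend write is acknowledged, and an index event is emitted as a side effect.

```python
class WriteThroughProxy:
    """Sketch of a proxy that sees every write and feeds the indexer."""

    def __init__(self, backend, index_events: list) -> None:
        self.backend = backend            # the tenant's real database
        self.index_events = index_events  # consumed async by the indexer

    def write(self, tenant_id: str, doc_id: str, doc: dict) -> str:
        ack = self.backend.write(doc_id, doc)  # the actual INSERT/UPDATE
        # Emit an index event; the application never sees this step.
        self.index_events.append((tenant_id, doc_id, doc))
        return ack

class FakeBackend:
    """Stand-in for a tenant database in this sketch."""
    def __init__(self) -> None:
        self.rows: dict = {}
    def write(self, doc_id: str, doc: dict) -> str:
        self.rows[doc_id] = doc
        return "OK"
```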


There is no indexing pipeline to build. No sync job to maintain. No per-engine configuration. Data from PostgreSQL, MySQL, MongoDB, and Redis is indexed through the same mechanism and queryable through the same API.

Tenant isolation at the index level

Search isolation is structural, not filter-based. When data is indexed, it is written into a namespace derived from the tenant ID. When a search query arrives, only that tenant's namespace is queried. Other namespaces are not accessed.

This is the same principle as database-per-tenant isolation, applied to the search layer. There is no tenant_id filter that could be omitted. The isolation is enforced by the architecture, not by a query parameter.


The cross-tenant endpoint queries all namespaces in parallel and tags each result with its tenant ID. Your support team can use it to find records across the entire platform. Your customers never see it.

Filtering by database and collection

Not every search needs to hit every database. You can narrow scope by database engine or collection name. Filtering happens at the index level before scoring, so filtered queries are faster and return tighter results.


Both filters are optional. Without them, search queries every indexed database and collection for the tenant.

What gets indexed

Every write that passes through the proxy is indexed automatically. The specifics depend on the database engine.

PostgreSQL and MySQL: all rows from all tables. Every column value is indexed as a searchable field. Primary keys and timestamps are extracted automatically.

MongoDB: full documents are indexed field by field. Nested fields are flattened. GridFS collections are excluded automatically.

Redis: hash fields from HSET commands are indexed. Plain strings, lists, and keys with very short TTLs are excluded since they typically represent ephemeral data.

Indexing is asynchronous. Your writes are never delayed by the indexer. The proxy acknowledges the write to your application before indexing begins.
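The ack-before-index ordering can be demonstrated with a background worker. This is a sketch only: a real indexer would use a durable queue, not an in-process thread.

```python
import queue
import threading

search_index: dict = {}
events: "queue.Queue[tuple]" = queue.Queue()

def indexer() -> None:
    # Runs in the background; indexing never blocks the write path.
    while True:
        tenant_id, doc_id, doc = events.get()
        search_index[(tenant_id, doc_id)] = doc
        events.task_done()

def write(tenant_id: str, doc_id: str, doc: dict) -> str:
    # ... the real database write would happen here ...
    events.put((tenant_id, doc_id, doc))  # hand off to the indexer
    return "OK"  # acknowledged before the document is indexed

threading.Thread(target=indexer, daemon=True).start()
```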

The cost of building this yourself

Building multi-tenant keyword search from scratch involves deploying a search engine, writing indexing pipelines for each database type, partitioning the index per tenant, keeping the index in sync as schemas evolve, and making sure none of it violates tenant isolation.

This is typically a 2 to 4 month engineering project for a dedicated team. And it needs ongoing maintenance as new database types, new tenants, and new schema versions are added.

TenantsDB Search ships as part of the platform on all tiers, including free. It works across all four database engines, maintains structural tenant isolation, and requires zero configuration from your application.

Start free with up to 5 tenants at docs.tenantsdb.com.