I spent some time building a comprehensive benchmark to answer this question definitively, specifically for the modern stack of Bun and Drizzle ORM on PostgreSQL 18.1. The results were different from what I expected.
The Setup: No Synthetic Benchmarks
I rejected the common approach of running SELECT 1 a million times. That tells you nothing about real-world performance. Instead, I built a benchmark that mirrors a production e-commerce system:
Schema Design
// 4 tables with realistic relationships (Large dataset)
users → 939,374 rows (UUID PKs, JSONB profiles)
products → 100,000 rows (Decimal prices, JSONB metadata)
orders → 4,696,870 rows (Foreign keys, Enum status)
order_items → 16,439,122 rows (High-volume intersection table)
Query Scenarios
| Scenario | What It Tests |
|---|---|
| Point Read | Minimum latency floor |
| Aggregates | Analytics workloads |
| Pagination | Dashboard/list views |
| Complex Search | Real filter operations |
| Dashboard | Multi-table JOINs |
Environment
- Machine: Apple M1 MacBook Air (8-core, 16GB RAM)
- Runtime: Bun v1.3.5
- Database: PostgreSQL 18.1 (Docker, constrained to 2 CPU, 2GB RAM)
- ORM: Drizzle ORM v1.0.0-beta
The Contenders
Native Bun SQL
Bun’s built-in SQL driver is written in Zig. It uses:
- Binary protocol (not text) for PostgreSQL communication
- Zero-copy memory mapping to minimize allocations
- Direct integration with JavaScriptCore (no N-API overhead)
import { sql } from "bun";
// Native Bun SQL
const result = await sql`
SELECT * FROM users WHERE id = ${userId}
`;
Drizzle ORM
Drizzle positions itself as a “thin-layer” ORM: it generates SQL rather than building an abstraction layer on top of it. This means:
- 1:1 mapping to SQL (db.select().from(users) → SELECT * FROM users)
- No separate query engine (unlike Prisma)
- Prepared statements that cache query plans
// Drizzle ORM
const result = await db.select().from(users).where(eq(users.id, userId));
// Drizzle with Prepared Statement (optimal)
const query = db
.select()
.from(users)
.where(eq(users.id, sql.placeholder("id")))
.prepare("get_user");
const result = await query.execute({ id: userId });
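For reference, per-query timings like the ones reported below can be gathered with a tiny harness along these lines (a hypothetical sketch, not the repo’s actual runner). It warms up first so prepared-statement plan caching and JIT effects settle before measuring:

```typescript
// Minimal async micro-benchmark helper (hypothetical; runs in Bun or Node 18+).
async function bench(
  label: string,
  fn: () => Promise<unknown>,
  iterations = 1_000,
): Promise<number> {
  // Warm-up pass so plan caching and JIT effects settle before measuring.
  for (let i = 0; i < 50; i++) await fn();

  const start = performance.now();
  for (let i = 0; i < iterations; i++) await fn();

  const avgUs = ((performance.now() - start) / iterations) * 1000;
  console.log(`${label}: ${avgUs.toFixed(0)} µs/op`);
  return avgUs;
}
```

You would pass it something like `() => query.execute({ id: userId })` for the prepared-statement case and the equivalent tagged-template call for native Bun SQL.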
The Results
Point Read: The Baseline
This is the simplest query: a single-row lookup by primary key from a table with nearly 1 million rows.
SELECT * FROM users WHERE id = $1 LIMIT 1
| Implementation | Avg Time | vs Native |
|---|---|---|
| Native Bun SQL | 493 µs | baseline |
| Drizzle (Prepared) | 531 µs | +8% |
| Drizzle (Standard) | 565 µs | +15% |
Takeaway: With production-scale data, the overhead is remarkably small:
- Prepared statements reduce Drizzle overhead to just 8%
- The absolute difference is 38 microseconds (0.038ms)
To put that in perspective: a single network hop to an external API takes 50-500ms. The ORM overhead is between 0.008% and 0.08% of such a call.
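To make the takeaway concrete, here is the arithmetic behind those numbers (values copied from the point-read table above):

```typescript
// Point-read numbers from the benchmark table above.
const nativeUs = 493;
const preparedUs = 531;

const overheadUs = preparedUs - nativeUs;              // 38 µs
const overheadPct = (overheadUs / nativeUs) * 100;     // ≈ 7.7%, rounded to 8%

// Share of a slow (500 ms) external API call.
const apiCallUs = 500 * 1000;
const shareOfApiCall = (overheadUs / apiCallUs) * 100; // ≈ 0.0076%

console.log(overheadUs, overheadPct.toFixed(1), shareOfApiCall.toFixed(4));
```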
Aggregates: Analytics Queries
SELECT
COUNT(*)::int as total_orders,
SUM(total)::numeric as total_spent,
AVG(total)::numeric as avg_order_value,
MIN(total)::numeric as min_order,
MAX(total)::numeric as max_order
FROM orders
WHERE user_id = $1
| Implementation | Avg Time | vs Native |
|---|---|---|
| Native Bun SQL | 480 µs | baseline |
| Drizzle ORM | 575 µs | +20% |
With a large dataset, the overhead becomes proportionally smaller because database execution time dominates. We’re talking about 95 microseconds of overhead on queries that take half a millisecond.
Pagination: Real Dashboard Queries
This test fetches 50 orders with a JOIN to the users table, a common pattern in any admin dashboard. With 4.7 million orders, this tests real-world performance.
SELECT o.*, u.email as user_email, u.name as user_name
FROM orders o
JOIN users u ON o.user_id = u.id
ORDER BY o.created_at DESC
LIMIT 50 OFFSET $1
| Implementation | Avg Time | vs Native |
|---|---|---|
| Native Bun SQL | 1.16 ms | baseline |
| Drizzle (Prepared) | 1.48 ms | +28% |
| Drizzle (Standard) | 1.56 ms | +34% |
Key observation: Even with millions of rows, the ORM overhead stays reasonable at 28-34%. The absolute difference is 0.32 ms, invisible to users.
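As a side note, the `$1` offset in the query above is usually derived from a page number; a small helper (hypothetical, not from the benchmark repo) keeps that arithmetic in one place:

```typescript
// Convert a 1-based page number into the OFFSET bound to $1 above.
const PAGE_SIZE = 50;

function pageToOffset(page: number): number {
  if (!Number.isInteger(page) || page < 1) {
    throw new RangeError("page must be a positive integer");
  }
  return (page - 1) * PAGE_SIZE;
}

console.log(pageToOffset(1), pageToOffset(3)); // 0 100
```

Note that very deep OFFSETs force PostgreSQL to walk past all skipped rows before returning anything, so the timings above are representative of the first pages of a listing, not page 90,000.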
Complex Search: The Surprise
This is where things get really interesting. A search query scanning millions of rows:
SELECT o.*, u.email FROM orders o
JOIN users u ON o.user_id = u.id
WHERE o.status = 'pending'
AND o.total >= 100
AND o.total <= 1000
ORDER BY o.total DESC
LIMIT 100
| Implementation | Avg Time | vs Native |
|---|---|---|
| Drizzle ORM | 703 ms | -6% ✨ |
| Native Bun SQL | 750 ms | baseline |
Wait, what? Drizzle is faster than native SQL?
This is probably not a measurement error, but keep the scale in mind: a 6% swing on a ~700 ms query is small, and at this size database execution time dominates, so part of the gap is run-to-run variance. Beyond that, a few factors work in Drizzle’s favor:
- Query structure: Drizzle generates consistent, well-formed, parameterized SQL that PostgreSQL’s planner handles predictably
- Plan caching: prepared statements let PostgreSQL reuse cached query plans across repeated executions
- Connection handling: Drizzle’s internal connection management is well suited to repeated, similar queries
This means:
- Simple queries: You pay a small ORM tax (~8-15% with prepared statements)
- Complex queries: Drizzle can actually be faster
- Real-world workloads: The aggregate overhead is negligible, and sometimes even negative
What This Means for Your Application
The Math That Matters
Let’s say you’re building a trading application and you’re worried about Drizzle overhead. Here’s the reality of a typical request with production-scale data:
| Operation | Time |
|---|---|
| Network latency (client) | 10-50 ms |
| TLS handshake | 5-30 ms |
| Request parsing | 0.1-0.5 ms |
| Authentication/middleware | 0.5-5 ms |
| Database query (ORM overhead) | 0.038 ms |
| Database query (execution) | 0.5-750 ms |
| Response serialization | 0.1-1 ms |
| Network latency (response) | 10-50 ms |
The ORM overhead is well under 1% of your total request time even in the fastest case, and for complex queries it might even be negative!
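Summing the lowest-latency column of that table shows why (a rough back-of-the-envelope using the minimum values above):

```typescript
// Minimum-latency request budget from the table above, in milliseconds.
const budgetMs = {
  networkIn: 10,
  tls: 5,
  parsing: 0.1,
  auth: 0.5,
  ormOverhead: 0.038,
  dbExecution: 0.5,
  serialization: 0.1,
  networkOut: 10,
};

const totalMs = Object.values(budgetMs).reduce((a, b) => a + b, 0);
const ormShare = (budgetMs.ormOverhead / totalMs) * 100;

console.log(totalMs.toFixed(3), `${ormShare.toFixed(2)}%`);
```

Even in this fastest case, the ORM’s share stays a small fraction of a percent; any realistic network latency or database execution time pushes it far lower.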
Optimizing this before optimizing network calls, authentication, or serialization is a classic case of premature optimization.
When Native SQL Makes Sense
That said, there are legitimate cases for native SQL:
- Bulk operations: Inserting 100,000 rows with COPY or INSERT ... SELECT
- Real-time price feeds: When you’re processing 10,000 updates per second
- Complex CTEs: Recursive queries or advanced PostgreSQL features not yet supported by Drizzle
- Existing raw SQL: Migrating a legacy system incrementally
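When you do drop to native SQL for bulk work, you typically batch rows into multi-row INSERTs or COPY payloads rather than issuing one statement per row. A generic splitter like this (a hypothetical helper, not from the benchmark repo) is often all the ORM-free plumbing you need:

```typescript
// Split rows into fixed-size batches for multi-row INSERT / COPY payloads.
function chunk<T>(rows: readonly T[], size: number): T[][] {
  if (!Number.isInteger(size) || size < 1) {
    throw new RangeError("size must be a positive integer");
  }
  const batches: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

const batches = chunk([1, 2, 3, 4, 5], 2); // → [[1, 2], [3, 4], [5]]
console.log(batches.length);
```

Each batch then becomes one round-trip to the database instead of thousands.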
When Drizzle (or any typed ORM) Makes Sense
For everything else, which is roughly 95% of most applications:
// This catches bugs at compile time, not in production
const order = await db.insert(orders).values({
userId: user.id,
total: "not-a-number", // ← TypeScript error!
status: "invalid", // ← TypeScript error!
});
The developer experience and safety of a typed ORM prevents bugs that would otherwise surface in production. And as we’ve seen, the performance cost is negligible.
Recommendations by Use Case
E-commerce / SaaS Applications
Use Drizzle with Prepared Statements
The type safety, schema-as-code, and maintainability benefits far outweigh the microseconds of overhead. Use native SQL for batch operations (bulk imports, data migrations).
Trading / FinTech Applications
Hybrid approach
// 95% of code: Drizzle for safety
const portfolio = await db.query.positions.findMany({
where: eq(positions.userId, userId),
});
// 5% hot path: Native SQL for real-time operations
const prices = await sql`
SELECT symbol, price FROM live_prices
WHERE symbol = ANY(${symbols})
`;
High-Frequency Data Processing
Native SQL with careful architecture
If you’re processing millions of events per second, the ORM overhead might matter. But also: consider whether PostgreSQL is the right tool. Time-series databases (TimescaleDB, QuestDB) might be more appropriate.
The Verdict
After building this benchmark and analyzing the results, my recommendation is clear:
Start with Drizzle ORM using prepared statements. Optimize to native SQL only when you have profiling data proving it’s necessary.
The “performance tax” of Drizzle is:
- 38-95 microseconds for simple queries (8-20% overhead)
- Negative for complex queries-Drizzle is actually faster
- Invisible compared to network latency, I/O, and business logic
And the benefits are substantial:
- Type-safe queries that catch bugs at compile time
- Schema as code that can be version-controlled
- Migrations that are predictable and reviewable
- Developer productivity that translates to shipping faster
Try It Yourself
The complete benchmark is open source:
git clone https://github.com/habibium/bun-sql-drizzle-benchmark
cd bun-sql-drizzle-benchmark
bun install
bun run db:up # spin up the database container
bun run db:push # push the schema to the database
bun run seed:small # seed the database with small data
bun run bench:micro # run the benchmark
I’d love to see results from different hardware configurations. If you run these benchmarks, share your findings!
Source code: https://github.com/habibium/bun-sql-drizzle-benchmark