N+1 queries are one of the easiest performance bugs to ship because the code often looks correct. A controller returns users and their posts. A serializer adds comments. A template prints an author name. Locally, everything works. In production, the same path quietly sends hundreds or thousands of small queries to PostgreSQL.
The core problem is not that ORMs are slow. Laravel Eloquent, Django ORM, SQLAlchemy, Prisma, and TypeORM can all produce efficient SQL when used deliberately. The problem is that object traversal feels cheap in application code, while every lazy relationship access can become a database round trip. That mismatch is where backend latency, connection pressure, and PostgreSQL CPU time start to leak.
## What N+1 Actually Means in Production
The classic N+1 pattern is simple:
1. Fetch one list of parent records.
2. For each parent, lazily fetch related records.
3. Repeat the related query N times.
For 50 users, the backend may execute:
```sql
select * from users limit 50;
select * from posts where user_id = 1;
select * from posts where user_id = 2;
select * from posts where user_id = 3;
-- repeated for every user
```

The first query is expected. The next 50 are the bug.
This gets worse when the response contains nested relationships. A page can accidentally become 1 + N + N*M queries: users, then posts per user, then comments per post, then author profiles per comment.
PostgreSQL can handle many queries, but query count is not free. Each query has parsing, planning, execution, network, locking, buffer access, and result transfer overhead. Even when every individual query is indexed and fast, the aggregate cost can become significant under concurrency.
The dangerous part of N+1 is not one slow query. It is many small queries that look harmless until they compete for the same database connections, CPU, and I/O budget.
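The growth pattern is easy to reproduce with nothing but Python's standard-library sqlite3 module. A minimal sketch with an illustrative users/posts schema; the point is the statement count, not the engine (PostgreSQL adds per-round-trip network and planning cost on top of this):

```python
import sqlite3

# In-memory database with an illustrative users/posts schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table users (id integer primary key);
    create table posts (id integer primary key, user_id integer);
""")
conn.executemany("insert into users (id) values (?)", [(i,) for i in range(1, 51)])
conn.executemany(
    "insert into posts (user_id) values (?)",
    [(i,) for i in range(1, 51) for _ in range(3)],
)

# Count every statement the "application" sends to the database.
query_count = 0
def count_queries(_sql):
    global query_count
    query_count += 1
conn.set_trace_callback(count_queries)

# N+1 style: one query for users, then one per user.
query_count = 0
user_ids = [row[0] for row in conn.execute("select id from users limit 50")]
for user_id in user_ids:
    conn.execute("select id from posts where user_id = ?", (user_id,)).fetchall()
print(query_count)  # 51 statements: 1 + N

# Batched style: one query for users, one IN query for all their posts.
query_count = 0
user_ids = [row[0] for row in conn.execute("select id from users limit 50")]
placeholders = ",".join("?" * len(user_ids))
conn.execute(
    f"select id, user_id from posts where user_id in ({placeholders})", user_ids
).fetchall()
print(query_count)  # 2 statements regardless of N
```

The batched version transfers the same rows but sends two statements instead of fifty-one, which is exactly what ORM eager-loading APIs automate.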
## Why ORMs Make This Easy to Miss
ORMs encourage developers to think in object graphs:
```php
foreach ($users as $user) {
    echo $user->profile->timezone;
}
```

That line reads like a property access. In many ORMs, it may execute a query if profile was not loaded. The bug is not obvious from the surface syntax.
This is especially common in:
- API serializers and resource transformers
- server-rendered templates
- admin panels
- GraphQL resolvers
- background exports
- notification jobs
- permission checks inside loops
The issue often survives code review because the query is not visible where the relationship is accessed. The SQL is emitted at runtime, far away from the repository, controller, or service method that originally fetched the data.
## Laravel Eloquent: The Familiar Trap
In Laravel, lazy loading is convenient, but it is also the fastest path to an N+1 query when relationships are accessed inside loops.
```php
// Bad: one query for users, then one query per user's posts
$users = User::query()->latest()->take(50)->get();

foreach ($users as $user) {
    echo $user->posts->count();
}
```

The fix is not to abandon Eloquent. The fix is to make the data shape explicit before iteration.
```php
// Better: users and posts are loaded in a bounded number of queries
$users = User::query()
    ->with('posts')
    ->latest()
    ->take(50)
    ->get();

foreach ($users as $user) {
    echo $user->posts->count();
}
```

For aggregates, eager loading the full relationship may still be wasteful. If the UI only needs counts, ask for counts.
```php
$users = User::query()
    ->withCount('posts')
    ->latest()
    ->take(50)
    ->get();

foreach ($users as $user) {
    echo $user->posts_count;
}
```

This distinction matters. with() reduces query count, but it may increase memory usage if the relationship is large. withCount() avoids hydrating child models when the response only needs an aggregate.
## Django ORM: select_related vs prefetch_related
Django gives two common tools for avoiding N+1, and they solve different relationship shapes.
```python
# Bad: each access to author may trigger another query
posts = Post.objects.order_by("-created_at")[:50]
for post in posts:
    print(post.author.email)
```

For single-valued relationships such as foreign keys and one-to-one relations, select_related() usually maps to a SQL join.
```python
posts = (
    Post.objects
    .select_related("author")
    .order_by("-created_at")[:50]
)
for post in posts:
    print(post.author.email)
```

For multi-valued relationships such as many-to-many or reverse foreign keys, prefetch_related() fetches related rows separately and joins them in application memory.
```python
authors = (
    Author.objects
    .prefetch_related("books")
    .order_by("name")[:50]
)
for author in authors:
    print(len(author.books.all()))
```

The operational difference is important: select_related() can reduce round trips through joins, while prefetch_related() avoids exploding result sets for one-to-many relationships.
## SQLAlchemy: Make Loading Strategy Part of the Query
SQLAlchemy is explicit enough to give teams good control, but lazy relationship loading can still cause N+1 if the query shape is not defined.
```python
# Bad: accessing user.posts can issue one query per user
users = session.query(User).limit(50).all()
for user in users:
    print(len(user.posts))
```

A more intentional query uses a loading strategy such as selectinload() or joinedload().
```python
from sqlalchemy.orm import selectinload

users = (
    session.query(User)
    .options(selectinload(User.posts))
    .limit(50)
    .all()
)
for user in users:
    print(len(user.posts))
```

selectinload() is often a practical default for one-to-many relationships because it fetches children in a separate query using an IN condition. joinedload() can be suitable when the relationship is small and the join does not multiply rows too much.
## Prisma and TypeORM: Explicit Includes Still Need Boundaries
Modern TypeScript ORMs often make relation loading more explicit, but N+1 can still appear when code performs per-row queries.
```typescript
// Bad: one query for users, then one query per user's posts
const users = await prisma.user.findMany({ take: 50 });
for (const user of users) {
  const posts = await prisma.post.findMany({
    where: { userId: user.id },
  });
  console.log(posts.length);
}
```

A better approach is to fetch the required graph in one query shape.
```typescript
const users = await prisma.user.findMany({
  take: 50,
  include: {
    posts: {
      select: {
        id: true,
        title: true,
      },
    },
  },
});
```

This still requires judgment. Loading every column from every related table can shift the problem from query count to memory pressure and payload size. Eager loading is not a license to fetch the whole database.
## Loading Strategy Trade-Offs
Avoiding N+1 is not only about replacing lazy loading with eager loading. It is about selecting the right data loading strategy for the relationship, response shape, and workload.
| Strategy | Query count | Memory usage | PostgreSQL behavior | Good fit | Main operational risk |
|---|---|---|---|---|---|
| Lazy loading in loops | High | Low per query | Many round trips, repeated planning and execution | Rare single-object access | Connection pressure and p95 latency spikes |
| Join-based eager loading | Low | Medium to High | Larger joined result sets | Small one-to-one or many-to-one relations | Row multiplication and duplicate parent data |
| Separate-query eager loading | Low to Medium | Medium | Batched related lookup, often with IN filters | One-to-many collections | Large IN lists or oversized child collections |
| Aggregate loading | Low | Low | Count or summary query | Counters, badges, list metadata | Not enough when full child rows are needed |
| Manual projection | Low | Low to Medium | Query matches exact response shape | High-traffic APIs | More SQL or query-builder code to maintain |
The right answer depends on what the endpoint actually needs. A user list that displays post counts should not load full posts. A detail page that renders the latest five comments should not prefetch every comment ever written.
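For the latest-five-comments case, a window function keeps the per-parent limit in SQL instead of prefetching whole collections. A sketch with stdlib sqlite3 and an illustrative comments table; the row_number() over (partition by ...) shape works the same way on PostgreSQL:

```python
import sqlite3
from collections import Counter

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table comments (
        id integer primary key,
        post_id integer,
        created_at integer
    );
""")
# Post 1 has 10 comments, post 2 has 3.
conn.executemany(
    "insert into comments (post_id, created_at) values (?, ?)",
    [(1, t) for t in range(10)] + [(2, t) for t in range(3)],
)

# Rank comments within each post by recency, then keep the latest five
# per post -- a bounded nested collection in a single query.
rows = conn.execute("""
    select id, post_id from (
        select id, post_id,
               row_number() over (
                   partition by post_id order by created_at desc
               ) as rn
        from comments
    ) as ranked
    where rn <= 5
""").fetchall()

print(Counter(post_id for _, post_id in rows))  # Counter({1: 5, 2: 3})
```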
## PostgreSQL Symptoms That Point to N+1
N+1 bugs often appear as backend latency first, not as database failure. Teams may see:
- increased p95 or p99 response time on list endpoints
- high query count per request in application traces
- frequent short queries with similar SQL and different bind values
- connection pool saturation during traffic spikes
- PostgreSQL CPU usage rising without one obvious slow query
- slow exports or background jobs that process records one at a time
Traditional slow query logs may miss the pattern because every individual query is below the slow threshold. Observability needs to show query count per request, not only query duration.
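A minimal version of query-count-per-request visibility is a context manager around the connection. This sketch uses sqlite3's trace callback, which fires once per executed statement; a real application would hook its ORM's query events instead, but the shape of the tooling is the same:

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def track_queries(conn):
    """Collect every SQL statement executed while the block runs."""
    statements = []
    conn.set_trace_callback(statements.append)
    try:
        yield statements
    finally:
        conn.set_trace_callback(None)

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table users (id integer primary key);
    create table posts (id integer primary key, user_id integer);
""")
conn.executemany("insert into users (id) values (?)", [(i,) for i in range(1, 6)])

with track_queries(conn) as statements:
    # Simulated request handler with a hidden N+1.
    for (user_id,) in conn.execute("select id from users").fetchall():
        conn.execute("select id from posts where user_id = ?", (user_id,)).fetchall()

print(len(statements))  # 6: one parent query plus one per user
```

Every statement is short and fast on its own, so a duration-based slow query log sees nothing; only the per-request count exposes the pattern.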
A useful performance review should ask:
- How many SQL statements does this endpoint execute?
- Does query count grow with page size?
- Are relationships accessed in serializers, templates, or computed properties?
- Are aggregates loaded as aggregates, or by hydrating full collections?
- Does pagination limit parent records but accidentally load unbounded children?
## The Team-Level Fix: Make Data Shape Explicit
N+1 is rarely solved permanently by telling developers to “remember eager loading.” The better fix is to make query shape a normal part of backend design.
For production code, treat these as reviewable design decisions:
- which relationships are loaded
- which columns are selected
- which aggregates are computed in SQL
- where pagination applies
- whether nested collections have limits
- whether serializers can trigger database queries
- whether tests assert query count for critical paths
In Laravel, teams often combine with(), withCount(), constrained eager loading, API resources, and query logging in tests. In Django, select_related() and prefetch_related() become part of queryset design. In SQLAlchemy, loading options should be visible near the query. In Prisma and TypeORM, includes and relation queries need the same discipline as SQL joins.
The principle is the same across ecosystems: do not let response rendering discover the database shape by accident.
## Testing for N+1 Without Overfitting
Query-count tests can be useful, but they should be applied carefully. A brittle test that fails whenever a harmless query is added creates noise. A better pattern is to protect high-traffic or high-risk paths with upper bounds.
For example:
- listing 20 users should not execute more than a small bounded number of queries
- increasing page size from 20 to 50 should not increase query count linearly
- rendering a serializer should not issue additional queries after the data is loaded
- export jobs should process records in batches, not issue child lookups per row
The exact threshold depends on the endpoint. The important property is bounded growth. If query count grows with the number of returned rows, the endpoint deserves review.
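The bounded-growth property can be asserted directly: run the same code path at two page sizes and require the same statement count. A self-contained sketch with stdlib sqlite3; list_users is a hypothetical stand-in for the endpoint under test:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table users (id integer primary key);
    create table posts (id integer primary key, user_id integer);
""")
conn.executemany("insert into users (id) values (?)", [(i,) for i in range(1, 101)])

def count_queries(fn):
    """Run fn and return how many statements it executed."""
    n = 0
    def bump(_sql):
        nonlocal n
        n += 1
    conn.set_trace_callback(bump)
    try:
        fn()
    finally:
        conn.set_trace_callback(None)
    return n

def list_users(page_size):
    # Batched implementation: statement count does not depend on page_size.
    ids = [r[0] for r in conn.execute("select id from users limit ?", (page_size,))]
    placeholders = ",".join("?" * len(ids))
    conn.execute(
        f"select id, user_id from posts where user_id in ({placeholders})", ids
    ).fetchall()

small = count_queries(lambda: list_users(20))
large = count_queries(lambda: list_users(50))
assert small == large == 2  # bounded: growth in rows, not in queries
```

The assertion pins the growth property rather than an exact per-endpoint budget, so adding one harmless query elsewhere does not make the test brittle.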
## What to Adopt First
Teams do not need a large migration to reduce N+1 risk. Start with the paths where the cost is easiest to prove:
1. Add query count visibility to local debugging and request tracing.
2. Review list endpoints, admin screens, exports, and GraphQL resolvers.
3. Replace loop-driven relationship access with explicit eager loading.
4. Use aggregate queries instead of loading collections for counts.
5. Add bounded query-count tests for critical endpoints.
6. Keep relationship loading close to the query, not hidden in presentation code.
For engineers working deeply with production data systems, the practical skill is not just writing SQL or using an ORM. It is understanding how application access patterns translate into database work. If this is part of your day-to-day role, the PostgreSQL Architect certification is the most relevant DevCerts track to review.
## Conclusion
N+1 queries are a production architecture problem disguised as a small ORM mistake. They increase database round trips, hide inside clean object access, bypass slow-query detection, and scale with response size in exactly the wrong direction.
The fix is not to reject ORMs. The fix is to stop treating relationship loading as incidental. Make the data shape explicit, choose loading strategies based on relationship cardinality, test that query count remains bounded, and keep database work visible during review. PostgreSQL performs best when the application sends deliberate queries, not a stream of accidental ones produced by a loop.