
When a React App Feels Slow: Find Extra Renders Before Reaching for memo

Slow React apps are rarely fixed by adding memo everywhere. The practical path is to profile first, identify whether the problem is render frequency or render cost, then reduce state reach, split context, and memoize only where it changes measurable behavior.


A slow React application usually does not fail because one component renders. It fails because too many components render for the wrong reasons, or because a few expensive components render at the wrong time. Treating React.memo as the first response often hides the real design problem: state is too high, context is too broad, derived work is repeated, or expensive UI is coupled to fast-changing input.

The better approach is diagnostic. Profile the interaction, find what actually renders, separate render frequency from render cost, then choose the smallest structural fix. In many production codebases, the most effective optimization is not memoization. It is moving state closer to where it is used.

The common mistake: optimizing before locating the render boundary

A familiar pattern appears in many React applications:

  1. A screen feels slow.

  2. Developers see many components re-rendering.

  3. React.memo, useMemo, and useCallback are added across the tree.

  4. The app becomes harder to read, but the interaction still feels slow.

This happens because memoization is not a general performance model. It is a cache. A cache only helps when the cache key is stable, the avoided work is meaningful, and the cost of checking the cache is lower than the work being skipped.
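Under the hood, React.memo's default cache check is a shallow comparison of props. The sketch below is simplified (React's real implementation also handles own-property edge cases), but it shows why new object or function references defeat the cache:

```typescript
// A sketch of the shallow prop comparison React.memo performs by default
// (simplified relative to React's internal shallowEqual).
function shallowEqual(
  a: Record<string, unknown>,
  b: Record<string, unknown>
): boolean {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  // Each prop is compared by identity, not by structure.
  return keysA.every(key => Object.is(a[key], b[key]));
}

const user = { id: 'u1', name: 'Ada' };
const onSelect = (id: string) => id;

// Same references between renders: the cache key is stable, memo can skip.
shallowEqual({ user, onSelect }, { user, onSelect }); // true

// New object and function references: memo pays for the comparison, then renders anyway.
shallowEqual({ user, onSelect }, { user: { ...user }, onSelect: (id: string) => id }); // false
```

The comparison itself is cheap, but it is pure overhead whenever the props are guaranteed to be new references.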

A component can re-render for valid reasons and still be cheap. Another component can render rarely and still be expensive. The profiler helps distinguish those cases.

The first question is not “where can we add memo?” It is “which state change caused this render path, and was that path necessary?”

Start with profiling, not intuition

Use the React Profiler to record the specific interaction that feels slow: typing in a field, opening a modal, switching filters, expanding a row, or dragging an element. Avoid profiling a full page load when the complaint is about input lag. Measure the interaction that users notice.

Look for three things:

  • Components that render on every keystroke, hover, timer tick, or selection change

  • Components with high render cost relative to their purpose

  • Components that receive new object, array, or function props every time

The profiler does not tell you the architecture is wrong. It shows symptoms. Your job is to connect those symptoms to data flow.
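One way to connect symptoms to data flow is to diff the props a noisy component received between two renders. A minimal sketch (the helper name is illustrative, not a React API; in practice you would call it from an effect while profiling):

```typescript
// List which props changed by reference between two renders.
// A prop appearing here on every render without its data changing
// points at an unstable object, array, or function identity upstream.
function changedProps<T extends Record<string, unknown>>(prev: T, next: T): string[] {
  const keys = Array.from(new Set(Object.keys(prev).concat(Object.keys(next))));
  return keys.filter(key => !Object.is(prev[key], next[key]));
}

const prevProps = { items: [1, 2], onSelect: () => {} };
const nextProps = { items: prevProps.items, onSelect: () => {} };

changedProps(prevProps, nextProps); // ['onSelect'] — a new function identity each render
```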

A simplified example:

function DashboardPage() {
  const [search, setSearch] = useState('');
  const [selectedUserId, setSelectedUserId] = useState<string | null>(null);

  return (
    <>
      <SearchBox value={search} onChange={setSearch} />
      <UserTable search={search} onSelect={setSelectedUserId} />
      <ActivityPanel selectedUserId={selectedUserId} />
      <RevenueChart />
    </>
  );
}

If RevenueChart renders on every search input change, the question is not whether RevenueChart should be wrapped in memo. The first question is why search state lives in a parent that owns unrelated sections of the page.

Render frequency vs render cost

React performance work becomes clearer when you separate two dimensions.

| Symptom | Likely cause | Operational signal | Better first fix | When memo helps |
| --- | --- | --- | --- | --- |
| Many components render after a small state change | State is owned too high | High render fan-out | State colocation | Only after props are stable |
| Context consumers update together | Context value is too broad | Many consumers commit per update | Context splitting | For stable leaf components |
| One component takes most render time | Heavy computation or large tree | High self time or subtree time | Move work, virtualize, defer, cache | When inputs repeat |
| Memoized component still renders | Props change by reference | New object/function props each render | Stabilize props or change API | After reducing prop churn |
| Typing feels delayed | Expensive sibling work shares update path | Input update coupled to heavy UI | Separate urgent and non-urgent state | Sometimes, but not first |

This table matters because the same visible problem, a slow interaction, can have different causes. Adding memo to every row in a table may help if rows receive stable props. It will not help much if the parent creates new row objects and callbacks on every render.

State colocation: reduce the blast radius

State colocation means keeping state as close as practical to the component that needs it. It is not about making every component local and isolated. It is about avoiding a wide render path for narrow state changes.

Here is a common problematic structure:

function ProductPage() {
  const [draftName, setDraftName] = useState('');
  const [activeTab, setActiveTab] = useState<'details' | 'pricing'>('details');

  return (
    <PageLayout>
      <ProductNameInput value={draftName} onChange={setDraftName} />
      <ProductTabs activeTab={activeTab} onChange={setActiveTab} />
      <InventoryTable />
      <AuditLog />
      <PricingRules />
    </PageLayout>
  );
}

If draftName is only needed by ProductNameInput, lifting it to ProductPage makes unrelated sections part of the same render path.

A better structure keeps local edits local and only commits when necessary:

function ProductPage() {
  const [activeTab, setActiveTab] = useState<'details' | 'pricing'>('details');

  return (
    <PageLayout>
      <ProductNameEditor />
      <ProductTabs activeTab={activeTab} onChange={setActiveTab} />
      <InventoryTable />
      <AuditLog />
      <PricingRules />
    </PageLayout>
  );
}

function ProductNameEditor() {
  const [draftName, setDraftName] = useState('');

  return (
    <ProductNameInput
      value={draftName}
      onChange={setDraftName}
    />
  );
}

This is often more effective than memoizing InventoryTable, AuditLog, and PricingRules. The unrelated components no longer participate in that state update.

The trade-off is that colocated state needs deliberate commit points. For example, local form state may need to sync with server data, validation, or navigation guards. That is a real design concern, but it is usually easier to reason about than a page-level state object that invalidates the entire screen.
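The commit-point idea can be sketched independently of React: keystrokes mutate only a local draft, and a single explicit operation publishes it to the rest of the app. All names here are illustrative:

```typescript
// Draft state with a deliberate commit point. Only `commit` produces a value
// the rest of the application observes; edits never leave the editor.
type DraftField<T> = { committed: T; draft: T };

function beginEdit<T>(committed: T): DraftField<T> {
  return { committed, draft: committed };
}

function edit<T>(field: DraftField<T>, draft: T): DraftField<T> {
  return { ...field, draft };
}

function commit<T>(field: DraftField<T>): DraftField<T> {
  return { committed: field.draft, draft: field.draft };
}

function isDirty<T>(field: DraftField<T>): boolean {
  return !Object.is(field.committed, field.draft);
}

let name = beginEdit('Basic Plan');
name = edit(name, 'Pro Plan'); // keystrokes only touch the draft
isDirty(name);                 // true — nothing outside the editor has updated yet
name = commit(name);           // the deliberate commit point (e.g. on blur or save)
isDirty(name);                 // false
```

In a component, the draft lives in local `useState` while `commit` is the moment you call the parent's or store's setter, which is exactly when the wider render path is allowed to run.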

Context splitting: avoid global invalidation by accident

Context is convenient, but a broad context value can turn one small update into a large render event. Every consumer of a context can be affected when the provided value changes. This is especially visible when the value is an object containing unrelated state.

Problematic pattern:

type AppContextValue = {
  user: User;
  theme: Theme;
  notifications: Notification[];
  selectedProjectId: string | null;
  setSelectedProjectId: (id: string | null) => void;
};

const AppContext = createContext<AppContextValue | null>(null);

If selectedProjectId changes frequently, consumers that only need theme or user are now connected to a noisy update source.
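The mechanics are worth spelling out: context change detection keys off `Object.is` on the provider value, so changing one field of a combined object produces a new identity that every consumer sees as an update:

```typescript
// Why a combined context value invalidates everything: any field change
// produces a new object identity, and identity is all the provider compares.
type Combined = { theme: string; selectedProjectId: string | null };

const v1: Combined = { theme: 'dark', selectedProjectId: null };
const v2: Combined = { ...v1, selectedProjectId: 'p1' }; // only the selection changed

Object.is(v1, v2);             // false — every consumer of the combined context updates
Object.is(v1.theme, v2.theme); // true — the theme itself never changed
```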

A more maintainable approach is to split context by update behavior, not just by domain name:

const ThemeContext = createContext<Theme | null>(null);
const CurrentUserContext = createContext<User | null>(null);
const SelectedProjectContext = createContext<{
  selectedProjectId: string | null;
  setSelectedProjectId: (id: string | null) => void;
} | null>(null);

function AppProviders({ children }: { children: React.ReactNode }) {
  const theme = useTheme();
  const user = useCurrentUser();
  const [selectedProjectId, setSelectedProjectId] = useState<string | null>(null);

  // Keep the provider value referentially stable so theme or user changes
  // do not also invalidate selected-project consumers.
  const selectedProject = useMemo(
    () => ({ selectedProjectId, setSelectedProjectId }),
    [selectedProjectId]
  );

  return (
    <ThemeContext.Provider value={theme}>
      <CurrentUserContext.Provider value={user}>
        <SelectedProjectContext.Provider value={selectedProject}>
          {children}
        </SelectedProjectContext.Provider>
      </CurrentUserContext.Provider>
    </ThemeContext.Provider>
  );
}

This does not mean every value needs its own provider. The useful rule is simpler: values that change at different rates should usually not live in the same context object. Authentication state, theme, feature flags, current filters, and transient UI selection often have different update patterns.

Expensive components need different treatment

Sometimes the problem is not render fan-out. It is one component doing too much work during render.

Examples include:

  • Rendering thousands of DOM nodes

  • Sorting or grouping large arrays in render

  • Formatting complex data repeatedly

  • Rendering charts, timelines, editors, or dense tables

  • Running expensive derived calculations on every parent update

In that case, colocating state may not be enough. You need to reduce work.

function OrdersTable({ orders, filters }: Props) {
  const visibleOrders = useMemo(() => {
    return orders
      .filter(order => matchesFilters(order, filters))
      .sort((a, b) => b.createdAt.localeCompare(a.createdAt));
  }, [orders, filters]);

  return (
    <Table>
      {visibleOrders.map(order => (
        <OrderRow key={order.id} order={order} />
      ))}
    </Table>
  );
}

This useMemo can be useful if orders and filters are stable and the filtering or sorting is expensive enough to matter. It is not useful if orders is recreated every render or if the list is tiny.
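The dependency check behind that caveat is a per-entry identity comparison, roughly like the following sketch of what React does with a dependency array:

```typescript
// Sketch of a useMemo-style dependency check: each dependency is compared
// with Object.is against its previous value.
function depsChanged(prev: readonly unknown[], next: readonly unknown[]): boolean {
  return prev.length !== next.length || prev.some((dep, i) => !Object.is(dep, next[i]));
}

const orders = [{ id: 'o1' }];

depsChanged([orders, 'paid'], [orders, 'paid']);      // false — cache hit, skip recompute
depsChanged([orders, 'paid'], [[...orders], 'paid']); // true — new array identity, recompute
```

If `orders` is rebuilt on every parent render, the cache key is never stable and the memoization is dead weight.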

For large lists, the bigger win may be changing what gets rendered, not caching the calculation. Virtualization, pagination, server-side filtering, and smaller row components often produce a more predictable result than memoizing a large tree after it has already been constructed.
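The core of list virtualization is a small piece of arithmetic: derive the visible row range from the scroll position and render only that window. A fixed-row-height sketch (real libraries such as react-window also handle variable heights, overscan tuning, and scroll anchoring):

```typescript
// Compute which rows to render for a given scroll position, assuming a
// fixed row height. `overscan` renders a few extra rows on each side to
// avoid blank gaps during fast scrolling.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  rowCount: number,
  overscan = 3
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(rowCount, Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
  return { start, end }; // render only rows [start, end)
}

visibleRange(1000, 600, 40, 10000);
// → { start: 22, end: 43 } — 21 rows in the tree instead of 10,000
```

Whatever the memoization strategy, 21 mounted rows are cheaper to reconcile than 10,000.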

Use memo where component boundaries are stable

React.memo is most useful for stable, relatively independent components that receive stable props and are expensive enough to skip. It is weaker when props are always new references.

Problematic usage:

const UserCard = memo(function UserCard({ user, onSelect }: Props) {
  return <button onClick={() => onSelect(user.id)}>{user.name}</button>;
});

function UserList({ users }: { users: User[] }) {
  return users.map(user => (
    <UserCard
      key={user.id}
      user={{ ...user }}
      onSelect={(id) => console.log(id)}
    />
  ));
}

UserCard is memoized, but user and onSelect are new on every render. The memo boundary does not have stable inputs.

A more useful version keeps references stable or changes the component API:

const UserCard = memo(function UserCard({
  id,
  name,
  onSelect,
}: {
  id: string;
  name: string;
  onSelect: (id: string) => void;
}) {
  return <button onClick={() => onSelect(id)}>{name}</button>;
});

function UserList({ users }: { users: User[] }) {
  const handleSelect = useCallback((id: string) => {
    console.log(id);
  }, []);

  return users.map(user => (
    <UserCard
      key={user.id}
      id={user.id}
      name={user.name}
      onSelect={handleSelect}
    />
  ));
}

This is not a command to wrap every callback in useCallback. It is a targeted fix when a memoized child depends on referential stability. Without that dependency, useCallback can add noise without changing runtime behavior in a meaningful way.

A practical debugging sequence

When a React app slows down, use a repeatable process instead of scattering memoization through the tree.

  1. Reproduce one slow interaction.

  2. Record it in the profiler.

  3. Identify components with high render frequency and high render cost.

  4. Trace the state or context update that triggered them.

  5. Move state closer to the components that need it.

  6. Split context values that update at different rates.

  7. Stabilize props only where a memo boundary exists.

  8. Memoize expensive components or derived calculations after measuring the effect.

  9. Re-profile the same interaction.

The key is to keep the optimization tied to one user-visible behavior. “The app feels slow” is too broad. “Typing in the customer search field blocks for noticeable time when the activity chart is visible” is actionable.

What not to optimize first

Some changes look performance-oriented but often create maintenance cost before measurable value:

  • Adding memo to most components by default

  • Wrapping every inline function in useCallback

  • Wrapping every computed value in useMemo

  • Moving all state to a global store to “avoid prop drilling”

  • Combining unrelated values into one convenience context

  • Optimizing development-only render behavior without checking production behavior

The maintainability cost is real. Memoization adds another condition to understand: not only what a component renders, but when React can safely skip it. In large teams, unnecessary memoization can make refactoring props harder because developers start preserving reference stability without knowing whether it still matters.

Production-grade optimization is architectural

The strongest React performance improvements usually come from better boundaries:

  • Local state for local interactions

  • Context split by update frequency

  • Derived data computed at the right layer

  • Expensive components isolated from noisy updates

  • Stable props where memoization is intentional

  • Fewer rendered nodes for large lists or dense views

For engineers working with React in production, these skills sit closer to architecture than syntax. If React performance and component design are part of your day-to-day work, the Senior React Developer certification is the most relevant DevCerts track to review.


Conclusion

A slow React app is not a memoization problem by default. It is a data-flow problem until profiling proves otherwise. Start by finding which interaction is slow, which components render, and whether the cost comes from frequency, expensive computation, or broad invalidation through context.

Use memo when the component boundary is stable and the skipped work is meaningful. Before that, colocate state, split context, and isolate expensive components from fast-changing UI state. The result is not only better runtime behavior, but also a codebase where performance decisions are visible, testable, and easier to maintain.