The Complete Guide to Next.js Hybrid Rendering: When and Why to Use SSG, ISR, SSR, and PPR
Anyone who has done web development has probably faced this dilemma at some point: "SEO matters, so I should use SSR — but that drives up server costs... and if I stick with SSG alone, I can't reflect real-time data..." When I first adopted Next.js, I naively thought "why not just use SSR for everything?" — then traffic grew, I saw the server bill, and reality hit. I still remember the satisfaction of switching to ISR and watching server requests drop by nearly 70%.
Hybrid rendering is the strategy that resolves exactly this dilemma. It means choosing CSR, SSR, SSG, or ISR on a per-route or per-component basis within a single application. The moment you let go of the desire to handle everything one way, you can win on both performance and cost. This article covers the characteristics of each rendering strategy, the criteria for combining them in real-world projects, and the latest patterns drawing the most attention in 2025–2026 — all in one place.
This is written for frontend developers who already know the basics of Next.js. If you're comfortable with App Router syntax, the code examples should be immediately applicable to your work.
Core Concepts
Rendering Strategies at a Glance
Whenever you're unsure which strategy to use for a given page, this table is a useful starting point.
| Strategy | When Rendered | Best For | Main Drawback |
|---|---|---|---|
| SSG | Build time | Marketing pages, blogs | Slow content updates |
| ISR | Build time + background regeneration | Product listings, news | Cache invalidation complexity |
| SSR | Request time | User-specific dashboards, real-time data | Server load, TTFB |
| CSR | Client | Interactive areas after authentication | Weak SEO, slow initial load |
| Streaming SSR | Request time (chunked) | Parallel loading of data-dependent components | Implementation complexity |
One question I get frequently: "Isn't Streaming SSR strictly better than SSR?" Not necessarily. TTFB is identical between the two. The moment the server receives a request and begins processing is the same. Where Streaming SSR has the edge is FCP (First Contentful Paint) and LCP (Largest Contentful Paint) — because it can send the rest of the content to the browser while waiting on components that depend on slow APIs.
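The TTFB-versus-FCP distinction can be made concrete with a minimal model in plain TypeScript. This is not any framework API; `renderStreaming` and `renderBlocking` are illustrative names, and the timestamps stand in for FCP: both functions start at the same moment (same TTFB), but only the streaming version flushes the shell before the slow data resolves.

```typescript
// Conceptual model: streaming vs. blocking SSR on the same slow data source.
type Chunk = { html: string; at: number } // at = ms since request start

async function renderStreaming(slow: () => Promise<string>): Promise<Chunk[]> {
  const start = Date.now()
  const chunks: Chunk[] = []
  // Shell + Suspense fallback are flushed immediately -> early FCP
  chunks.push({ html: '<header>Shell</header><div>Loading…</div>', at: Date.now() - start })
  // The slow component streams in later and replaces the fallback
  chunks.push({ html: await slow(), at: Date.now() - start })
  return chunks
}

async function renderBlocking(slow: () => Promise<string>): Promise<Chunk[]> {
  const start = Date.now()
  // Classic SSR waits for all data before sending any HTML -> late FCP
  const body = await slow()
  return [{ html: `<header>Shell</header>${body}`, at: Date.now() - start }]
}
```

In both cases the server starts working at the same instant; only the arrival time of the first paintable HTML differs.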
ISR (Incremental Static Regeneration): Generates static HTML at build time, then automatically regenerates the page in the background after a specified time (`revalidate`). Think of it as the midpoint between fully static (SSG) and fully dynamic (SSR).
One important behavioral detail worth calling out: due to ISR's stale-while-revalidate nature, after the revalidate window expires, the first visitor still receives the previous version of the page, and that request triggers a background regeneration. Only the second visitor onward receives the new page. Be careful about using ISR alone for data that requires immediate updates — like real-time stock availability — since users may see stale information.
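The stale-while-revalidate flow above can be sketched as a small cache model in plain TypeScript. This is a conceptual illustration, not Next.js internals; `IsrCache` is an invented name, and regeneration is modeled synchronously where Next.js does it in the background.

```typescript
// Conceptual model of ISR's stale-while-revalidate behavior.
type Entry = { html: string; generatedAt: number }

class IsrCache {
  private entry: Entry

  constructor(html: string, private revalidateMs: number, private now: () => number) {
    this.entry = { html, generatedAt: now() }
  }

  // Serves the cached HTML. If the entry is past its revalidate window,
  // THIS request still gets the stale version, and regeneration happens
  // behind it — so only later requests see the new page.
  get(render: () => string): string {
    const stale = this.now() - this.entry.generatedAt > this.revalidateMs
    const served = this.entry.html
    if (stale) {
      // In Next.js this regeneration runs in the background.
      this.entry = { html: render(), generatedAt: this.now() }
    }
    return served
  }
}
```

Walking through it: with a 300 ms window, a request at t=1000 still receives the old page while triggering regeneration, and only the next request gets the fresh one.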
The Trend Toward Ever-Finer Blending of Static and Dynamic
The three concepts that follow are the result of continuously pushing the question: "How far can we mix static and dynamic within a single page?" PPR splits static and dynamic at the route level; Server Islands brings that down to the component level; and Resumability goes further by eliminating the concept of hydration altogether.
Partial Prerendering (PPR) — Static and Dynamic on the Same URL
Honestly, my first reaction to PPR was "what does that even mean?" The idea that part of the same page is served instantly from a CDN while another part is streamed in from the server felt a bit foreign at first.
The core idea is simple. Using `<Suspense>` boundaries as the dividing line: content outside the boundary is generated statically at build time and cached at the CDN edge, while dynamic components inside `<Suspense>` are streamed in at request time. The result is that the initial HTML is delivered immediately from the edge, which significantly improves LCP.
```ts
// next.config.ts
export default {
  experimental: { ppr: 'incremental' }
}
```

```tsx
// app/products/page.tsx
import { Suspense } from 'react'

export const experimental_ppr = true

export default function ProductsPage() {
  return (
    <>
      {/* Cached on CDN at build time — must be purely static components with no async data fetching */}
      <StaticHero />
      <CategoryNav />
      {/* Streamed in from the server at request time */}
      <Suspense fallback={<ProductSkeleton />}>
        <DynamicProductFeed />
      </Suspense>
      <Suspense fallback={<RecommendSkeleton />}>
        <PersonalizedRecommendations />
      </Suspense>
    </>
  )
}
```

One important caveat: components placed outside `<Suspense>` (like StaticHero and CategoryNav) must be genuinely static. Any component that fetches dynamic data with async is excluded from build-time caching. You can't just put any component outside `<Suspense>` and expect it to work.
PPR is currently in the experimental stage; check the Next.js official documentation for the latest on stabilization. For now, you can experiment incrementally with incremental mode on specific routes of your choosing.
Server Islands — Planting Dynamic "Islands" in a Static Site
Where PPR is the Next.js ecosystem's approach, Astro's Server Islands — stabilized in late 2024 — offer an alternative tailored to content-centric sites. Even if you primarily use Next.js, understanding why this concept is gaining attention will help inform your architecture decisions.
The idea: cache the vast majority of a page indefinitely on a CDN, but render only the parts that need personalized data (cart count, username, etc.) as server-rendered "islands."
```astro
---
// pages/blog/[slug].astro
// getStaticPaths must be inside the frontmatter (---) to work
import { getPost, getAllPosts } from '../../lib/cms';
import CommentSection from '../../components/CommentSection.astro';

export async function getStaticPaths() {
  const posts = await getAllPosts();
  return posts.map(p => ({ params: { slug: p.slug } }));
}

const { slug } = Astro.params;
const post = await getPost(slug);
---
<article>
  <!-- Body: statically generated at build time, cached indefinitely on CDN -->
  <h1>{post.title}</h1>
  <Fragment set:html={post.content} />
</article>
<!-- server:defer — an island rendered asynchronously on the server -->
<CommentSection server:defer postId={post.id}>
  <div slot="fallback">Loading comments...</div>
</CommentSection>
```

Islands Architecture: A pattern where interactive component "islands" are placed within a sea of static HTML. Each island hydrates independently, so no JavaScript runs at all in the remaining static areas.
Resumability — A New Paradigm That Abandons Hydration
This is a concept driven by the Qwik framework — a separate ecosystem from Next.js and Astro. Even so, it consistently appears in discussions of rendering trends for 2025–2026.
In traditional SSR, the server generates HTML and sends it, then the browser "revives" that HTML through a process called hydration — re-executing all component code. This means JS bundle size directly affects TTI (Time to Interactive).
Resumability works differently: the server serializes component state and event listener information into the HTML, and the browser "resumes" from that state rather than re-executing the component tree from scratch. Because the entire component tree doesn't need to re-run, the cost of JS parsing and execution approaches zero. That's why Qwik outperforms other frameworks in cold-start TTI benchmarks.
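A rough sketch of the mechanism in plain TypeScript can help. This is a conceptual model, not Qwik's actual implementation: the server serializes state and a handler reference into HTML attributes, and the client "resumes" by parsing those attributes on first interaction, with no upfront hydration pass over the component tree. All names here are illustrative.

```typescript
// Server side: serialize state + a handler reference into the HTML.
function renderButton(count: number): string {
  return `<button data-handler="counter#increment" data-state='${JSON.stringify({ count })}'>${count}</button>`
}

// Client side: a registry of handlers (in Qwik these are lazy-loaded chunks).
const handlers: Record<string, (s: { count: number }) => { count: number }> = {
  'counter#increment': (s) => ({ count: s.count + 1 }),
}

// On first interaction: parse the serialized attributes, look up the
// handler, and apply it to the serialized state. Nothing re-executes
// before the user actually interacts.
function resume(html: string): { count: number } {
  const handler = /data-handler="([^"]+)"/.exec(html)![1]
  const state = JSON.parse(/data-state='([^']+)'/.exec(html)![1])
  return handlers[handler](state)
}
```

The key point is that the component function that produced the button never runs again on the client; only the one small handler does.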
Real-World Application
Hybrid Rendering in an E-Commerce Platform
This is the most common scenario in practice. Even within a single online store, every page has completely different requirements.
| Area | Rendering Strategy | Reason |
|---|---|---|
| Main / marketing pages | SSG | Rarely updated, high traffic — maximizes CDN caching benefit |
| Product listings | ISR (every 5 minutes) | Needs SEO + periodic price and stock updates |
| Product detail | SSR | Real-time stock, user-specific recommendations |
| Cart / checkout | CSR | Post-auth interaction focus, SEO not needed |
On my team, we chose SSR for product detail pages, but recommendation API latency was a bigger problem than real-time stock. So we wrapped just the recommendations section in <Suspense> and streamed it, which noticeably reduced the overall page response wait time.
```tsx
// app/products/page.tsx — Product listing: ISR
export const revalidate = 300 // regenerate every 5 minutes

export default async function ProductListPage() {
  const products = await fetchProducts()
  return <ProductGrid products={products} />
}
```

```tsx
// app/products/[id]/page.tsx — Product detail: SSR
import { Suspense } from 'react'

export const dynamic = 'force-dynamic'

export default async function ProductDetailPage({
  params,
}: {
  params: { id: string }
}) {
  // Await only the critical data here. The recommendations fetch lives
  // inside the async <Recommendations> server component so it can actually
  // suspend and stream in behind the fallback — awaiting it in the page
  // body would block the whole response on the slow API.
  const product = await fetchProduct(params.id)
  return (
    <>
      <ProductDetail product={product} />
      <Suspense fallback={<RecommendSkeleton />}>
        <Recommendations productId={params.id} />
      </Suspense>
    </>
  )
}
```

Streaming SSR + CSR in a SaaS Dashboard
This is the classic SaaS structure where pre-login and post-login experiences are completely different. The trickiest part for me was figuring out "where to cut off SSR and start CSR." The rule turns out to be simple: if it needs to react to real-time external events (WebSocket, SSE), use CSR; otherwise, handle it with Streaming SSR.
```tsx
// app/page.tsx — Landing page: SSG
// No revalidate = generated once at build time
export default function LandingPage() {
  return (
    <>
      <HeroSection />
      <PricingTable />
      <Testimonials />
    </>
  )
}
```

```tsx
// app/dashboard/page.tsx — Dashboard: Streaming SSR
import { Suspense } from 'react'

export default function DashboardPage() {
  return (
    <DashboardLayout>
      {/* Summary info that loads quickly */}
      <Suspense fallback={<StatsSkeleton />}>
        <StatsOverview />
      </Suspense>
      {/* Heavy charts as a separate stream */}
      <Suspense fallback={<ChartSkeleton />}>
        <RevenueChart />
      </Suspense>
      {/* Real-time notifications as a client component */}
      <RealtimeNotifications />
    </DashboardLayout>
  )
}
```

```tsx
// components/RealtimeNotifications.tsx — CSR + WebSocket
'use client'
import { useEffect, useState } from 'react'

type Notification = {
  id: string
  message: string
  type: 'info' | 'warning' | 'error'
  createdAt: string
}

export function RealtimeNotifications() {
  const [notifications, setNotifications] = useState<Notification[]>([])
  useEffect(() => {
    // NEXT_PUBLIC_WS_URL must be set in .env.local
    const wsUrl = process.env.NEXT_PUBLIC_WS_URL
    if (!wsUrl) throw new Error('NEXT_PUBLIC_WS_URL environment variable is not set')
    const ws = new WebSocket(wsUrl)
    ws.onmessage = (event) => {
      setNotifications(prev => [JSON.parse(event.data) as Notification, ...prev])
    }
    return () => ws.close()
  }, [])
  return <NotificationList items={notifications} />
}
```

Astro Server Islands for Content / Media Sites
For content-centric sites like blogs or media outlets, this is the pattern favored by teams that have achieved Lighthouse 95+ scores with Astro. One mistake I made when first trying Astro: putting `getStaticPaths` outside the frontmatter simply doesn't work. In Astro, it must be inside the `---` fences.
```astro
---
// pages/blog/[slug].astro
import { getPost, getAllPosts } from '../../lib/cms';
import CommentSection from '../../components/CommentSection.astro';
import RelatedPosts from '../../components/RelatedPosts.astro';

// getStaticPaths must be inside the frontmatter (---) to work
export async function getStaticPaths() {
  const posts = await getAllPosts();
  return posts.map(p => ({ params: { slug: p.slug } }));
}

const { slug } = Astro.params;
const post = await getPost(slug);
---
<article>
  <!-- Body: statically generated at build time, cached on CDN -->
  <h1>{post.title}</h1>
  <Fragment set:html={post.content} />
</article>
<!-- Comments: dynamically rendered as a server island -->
<CommentSection server:defer postId={post.id}>
  <div slot="fallback">Loading comments...</div>
</CommentSection>
<!-- Related posts: server island -->
<RelatedPosts server:defer postId={post.id}>
  <div slot="fallback">Loading related posts...</div>
</RelatedPosts>
```

Pros and Cons
Advantages
| Item | Detail |
|---|---|
| Performance optimization | Match strategy to each route's characteristics to optimize LCP, INP, and CLS |
| SEO + interactivity together | Server-rendered HTML for crawler indexing + client-side interaction maintained |
| Infrastructure cost reduction | CDN caching of static content reduces origin server requests and cuts traffic costs |
| Incremental migration | Apply SSR/SSG to only select routes of an existing CSR app, minimizing risk |
| Edge deployment benefit | SSR TTFB reduced by 60–80% when using Vercel Edge Functions or Cloudflare Workers |
Disadvantages and Caveats
| Item | Detail | Mitigation |
|---|---|---|
| Increased architectural complexity | Managing different caching strategies, server environment differences, and data fetching patterns per route | Document team guidelines for rendering strategy decisions |
| Hydration mismatch | React Hydration Error when SSR HTML and CSR output differ | Separate Date.now(), Math.random(), and window access into client-only code |
| ISR cache invalidation complexity | stale-while-revalidate nature means data changes aren't reflected immediately, exposing stale content | Supplement with On-Demand Revalidation using revalidatePath() / revalidateTag() |
| Cold start cost | Cold start latency on serverless SSR endpoints affects TTFB | Mitigate with edge deployment; confirm Edge Runtime constraints in advance |
| Bundle bloat risk | Excessive JS in CSR segments negates the benefits of hybrid strategy | Make aggressive use of Dynamic Import + Tree Shaking |
TTFB (Time To First Byte): The time from when the browser sends a request to the server until it receives the first byte. The metric most affected in SSR, best addressed by reducing physical distance through edge deployment.
Hydration Error: Occurs in React SSR when the HTML produced by the server differs from what the browser renders. Since `window` and `localStorage` don't exist on the server, code accessing these objects must be strictly isolated to client-only components.
The Most Common Mistakes in Practice
These come up everywhere, but you only truly internalize them once you've experienced them yourself. I've stepped in all three of the following traps at least once.
- "Just use SSR for everything" — Setting SSR on pages that don't need dynamic data, creating unnecessary server load. Static content like a main page or an About page is perfectly served by SSG. Before deciding on a rendering strategy, start by asking: "Does this data change per request?"
- Introducing ISR without a cache invalidation strategy — Only setting the `revalidate` time and forgetting On-Demand Revalidation for when data actually changes. You can't make users wait five minutes when a price updates. Design the structure to call Next.js's `revalidatePath()` or `revalidateTag()` from a Server Action or webhook from the very beginning.
- Placing `'use client'` components at the top level — Adding `'use client'` to an unnecessarily large component pulls the entire subtree into the client bundle. It's important to isolate only the interactive parts at the smallest possible unit. For example, there's no need to make an entire card component a client component just because of a single like button.
Closing Thoughts
Even though it seems complex, there's no need to design it perfectly from the start. Just looking at the app you're currently working on and asking "does this page really need SSR?" is enough to start seeing opportunities for improvement. When you apply these criteria to the actual routes in your own project, the rendering strategy decisions you made without much thought will start to look very different. Using data dynamism, SEO necessity, and server cost as three axes for judgment makes architecture decisions significantly clearer.
Three steps you can start right now:
- List your app's routes and classify them by dynamism — Simply checking off "does the data change frequently?", "does it require login?", and "is SEO important?" for each route in a spreadsheet or Notion table is enough to sketch out a rendering strategy.
- Start by converting your highest-traffic static pages to SSG — In Next.js, you can start by adding `export const dynamic = 'force-static'` to the page file, or removing async data fetching without a `revalidate`. Compare LCP metrics before and after the switch and the effect will be visible. Beyond Vercel Analytics, you can also measure with Lighthouse CI or PageSpeed Insights.
- Wrap heavy data-dependent components in `<Suspense>` — Wrapping components with slow API calls in `<Suspense fallback={<Skeleton />}>` activates Streaming SSR, delivering content to users in chunks without waiting for the entire page to finish.
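The checklist in step 1 maps almost mechanically onto a strategy. A tiny helper makes the decision table explicit; this is a sketch of the heuristics described in this article, and `chooseStrategy` is a hypothetical function, not an official API.

```typescript
type RouteProfile = {
  dataChangesPerRequest: boolean  // user-specific or real-time data?
  dataChangesFrequently: boolean  // updates often, but not per request?
  needsSeo: boolean               // must crawlers see server-rendered HTML?
  behindLogin: boolean            // authenticated, interaction-heavy area?
}

type Strategy = 'SSG' | 'ISR' | 'SSR' | 'CSR'

function chooseStrategy(r: RouteProfile): Strategy {
  if (r.behindLogin && !r.needsSeo) return 'CSR' // cart, checkout, dashboards
  if (r.dataChangesPerRequest) return 'SSR'      // real-time stock, personalization
  if (r.dataChangesFrequently) return 'ISR'      // listings, news
  return 'SSG'                                   // marketing pages, blogs
}
```

Run against the e-commerce table from earlier, it reproduces the same choices: marketing pages come out SSG, product listings ISR, product detail SSR, and the cart CSR.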
Next article: A practical guide to edge rendering with Vercel Edge Functions and Cloudflare Workers — implementing A/B testing, localization, and personalization at the edge without server load.
References
- How to choose the best rendering strategy for your app | Vercel
- Partial Prerendering | Next.js Official Docs
- Partial prerendering: Building towards a new default rendering model | Vercel
- Islands Architecture | Astro Official Docs
- Server Islands | Astro Official Docs
- The Next.js 15 Streaming Handbook | freeCodeCamp
- Resumable | Qwik Official Docs
- What Are the Emerging Trends in Server-Side Rendering for a JavaScript Framework? | Sencha
- Edge Computing for Frontend Developers | daily.dev
- Islands Architecture | patterns.dev
- Advanced SSR 2025: Selective Hydration, RSCs, and Edge Rendering | blog.madrigan.com
- CSR vs SSR vs SSG vs ISR: Best Rendering Method in 2026 | hashbyt