Deep Dive: Caching and Revalidating #54075
Replies: 68 comments 219 replies
-
Hi Tim, thanks for the detailed write-up. The functionality I'm keen to get out of a framework like Next.js is SSR on the initial page load, for reasons like SEO & accessibility: having a fallback HTML payload before the JS app kicks in is industry standard in sectors like ecommerce. Beyond that, when the user has JS enabled and is navigating between screens, I don't exactly see a need for RSC (beyond not wanting to write and maintain separate Server/Client Component implementations of each screen). The main points you've listed for requiring the client-side cache seem to come down to performance? But IMO a higher priority for certain types of websites is predictability over marginal performance gains. I wrote a comment in the last thread about how, while "bfcache" may well be intuitive, clicking on a link and not loading a fresh view of the data is IMO unintuitive compared to the way the majority of the web functions. I also don't really follow why this client-side cache is so important for performance, yet only for up to 30 seconds, after which it becomes unimportant (seems very arbitrary). To give a more specific use case: imagine an ecommerce website with a public-facing products list page. SSR is essential for the reasons listed above. But that page may also contain lots of highly dynamic state (e.g. amount left in stock, how many the user has added to the basket, and more advanced gamification features). Having that data be stale even though the user just clicked a fresh link to that page could be highly confusing / make the site feel unreliable (ironic, as the stated goal is user-perceived performance, but I'd perceive a website showing me half-minute-old data as much "slower" than one which spends 200ms longer on a loading spinner but then gives me perfectly fresh data). And to address some of the workarounds you talked about, consider an example where:
-
First of all, thank you so much Tim and Vercel for taking this topic so seriously. I really appreciate how responsive you are to feedback. I would say the "mutations I trigger" area is thoroughly covered by the tools Next.js provides and is not an area of concern. However, in the "mutations others trigger" area: if I am writing a dashboard and the sidebar has a link to a list of all products, users have been trained from years of the internet that clicking on the link in the sidebar (or clicking a link to go to a product detail page, etc.) will get them the absolute freshest and most up-to-date information available. Currently it seems like it is not possible to get sidebar links to behave like an old-school "traditional MPA", even when I think it is being pretty explicitly asked for by the developer.

Edit: Also, I think partial rendering is a great idea, and I think it would be totally fine for the layouts to not refetch / re-render even on a dynamically rendered page. This would give RSCs a leg up over traditional MPAs, and hey, if you really want the layouts to re-render too, you can always use …
Isn't this the whole point of dynamic rendering and …? Regardless of whatever happens here, I will be a happy user of Next.js and RSCs. I have been loving them so far and you are doing incredible work. Cheers.
-
Hello Tim, thank you for the deep dive, it really helps put in perspective some of the roadmap and thinking around caching. While I don't really agree with most of the reasons, I will give you my particular use cases which would be suited to a lower stale time, and some opinions (of my own) on the other related subjects you talked about in the discussion.

1st use case: flash messages that should only show once

I have an app where I've implemented the concept of sessions from scratch without using 3rd-party libraries, with the objective of having different features within my app, notably: authentication, remembering the user's form entry, and flash messages. The last one is very important, as flash messages are supposed to clear themselves on access so that they show up only one time, as feedback after a user's action. In my app I have a register form where the user can sign up, and on submit it shows them whether the action has been successful or not as a dismissable alert box. I've used sessions for that because I want it to work without the need for JS, as the form is also progressively enhanced with form actions. The problem is that whenever the user navigates away and then back to the page, it still shows them the flash message, even though they navigated away and finished their signup and/or clicked the dismiss button, and even though the flash is cleared on access; if the user refreshes the browser, they will not see the flash message. Below is a video for a better illustration: Enregistrement.de.l.ecran.2023-08-16.a.01.01.00.mov

The expected behavior is this (JS is disabled in this video), the flash only shows once after the user has made an action: Enregistrement.de.l.ecran.2023-08-16.a.01.07.08.mov
2nd use case: remembering the user's input and showing form errors

I also used sessions in another app to give the user the possibility to submit a form and get progressively enhanced form errors with their last input restored. The problem is the same: when they navigate back and forth to their form, the errors should only show once, as form data & errors are also flashed in the session (this behavior is the same as Laravel), but with the client-side caching it shows them the same errors on navigation. Enregistrement.de.l.ecran.2023-08-16.a.01.11.49.mov

And this is the expected behavior (JS is disabled): Enregistrement.de.l.ecran.2023-08-16.a.01.12.32.mov

3rd use case: notifications

In the original thread, someone mentioned the case of GitHub notifications. I am in the process of trying to recreate a clone of GitHub with the Next App Router, and notifications are one of the features I want to implement. In GitHub, when you receive a notification (e.g. someone mentioned you in an issue, a new comment has been made on an issue you're subscribed to, someone changed the status of an issue you wrote), GitHub will show you a little badge over the notification button in the header, and when you click that button, it navigates you to the notifications page where you can see unread notifications. Let's take the case of how it would work with the router cache of 30s:
What would the user's conclusion be? That the app is broken, right? The user's expectation is that, if the blue badge shows up, they can go to their notifications page and instantly see the new notification; this expectation is broken with this behavior. Now let's take the case where, instead of going to the notifications page, the notifications show up in a dropdown list and directly link to the destination (like on Vercel):
Same problem here.

My opinions around caching

You said here:
Why would you need to re-render everything from the root? Why not just re-render dynamic segments on the server with respect to a TTL defined by the user? You do not need to re-render everything, just the sections marked as dynamic; static segments would not need to be refetched. The point of caching is that some data does not change every time, and that data we don't want to recompute every time; that does not mean we want everything to be cached.
There would not be a need for 3rd-party libs to write to disk or an external store, as the caching is only temporary in the client-side router; that at least means the user would hit the fetch to the Twitter API at most every 30s, which is already a lot. And if the user cannot run JavaScript (for many different reasons) and/or has JS disabled while using the app, every navigation for them is a full page navigation, which will hit the Twitter API a lot more. Even with JS enabled, take the case where your app receives a lot of traffic (1,000s of users, for example): each new user loads your page, and for each user you would need to request the Twitter API; you can more or less agree that there will be many users accessing your app at the same time, which would be way more than one request per 30s. The thing is that there is already the Data Cache preventing the spamming of 3rd-party APIs, and there is also static generation & ISR, which help reduce the number of requests sent to 3rd parties. If we take the case of a website like https://dub.sh/, which uses react-tweet to show their testimonials, they can be assured that the Twitter APIs will not be spammed because the homepage is static.
// Without integration into the SDK
export async function updateUser(config) {
  "use server"
  await prisma.user.update(config);
  revalidateTag('prisma-user');
}

// With integration into the SDK
// If the SDK called revalidateTag and applied tags instead:
export async function updateUser(config) {
  "use server"
  // Automatically calls `revalidateTag` on your behalf.
  await prisma.user.update(config);
}

This one is odd: unless Next.js directly detects that Prisma made a DB update (which I really have no idea how it would), the Prisma SDK cannot automatically invalidate the client-side cache, and if it did then the library would tie itself strongly to Next.js. Prisma cannot do that, as they have to serve many different users and are used in vastly different contexts which are completely unrelated to Next.js or RSCs. It is the same for the other libraries. As of now, the ecosystem of RSC is only Next.js, but there will be different RSC frameworks (and there already are) with support for Server Actions which would not use the same API for client-side revalidations, or provide a router or data cache at all. I suppose those libraries would not want to tie themselves to any framework because of the vastly diverging implementations that could exist outside of Next.js; letting the user do the caching and invalidation themselves is the safe choice. The other option is to create an adapter for each framework, which would break the premise of the RSC ecosystem of using one lib everywhere without any change. Sorry for the long message, but I had a lot to say. I still really like using the Next.js App Router, and I would like to continue working with it in future projects, but the reluctance from the Next team on this issue is making me second-guess my decision to continue using it.
-
This is all fine and great if you're building Facebook, where seeing a stale version of your news feed isn't a big deal, but not everybody is building Facebook. What we're looking for at my company is the flexibility to make those UX decisions for ourselves. In my company's case, having fresh data is more important than avoiding loading spinners on a back navigation. We're a B2B SaaS dealing with business-critical data where freshness is key. The frustrating part is that Next.js assumes that all users have the same requirements and that one cache policy works for all apps under the sun; this is simply not the case.
-
Different applications have different requirements, in which case certain UX tradeoffs around caching may or may not be acceptable. The expectation for a framework like Next.js is to give developers the flexibility to configure based on their application's needs and what is or isn't acceptable to their users. For certain applications, ultra-fast back/forward navigations could be preferable. For others, always fetching the latest up-to-date data and never showing stale data, regardless of whether the user has navigated within that browser session, could be preferable. Rather than trying to force developers to implement workarounds to the client router cache, like using a useEffect hook for stale-while-revalidate or moving all fetching to client components, why not just let developers configure and/or opt out of the client router cache if that is more suited to their application's needs? Next.js has already added the options to configure or fully opt out of the data cache, request memoization, and the full route cache, so why not just do the same for the client router cache? To draw the line in the sand that you must have the client router cache and that it must be 30 seconds, because Facebook has prior research that 30 seconds is a good number for Facebook, doesn't make any sense.
-
Thank you for this posting. I think it summarizes the issue pretty well. The biggest question remains: why is this the only cache the developer cannot fully control, adjusting the caching time to 0, 5, 30, or maybe even 120 seconds if that suits the application best? There is not a single correct value for this, and lots of very good use cases are discussed here already, as well as in the original issue. At least I got the feeling that this behavior is not going to change, and I had expected there would already be a clear suggestion for how this behavior is going to change. Now we need to continue discussing the problem and its consequences instead of discussing an RFC for the (in my opinion) required changes to the logic.
-
I currently work on an event listings website, and two issues where a stale cache impacts us are: …
Configurable Time
Yes! That would solve both issues for us. To echo some other comments, if this configuration is going to be added, it may be worthwhile making it an arbitrary number for users who want caching durations longer or shorter than 30s. For a simple/quickest solution, if we could just configure the default router cache time on a global level to start, then that would be a big win. In the longer term, localised options to configure the cache time per page/segment would be advantageous to optimize different routes where needed, e.g. "check out" vs "about us" pages could have different cache settings.

Time Calculation
Whilst not a major concern, I don't really understand why the stale time is calculated based on when the user first navigated away from the page, rather than when the data was first retrieved. For example, if the user leaves the browser page open and goes out to lunch for an hour, they could come back, navigate to and from the page, and still see data over an hour old. I appreciate the use case is more niche, but would it not be more consistent to base the stale time on when the data was retrieved, so the developer can be more confident they'll never be showing pages more than "x" seconds old?

Stale-While-Revalidate Option

I also think a really nice feature would be an additional option to leverage … This could be useful for data that is important but not mission critical if it takes a second to refresh, such as our "saved" events and e-commerce product "availability" etc. So when a user navigates to a page either through a link or the browser buttons:
- If the router cache has expired (configurable to an arbitrary number): show stale data immediately, retrieve the latest segment data from the server, and update the segment in the background with a type of …
- If the router cache has not expired: retrieve from the router cache only

At the moment, whilst we can use a …
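(For context, a rough idea of the kind of workaround available today: a small client component dropped into pages that need fresher data. This is a sketch only; the component name is illustrative and not an official API.)

```tsx
'use client'

import { useRouter } from 'next/navigation'
import { useEffect } from 'react'

// Sketch: show whatever is already in the Router Cache immediately, then
// re-fetch the current route's RSC payload in the background once it mounts.
export function RefreshOnMount() {
  const router = useRouter()

  useEffect(() => {
    // router.refresh() re-renders the current route on the server and updates
    // the client-side Router Cache without losing client component state.
    router.refresh()
  }, [router])

  return null
}
```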
-
Hi Tim, thanks very much for the time and effort you and the team have put into this write-up and discussion, and for listening to our feedback. I really appreciate it! Looking at the "mutations others trigger" situation, I think I understand how everything fits together – there's certainly a lot to digest – but to me, although the recommended approaches to ensure fresh content would work, they seem a bit inelegant and error-prone to use. The options seem to be: …
Configuring the client-side cache timeout would almost certainly avoid over-caching in most situations, including the ones I care about. However, it isn't yet clear how fine-grained the configuration control would be, and the level of control might determine whether or not it solves everyone's problems. For example, we would like to have minimal caching for some pages – the user's time-limited cart, pages with availability counts for high-demand tickets, etc – but have longer-lived caching of most of our other content pages. What I am not clear on is why most of the discussion is focused on clearing content from the client-side cache, in often fiddly ways, instead of avoiding caching the dynamic content in the first place? The approach that would make most sense to me is for content from pages explicitly marked as dynamic to skip the cache altogether – or to be cached only for back/forward navigation, similar to browsers. In other words, when receiving content from dynamic RSCs the client-side cache could act like it has a 0-second cache timeout. Avoiding caching of dynamic content does degrade the user experience in some ways, granted, but for pages where timely data is vital I think this is the behaviour we are trying to opt into when we explicitly add … For us, having dynamic pages trigger 0-second client-side caching behaviour would meet our needs and keep things relatively simple.
-
For my use case, and if it is possible to be done, it would be to invalidate the client-side cache. Example:

/product/[productId]/page.tsx:

unstable_cache(() => { /* fetch from database... */ }, [tag], {
  tags: [tag],
});

/api/product/info/route.ts:

revalidatePath(`/product/${productId}`)
revalidateTag(tag)

I think it's partially implemented in … But while that can't be done, I'm in favor of having the 30 seconds configurable.
-
Before reading this proposal please read the initial post of this discussion to ensure you have the context.

Proposal

Introducing a new property called …

Defaults

The default value for …

These defaults mirror today's default values. When navigating, the router still finds the common layout between two routes; however, while checking that it also checks the … If the …

Example

An example: …

The main difference between setting …

Back/Forward Navigation

Back/forward navigation will still preserve the cache; it does not invalidate this cache, in order to mirror bfcache, and based on the replies most folks said this is fine. What I'm expecting is that others might want to do an update on back navigation based on expiring of …

Conclusion

We still believe the current defaults are correct and bring the experience closer to the current generation of data fetching libraries like useSWR / react-query / Apollo while allowing you to do server-side data fetching. In reading the replies on the issue and this discussion there seems to be a general consensus that the defaults are correct, but that for certain types of applications, in order to iterate quicker, you want to opt out of them in order to ship an already better experience (compared to Pages Router) and then optimize from there, which we understand. There's more work to be done to show what we think would be the ideal UX for most applications. We'll share those patterns as much as possible; the focus / polling component shared in the discussion post actually came out of the Vercel codebase. It's still early days with React Server Components and not all patterns have been as thoroughly documented compared to data fetching libraries that have been around for years. I'm expecting there will be new patterns here as well.

For example, something Sebastian pointed out to me yesterday is that you can build a more efficient polling system using Server Actions, by using the Server Action to decide if the page should be re-rendered with new data. Something like this:
import { revalidateTag } from 'next/cache'
// (assumes your Prisma client instance is in scope as `prisma`)

// timestamp is provided from the client in order to query if there have been updates or not
async function pollingHandler(timestamp: number) {
  "use server";
  // example of querying the database just for the count of items added since the timestamp
  const currentTimestamp = Date.now()
  const count = await prisma.todoTransactions.count({
    where: {
      date: {
        lte: new Date(currentTimestamp).toISOString(),
        gte: new Date(timestamp).toISOString(),
      },
    },
  })
  // Only rerender when there is new data since the page was loaded, based on the timestamp.
  if (count > 0) {
    revalidateTag('todos')
  }
  // Return the new timestamp so that it can be passed back to this Server Action on subsequent calls,
  // as you know that up till this point the values were handled.
  // This is to ensure that after you rerender the page every poll
  // doesn't cause another re-render because of the timestamp being outdated.
  return currentTimestamp
}

Then you set up a Client Component that polls / on focus calls pollingHandler(), and it'll only re-render the page when there is new data. Let me know what you think! Based on the cases described in the replies to this discussion this new configuration would solve the issue 🙏

PS: The replies on this discussion have been super helpful in highlighting where the gaps in documentation and examples are; we'll work on filling those gaps and providing examples for how we're expecting some of the common patterns shared in this discussion to work. I'd like to thank everyone that got involved on this issue. I know it has taken some time, but that was mostly because we wanted to make sure any changes actually covered the cases folks posted; in posting this discussion with the more thorough explanation of the individual pieces the replies were a lot more constructive. Thanks for bearing with me and the rest of the team.
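(For completeness, the Client Component side referred to above might look roughly like this. This is a sketch only: it assumes the Server Action is exported from a ./actions file and that a 10-second interval is acceptable; neither is part of the original post.)

```tsx
'use client'

import { useEffect, useRef } from 'react'
import { pollingHandler } from './actions' // assumption: the Server Action above lives here

export function TodoPoller({ initialTimestamp }: { initialTimestamp: number }) {
  const lastChecked = useRef(initialTimestamp)

  useEffect(() => {
    const id = setInterval(async () => {
      // The Server Action only calls revalidateTag('todos') when new rows exist,
      // so most polls return without re-rendering the page.
      lastChecked.current = await pollingHandler(lastChecked.current)
    }, 10_000)
    return () => clearInterval(id)
  }, [])

  return null
}
```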
-
Hi @timneutkens, thanks for the write-up. This does help clear some things up, but one thing I'm still quite confused about is how to add revalidation tags on non-fetch functions. Say I'm using Prisma or Drizzle to query my DB inside of a server component. How do I add revalidation tags to that?
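(One pattern that appears to fit this question is wrapping the ORM query in unstable_cache and attaching tags there. A sketch only: unstable_cache is still marked unstable, and the prisma import path and model names are assumptions about your project, not anything from the post.)

```tsx
import { unstable_cache } from 'next/cache'
import { prisma } from '@/lib/prisma' // assumption: your own Prisma client instance

// The query result is stored in the Data Cache under the 'users' tag,
// so revalidateTag('users') in a Server Action or Route Handler purges it.
const getUsers = unstable_cache(
  async () => prisma.user.findMany(),
  ['users'],            // cache key parts
  { tags: ['users'] }
)

export default async function Page() {
  const users = await getUsers()
  return <ul>{users.map((u) => <li key={u.id}>{u.name}</li>)}</ul>
}
```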
-
Can we have a …?
-
I think PR #62856 will solve the router cache problem. I took a look at it, and if this PR gets merged you can configure the client router cache. Still WIP and experimental though, but I hope the DX will get better if this gets successfully merged and stabilized!
-
I just patch constants like THIRTY_SECONDS; I don't see a problem with it since it's only referenced in one line anyway. It seems there are some non-technical reasons why such a trivial change would take months. The back/forward issue is something that I can live with... I understand framework authors have their own view of how things should work, but ignoring the actual needs of developers is not very pragmatic.
-
Looks like, thanks to @ztanner, configuration of … is now possible:

/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    staleTimes: {
      dynamic: 30,
      static: 180,
    },
  },
}

module.exports = nextConfig
-
Hi, I also have a caching problem in my project. Could you please help me with a better solution than the one I used?

Project description: a website for a sanatorium, Next.js + Sanity CMS.

Project repository: https://github.com/mbozhik/udelnaya

What was my problem?

Locally running project: when adding new data via Sanity Studio, it did not display, even on reload and even after some time had passed. Only when combining the keys …

Deployed on Vercel: when adding new data through Sanity Studio, it did not appear, even when reloading and even after some time.

Commit that solved my cache problem: this is currently the only solution I have found for the caching issue when retrieving data from Sanity CMS:

import {unstable_noStore as noStore} from 'next/cache'
...
const getData = async (slug) => {
  noStore()
  const query = `
  *[_type == 'program' && slug.current == '${slug}'][0] {
    name,
    ...
  }`
  const data = await client.fetch(query)
  return data
}
...

I added the unstable function. Tell me how I can achieve the same result without using unstable features and utilities.
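(A stable alternative worth considering, sketched below. It isn't specific to Sanity; it just relies on route segment config instead of unstable_noStore, and the file path is an assumption about this project.)

```ts
// app/programs/[slug]/page.tsx (sketch)

// Either re-render on every request:
export const dynamic = 'force-dynamic'

// ...or keep static rendering but re-generate at most every 60 seconds:
// export const revalidate = 60
```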
-
Has anyone had success with the new …? My main layout has a header with 2 navigation links: "/products" and "/likes". Both are dynamic pages, and all fetch calls are tagged with …
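(For reference, the shape this setup is expected to take. A sketch only: the 'products' tag, URL, and action name are placeholders, not the values used in this project.)

```ts
// In a Server Component: tag the request so it lands in the Data Cache under 'products'.
const res = await fetch('https://api.example.com/products', { next: { tags: ['products'] } })
```

```ts
// app/actions.ts: a Server Action that purges everything tagged 'products'.
'use server'

import { revalidateTag } from 'next/cache'

export async function likeProduct(id: string) {
  // ...persist the like in your database...
  revalidateTag('products') // also re-renders the current route, since this runs in a Server Action
}
```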
-
revalidateTag often does not work: the program code remains unchanged, but occasionally it fails, which forces me to update the tag key every once in a while. This makes me very frustrated. I also want to know how much unstable_cache space can be used for Next.js projects deployed on Vercel.
-
Thank you all for the feedback here. We're making some larger changes in Next.js 15, based on this discussion.
Try out the release candidate and learn more -> https://nextjs.org/15-rc
-
Hi @timneutkens @leerob, thanks for the great post! Does …? Apologies if this has been answered somewhere else already, I am still coming to terms with all the lingo :)
-
-
Hi everyone, I've been trying to understand what the standard/best practices are when it comes to clearing the cache. I've created this sandbox to showcase my case. The problem I'm trying to deal with is figuring out when the UI has finished changing after a mutation. The case is simple: there's a list of records and a delete mutation. The only way to do it so far is to use a form in conjunction with … There's also a conversation here trying to explain the case as well. Thank you
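(One way to observe "done" on the client today, sketched below: pair the mutation with router.refresh() inside a transition and read isPending. The deleteRecord action and import path are assumptions for illustration, not part of this comment.)

```tsx
'use client'

import { useRouter } from 'next/navigation'
import { useTransition } from 'react'
import { deleteRecord } from './actions' // assumption: an existing Server Action

export function DeleteButton({ id }: { id: string }) {
  const router = useRouter()
  const [isPending, startTransition] = useTransition()

  async function onDelete() {
    await deleteRecord(id) // run the mutation on the server
    startTransition(() => {
      // isPending stays true until the refreshed RSC payload has been applied,
      // i.e. until the list on screen actually reflects the deletion.
      router.refresh()
    })
  }

  return (
    <button onClick={onDelete} disabled={isPending}>
      {isPending ? 'Deleting…' : 'Delete'}
    </button>
  )
}
```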
-
Hello everyone, I've been exploring the behavior of revalidateTag and I noticed that it behaves differently when called from a Server Action compared to a Route Handler. The original post explains the behavior, but I'm curious why this choice was made. In the official documentation, it's mentioned that revalidateTag has stale-while-revalidate behavior. However, I'm wondering why we need to refresh the router by default with Server Actions. It seems counterintuitive to have different behaviors for the same function depending on where it's called from. Would it be possible to have revalidateTag behave the same way it does inside a Route Handler? This way, if I want to refresh the page, I could simply call router.refresh on top of that. This would make the code more predictable and easier to understand for developers. I'm interested in your thoughts on this and any suggestions you might have.
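(For illustration, the behavior being asked for can be composed manually today. A sketch only: the tag, route path, and fetch URL are placeholders.)

```ts
// app/api/revalidate/route.ts: revalidateTag here only purges the server-side caches.
import { revalidateTag } from 'next/cache'
import { NextResponse } from 'next/server'

export async function POST() {
  revalidateTag('posts')
  return NextResponse.json({ revalidated: true })
}
```

```ts
// In a client component: opt into refreshing the Router Cache explicitly afterwards.
await fetch('/api/revalidate', { method: 'POST' })
router.refresh()
```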
-
Hello, may I translate the above explanation into Korean and post it on my personal blog? It seems to be an important part of the Next.js App Router, and I would like to translate it into Korean to share with other Korean developers.
-
Thanks for reacting, Next team, but because of not being able to opt out of route caching, trying out mixing Server Components and Client Components was very frustrating for me. I'll think again about whether I really want to use Server Components.
-
Can someone help me out with this issue? #74272
-
The Next.js App Router introduced new heuristics around caching and revalidating. To ensure we’re all discussing the intended behavior of how caching is designed to work, I thought it would be helpful to have a discussion going in-depth on how each piece works.
This complements the recently published caching documentation. It’s a follow up to an issue where some members of the community were confused about the client-side router cache / wanted a slightly different behavior, so I wanted this to be an opportunity to set the context for how it’s designed to work and open a discussion with the community.
Please read the entire post before replying, as it is important to have this context.
Data Mutations
The App Router was originally announced almost a year ago. At the time, we had not yet shared our plans for how we intended to handle data mutations. Server Actions were not yet supported, and router.refresh() did not yet clear the route cache as expected. We understand this caused a bit of confusion, so we'd like to clarify that here.
Since then, we've added Server Actions. This is how we expect most mutations will happen when it's stable. We've also made sure router.refresh() behaves as expected.
Below, you'll find sections on the individual pieces, how they fit together, and how you can replicate some of the behaviors you've used in other tools (sometimes used implicitly).
These sections are broken up into two pieces:
The mutations I trigger
Invalidation
Server Actions (revalidatePath / revalidateTag)
"use server"
adds a marker to the function at compile time and move it into a separate module. All Server Actions have their own unique ID so that they can be referred to individually. (This explanation is heavily simplified, reality is a bit more involved)<form action={serverAction}>
is submitted or you manually callserverAction()
in a client componentrevalidatePath
/revalidateTag
Will purge the specified entry from the Full Route Cache / Data Cache (the server caches)
revalidateTag
it’ll still correctly purge the Full Route Cache too.Based on which path is in
revalidatePath
the current page will re-render on the server. In case ofrevalidateTag
the current page always re-rendersrouter.refresh()
Will purge the client-side cache, depending on which path was provided. In case of
revalidateTag
the entire client-side Router Cache is purged.router.refresh()
after the fetch to the server comes back, so they purge the router cache essentially in the same way./products
and callrevalidatePath('/dashboard/settings')
in a Server Action the entire Router Cache is still purged, even though we can make sure it purges only the/dashboard/settings
path and any segments below it/dashboard/settings/environment
for example, we can’t know if the layouts below the provided path have changed so the Router Cache has to be purged for that case.revalidatePath('/')
means you purge all paths in the Router Cache, as it’s/
and any segments below it. Callingrouter.refresh()
is equivalent in purging and re-filling the Router Cache torevalidatePath('/')
.redirect()
cookies().set()
revalidateTag
, as changing cookies could affect what is rendered in layouts.revalidatePath
/revalidateTag
, in the case that you do call those the response from the server will hold both the result of your function as well as the newly rendered page, both will be applied to the application.action
prop in<form action={serverAction}>
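(To make the above concrete, a minimal Server Action along these lines might look like the sketch below. The model, tag name, and prisma import are assumptions for illustration, not part of the post.)

```ts
'use server'

import { revalidateTag } from 'next/cache'
import { prisma } from '@/lib/prisma' // assumption: your own Prisma client instance

export async function updateProduct(id: string, data: { name: string }) {
  await prisma.product.update({ where: { id }, data })

  // Purges the Data Cache / Full Route Cache entries tagged 'products' and,
  // because this runs inside a Server Action, the current route is re-rendered
  // and the Router Cache updated in the same round trip.
  revalidateTag('products')
}
```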
Route Handlers (revalidatePath / revalidateTag)
… unstable_cache calls (that API will be stable in the near future). Calling revalidatePath / revalidateTag in a Route Handler will purge the specified entry from the Full Route Cache / Data Cache; even if you only call revalidateTag it'll still correctly purge the Full Route Cache too. If you were to fetch() a Route Handler from the user's browser, that won't affect their Router Cache, because there is no way for Next.js to know what was changed. In that case you have to call router.refresh().

router.refresh()
When it’s useful
- When you want to purge the Router Cache based on an event that is outside of the user interactions covered by Server Actions. Example: similar to calling mutate() when you use useSWR or react-query. Where possible it is still preferable to use revalidatePath or revalidateTag, which render the new result in the same request to the server, but obviously it might not always be feasible to use Server Actions.
- When you want a stale-while-revalidate (useSWR / react-query) behavior where the page is rendered and then updated with the latest items.

Example: see below code, keep in mind this is really similar to what useSWR does under the hood, you just have full control over when / how to trigger the revalidation. Keep in mind this only affects the Router Cache, not the server-side caches:
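(The snippet referenced here isn't included in this extract. A minimal sketch of the pattern being described, refreshing the Router Cache when the window regains focus similar to useSWR's revalidate-on-focus, could look like this; the component name is illustrative.)

```tsx
'use client'

import { useRouter } from 'next/navigation'
import { useEffect } from 'react'

export function RefreshOnFocus() {
  const router = useRouter()

  useEffect(() => {
    // Re-fetch the current route's RSC payload whenever the tab regains focus;
    // the stale view stays on screen until the fresh payload is applied.
    const onFocus = () => router.refresh()
    window.addEventListener('focus', onFocus)
    return () => window.removeEventListener('focus', onFocus)
  }, [router])

  return null
}
```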
How it works
router.refresh() makes a new request to the server for the route you're currently on and renders it again from <html> down to the content: it's everything on the page, but as RSC Payload instead of HTML.

The mutations others trigger
Data source changes
The other side of the mutations story: what if my colleague makes an update in the CMS instead of me making it myself? If that mutation is backed by revalidatePath / revalidateTag, it would invalidate the Full Route Cache and Data Cache, but those are server-side, and the Router Cache in my browser tab might still have the previous result from the server, instead of the very latest data.

A bunch of the comments on the issue were talking about this case where someone else does a mutation and what happens in that case, so let's start by clarifying what the behavior is:
1. I visit the /products route
2. I click account in the menu
3. I click products in the menu
4. I see the /products route again, similar to if I had clicked the back button, and I'm instantly shown the /products route

If the route was "static" (i.e. static rendering, or setting prefetch={true} when the page is using dynamic rendering), the time where I can instantly go back to /products is 5 minutes.

If the route was using dynamic rendering (i.e. using cookies, headers, export const dynamic = 'force-dynamic'), the time where I can instantly go back to /products is 30 seconds.

After this 30 seconds or 5 minutes the cache item would be purged for router.push / router.replace navigations (which includes <Link>).

Let's assume /products is marked to use dynamic rendering. This would mean the cache time is 30 seconds, so if I spend slightly longer on /account I don't notice there is cache.

Now as for why this behavior exists, there are multiple reasons:
There is prior art and research associated with this timing, e.g. Facebook has this timing when you click around the application. react-query is adopting this timing as well.
The /products route might still leverage the Data Cache for the header / footer data, in order to make sure the only blocking data request is for the products themselves, which is fine.
Overall we believe the default for this behavior is correct, however we’d like to figure out in what way folks in this issue want to opt-out of the behavior, as there are multiple related features that seem to be confused for this behavior.
Partial Rendering
Since the docs explain the high level concept quite well here’s an excerpt from the docs:
Partial rendering means only the route segments that change on navigation re-render on the client, and any shared segments are preserved.
For example, when navigating between two sibling routes, /dashboard/settings and /dashboard/analytics, the settings and analytics pages will be rendered, and the shared dashboard layout will be preserved.

(Figure: partial-rendering.avif)
Without partial rendering, each navigation would cause the full page to re-render on the server. Rendering only the segment that changes reduces the amount of data transferred and execution time, leading to improved performance.
The way this is possible is that the App Router has a concept we called the “Router Tree”, this tree is a format that decides what is being rendered on the current screen you’re looking at.
The Router Tree looks somewhat like this (simplified):
Every route has a Router Tree associated with it. For example, for a route like /dashboard/settings/page.tsx the Router Tree would look somewhat like this: …

When you first request the page, a version of the Router Tree server-side holds the list of components (i.e. layout, loading, page), and this is then used to render the entire page.
The Router Tree is kept in the browser history (more on this in the back/forward section)
As you might have figured out by now a tree of what is currently on screen is not enough if we can’t reuse the individual segments from a previously rendered page. That’s where the “Component Cache” part of the client-side router comes in.
Besides keeping the Router Tree in the router state, we also keep the component cache there. This cache looks somewhat like this (simplified)
The Component Cache keeps track of each rendered segment separately, in doing so it unlocks a bunch of features:
(Currently router.refresh() purges the entire cache, but revalidatePath('/dashboard') will only purge the client-side Router Cache below the specified dashboard path in the near future.)

Back/forward navigation
This behavior is very similar to the bfcache shipped in browsers: https://web.dev/bfcache/.
When navigating between routes the client-side router calls pushState or replaceState to update the url that you see in the browser url bar. When it does it also pushes some metadata as part of that, specifically the Router Tree is attached to the history entry.
The Router Tree decides what is rendered on the screen (see previous section for deeper explanation).
When you navigate back/forward (popstate), the Router Tree from the history entry is applied to the router. This causes the page to render that Router Tree.
By leveraging the Component Cache we can take advantage of the default scrollRestoration in browsers, ensuring that you end up back in the exact place you came from when navigating backwards/forwards.
Now you might be wondering how this is different from Pages Router.
In Pages Router there were two behaviors, both had problems that apply to the client-side router:
getStaticProps
{ [url: string]: Data }
getServerSideProps
We believe the new behavior which ensures scroll position is preserved and navigation is instant is the right behavior, it mirrors the browser bfcache behavior.
The main thing to keep in mind with this behavior is that the client-side Component Cache that is used when navigating back/forward is only purged when you call one of the purging functions (
router.refresh()
,revalidatePath
,revalidateTag
).This is not that different from what you’ve likely done before though, i.e. if you build a fully client-side SPA you’re essentially building your own cache using state management to keep track of fetches, that one would also have to be purged, similarly in useSWR / react-query.
Near-term work on the router
Our focus right now is on improving stability of the App Router and we’re spending a significant amount of time investigating and fixing reported bugs. Once those are done, there’s a bunch more work on features that improve client-side router performance.
Static Layout Optimization
Currently when a route is fully static (Static Rendering) we fetch the url for the RSC Payload and get a HIT on the Full Route Cache, this means that the request is served as a static file and thus fairly fast. However, this response could be optimized further by splitting up the layout rendering, specifically we could generate a RSC Payload for every layout which in turn means we can get more granular navigations similar to when doing Dynamic Rendering.
Batching of navigations / refresh
Currently navigation fetches happen in sequence in order to ensure the state is correct across multiple React Transitions. However, there's a new feature in React Experimental (which is used when you enable Server Actions) that allows the startTransition function itself to be async, and React will keep track of the order while being able to trigger multiple transitions at the same time. We're planning to leverage that to speed up the case where you call navigations multiple times. An example of that is updating the searchParams while you type in an input field.

Batching of Server Actions
Currently Server Actions are run in sequence; they should still run in the sequence you call them, in general, for consistency, but we can do a single request to the server to run multiple actions.
Client / Offline Islands
We’re working on a proposal for marking sections of your application as more SPA-like where navigations would not fetch the server, there’s a trade-off with this in that we’d have to send a manifest of all possible JS to be loaded in that particular “island” (set of routes).
Conclusion
Now as for configuring the 30 seconds caching of dynamic rendering RSC Payloads.
Let's preface with this: reading the feedback I noticed there is a bug in the current behavior where every subsequent navigation to the page before the 30 seconds has passed increases the time by another 30 seconds. This is a bug and we're going to change it to 30 seconds since navigating away from the page, so if you navigate to /account, then wait 10 seconds and navigate back to /products, it would not increase the time by another 30 seconds. Instead, the cache node would be purged after another 20 seconds (as we clicked back after 10 seconds).

There's an alternate universe where we could have gone with choosing everything is dynamic by default; this would mean we would always re-render from the root of the application down on navigations.
This would be similar to how getServerSideProps works in Pages Router. However, in doing so that would introduce a few problems, for example:
… the <html> etc. or even the dashboard layout when navigating between dashboard pages. Without Partial Navigation this would mean we refetch from the <html>
down for all these interactions.

We want to avoid forking the behavior in Next.js and RSC as much as possible, as otherwise this would create a split in the RSC ecosystem where using a component in one application is different than using it in another application.
We’re also expecting that in the near future it will be less likely that you have to manually call revalidatePath / revalidateTag, as data fetching solutions like ORMs/SDKs could handle invalidation for you automatically as part of the library when inserting/updating data. E.g. Prisma’s SDK could automatically
With this additional context and explanation of the internals, how to handle mutations, how to get fresh data using the stale-while-revalidate pattern, and the bugfix, does this make sense, or do you still have a case that is not covered? If you could configure the time to be zero instead of 30, would that solve a case you're running into, even though it wouldn't affect back navigation and such? Let me know!
If you start reading at the end of the post, please read the entire explanation from start to finish before commenting on this discussion.