Fix fetchMore for queries with no-cache fetch policies (#11974)
Co-authored-by: Lenz Weber-Tronic <lorenz.weber-tronic@apollographql.com>
Co-authored-by: jerelmiller <jerelmiller@users.noreply.github.com>
3 people authored Aug 1, 2024
1 parent 076bb63 commit c95848e
Showing 8 changed files with 462 additions and 46 deletions.
5 changes: 5 additions & 0 deletions .changeset/nice-worms-juggle.md
@@ -0,0 +1,5 @@
---
"@apollo/client": patch
---

Fix an issue where `fetchMore` would write its result data to the cache when using it with a `no-cache` fetch policy.
5 changes: 5 additions & 0 deletions .changeset/pink-horses-pay.md
@@ -0,0 +1,5 @@
---
"@apollo/client": patch
---

Fix an issue where executing `fetchMore` with a `no-cache` fetch policy could sometimes result in multiple network requests.
7 changes: 7 additions & 0 deletions .changeset/short-scissors-speak.md
@@ -0,0 +1,7 @@
---
"@apollo/client": patch
---

**Potentially disruptive change**

When calling `fetchMore` with a query that has a `no-cache` fetch policy, `fetchMore` will now throw if an `updateQuery` function is not provided. This provides a mechanism to merge the results from the `fetchMore` call with the query's previous result.
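
For illustration, a minimal sketch of a `fetchMore` call that satisfies this requirement, assuming an offset-based `feed` query run with a `no-cache` fetch policy (`data` and `fetchMore` come from the query hook):

```ts
fetchMore({
  variables: { offset: data.feed.length },
  // Required with `no-cache`: merge the incoming page into the previous result.
  updateQuery(previousData, { fetchMoreResult }) {
    return {
      ...previousData,
      feed: [...previousData.feed, ...fetchMoreResult.feed],
    };
  },
});
```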
4 changes: 2 additions & 2 deletions .size-limits.json
@@ -1,4 +1,4 @@
{
"dist/apollo-client.min.cjs": 40168,
"import { ApolloClient, InMemoryCache, HttpLink } from \"dist/index.js\" (production)": 32974
"dist/apollo-client.min.cjs": 40243,
"import { ApolloClient, InMemoryCache, HttpLink } from \"dist/index.js\" (production)": 33041
}
72 changes: 71 additions & 1 deletion docs/source/pagination/core-api.mdx
@@ -199,7 +199,7 @@ const cache = new InMemoryCache({
Query: {
fields: {
feed: {
keyArgs: [],
keyArgs: false,
merge(existing, incoming, { args: { offset = 0 }}) {
// Slicing is necessary because the existing data is
// immutable, and frozen in development.
@@ -218,6 +218,41 @@ const cache = new InMemoryCache({

This logic handles sequential page writes the same way the single-line strategy does, but it can also tolerate repeated, overlapping, or out-of-order writes, without duplicating any list items.

### Updating the query with the fetch more result

At times, the call to `fetchMore` may need to perform additional cache updates for your query. While you can use the [`cache.readQuery`](/caching/cache-interaction#readquery) and [`cache.writeQuery`](/caching/cache-interaction#writequery) functions to do this work yourself, it can be cumbersome to use both of them together.

As a shortcut, you can provide the `updateQuery` option to `fetchMore` to update your query using the result from the `fetchMore` call.

<Note>

`updateQuery` is not a replacement for your field policy `merge` functions. You can use `updateQuery` without defining a `merge` function, but any `merge` functions defined for fields in the query still run using the result returned from `updateQuery`.

</Note>

Let's rewrite the example above to merge results with `updateQuery` instead of a field policy `merge` function:

```ts
fetchMore({
variables: { offset: data.feed.length },
updateQuery(previousData, { fetchMoreResult, variables: { offset }}) {
// Slicing is necessary because the existing data is
// immutable, and frozen in development.
const updatedFeed = previousData.feed.slice(0);
for (let i = 0; i < fetchMoreResult.feed.length; ++i) {
updatedFeed[offset + i] = fetchMoreResult.feed[i];
}
return { ...previousData, feed: updatedFeed };
},
})
```

<Tip>

We recommend defining field policies that contain at least a [`keyArgs`](/pagination/key-args/) value even when you use `updateQuery`. This prevents unnecessarily fragmenting the data in the cache. In most situations, setting `keyArgs` to `false` is enough to ignore the `offset` and `limit` arguments and write the paginated data as one big array.

</Tip>
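
For example, a field policy along these lines (a minimal sketch, assuming the same `feed` field) keeps every page under a single cache entry while `updateQuery` handles the merging:

```ts
import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          // Ignore offset/limit so all pages are stored under one field;
          // the merging itself is handled by `updateQuery`.
          keyArgs: false,
        },
      },
    },
  },
});
```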

## `read` functions for paginated fields

[As shown above](#defining-a-field-policy), a `merge` function helps you combine paginated query results from your GraphQL server into a single list in your client cache. But what if you also want to configure how that locally cached list is _read_? For that, you can define a **`read` function**.
@@ -304,3 +339,38 @@ The `read` function for a paginated field can choose to _ignore_ arguments like
If you adopt this approach, you might not need to define a `read` function at all, because the cached list can be returned without any processing. That's why the [`offsetLimitPagination` helper function](./offset-based/#the-offsetlimitpagination-helper) is implemented _without_ a `read` function.
Regardless of how you configure `keyArgs`, your `read` (and `merge`) functions can always examine any arguments passed to the field using the `options.args` object. See [The `keyArgs` API](./key-args) for a deeper discussion of how to reason about dividing argument-handling responsibility between `keyArgs` and your `read` or `merge` functions.
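As a sketch of the `offsetLimitPagination` helper mentioned above, assuming the same offset-based `feed` field:

```ts
import { InMemoryCache } from "@apollo/client";
import { offsetLimitPagination } from "@apollo/client/utilities";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        // Equivalent to keyArgs: false plus an offset-based merge function.
        feed: offsetLimitPagination(),
      },
    },
  },
});
```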
## Using `fetchMore` with queries that set a `no-cache` fetch policy
<Note>
We recommend upgrading to version 3.11.3 or later to pick up fixes for bugs that cause unexpected behavior when using `fetchMore` with queries that set a `no-cache` fetch policy. Please see pull request [#11974](https://github.com/apollographql/apollo-client/pull/11974) for more information.
</Note>
The examples shown above use field policies and `merge` functions to update the result of a paginated field. But what about queries that use a `no-cache` fetch policy? Because data is not written to the cache, field policies have no effect.
To update these queries, provide the `updateQuery` option to `fetchMore`. Let's adapt the example above to use an `updateQuery` function instead:
```ts
fetchMore({
variables: { offset: data.feed.length },
updateQuery(previousData, { fetchMoreResult, variables: { offset }}) {
// Slicing is necessary because the existing data is
// immutable, and frozen in development.
const updatedFeed = previousData.feed.slice(0);
for (let i = 0; i < fetchMoreResult.feed.length; ++i) {
updatedFeed[offset + i] = fetchMoreResult.feed[i];
}
return { ...previousData, feed: updatedFeed };
},
})
```
<Note>
As of Apollo Client version 3.11.3, the `updateQuery` option is required when using `fetchMore` with a `no-cache` fetch policy. Because field policy `merge` functions are ignored with `no-cache`, `updateQuery` is the only way to determine how the results should be merged. Calling `fetchMore` without an `updateQuery` function throws an error.
</Note>
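Putting the pieces together, a minimal component-level sketch, assuming React, `useQuery` from `@apollo/client`, and a `FEED_QUERY` document like the one used throughout these examples:

```ts
import { useQuery } from "@apollo/client";

// FEED_QUERY is assumed to be the same paginated query document used above.
function Feed() {
  const { data, fetchMore } = useQuery(FEED_QUERY, {
    fetchPolicy: "no-cache",
    variables: { offset: 0, limit: 10 },
  });

  function loadMore() {
    return fetchMore({
      variables: { offset: data?.feed.length ?? 0 },
      // Required with `no-cache`: field policy merge functions are ignored,
      // so `updateQuery` decides how the pages are combined.
      updateQuery(previousData, { fetchMoreResult, variables: { offset } }) {
        const updatedFeed = previousData.feed.slice(0);
        for (let i = 0; i < fetchMoreResult.feed.length; ++i) {
          updatedFeed[offset + i] = fetchMoreResult.feed[i];
        }
        return { ...previousData, feed: updatedFeed };
      },
    });
  }

  // ...render data.feed and a control that calls loadMore
  return null;
}
```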
118 changes: 76 additions & 42 deletions src/core/ObservableQuery.ts
@@ -484,6 +484,16 @@ Did you mean to call refetch(variables) instead of refetch({ variables })?`,

const updatedQuerySet = new Set<DocumentNode>();

const updateQuery = fetchMoreOptions?.updateQuery;
const isCached = this.options.fetchPolicy !== "no-cache";

if (!isCached) {
invariant(
updateQuery,
"You must provide an `updateQuery` function when using `fetchMore` with a `no-cache` fetch policy."
);
}

return this.queryManager
.fetchQuery(qid, combinedOptions, NetworkStatus.fetchMore)
.then((fetchMoreResult) => {
@@ -493,48 +503,72 @@ Did you mean to call refetch(variables) instead of refetch({ variables })?`,
queryInfo.networkStatus = originalNetworkStatus;
}

// Performing this cache update inside a cache.batch transaction ensures
// any affected cache.watch watchers are notified at most once about any
// updates. Most watchers will be using the QueryInfo class, which
// responds to notifications by calling reobserveCacheFirst to deliver
// fetchMore cache results back to this ObservableQuery.
this.queryManager.cache.batch({
update: (cache) => {
const { updateQuery } = fetchMoreOptions;
if (updateQuery) {
cache.updateQuery(
{
query: this.query,
variables: this.variables,
returnPartialData: true,
optimistic: false,
},
(previous) =>
updateQuery(previous!, {
fetchMoreResult: fetchMoreResult.data,
variables: combinedOptions.variables as TFetchVars,
})
);
} else {
// If we're using a field policy instead of updateQuery, the only
// thing we need to do is write the new data to the cache using
// combinedOptions.variables (instead of this.variables, which is
// what this.updateQuery uses, because it works by abusing the
// original field value, keyed by the original variables).
cache.writeQuery({
query: combinedOptions.query,
variables: combinedOptions.variables,
data: fetchMoreResult.data,
});
}
},
if (isCached) {
// Performing this cache update inside a cache.batch transaction ensures
// any affected cache.watch watchers are notified at most once about any
// updates. Most watchers will be using the QueryInfo class, which
// responds to notifications by calling reobserveCacheFirst to deliver
// fetchMore cache results back to this ObservableQuery.
this.queryManager.cache.batch({
update: (cache) => {
const { updateQuery } = fetchMoreOptions;
if (updateQuery) {
cache.updateQuery(
{
query: this.query,
variables: this.variables,
returnPartialData: true,
optimistic: false,
},
(previous) =>
updateQuery(previous!, {
fetchMoreResult: fetchMoreResult.data,
variables: combinedOptions.variables as TFetchVars,
})
);
} else {
// If we're using a field policy instead of updateQuery, the only
// thing we need to do is write the new data to the cache using
// combinedOptions.variables (instead of this.variables, which is
// what this.updateQuery uses, because it works by abusing the
// original field value, keyed by the original variables).
cache.writeQuery({
query: combinedOptions.query,
variables: combinedOptions.variables,
data: fetchMoreResult.data,
});
}
},

onWatchUpdated: (watch) => {
// Record the DocumentNode associated with any watched query whose
// data were updated by the cache writes above.
updatedQuerySet.add(watch.query);
},
});
onWatchUpdated: (watch) => {
// Record the DocumentNode associated with any watched query whose
// data were updated by the cache writes above.
updatedQuerySet.add(watch.query);
},
});
} else {
// There is a possibility `lastResult` may not be set when
// `fetchMore` is called which would cause this to crash. This should
// only happen if we haven't previously reported a result. We don't
// quite know what the right behavior should be here since this block
// of code runs after the fetch result has executed on the network.
// We plan to let it crash in the meantime.
//
// If we get bug reports due to the `data` property access on
// undefined, this should give us a real-world scenario that we can
// use to test against and determine the right behavior. If we do end
// up changing this behavior, this may require, for example, an
// adjustment to the types on `updateQuery` since that function
// expects that the first argument always contains previous result
// data, but not `undefined`.
const lastResult = this.getLast("result")!;
const data = updateQuery!(lastResult.data, {
fetchMoreResult: fetchMoreResult.data,
variables: combinedOptions.variables as TFetchVars,
});

this.reportResult({ ...lastResult, data }, this.variables);
}

return fetchMoreResult;
})
Expand All @@ -544,7 +578,7 @@ Did you mean to call refetch(variables) instead of refetch({ variables })?`,
// likely because the written data were the same as what was already in
// the cache, we still want fetchMore to deliver its final loading:false
// result with the unchanged data.
if (!updatedQuerySet.has(this.query)) {
if (isCached && !updatedQuerySet.has(this.query)) {
reobserveCacheFirst(this);
}
});