docs: Improvements to the README
viccon committed Apr 7, 2024
1 parent 27ba3f7 commit af6519c
Showing 1 changed file (README.md) with 20 additions and 12 deletions.
@@ -137,7 +137,8 @@ func main() {
}
```

Running this program, we're going to see that the value gets refreshed once
every 2-3 retrievals:

```sh
go run .
@@ -245,6 +246,9 @@ func main() {
}
```

Running this program, we'll see that the record is missing during the first 3
refreshes and then transitions into having a value:

```sh
2024/04/07 09:42:49 Fetching value for key: key
2024/04/07 09:42:49 Record does not exist.
@@ -268,8 +272,8 @@
One challenge with caching batchable endpoints is that you have to find a way
to reduce the number of keys. To illustrate, let's say that we have 10 000
records, and an endpoint for fetching them that allows for batches of 20.

Now, let's calculate the number of combinations if we were to create the cache
keys from the query params:

$$ C(n, k) = \binom{n}{k} = \frac{n!}{k!(n-k)!} $$

@@ -357,7 +361,8 @@ func main() {
```

Running this code, we can see that we only end up fetching the randomized ID,
while continuously getting cache hits for IDs 1-10, regardless of what the
batch looks like:

```sh
...
@@ -422,12 +427,12 @@ func (a *OrderAPI) OrderStatus(ctx context.Context, ids []string, opts OrderOpti
}
```

The main difference from the previous example is that we're using the
`PermutatedBatchKeyFn` function. Internally, the cache uses reflection to
extract the names and values of every exported field, and uses them to build
the cache keys.
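The exact key format is internal to the library, but a rough sketch of the reflection technique (with a hypothetical `OrderOpts` struct and an illustrative `name=value` key format) could look like this:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// OrderOpts is an illustrative options struct; the real fields
// depend on the API you are caching.
type OrderOpts struct {
	CarrierName        string
	LatestDeliveryTime string
}

// permutationKey builds a cache-key suffix from every exported
// field of the options struct.
func permutationKey(v any) string {
	val := reflect.ValueOf(v)
	typ := val.Type()
	parts := make([]string, 0, typ.NumField())
	for i := 0; i < typ.NumField(); i++ {
		if !typ.Field(i).IsExported() {
			continue // unexported fields can't be read via reflection
		}
		parts = append(parts, fmt.Sprintf("%s=%v", typ.Field(i).Name, val.Field(i).Interface()))
	}
	return strings.Join(parts, "-")
}

func main() {
	opts := OrderOpts{CarrierName: "UPS", LatestDeliveryTime: "2024-04-08"}
	fmt.Println(permutationKey(opts))
	// Prints: CarrierName=UPS-LatestDeliveryTime=2024-04-08
}
```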

To demonstrate this, we can write another small program:

```go
func main() {
@@ -462,8 +467,8 @@ func main() {
- UPS-2024-04-08-id1
- etc..

Next, we'll add a sleep to make sure that all of the records are due for a
refresh, and then request the IDs individually for each set of options:

```go
func main() {
@@ -484,7 +489,8 @@ func main() {
}
```

Running this program, we can see that the records are refreshed once per unique
ID+option combination:

```sh
go run .
@@ -512,10 +518,11 @@
endpoint is batchable when we're performing the refreshes.

To make this more efficient, we can enable the *refresh buffering*
functionality. Internally, the cache is going to create a buffer for each
permutation of our options. It is then going to collect IDs until it reaches a
certain size, or exceeds a time threshold.

The only change we have to make to the example above is to enable this
functionality:

```go
func main() {
@@ -535,6 +542,7 @@ func main() {
// ...
}
```

Now we can see that the cache performs the refreshes in batches per
permutation of our query params:

