From 2b09c6d7fa47404d1334af1c57d20b8815f38054 Mon Sep 17 00:00:00 2001 From: angel Date: Fri, 19 Aug 2022 12:17:40 -0400 Subject: [PATCH 1/4] placeholder --- .../get-started/improve-performance.md | 86 -------- src/gateway/get-started/proxy-caching.md | 193 ++++++++++++++++++ 2 files changed, 193 insertions(+), 86 deletions(-) delete mode 100644 src/gateway/get-started/improve-performance.md create mode 100644 src/gateway/get-started/proxy-caching.md diff --git a/src/gateway/get-started/improve-performance.md b/src/gateway/get-started/improve-performance.md deleted file mode 100644 index bc9c73f0ac50..000000000000 --- a/src/gateway/get-started/improve-performance.md +++ /dev/null @@ -1,86 +0,0 @@ ---- -title: Improve Performance with Proxy Caching ---- - -In this topic, you’ll learn how to use proxy caching to improve response efficiency using the Proxy Caching plugin. - -If you are following the getting started workflow, make sure you have completed [Protect your Services](/gateway/{{page.kong_version}}/get-started/comprehensive/protect-services) before continuing. - -## What is Proxy Caching? - -{{site.base_gateway}} delivers fast performance through caching. The Proxy Caching plugin provides this fast performance using a reverse proxy cache implementation. It caches response entities based on the request method, configurable response code, content type, and can cache per Consumer or per API. - -Cache entities are stored for a configurable period of time. When the timeout is reached, the gateway forwards the request to the Upstream, caches the result and responds from cache until the timeout. The plugin can store cached data in memory, or for improved performance, in Redis. - -## Why use Proxy Caching? - -Use proxy caching so that Upstream services are not bogged down with repeated requests. With proxy caching, {{site.base_gateway}} can respond with cached results for better performance. - -## Set up the Proxy Caching plugin - -Call the Admin API on port `8001` and configure plugins to enable in-memory caching globally, with a timeout of 30 seconds for Content-Type `application/json`. - - -```sh -curl -i -X POST http://localhost:8001/plugins \ - --data name=proxy-cache \ - --data config.content_type="application/json; charset=utf-8" \ - --data config.cache_ttl=30 \ - --data config.strategy=memory -``` - -## Validate Proxy Caching - -Let’s check that proxy caching works. You'll need the Kong Admin API for this -step. - -Access the */mock* route using the Admin API and note the response headers: - - -```sh -curl -i -X GET http://localhost:8000/mock/request -``` - -In particular, pay close attention to the values of `X-Cache-Status`, `X-Kong-Proxy-Latency`, and `X-Kong-Upstream-Latency`: -``` -HTTP/1.1 200 OK -... -X-Cache-Key: d2ca5751210dbb6fefda397ac6d103b1 -X-Cache-Status: Miss -X-Content-Type-Options: nosniff -... -X-Kong-Proxy-Latency: 25 -X-Kong-Upstream-Latency: 37 -``` - -Next, access the */mock* route one more time. - -This time, notice the differences in the values of `X-Cache-Status`, `X-Kong-Proxy-Latency`, and `X-Kong-Upstream-Latency`. Cache status is a `hit`, which means {{site.base_gateway}} is responding to the request directly from cache instead of proxying the request to the Upstream service. - -Further, notice the minimal latency in the response, which allows {{site.base_gateway}} to deliver the best performance: - -``` -HTTP/1.1 200 OK -... -X-Cache-Key: d2ca5751210dbb6fefda397ac6d103b1 -X-Cache-Status: Hit -... 
X-Kong-Proxy-Latency: 0
X-Kong-Upstream-Latency: 1
```

To test more rapidly, the cache can be deleted by calling the Admin API:

```sh
curl -i -X DELETE http://localhost:8001/proxy-cache
```

## Summary and Next Steps

In this section, you:

* Set up the Proxy Caching plugin, then accessed the `/mock` route multiple times to see caching in effect.
* Witnessed the performance differences in latency with and without caching.

Next, you'll learn about [securing services](/gateway/{{page.kong_version}}/get-started/comprehensive/secure-services).

diff --git a/src/gateway/get-started/proxy-caching.md b/src/gateway/get-started/proxy-caching.md
new file mode 100644
index 000000000000..96e32867c441
--- /dev/null
+++ b/src/gateway/get-started/proxy-caching.md
@@ -0,0 +1,193 @@
---
title: Improve Performance with Proxy Caching
---

One of the ways Kong delivers performance is through caching. The proxy caching plugin delivers fast performance by providing the ability to cache responses based on requests, response codes, and content type. This means that upstream services are not bogged down with repeated requests, because {{site.base_gateway}} can respond with cached results. {{site.base_gateway}} offers the [Proxy Caching plugin](/hub/kong-inc/proxy-cache/). The Proxy Caching plugin provides this fast performance using a reverse proxy cache implementation. It caches response entities based on the request method, configurable response codes, and content type, and it can cache per consumer or per API. With that level of granularity, you can create caching rules that match your specific use case.

The cache timeout is configurable. Once it's reached, Kong forwards the request to the upstream again, caches the result, and responds from cache until the next timeout. The plugin can store cached data in memory or, for improved performance, in Redis. More details can be found in the [Proxy Caching plugin](/hub/kong-inc/proxy-cache/) documentation.

## Configure Proxy Caching plugin

Configuring the Proxy Caching plugin is done by sending a `POST` request to the Admin API and describing the caching rules:

```sh
curl -i -X POST http://localhost:8001/plugins \
  --data name=proxy-cache \
  --data config.content_type="application/json" \
  --data config.cache_ttl=30 \
  --data config.strategy=memory
```

If configuration was successful, you will receive a `201` response code. The request you sent configured proxy caching for all `application/json` content, with a time to live (TTL) of 30 seconds. Because this request did not specify a route or a service, {{site.base_gateway}} has applied this configuration globally across all services and routes. The Proxy Caching plugin can also be configured at service-level, route-level, or consumer-level.
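For example, here is a sketch of scoping the same plugin to a single service instead of globally. The service name `example_service` is an assumption for illustration; substitute the name of a service that exists in your own instance:

```sh
# Hypothetical example: enable proxy caching for one service only.
# Replace "example_service" with the name or ID of one of your services.
curl -i -X POST http://localhost:8001/services/example_service/plugins \
  --data name=proxy-cache \
  --data config.content_type="application/json" \
  --data config.cache_ttl=30 \
  --data config.strategy=memory
```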
## Validate the configuration

You can check that the Proxy Caching plugin is working by sending a `GET` request to the route that was created in the [configure services and routes](/gateway/latest/get-started/configure-services-and-routes) guide and examining the headers. If you did not follow the guide, edit the example to reflect your configuration. Send the following request once:

```
curl -i -X GET http://localhost:8000/mock/request
```

Depending on your configuration, the response header will be composed of many different fields. Notice the integer values in the following fields:

* `X-Kong-Proxy-Latency`

* `X-Kong-Upstream-Latency`

If you were to send two requests in succession, the values would be lower in the second request. That demonstrates that the content is cached, and {{site.base_gateway}} is not retrieving the information from the upstream service that your route is pointing to.

Also notice the `X-Cache-Status` field. This header can return the following states:

|State| Description|
|---|---|
|Miss| The request could be satisfied in cache, but an entry for the resource was not found in cache, and the request was proxied upstream.|
|Hit| The request was satisfied and served from cache.|
|Refresh| The resource was found in cache, but could not satisfy the request, due to Cache-Control behaviors or reaching its hard-coded `cache_ttl` threshold.|
|Bypass| The request could not be satisfied from cache based on plugin configuration.|

### Cache Status

In the validation step, the client returned several caching headers. The values in these headers can help you understand how proxy caching is working on your system. An `X-Cache-Status` of `Hit` means {{site.base_gateway}} is responding to the request directly from cache instead of proxying the request to the upstream service.

Note the `X-Kong-Upstream-Latency` header, and then send the exact request again. You will notice that the latency displayed is lower in the second request. That is because your content is cached and {{site.base_gateway}} is not retrieving the information from the upstream service that your route is pointing to. A full set of response headers looks something like this:
+ + + +``` + +Content-Type: text/html; charset=utf-8 +Transfer-Encoding: chunked +Connection: keep-alive +X-RateLimit-Limit-30: 5 +X-RateLimit-Remaining-30: 3 +RateLimit-Limit: 5 +RateLimit-Remaining: 3 +RateLimit-Reset: 43 +X-RateLimit-Limit-Minute: 5 +X-RateLimit-Remaining-Minute: 3 +X-Cache-Key: 9728b2210d52756a68e7cbb6fd2d734b +X-Cache-Status: Bypass +Date: Fri, 19 Aug 2022 14:33:17 GMT +Vary: Accept-Encoding +Via: kong/3.0.0.0-enterprise-edition +CF-Cache-Status: DYNAMIC +Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=dPo3cl6sbSjgBM5%2FuXu9WxzgPKJEhP6sgP%2FKsZv3IfCs1LEsrBatjF72ZwXA1t%2BqxCnGnb39W4HbFP2aBr7XWN1%2FWzNLXa7RNnDozCn%2BIfV5DIvkJa811bcun78Yzw%3D%3D"}],"group":"cf-nel","max_age":604800} +NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800} +Server: cloudflare +CF-RAY: 73d39a7d3a741788-EWR +alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400 +X-Kong-Upstream-Latency: 158 +X-Kong-Proxy-Latency: 10 + +``` + +### Time to live + +Time to live (TTL) governs the refresh rate of cached content, this ensures that people requesting information from your upstream services aren't served old content. A TTL of 30 seconds means that content is refreshed every 30 seconds. TTL rules should vary based on the resource type of the content the upstream service is serving. + +* Static files that are rarely updated should have a longer TTL. + +* Dynamic files can use shorter TTL's to account for the complexity in updating. + +Kong can store resource entities in the storage engine longer than the prescribed cache_ttl or Cache-Control values indicate. This allows {{site.base_gateway}} to maintain a cached copy of a resource past its expiration. This allows clients capable of using max-age and max-stale headers to request stale copies of data if necessary. + + +In this context, TTL governs the refresh rate of these copies, ideally ensuring that “stale” versions of your content aren’t served to your website visitors. + +Call the Admin API on port `8001` and configure plugins to enable in-memory caching globally, with a timeout of 30 seconds for Content-Type `application/json`. + +[Proxy Caching plugin](/hub/kong-inc/proxy-cache/) + + +Let’s call the Admin API on port 8001 and endpoint plugins to enable in memory caching globally with a timeout of 30 seconds for Content-Type application/json. +$ http -f :8001/plugins name=proxy-cache config.strategy=memory \ +config.content_type="application/json" +$ curl -i -X POST http://localhost:8001/plugins \ +--data name=proxy-cache \ +--data config.content_type="application/json" \ +--data config.cache_ttl=30 \ +--data config.strategy=memory + + + + +## Validate Proxy Caching + +Let’s check that proxy caching works. You'll need the Kong Admin API for this +step. + +Access the */mock* route using the Admin API and note the response headers: + + +```sh +curl -i -X GET http://localhost:8000/mock/request +``` + +In particular, pay close attention to the values of `X-Cache-Status`, `X-Kong-Proxy-Latency`, and `X-Kong-Upstream-Latency`: +``` +HTTP/1.1 200 OK +... +X-Cache-Key: d2ca5751210dbb6fefda397ac6d103b1 +X-Cache-Status: Miss +X-Content-Type-Options: nosniff +... +X-Kong-Proxy-Latency: 25 +X-Kong-Upstream-Latency: 37 +``` + +Next, access the */mock* route one more time. + +This time, notice the differences in the values of `X-Cache-Status`, `X-Kong-Proxy-Latency`, and `X-Kong-Upstream-Latency`. 
Cache status is a `Hit`, which means {{site.base_gateway}} is responding to the request directly from cache instead of proxying the request to the upstream service.

Further, notice the minimal latency in the response, which allows {{site.base_gateway}} to deliver the best performance:

```
HTTP/1.1 200 OK
...
X-Cache-Key: d2ca5751210dbb6fefda397ac6d103b1
X-Cache-Status: Hit
...
X-Kong-Proxy-Latency: 0
X-Kong-Upstream-Latency: 1
```

To test more rapidly, the cache can be deleted by calling the Admin API:

```sh
curl -i -X DELETE http://localhost:8001/proxy-cache
```

## Summary and Next Steps

In this section, you:

* Set up the Proxy Caching plugin, then accessed the `/mock` route multiple times to see caching in effect.
* Witnessed the performance differences in latency with and without caching.

Next, you'll learn about [securing services](/gateway/{{page.kong_version}}/get-started/comprehensive/secure-services).

From 044aff37bb8f8dde65dca61f629f012c74ab02a8 Mon Sep 17 00:00:00 2001
From: angel
Date: Fri, 19 Aug 2022 15:38:19 -0400
Subject: [PATCH 2/4] proxy-caching
---
 app/_data/docs_nav_gateway_3.0.x.yml     |   2 +-
 src/gateway/get-started/proxy-caching.md | 146 ++----------------
 2 files changed, 14 insertions(+), 134 deletions(-)

diff --git a/app/_data/docs_nav_gateway_3.0.x.yml b/app/_data/docs_nav_gateway_3.0.x.yml
index 900eb05394bb..a6ec67ed366a 100644
--- a/app/_data/docs_nav_gateway_3.0.x.yml
+++ b/app/_data/docs_nav_gateway_3.0.x.yml
@@ -63,7 +63,7 @@ items:
     - text: Configure Rate Limiting
       url: /get-started/configure-ratelimiting
     - text: Configure Proxy Caching
-      url: /get-started/improve-performance
+      url: /get-started/proxy-caching
     - text: Configure Key Authentication
       url: /get-started/secure-services
     - text: Configure Load-Balancing

diff --git a/src/gateway/get-started/proxy-caching.md b/src/gateway/get-started/proxy-caching.md
index 96e32867c441..154edf0cefa9 100644
--- a/src/gateway/get-started/proxy-caching.md
+++ b/src/gateway/get-started/proxy-caching.md
@@ -1,22 +1,14 @@
---
title: Improve Performance with Proxy Caching
+content-type: tutorial
---

One of the ways Kong delivers performance is through caching. The proxy caching plugin delivers fast performance by providing the ability to cache responses based on requests, response codes, and content type. This means that upstream services are not bogged down with repeated requests, because {{site.base_gateway}} can respond with cached results. {{site.base_gateway}} offers the [Proxy Caching plugin](/hub/kong-inc/proxy-cache/), which provides this fast performance using a reverse proxy cache implementation.
It caches response entities based on the request method, configurable response codes, and content type, and it can cache per consumer or per API. With that level of granularity, you can create caching rules that match your specific use case.

## Configure the Proxy Caching plugin

To configure the Proxy Caching plugin, send a `POST` request to the Admin API describing the caching rules:

```sh
curl -i -X POST http://localhost:8001/plugins \
  --data name=proxy-cache \
  --data config.content_type="application/json" \
  --data config.cache_ttl=30 \
  --data config.strategy=memory
```

If configuration was successful, you will receive a `201` response code. The request you sent configured proxy caching for all `application/json` content, with a time to live (TTL) of 30 seconds. The final option, `config.strategy=memory`, specifies where the cache will be stored. You can read more about this option in the strategy section of the [Proxy Caching plugin](/hub/kong-inc/proxy-cache/) documentation. Because this request did not specify a route or a service, {{site.base_gateway}} has applied this configuration globally across all services and routes. The Proxy Caching plugin can also be configured at service-level, route-level, or consumer-level. You can read more about those configurations and how to apply them on the [Proxy Caching plugin](/hub/kong-inc/proxy-cache/) page.

## Validate the configuration

You can check that the Proxy Caching plugin is working by sending a `GET` request to the route that was created in the [configure services and routes](/gateway/latest/get-started/configure-services-and-routes) guide and examining the headers. If you did not follow the guide, edit the example to reflect your configuration. Send the following request once:

```
curl -i -X GET http://localhost:8000/mock/request
```

Depending on your configuration, the response header will be composed of many different fields. Notice the integer values in the following fields:

* `X-Kong-Proxy-Latency`

* `X-Kong-Upstream-Latency`

If you were to send two requests in succession, the values would be lower in the second request. That demonstrates that the content is cached, and {{site.base_gateway}} is not retrieving the information from the upstream service that your route is pointing to.

Also notice the `X-Cache-Status` field. This header displays `Hit` when proxy caching is working correctly, and can return the following states:

|State| Description|
|---|---|
|Miss| The request could be satisfied in cache, but an entry for the resource was not found in cache, and the request was proxied upstream.|
|Hit| The request was satisfied and served from cache.|
|Refresh| The resource was found in cache, but could not satisfy the request, due to Cache-Control behaviors or reaching its hard-coded `cache_ttl` threshold.|
|Bypass| The request could not be satisfied from cache based on plugin configuration.|
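To see these states in practice, you can send the same request twice and compare the `X-Cache-Status` header in each response. This is a minimal sketch that assumes the `/mock` route and the 30-second TTL configured above:

```sh
# First request should report "Miss" (the entry gets cached);
# the second should report "Hit" (served from cache).
for i in 1 2; do
  curl -s -i http://localhost:8000/mock/request | grep -i '^X-Cache-Status'
done
```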
-* Refresh: The resource was found in cache, but could not satisfy the request, due to Cache-Control behaviors or reaching its hard-coded cache_ttl threshold. -* Bypass: The request could not be satisfied from cache based on plugin configuratio - -### Cache Status - -In the validation step the client returned several validation headers. The values in these headers can help you understand how Proxy Caching is working on your system. `X-Cache-Status` - - - - - -This time, notice the differences in the values of `X-Cache-Status`, `X-Kong-Proxy-Latency`, and `X-Kong-Upstream-Latency`. Cache status is a `hit`, which means {{site.base_gateway}} is responding to the request directly from cache instead of proxying the request to the Upstream service. +|Refresh| The resource was found in cache, but could not satisfy the request, due to Cache-Control behaviors or reaching its hard-coded `cache_ttl` threshold.| +|Bypass| The request could not be satisfied from cache based on plugin configuration.| +In the initial request the value for `config.content_type` was set to "application/json". The proxy cache plugin will only cache the specific data type that was set in the initial configuration request. -Note the: `X-Kong-Upstream-Latency` header and then send the exact request again. You will notice that the latency displayed is lower in the second request. That is because your content is cached and {{site.base_gateway}} is not returning the information directly from the upstream service that your route is pointing to. - - -``` - -Content-Type: text/html; charset=utf-8 -Transfer-Encoding: chunked -Connection: keep-alive -X-RateLimit-Limit-30: 5 -X-RateLimit-Remaining-30: 3 -RateLimit-Limit: 5 -RateLimit-Remaining: 3 -RateLimit-Reset: 43 -X-RateLimit-Limit-Minute: 5 -X-RateLimit-Remaining-Minute: 3 -X-Cache-Key: 9728b2210d52756a68e7cbb6fd2d734b -X-Cache-Status: Bypass -Date: Fri, 19 Aug 2022 14:33:17 GMT -Vary: Accept-Encoding -Via: kong/3.0.0.0-enterprise-edition -CF-Cache-Status: DYNAMIC -Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=dPo3cl6sbSjgBM5%2FuXu9WxzgPKJEhP6sgP%2FKsZv3IfCs1LEsrBatjF72ZwXA1t%2BqxCnGnb39W4HbFP2aBr7XWN1%2FWzNLXa7RNnDozCn%2BIfV5DIvkJa811bcun78Yzw%3D%3D"}],"group":"cf-nel","max_age":604800} -NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800} -Server: cloudflare -CF-RAY: 73d39a7d3a741788-EWR -alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400 -X-Kong-Upstream-Latency: 158 -X-Kong-Proxy-Latency: 10 - -``` - -### Time to live +## Time to live Time to live (TTL) governs the refresh rate of cached content, this ensures that people requesting information from your upstream services aren't served old content. A TTL of 30 seconds means that content is refreshed every 30 seconds. TTL rules should vary based on the resource type of the content the upstream service is serving. @@ -114,80 +64,10 @@ Time to live (TTL) governs the refresh rate of cached content, this ensures that * Dynamic files can use shorter TTL's to account for the complexity in updating. -Kong can store resource entities in the storage engine longer than the prescribed cache_ttl or Cache-Control values indicate. This allows {{site.base_gateway}} to maintain a cached copy of a resource past its expiration. This allows clients capable of using max-age and max-stale headers to request stale copies of data if necessary. - - -In this context, TTL governs the refresh rate of these copies, ideally ensuring that “stale” versions of your content aren’t served to your website visitors. 
- -Call the Admin API on port `8001` and configure plugins to enable in-memory caching globally, with a timeout of 30 seconds for Content-Type `application/json`. - -[Proxy Caching plugin](/hub/kong-inc/proxy-cache/) - - -Let’s call the Admin API on port 8001 and endpoint plugins to enable in memory caching globally with a timeout of 30 seconds for Content-Type application/json. -$ http -f :8001/plugins name=proxy-cache config.strategy=memory \ -config.content_type="application/json" -$ curl -i -X POST http://localhost:8001/plugins \ ---data name=proxy-cache \ ---data config.content_type="application/json" \ ---data config.cache_ttl=30 \ ---data config.strategy=memory - - - - -## Validate Proxy Caching - -Let’s check that proxy caching works. You'll need the Kong Admin API for this -step. - -Access the */mock* route using the Admin API and note the response headers: - - -```sh -curl -i -X GET http://localhost:8000/mock/request -``` - -In particular, pay close attention to the values of `X-Cache-Status`, `X-Kong-Proxy-Latency`, and `X-Kong-Upstream-Latency`: -``` -HTTP/1.1 200 OK -... -X-Cache-Key: d2ca5751210dbb6fefda397ac6d103b1 -X-Cache-Status: Miss -X-Content-Type-Options: nosniff -... -X-Kong-Proxy-Latency: 25 -X-Kong-Upstream-Latency: 37 -``` - -Next, access the */mock* route one more time. - -This time, notice the differences in the values of `X-Cache-Status`, `X-Kong-Proxy-Latency`, and `X-Kong-Upstream-Latency`. Cache status is a `hit`, which means {{site.base_gateway}} is responding to the request directly from cache instead of proxying the request to the Upstream service. - -Further, notice the minimal latency in the response, which allows {{site.base_gateway}} to deliver the best performance: - -``` -HTTP/1.1 200 OK -... -X-Cache-Key: d2ca5751210dbb6fefda397ac6d103b1 -X-Cache-Status: Hit -... -X-Kong-Proxy-Latency: 0 -X-Kong-Upstream-Latency: 1 -``` - -To test more rapidly, the cache can be deleted by calling the Admin API: - - -```sh -curl -i -X DELETE http://localhost:8001/proxy-cache -``` +Kong can store resource entities in the storage engine longer than the prescribed `cache_ttl` or `Cache-Control`values indicate. This allows {{site.base_gateway}} to maintain a cached copy of a resource past its expiration. This allows clients capable of using max-age and max-stale headers to request stale copies of data if necessary. -## Summary and Next Steps -In this section, you: -* Set up the Proxy Caching plugin, then accessed the `/mock` route multiple times to see caching in effect. -* Witnessed the performance differences in latency with and without caching. +## Next steps -Next, you’ll learn about [securing services](/gateway/{{page.kong_version}}/get-started/comprehensive/secure-services). +Next, you’ll learn about the [Key-Authentication plugin](/gateway/{{page.kong_version}}/get-started/comprehensive/secure-services). 
From 9430652b11723eaa6c855327ccbe41fc8e1b2b24 Mon Sep 17 00:00:00 2001 From: angel Date: Fri, 19 Aug 2022 16:41:58 -0400 Subject: [PATCH 3/4] fix issues --- src/gateway/get-started/configure-ratelimiting.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/gateway/get-started/configure-ratelimiting.md b/src/gateway/get-started/configure-ratelimiting.md index f71344cb65a5..98296c71315f 100644 --- a/src/gateway/get-started/configure-ratelimiting.md +++ b/src/gateway/get-started/configure-ratelimiting.md @@ -1,5 +1,5 @@ --- -title: Protect your Services +title: Configure Rate Limiting content-type: tutorial --- From 03319e81aebeaa572b1cb1907b81bd5a35269f2d Mon Sep 17 00:00:00 2001 From: Angel Date: Fri, 19 Aug 2022 16:44:34 -0400 Subject: [PATCH 4/4] Apply suggestions from code review Co-authored-by: Amy Goldsmith <59702069+acgoldsmith@users.noreply.github.com> --- src/gateway/get-started/proxy-caching.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/src/gateway/get-started/proxy-caching.md b/src/gateway/get-started/proxy-caching.md index 154edf0cefa9..c5ceeb9af304 100644 --- a/src/gateway/get-started/proxy-caching.md +++ b/src/gateway/get-started/proxy-caching.md @@ -23,18 +23,18 @@ curl -i -X POST http://localhost:8001/plugins \ ``` -If configuration was successful, you will receive a `201` response code. The request you sent configured Proxy Caching for all `application/json` content, with a time to live (TTL) of 30 seconds. The final option `config.strategy=memory` specifies where the cache will be stored, you can read more about this option in the strategy section of the [Proxy Caching plugin](/hub/kong-inc/proxy-cache/) documentation.Because this request did not specify a route or a service, {{site.base_gateway}} has applied this configuration globally across all services and routes. The Proxy Caching plugin can also be configured at service-level, route-level, or consumer-level. You can read more about the other configurations and how to apply them in the [Proxy Caching plugin](/hub/kong-inc/proxy-cache/) page. +If configuration was successful, you will receive a `201` response code. The request you sent configured Proxy Caching for all `application/json` content, with a time to live (TTL) of 30 seconds. The final option `config.strategy=memory` specifies where the cache will be stored. You can read more about this option in the strategy section of the [Proxy Caching plugin](/hub/kong-inc/proxy-cache/) documentation. Because this request did not specify a route or a service, {{site.base_gateway}} has applied this configuration globally across all services and routes. The Proxy Caching plugin can also be configured at service-level, route-level, or consumer-level. You can read more about the other configurations and how to apply them in the [Proxy Caching plugin](/hub/kong-inc/proxy-cache/) page. ## Validate the configuration -You can check that the Proxy Caching plugin is working by checking by sending a `GET` request to the route that was created in the[configure services and routes](/gateway/latest/get-started/configure-services-and-routes) guide and examining the headers. If you did not follow the guide, edit the example to reflect your configuration. Send the following request once: +You can check that the Proxy Caching plugin is working by sending a `GET` request to the route that was created in the [configure services and routes](/gateway/latest/get-started/configure-services-and-routes) guide and examining the headers. 
If you did not follow the guide, edit the example to reflect your configuration. Send the following request once:

```
curl -i -X GET http://localhost:8000/mock/request
```

Depending on your configuration, the response header will be composed of many different fields. Notice the integer values in the following fields:

* `X-Kong-Proxy-Latency`

* `X-Kong-Upstream-Latency`

@@ -58,11 +58,11 @@

In the initial request, the value for `config.content_type` was set to `application/json`. The Proxy Caching plugin only caches the specific content type that was set in the initial configuration request.

## Time to live

Time to live (TTL) governs the refresh rate of cached content, which ensures that people requesting information from your upstream services aren't served old content. A TTL of 30 seconds means that content is refreshed every 30 seconds. TTL rules should vary based on the resource type of the content the upstream service is serving.

* Static files that are rarely updated should have a longer TTL.

* Dynamic files can use shorter TTLs to account for the complexity in updating.

Kong can store resource entities in the storage engine longer than the prescribed `cache_ttl` or `Cache-Control` values indicate. This allows {{site.base_gateway}} to maintain a cached copy of a resource past its expiration. It also allows clients capable of using `max-age` and `max-stale` headers to request stale copies of data if necessary.
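As a purely illustrative sketch of that stale-copy behavior (not part of the guide above), a client could ask for a response that is up to 60 seconds past its freshness lifetime by sending a standard `Cache-Control` request header. Whether a stale copy is actually served depends on how long the plugin retains entries in its storage:

```sh
# Hypothetical request: accept a cached copy up to 60 seconds stale,
# using the standard HTTP max-stale directive.
curl -i -X GET http://localhost:8000/mock/request \
  -H "Cache-Control: max-stale=60"
```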