Merged
2 changes: 1 addition & 1 deletion configs/body_factory/default/Makefile.am
@@ -28,7 +28,7 @@ dist_bodyfactory_DATA = \
connect\#dns_failed \
connect\#failed_connect \
connect\#hangup \
-connect\#all_dead \
+connect\#all_down \
default \
interception\#no_host \
README \
@@ -10,7 +10,7 @@
<FONT FACE="Helvetica,Arial"><B>
Description: Unable to find a valid target host.

-The server was found but all of the addresses are marked dead and so there is
+The server was found but all of the addresses are marked down and so there is
no valid target address to which to connect. Please try again after a few minutes.
</B></FONT>
<HR>
2 changes: 1 addition & 1 deletion configs/records.yaml.default.in
@@ -96,7 +96,7 @@ ts:
# https://docs.trafficserver.apache.org/records.yaml#origin-server-connect-attempts
##############################################################################
connect_attempts_max_retries: 3
-connect_attempts_max_retries_dead_server: 1
+connect_attempts_max_retries_down_server: 1
connect_attempts_rr_retries: 3
connect_attempts_timeout: 30
down_server:
18 changes: 9 additions & 9 deletions doc/admin-guide/files/records.yaml.en.rst
@@ -1599,23 +1599,23 @@ Origin Server Connect Attempts

The maximum number of connection retries |TS| can make when the origin server is not responding.
Each retry attempt lasts for `proxy.config.http.connect_attempts_timeout`_ seconds. Once the maximum number of retries is
-reached, the origin is marked dead (as controlled by `proxy.config.http.connect.dead.policy`_. After this, the setting
-`proxy.config.http.connect_attempts_max_retries_dead_server`_ is used to limit the number of retry attempts to the known dead origin.
+reached, the origin is marked down (as controlled by `proxy.config.http.connect.down.policy`_). After this, the setting
+`proxy.config.http.connect_attempts_max_retries_down_server`_ is used to limit the number of retry attempts to the known down origin.
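The two-phase retry budget described above can be sketched as a small helper. This is an illustrative sketch, not the actual Traffic Server code; the function and parameter names merely mirror the configuration variables:

```cpp
#include <cassert>

// Illustrative sketch of the retry budget described above: a responsive origin
// gets the full retry budget; once the origin is marked down, the smaller
// budget applies so the client gets an error sooner and the down origin is
// spared extra connection attempts. Each attempt is still bounded by
// connect_attempts_timeout seconds.
int retry_budget(bool origin_marked_down,
                 int connect_attempts_max_retries,             // default 3
                 int connect_attempts_max_retries_down_server) // default 1
{
  return origin_marked_down ? connect_attempts_max_retries_down_server
                            : connect_attempts_max_retries;
}
```

With the documented defaults, a fresh origin gets 3 retries while a known down origin gets only 1.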

-.. ts:cv:: CONFIG proxy.config.http.connect_attempts_max_retries_dead_server INT 1
+.. ts:cv:: CONFIG proxy.config.http.connect_attempts_max_retries_down_server INT 1
:reloadable:
:overridable:

-Maximum number of connection attempts |TS| can make while an origin is marked dead per request. Typically this value is smaller than
-`proxy.config.http.connect_attempts_max_retries`_ so an error is returned to the client faster and also to reduce the load on the dead origin.
+Maximum number of connection attempts |TS| can make per request while an origin is marked down. Typically this value is smaller than
+`proxy.config.http.connect_attempts_max_retries`_ so that an error is returned to the client more quickly and the load on the down origin is reduced.
The timeout interval `proxy.config.http.connect_attempts_timeout`_ in seconds is used with this setting.

-.. ts:cv:: CONFIG proxy.config.http.connect.dead.policy INT 2
+.. ts:cv:: CONFIG proxy.config.http.connect.down.policy INT 2
:overridable:

-Controls what origin server connection failures contribute to marking a server dead. When set to 2, any connection failure during the TCP and TLS
-handshakes will contribute to marking the server dead. When set to 1, only TCP handshake failures will contribute to marking a server dead.
-When set to 0, no connection failures will be used towards marking a server dead.
+Controls which origin server connection failures contribute to marking a server down. When set to 2, any connection failure during the TCP and TLS
+handshakes will contribute to marking the server down. When set to 1, only TCP handshake failures will contribute to marking a server down.
+When set to 0, no connection failures will count towards marking a server down.
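Assuming the standard records.yaml layout where dots in a variable name become nested YAML maps, the renamed settings would appear roughly as follows (values are the documented defaults; this is a sketch, not a drop-in configuration):

```yaml
ts:
  http:
    connect_attempts_max_retries: 3
    connect_attempts_max_retries_down_server: 1
    connect_attempts_timeout: 30
    connect:
      down:
        policy: 2
```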

.. ts:cv:: CONFIG proxy.config.http.server_max_connections INT 0
:reloadable:
@@ -144,10 +144,10 @@ HTTP Connection
This metric tracks the number of server connections currently in the server session sharing pools. The server session sharing is
controlled by settings :ts:cv:`proxy.config.http.server_session_sharing.pool` and :ts:cv:`proxy.config.http.server_session_sharing.match`.

-.. ts:stat:: global proxy.process.http.dead_server.no_requests integer
+.. ts:stat:: global proxy.process.http.down_server.no_requests integer
:type: counter

-Tracks the number of client requests that did not have a request sent to the origin server because the origin server was marked dead.
+Tracks the number of client requests that did not have a request sent to the origin server because the origin server was marked down.

.. ts:stat:: global proxy.process.http.http_proxy_loop_detected integer
:type: counter
4 changes: 2 additions & 2 deletions doc/admin-guide/plugins/lua.en.rst
@@ -4082,8 +4082,8 @@ Http config constants
TS_LUA_CONFIG_HTTP_PER_SERVER_CONNECTION_MAX
TS_LUA_CONFIG_HTTP_PER_SERVER_CONNECTION_MATCH
TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES
-TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DEAD_SERVER
-TS_LUA_CONFIG_HTTP_CONNECT_DEAD_POLICY
+TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DOWN_SERVER
+TS_LUA_CONFIG_HTTP_CONNECT_DOWN_POLICY
TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_RR_RETRIES
TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_TIMEOUT
TS_LUA_CONFIG_HTTP_DOWN_SERVER_CACHE_TIME
@@ -111,7 +111,7 @@ TSOverridableConfigKey Value Config
:c:enumerator:`TS_CONFIG_HTTP_CACHE_WHEN_TO_REVALIDATE` :ts:cv:`proxy.config.http.cache.when_to_revalidate`
:c:enumerator:`TS_CONFIG_HTTP_CHUNKING_ENABLED` :ts:cv:`proxy.config.http.chunking_enabled`
:c:enumerator:`TS_CONFIG_HTTP_CHUNKING_SIZE` :ts:cv:`proxy.config.http.chunking.size`
-:c:enumerator:`TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DEAD_SERVER` :ts:cv:`proxy.config.http.connect_attempts_max_retries_dead_server`
+:c:enumerator:`TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DOWN_SERVER` :ts:cv:`proxy.config.http.connect_attempts_max_retries_down_server`
:c:enumerator:`TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES` :ts:cv:`proxy.config.http.connect_attempts_max_retries`
:c:enumerator:`TS_CONFIG_HTTP_CONNECT_ATTEMPTS_RR_RETRIES` :ts:cv:`proxy.config.http.connect_attempts_rr_retries`
:c:enumerator:`TS_CONFIG_HTTP_CONNECT_ATTEMPTS_TIMEOUT` :ts:cv:`proxy.config.http.connect_attempts_timeout`
@@ -76,7 +76,7 @@ Enumeration Members
.. c:enumerator:: TS_CONFIG_HTTP_TRANSACTION_ACTIVE_TIMEOUT_OUT
.. c:enumerator:: TS_CONFIG_HTTP_ORIGIN_MAX_CONNECTIONS
.. c:enumerator:: TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES
-.. c:enumerator:: TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DEAD_SERVER
+.. c:enumerator:: TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DOWN_SERVER
.. c:enumerator:: TS_CONFIG_HTTP_CONNECT_ATTEMPTS_RR_RETRIES
.. c:enumerator:: TS_CONFIG_HTTP_CONNECT_ATTEMPTS_TIMEOUT
.. c:enumerator:: TS_CONFIG_HTTP_POST_CONNECT_ATTEMPTS_TIMEOUT
18 changes: 9 additions & 9 deletions doc/developer-guide/core-architecture/hostdb.en.rst
@@ -47,13 +47,13 @@ about the protocol to use to the upstream.

The last failure time tracks when the last connection failure to the info occurred and doubles as
a flag, where a value of ``TS_TIME_ZERO`` indicates a live target and any other value indicates a
-dead info.
+down info.

-If an info is marked dead (has a non-zero last failure time) there is a "fail window" during which
+If an info is marked down (has a non-zero last failure time) there is a "fail window" during which
no connections are permitted. After this time the info is considered to be a "zombie". If all infos
-for a record are dead then a specific error message is generated (body factory tag
-"connect#all_dead"). Otherwise if the selected info is a zombie, a request is permitted but the
-zombie is immediately marked dead again, preventing any additional requests until either the fail
+for a record are down then a specific error message is generated (body factory tag
+"connect#all_down"). Otherwise if the selected info is a zombie, a request is permitted but the
+zombie is immediately marked down again, preventing any additional requests until either the fail
window has passed or the single connection succeeds. A successful connection clears the last fail
time and the info becomes alive.
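The lifecycle described above (alive, down during the fail window, zombie afterwards) can be sketched with ``std::chrono`` types. This mirrors the prose, not the actual implementation; the type and member names are illustrative:

```cpp
#include <cassert>
#include <chrono>

using ts_time    = std::chrono::system_clock::time_point;
using ts_seconds = std::chrono::seconds;

constexpr ts_time TS_TIME_ZERO{}; // a zero last-fail time marks a live target

struct TargetInfo {
  ts_time last_fail{TS_TIME_ZERO};

  bool is_alive() const { return last_fail == TS_TIME_ZERO; }

  // Down: failed and still inside the fail window - no connections permitted.
  bool is_down(ts_time now, ts_seconds fail_window) const {
    return last_fail != TS_TIME_ZERO && now < last_fail + fail_window;
  }

  // Zombie: failed and the fail window has passed - one probe is permitted.
  bool is_zombie(ts_time now, ts_seconds fail_window) const {
    return last_fail != TS_TIME_ZERO && now >= last_fail + fail_window;
  }

  // Selecting a zombie re-marks it down, blocking other transactions until
  // the single probe connection succeeds or its fail window passes.
  void select_zombie(ts_time now) { last_fail = now; }

  // A successful connection clears the last fail time; the target is alive.
  void mark_up() { last_fail = TS_TIME_ZERO; }
  void mark_down(ts_time now) { last_fail = now; }
};
```

A target thus cycles down → zombie → down on each failed probe, and returns to alive only on a successful connection.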

@@ -135,10 +135,10 @@ Issues
======

Currently, if an upstream is marked down, connections are still permitted; the only change is the
-number of retries. This has caused operational problems where dead systems are flooded with requests
+number of retries. This has caused operational problems where down systems are flooded with requests
which, despite the timeouts, accumulate in ATS until ATS runs out of memory (there were instances of
over 800K pending transactions). This also made it hard to bring the upstreams back online. With
-these changes requests to dead upstreams are strongly rate limited and other transactions are
+these changes, requests to upstreams marked down are strongly rate limited and other transactions are
immediately terminated with a 502 response, protecting both the upstream and ATS.

Future
@@ -176,14 +176,14 @@ This version has several major architectural changes from the previous version.
* Single and multiple address results are treated identically - a singleton is simply a multiple
of size 1. This yields a major simplification of the implementation.

-* Connections are throttled to dead upstreams, allowing only a single connection attempt per fail
+* Connections are throttled to upstreams marked down, allowing only a single connection attempt per fail
window timing until a connection succeeds.

* Timing information is stored in ``std::chrono`` data types instead of proprietary types.

* State information has been promoted to atomics and updates are immediate rather than scheduled.
This also means the data in the state machine is a reference to a shared object, not a local copy.
-  The promotion was necessary to coordinate zombie connections to dead upstreams across transactions.
+  The promotion was necessary to coordinate zombie connections to upstreams marked down across transactions.

* The "resolve key" is now a separate data object from the HTTP request. This is a subtle but
major change. The effect is requests can be routed to different upstreams without changing
10 changes: 5 additions & 5 deletions doc/locale/ja/LC_MESSAGES/admin-guide/files/records.config.en.po
@@ -2268,17 +2268,17 @@ msgid ""
"The maximum number of connection retries Traffic Server can make when the "
"origin server is not responding. Each retry attempt lasts for `proxy.config."
"http.connect_attempts_timeout`_ seconds. Once the maximum number of "
-"retries is reached, the origin is marked dead. After this, the setting "
-"`proxy.config.http.connect_attempts_max_retries_dead_server`_ is used to "
-"limit the number of retry attempts to the known dead origin."
+"retries is reached, the origin is marked down. After this, the setting "
+"`proxy.config.http.connect_attempts_max_retries_down_server`_ is used to "
+"limit the number of retry attempts to the known down origin."
msgstr ""

#: ../../../admin-guide/files/records.yaml.en.rst:1363
msgid ""
"Maximum number of connection retries Traffic Server can make while an "
-"origin is marked dead. Typically this value is smaller than `proxy.config."
+"origin is marked down. Typically this value is smaller than `proxy.config."
"http.connect_attempts_max_retries`_ so an error is returned to the client "
-"faster and also to reduce the load on the dead origin. The timeout interval "
+"faster and also to reduce the load on the down origin. The timeout interval "
"`proxy.config.http.connect_attempts_timeout`_ in seconds is used with this "
"setting."
msgstr ""
@@ -271,7 +271,7 @@ msgid ":ts:cv:`proxy.config.http.connect_attempts_max_retries`"
msgstr ""

#: ../../../developer-guide/api/functions/TSHttpOverridableConfig.en.rst:114
-msgid ":ts:cv:`proxy.config.http.connect_attempts_max_retries_dead_server`"
+msgid ":ts:cv:`proxy.config.http.connect_attempts_max_retries_down_server`"
msgstr ""

#: ../../../developer-guide/api/functions/TSHttpOverridableConfig.en.rst:115
16 changes: 16 additions & 0 deletions doc/release-notes/upgrading.en.rst
@@ -66,6 +66,22 @@ The following incompatible changes to the configurations have been made in this

The records.yaml entry proxy.config.http.down_server.abort_threshold has been removed.

The records.yaml entry proxy.config.http.connect_attempts_max_retries_dead_server has been renamed to proxy.config.http.connect_attempts_max_retries_down_server.

The records.yaml entry proxy.config.http.connect.dead.policy has been renamed to proxy.config.http.connect.down.policy.

Plugins
-------

Lua Plugin
~~~~~~~~~~
The following Http config constants have been renamed:

TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DEAD_SERVER has been renamed to TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DOWN_SERVER.

TS_LUA_CONFIG_HTTP_CONNECT_DEAD_POLICY has been renamed to TS_LUA_CONFIG_HTTP_CONNECT_DOWN_POLICY.

Metrics
-------

The HTTP connection metric proxy.process.http.dead_server.no_requests has been renamed to proxy.process.http.down_server.no_requests.
4 changes: 2 additions & 2 deletions include/ts/apidefs.h.in
@@ -804,7 +804,7 @@ typedef enum {
TS_CONFIG_HTTP_TRANSACTION_NO_ACTIVITY_TIMEOUT_OUT,
TS_CONFIG_HTTP_TRANSACTION_ACTIVE_TIMEOUT_OUT,
TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES,
-TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DEAD_SERVER,
+TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DOWN_SERVER,
TS_CONFIG_HTTP_CONNECT_ATTEMPTS_RR_RETRIES,
TS_CONFIG_HTTP_CONNECT_ATTEMPTS_TIMEOUT,
TS_CONFIG_HTTP_DOWN_SERVER_CACHE_TIME,
@@ -878,7 +878,7 @@ typedef enum {
TS_CONFIG_SSL_CLIENT_CA_CERT_FILENAME,
TS_CONFIG_SSL_CLIENT_ALPN_PROTOCOLS,
TS_CONFIG_HTTP_HOST_RESOLUTION_PREFERENCE,
-TS_CONFIG_HTTP_CONNECT_DEAD_POLICY,
+TS_CONFIG_HTTP_CONNECT_DOWN_POLICY,
TS_CONFIG_HTTP_MAX_PROXY_CYCLES,
TS_CONFIG_PLUGIN_VC_DEFAULT_BUFFER_INDEX,
TS_CONFIG_PLUGIN_VC_DEFAULT_BUFFER_WATER_MARK,
10 changes: 5 additions & 5 deletions iocore/hostdb/HostDB.cc
@@ -1550,7 +1550,7 @@ HostDBRecord::select_best_http(ts_time now, ts_seconds fail_window, sockaddr con
{
ink_assert(0 < rr_count && rr_count <= hostdb_round_robin_max_count);

-// @a best_any is set to a base candidate, which may be dead.
+// @a best_any is set to a base candidate, which may be down.
HostDBInfo *best_any = nullptr;
// @a best_alive is set when a valid target has been selected and should be used.
HostDBInfo *best_alive = nullptr;
@@ -1567,8 +1567,8 @@
// Check and update RR if it's time - this always yields a valid target if there is one.
if (now > ntime && rr_ctime.compare_exchange_strong(ctime, ntime)) {
best_alive = best_any = this->select_next_rr(now, fail_window);
-Dbg(dbg_ctl_hostdb, "Round robin timed interval expired - index %d", this->index_of(best_alive));
-} else { // pick the current index, which may be dead.
+Debug("hostdb", "Round robin timed interval expired - index %d", this->index_of(best_alive));
+} else { // pick the current index, which may be down.
best_any = &info[this->rr_idx()];
}
Dbg(dbg_ctl_hostdb, "Using timed round robin - index %d", this->index_of(best_any));
@@ -2201,8 +2201,8 @@ HostDBRecord::select_best_srv(char *target, InkRand *rand, ts_time now, ts_secon
// Array of live targets, sized by @a live_n
HostDBInfo *live[rr.count()];
for (auto &target : rr) {
-// skip dead upstreams.
-if (rr[i].is_dead(now, fail_window)) {
+// skip down targets.
+if (rr[i].is_down(now, fail_window)) {
continue;
}

20 changes: 10 additions & 10 deletions iocore/hostdb/I_HostDBProcessor.h
@@ -139,15 +139,15 @@ struct HostDBInfo {
bool is_alive();

/// Target has failed and is still in the blocked time window.
-bool is_dead(ts_time now, ts_seconds fail_window);
+bool is_down(ts_time now, ts_seconds fail_window);

/** Select this target.
*
* @param now Current time.
* @param fail_window Failure window.
* @return Status of the selection.
*
-* If a zombie is selected the failure time is updated to make it look dead to other threads in a thread safe
+* If a zombie is selected the failure time is updated to make it appear down to other threads in a thread safe
* manner. The caller should check @c last_fail_time to see if a zombie was selected.
*/
bool select(ts_time now, ts_seconds fail_window);
@@ -234,7 +234,7 @@
}

inline bool
-HostDBInfo::is_dead(ts_time now, ts_seconds fail_window)
+HostDBInfo::is_down(ts_time now, ts_seconds fail_window)
{
auto last_fail = this->last_fail_time();
return (last_fail != TS_TIME_ZERO) && (last_fail + fail_window < now);
@@ -360,10 +360,10 @@ class HostDBRecord : public RefCountObj
* attempt to connect to the selected target if possible.
*
* @param now Current time to use for aliveness calculations.
-* @param fail_window Blackout time for dead servers.
+* @param fail_window Blackout time for down servers.
* @return Status of the updated target.
*
-* If the return value is @c HostDBInfo::Status::DEAD this means all targets are dead and there is
+* If the return value is @c HostDBInfo::Status::DOWN this means all targets are down and there is
* no valid upstream.
*
* @note Concurrency - this is not done under lock and depends on the caller for correct use.
@@ -404,7 +404,7 @@ class HostDBRecord : public RefCountObj
/** Select an upstream target.
*
* @param now Current time.
-* @param fail_window Dead server blackout time.
+* @param fail_window Down server blackout time.
* @param hash_addr Inbound remote IP address.
* @return A selected target, or @c nullptr if there are no valid targets.
*
@@ -632,13 +632,13 @@ struct ResolveInfo {
*/
bool resolve_immediate();

-/** Mark the active target as dead.
+/** Mark the active target as down.
*
* @param now Time of failure.
-* @return @c true if the server was marked as dead, @c false if not.
+* @return @c true if the server was marked as down, @c false if not.
*
*/
-bool mark_active_server_dead(ts_time now);
+bool mark_active_server_down(ts_time now);

/** Mark the active target as alive.
*
@@ -844,7 +844,7 @@ ResolveInfo::mark_active_server_alive()
}

inline bool
-ResolveInfo::mark_active_server_dead(ts_time now)
+ResolveInfo::mark_active_server_down(ts_time now)
{
return active != nullptr && active->mark_down(now);
}
8 changes: 4 additions & 4 deletions plugins/lua/ts_lua_http_config.c
@@ -68,8 +68,8 @@ typedef enum {
TS_LUA_CONFIG_HTTP_PER_SERVER_CONNECTION_MAX = TS_CONFIG_HTTP_PER_SERVER_CONNECTION_MAX,
TS_LUA_CONFIG_HTTP_PER_SERVER_CONNECTION_MATCH = TS_CONFIG_HTTP_PER_SERVER_CONNECTION_MATCH,
TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES = TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES,
-TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DEAD_SERVER = TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DEAD_SERVER,
-TS_LUA_CONFIG_HTTP_CONNECT_DEAD_POLICY = TS_CONFIG_HTTP_CONNECT_DEAD_POLICY,
+TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DOWN_SERVER = TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DOWN_SERVER,
+TS_LUA_CONFIG_HTTP_CONNECT_DOWN_POLICY = TS_CONFIG_HTTP_CONNECT_DOWN_POLICY,
TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_RR_RETRIES = TS_CONFIG_HTTP_CONNECT_ATTEMPTS_RR_RETRIES,
TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_TIMEOUT = TS_CONFIG_HTTP_CONNECT_ATTEMPTS_TIMEOUT,
TS_LUA_CONFIG_HTTP_DOWN_SERVER_CACHE_TIME = TS_CONFIG_HTTP_DOWN_SERVER_CACHE_TIME,
@@ -203,8 +203,8 @@ ts_lua_var_item ts_lua_http_config_vars[] = {
TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_HTTP_TRANSACTION_NO_ACTIVITY_TIMEOUT_OUT),
TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_HTTP_TRANSACTION_ACTIVE_TIMEOUT_OUT),
TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES),
-TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DEAD_SERVER),
-TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_HTTP_CONNECT_DEAD_POLICY),
+TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DOWN_SERVER),
+TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_HTTP_CONNECT_DOWN_POLICY),
TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_RR_RETRIES),
TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_HTTP_CONNECT_ATTEMPTS_TIMEOUT),
TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_HTTP_DOWN_SERVER_CACHE_TIME),