
Added ShardablePhpRedis backend #5

Open · wants to merge 12 commits into base: 8.x-1.x
42 changes: 42 additions & 0 deletions TODOLIST.txt
@@ -0,0 +1,42 @@
TODOLIST
--------

If you want a really fast Drupal 8:

* Drupal\Core\KeyValueStore\DatabaseStorage
Member:

Also note that there is DatabaseStorageExpirable, which should be easier to support on most sites: you can't just switch backends without migrating your data, but for most (all) expirable things that migration shouldn't be necessary.

Contributor Author:

Yes, true; I will add this as a side note.

Drupal\Core\KeyValueStore\DatabaseStorageExpirable
Notes:
- Both are easy to implement.
- Must be able to separate it from the sharded pool since it needs to
  be reliable and consistent over time. The client/server pool
  implementation from 7.x-3.x must be ported too.
- The first brings the complexity of the data migration.

* Drupal\Core\Routing
Notes:
- Quite an easy one too.
- I'm not sure whether other components use it; if they do, it is no
  longer certain this one is easy.

* Drupal\Core\Config\DatabaseStorage
Note:
- Easy one.

* Drupal\Core\Path\AliasStorage
Note:
- Already done in the 7.x-2.x version, and if the schema didn't change much
  this is a rather easy one too.
- If the same schema as the 7.x version is used, there is no point in
  sharding it, and it should be stored alongside the router table replacement.

* Drupal\Core\Session\SessionHandler
Member:

This just got a lot easier with https://www.drupal.org/node/2228393, yes.

Note:
- Easy one.

The first two will get rid of almost 30 out of the 50 remaining SQL queries
on a simple homepage with no content displayed. The third one will get rid of
5 or so more.

If all of those are taken care of, fewer than 10 SQL queries will remain on
a standard profile home page. After that, real profiling needs to be done on
a site with content, blocks and views all over the place, on various pages.
2 changes: 1 addition & 1 deletion redis.services.yml
@@ -3,4 +3,4 @@ services:
class: Drupal\redis\Cache\CacheBackendFactory
arguments: ['@redis.factory', '@cache_tags.invalidator.checksum']
redis.factory:
-    class: Drupal\redis\ClientFactory
+    class: Drupal\redis\ClientFactory
Contributor:

Newline?

40 changes: 15 additions & 25 deletions src/Cache/CacheBase.php
@@ -21,7 +21,9 @@
*/
abstract class CacheBase implements CacheBackendInterface {

-  use RedisPrefixTrait;
+  use RedisPrefixTrait {
+    getKey as getParentKey;
+  }

/**
* Temporary cache items lifetime is infinite.
@@ -34,17 +36,6 @@ abstract class CacheBase implements CacheBackendInterface {
*/
const LIFETIME_PERM_DEFAULT = 31536000;

-  /**
-   * Computed keys are let's say arround 60 characters length due to
-   * key prefixing, which makes 1,000 keys DEL command to be something
-   * arround 50,000 bytes length: this is huge and may not pass into
-   * Redis, let's split this off.
-   * Some recommend to never get higher than 1,500 bytes within the same
-   * command which makes us forced to split this at a very low threshold:
-   * 20 seems a safe value here (1,280 average length).
-   */
-  const KEY_THRESHOLD = 20;
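The removed docblock above describes a command-size guardrail: deleting keys in small batches so that a single DEL command never grows past a safe payload size. A minimal sketch of that batching idea, written in Python for brevity (the helper name and sizes are illustrative, not the module's API):

```python
# Hypothetical sketch: split a large key list into batches so that no single
# DEL command grows too large (threshold of 20 keys, as in the removed
# KEY_THRESHOLD constant above).
def chunk_keys(keys, threshold=20):
    """Return the keys split into batches of at most `threshold` entries."""
    return [keys[i:i + threshold] for i in range(0, len(keys), threshold)]

# With ~60-byte prefixed keys, a 20-key batch averages around 1,280 bytes,
# below the ~1,500-byte per-command guideline cited in the docblock.
batches = chunk_keys(['prefix:bin:%d' % i for i in range(45)])
```

Each batch would then be handed to one DEL call instead of sending all keys at once.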

/**
* Latest delete all flush KEY name.
*/
@@ -132,18 +123,6 @@ public function invalidate($cid) {
$this->invalidateMultiple([$cid]);
}

-  /**
-   * Return the key for the given cache key.
-   */
-  public function getKey($cid = NULL) {
-    if (NULL === $cid) {
-      return $this->getPrefix() . ':' . $this->bin;
-    }
-    else {
-      return $this->getPrefix() . ':' . $this->bin . ':' . $cid;
-    }
-  }

/**
* Calculate the correct expiration time.
*
@@ -158,7 +137,7 @@ protected function getExpiration($expire) {
if ($expire == Cache::PERMANENT || $expire > $this->permTtl) {
return $this->permTtl;
}
-    return $expire - REQUEST_TIME;
+    return $expire - time();

Comment:

Drupal recommends using REQUEST_TIME instead of time().

}

/**
@@ -205,4 +184,15 @@ public function setPermTtl($ttl = NULL) {
}
}

+  /**
+   * {@inheritdoc}
+   */
+  public function getKey($parts) {
+    if (is_string($parts)) {
+      $parts = [$parts];
+    }
+    array_unshift($parts, $this->bin);
+    return $this->getParentKey($parts);
+  }

}
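The added getKey() builds hierarchical keys by prepending the bin name before delegating to the trait's prefixing. A rough stand-in for that behavior, sketched in Python (the function name and arguments are assumptions, not the module's PHP API):

```python
# Hypothetical sketch of hierarchical cache-key building: the bin name is
# prepended to the requested part(s), then everything is joined under a
# site-wide prefix with ':' separators.
def build_key(prefix, bin_name, parts):
    if isinstance(parts, str):
        parts = [parts]  # mirrors the is_string() normalization above
    return ':'.join([prefix, bin_name] + list(parts))

key = build_key('drupal', 'bootstrap', 'cid1')
```

Passing a list of parts yields deeper keys, e.g. `['tag', 'node:1']` becomes `drupal:bootstrap:tag:node:1`.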
5 changes: 3 additions & 2 deletions src/Cache/PhpRedis.php
@@ -104,7 +104,7 @@ public function set($cid, $data, $expire = Cache::PERMANENT, array $tags = array

// Build the cache item and save it as a hash array.
$entry = $this->createEntryHash($cid, $data, $expire, $tags);
-    $pipe = $this->client->multi(\REdis::PIPELINE);
+    $pipe = $this->client->multi(\Redis::PIPELINE);
$pipe->hMset($key, $entry);
$pipe->expire($key, $ttl);
$pipe->exec();
@@ -127,6 +127,7 @@ public function deleteAll() {
// was written in the same millisecond.
// @todo This is needed to make the tests pass, is this safe enough for real
// usage?
+    // @todo (pounard) Using the getNextIncrement() will make it safe.
usleep(1000);
$this->lastDeleteAll = round(microtime(TRUE), 3);
$this->client->set($this->getKey(static::LAST_DELETE_ALL_KEY), $this->lastDeleteAll);
@@ -240,7 +241,7 @@ protected function expandEntry(array $values, $allow_invalid) {
// Check expire time, allow to have a cache invalidated explicitly, don't
// check if already invalid.
if ($cache->valid) {
-      $cache->valid = $cache->expire == Cache::PERMANENT || $cache->expire >= REQUEST_TIME;
+      $cache->valid = $cache->expire == Cache::PERMANENT || $cache->expire >= time();
Contributor Author:

Due to my fixup in the generic cache unit tests in core, this usage of the REQUEST_TIME constant caused invalid results in the get() method.


// Check if invalidateTags() has been called with any of the item's tags.
if ($cache->valid && !$this->checksumProvider->isValid($cache->checksum, $cache->tags)) {
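The validity test above reduces to a small predicate; a sketch in Python (the constant is a stand-in for Drupal's Cache::PERMANENT, not the module's code):

```python
# Hypothetical sketch of the expiry check above: an entry stays valid while
# it is permanent or its absolute expire timestamp has not yet passed.
CACHE_PERMANENT = -1  # stand-in for Drupal's Cache::PERMANENT

def entry_valid(expire, now):
    return expire == CACHE_PERMANENT or expire >= now
```

The `>=` keeps an entry valid through the exact second it expires, matching the comparison in the diff above.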
79 changes: 42 additions & 37 deletions src/Cache/RedisCacheTagsChecksum.php
@@ -24,7 +24,7 @@ class RedisCacheTagsChecksum implements CacheTagsChecksumInterface, CacheTagsInv
*
* @var array
*/
-  protected $tagCache = array();
+  protected $tagCache = [];
Contributor Author:

Sorry for the noise here; I just found this more readable, even though I don't really mind writing it back as "array()".


/**
* A list of tags that have already been invalidated in this request.
@@ -33,7 +33,7 @@ class RedisCacheTagsChecksum implements CacheTagsChecksumInterface, CacheTagsInv
*
* @var array
*/
-  protected $invalidatedTags = array();
+  protected $invalidatedTags = [];

/**
* @var \Redis
@@ -51,22 +51,21 @@ function __construct(ClientFactory $factory) {
* {@inheritdoc}
*/
   public function invalidateTags(array $tags) {
-    $keys_to_increment = [];
     foreach ($tags as $tag) {
+      // Only invalidate tags once per request unless they are written again.
       if (isset($this->invalidatedTags[$tag])) {
-        // Only invalidate tags once per request unless they are written again.
         continue;
       }
+
+      $tagKey = $this->getKey(['tag', $tag]);
+      $current = $this->client->get($tagKey);
+
+      $current = $this->getNextIncrement($current);
+      $this->client->set($tagKey, $current);
+
+      // Rightly populate the tag cache with the new values.
       $this->invalidatedTags[$tag] = TRUE;
-      unset($this->tagCache[$tag]);
-      $keys_to_increment[] = $this->getTagKey($tag);
-    }
-    if ($keys_to_increment) {
-      $multi = $this->client->multi(\Redis::PIPELINE);
-      foreach ($keys_to_increment as $key) {
-        $multi->incr($key);
-      }
-      $multi->exec();
+      $this->tagCache[$tag] = $current;
     }
   }

@@ -88,7 +87,7 @@ public function getCurrentChecksum(array $tags) {
* {@inheritdoc}
*/
public function isValid($checksum, array $tags) {
-    return $checksum == $this->calculateChecksum($tags);
+    return $this->calculateChecksum($tags) <= $checksum;
}

/**
@@ -97,16 +96,35 @@ public function isValid($checksum, array $tags) {
   public function calculateChecksum(array $tags) {
     $checksum = 0;
 
-    $fetch = array_values(array_diff($tags, array_keys($this->tagCache)));
-    if ($fetch) {
-      $keys = array_map(array($this, 'getTagKey'), $fetch);
-      foreach ($this->client->mget($keys) as $index => $invalidations) {
-        $this->tagCache[$fetch[$index]] = $invalidations ?: 0;
+    foreach ($tags as $tag) {
+
+      if (isset($this->tagCache[$tag])) {
+        $current = $this->tagCache[$tag];
       }
+      else {
+        $tagKey = $this->getKey(['tag', $tag]);
+        $current = $this->client->get($tagKey);
+
+        if (!$current) {
+          // Tag has never been created yet, so ensure it has an entry in the
+          // Redis database. When dealing with a sharded environment, the tag
+          // checksum itself might have been dropped silently, in which case
+          // handing back a 0 value could cause invalidated cache entries to
+          // be considered valid again.
+          // Note that, in case a tag key was dropped by the holding Redis
+          // server, all items based upon the dropped tag will then become
+          // invalid, but that's the definitive price of trying to be
+          // consistent in all cases.
+          $current = $this->getNextIncrement();
Member:

This is an interesting implementation; pinging @LionsAd, who might be interested in this for a memcache implementation.

Contributor Author:

Yes, my implementation should be memcache-compatible. I heard they had some trouble with tags handling; giving this to them is on my todo list. Please ping them.

+          $this->client->set($tagKey, $current);
+        }
+
+        $this->tagCache[$tag] = $current;
+      }
 
-    foreach ($tags as $tag) {
-      $checksum += $this->tagCache[$tag];
+      if ($checksum < $current) {
+        $checksum = $current;
+      }
     }
 
     return $checksum;
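Taken together, invalidateTags(), calculateChecksum() and the relaxed `<=` comparison in isValid() implement a tag-generation scheme: every tag maps to a counter, a cache item records the highest counter seen at write time, and the item stays valid until one of its tags moves past that mark. A hedged, in-memory sketch of that scheme in Python (a dict stands in for Redis; all names are illustrative):

```python
# In-memory stand-in for the Redis-backed tag counters described above.
tag_counters = {}

def invalidate_tags(tags):
    # Bump the generation counter of each invalidated tag.
    for tag in tags:
        tag_counters[tag] = tag_counters.get(tag, 0) + 1

def calculate_checksum(tags):
    # Highest generation among the item's tags (0 if none was ever touched).
    return max((tag_counters.get(tag, 0) for tag in tags), default=0)

def is_valid(checksum, tags):
    # Valid while no tag has been invalidated past the recorded checksum.
    return calculate_checksum(tags) <= checksum

invalidate_tags(['node:1'])
written_checksum = calculate_checksum(['node:1'])  # recorded at write time
still_valid = is_valid(written_checksum, ['node:1'])
invalidate_tags(['node:1'])                        # later invalidation
now_valid = is_valid(written_checksum, ['node:1'])
```

is_valid flips from True to False once a tag counter moves past the recorded checksum, which is why the strict equality test had to become `<=` when checksums stopped being simple sums.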
@@ -116,21 +134,8 @@ public function calculateChecksum(array $tags) {
* {@inheritdoc}
*/
   public function reset() {
-    $this->tagCache = array();
-    $this->invalidatedTags = array();
-  }
-
-  /**
-   * Return the key for the given cache tag.
-   *
-   * @param string $tag
-   *   The cache tag.
-   *
-   * @return string
-   *   The prefixed cache tag.
-   */
-  protected function getTagKey($tag) {
-    return $this->getPrefix() . ':cachetags:' . $tag;
+    $this->tagCache = [];
+    $this->invalidatedTags = [];
}

}