
Backport caching of the constant lookup #2079

Merged

Conversation

dylanahsmith

Purpose

Backport #833 to the 0-9-stable branch

Currently, each time a model is serialized, it tries to look up the serializer class using safe_constantize. This can be slow when serializing a large number of models.

Changes

This adds a cache that returns the serializer class for each model class, which reduces the time spent looking up serializer classes.

The difference is that the cache is around ActiveModel::Serializer::Utils._const_get and uses Concurrent::Map (the successor to ThreadSafe::Cache) from concurrent-ruby.
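The approach can be sketched roughly as follows. This is an illustrative stand-in, not the PR's actual code: the PR wraps ActiveModel::Serializer::Utils._const_get with Concurrent::Map#fetch_or_store from concurrent-ruby, while this sketch uses a Mutex-guarded Hash so it stays stdlib-only; the module and method names are invented.

```ruby
require 'thread'

# Illustrative sketch of the cached constant lookup. The real PR uses
# Concurrent::Map from concurrent-ruby; a Mutex-guarded Hash stands in here.
module SerializerLookup
  CACHE = {}
  LOCK  = Mutex.new

  # Resolve a constant by name once; serve repeat lookups from the cache.
  def self.cached_const_get(name)
    LOCK.synchronize do
      CACHE[name] ||= name.split('::').inject(Object) { |mod, part| mod.const_get(part) }
    end
  end
end
```

Repeated calls for the same name then return the same cached constant instead of re-resolving it, which is where the serialization speedup comes from.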

@mention-bot

@dylanahsmith, thanks for your PR! By analyzing the history of the files in this pull request, we identified @spastorino, @konukhov and @polmiro to be potential reviewers.

@@ -4,6 +4,7 @@
require 'active_model/serializer/config'

require 'thread'
require 'concurrent/map'


concurrent-ruby is not a dependency of this project on this branch, so we need to add it as a dependency.

Object.send method, const
rescue NameError
const.safe_constantize
Serializer.serializers_cache.fetch_or_store(const) do


Can we cache at the call sites of _const_get? It doesn't seem right for Utils to know about serializers_cache.

dylanahsmith (Author)


There are two places where _const_get is called, namely ActiveModel::Serializable#namespace and ActiveModel::Serializer.serializer_for. That is why I decided to move the caching into _const_get instead of doing it in ActiveModel::Serializer.serializer_for. Should I duplicate the code between both call sites, or only worry about the latter? I am not sure whether the former is actually a performance problem.


Yeah. I'd duplicate the cache call to avoid coupling between the utils module and the Serializer constant.
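The shape being suggested here might look like the following sketch. The names are hypothetical simplifications (the real 0-9-stable methods and signatures differ): Utils._const_get stays cache-free, and the cache is applied at the serializer_for call site instead.

```ruby
require 'thread'

module Utils
  # Plain constant lookup; knows nothing about serializer caching.
  def self._const_get(name)
    name.split('::').inject(Object) { |mod, part| mod.const_get(part) }
  end
end

class Serializer
  @serializers_cache = {}
  @lock = Mutex.new

  # The caching lives here, at the call site, so Utils stays decoupled.
  def self.serializer_for(name)
    @lock.synchronize { @serializers_cache[name] ||= Utils._const_get(name) }
  end
end
```

The trade-off discussed above: caching inside _const_get covers both call sites with one cache, while caching at each call site (as sketched here) keeps Utils generic at the cost of some duplication.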

dylanahsmith (Author)


ok, done

@yosiat

yosiat commented Mar 17, 2017

Hi!

I did something similar, but with two changes:

I can submit a PR for those changes if you would like.

bf4 referenced this pull request in yosiat/active_model_serializers Mar 17, 2017
Adding serializers cache on two levels:
* On serializer_for, so we don't do many const_get calls for known types over and over again.
* On the array serializer level, so we don't ask Serializer.serializer_for in a loop.

Some notes:
* Allocations dropped, because we don't build strings for types or allocate new constants.
* For our cache keys I am using an array and hashing it instead of using the ActiveSupport::Cache key feature, because it's much faster and allocates only one object (benchmarked).

Results:
```
Single: DefaultSerializer 28,950.242747880035 ip/s  76 objects
ArraySerializer 14,841.207166293754 ip/s  132 objects
ArraySerializer: each_serializer: DefaultSerializer 17,915.510566589117 ip/s  123 objects
FlatAuthorSerializer 27,564.927875021513 ip/s  85 objects
ArraySerializer: each_serializer: FlatAuthorSerializer 19,636.05147274016 ip/s  115 objects
AuthorWithDefaultRelationshipsSerializer 396.4305350908634 ip/s  5847 objects
ArraySerializer: each_serializer: AuthorWithDefaultRelationshipsSerializer 404.78139009879015 ip/s  5877 objects
```
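The second, array-serializer-level cache described in that commit message can be sketched like this. The class and method names are invented for illustration, not yosiat's actual commit: the idea is just to memoize the per-class lookup inside the loop so the serializer lookup runs at most once per distinct class in the collection.

```ruby
# Hypothetical sketch of a loop-level serializer cache (invented names).
class ArraySerializer
  def initialize(objects, lookup)
    @objects = objects
    @lookup  = lookup   # e.g. a wrapper around Serializer.serializer_for
  end

  # Map each object to its serializer class, consulting the lookup only
  # once per distinct object class rather than once per object.
  def serializer_classes
    per_class = {}      # loop-level cache, keyed by object class
    @objects.map { |obj| per_class[obj.class] ||= @lookup.call(obj.class) }
  end
end
```

For a collection of N objects spanning K classes, the underlying lookup is hit K times instead of N, which is what drives the ArraySerializer improvements in the benchmark above.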