
Add a cacheable no-op scalar normalizer #2665

Merged

Conversation

teohhanhui
Contributor

| Q | A |
| --- | --- |
| Bug fix? | yes |
| New feature? | no |
| BC breaks? | no |
| Deprecations? | no |
| Tests pass? | yes |
| Fixed tickets | #2593 (partially) |
| License | MIT |
| Doc PR | N/A |

/cc @bastnic

This avoids hitting the supportsNormalization() method of a non-cacheable normalizer for every scalar value.
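
For reference, here is a minimal sketch of what such a normalizer can look like, assuming the Symfony Serializer 4.x interfaces (the namespace and exact class layout are illustrative, not necessarily the code merged in this PR):

```php
<?php

namespace App\Serializer;

use Symfony\Component\Serializer\Normalizer\CacheableSupportsMethodInterface;
use Symfony\Component\Serializer\Normalizer\NormalizerInterface;

/**
 * Passes scalar values through untouched. Because it implements
 * CacheableSupportsMethodInterface, the Serializer caches the result of
 * supportsNormalization() per data type/format instead of re-probing
 * every normalizer in the chain for each scalar value.
 */
final class NoOpScalarNormalizer implements CacheableSupportsMethodInterface, NormalizerInterface
{
    public function normalize($object, $format = null, array $context = [])
    {
        return $object;
    }

    public function supportsNormalization($data, $format = null)
    {
        // is_scalar() depends only on the type of $data, so the result
        // is safe to cache per type/format.
        return is_scalar($data);
    }

    public function hasCacheableSupportsMethod(): bool
    {
        return true;
    }
}
```

Registered with a high priority, this lets scalars short-circuit the chain via the serializer's supports cache rather than hitting the supportsNormalization() method of every (possibly non-cacheable) normalizer on each value.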
@teohhanhui
Contributor Author

teohhanhui commented Mar 27, 2019

Screenshots of Blackfire profiles, for comparison:

[Blackfire profile without NoOpScalarNormalizer]

[Blackfire profile with NoOpScalarNormalizer]

This is with an empty cache in the prod environment, with 8 items of the dummy Greeting entity included in api-platform/api-platform, plus an added getNumbers generator method which yields 0-999 (lol)
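
As a rough idea of that benchmark setup, a hypothetical reconstruction of the helper added to the Greeting entity (the exact code is not shown in this thread):

```php
/**
 * Benchmark helper: exposes 1000 scalar values per Greeting item so the
 * normalizer chain is exercised heavily during serialization.
 */
public function getNumbers(): \Generator
{
    for ($i = 0; $i < 1000; ++$i) {
        yield $i;
    }
}
```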

@bastnic
Contributor

bastnic commented Mar 27, 2019

nice!

For fun, did you try just moving the is_scalar ... return check before the getNormalizer call in Serializer::normalize?

@teohhanhui
Contributor Author

> For fun, did you try just moving the is_scalar ... return check before the getNormalizer call in Serializer::normalize?

No, we cannot do that. :)
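
For context: the check in question lives in symfony/serializer itself, and it deliberately runs after getNormalizer() so that registered normalizers get a chance to handle scalar values first. Roughly, paraphrased from the Symfony 4.2-era Serializer (a simplified sketch, not an exact copy):

```php
// Symfony\Component\Serializer\Serializer (simplified)
public function normalize($data, $format = null, array $context = [])
{
    // A registered normalizer that supports $data always wins...
    if ($normalizer = $this->getNormalizer($data, $format, $context)) {
        return $normalizer->normalize($data, $format, $context);
    }

    // ...and only then do scalars fall through unchanged. Hoisting this
    // check above getNormalizer() would bypass any normalizer that wants
    // to transform scalar values.
    if (null === $data || is_scalar($data)) {
        return $data;
    }

    // ... arrays, Traversables, and error handling follow ...
}
```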

@bastnic
Contributor

bastnic commented Mar 27, 2019

of course... at least with the current serializer implementation.

@soyuka
Member

soyuka commented Mar 28, 2019

This is good! Thanks @teohhanhui @bastnic

@soyuka soyuka merged commit 7893a72 into api-platform:2.4 Mar 28, 2019
@teohhanhui teohhanhui deleted the fix/cacheable-noop-scalar-normalizer branch March 28, 2019 16:00