
Field Filters #1411

Closed · wants to merge 6 commits

Conversation

@rmosolgo (Owner) commented Apr 10, 2018

Inspired by hints about Shopify's structure in this comment.

Make a new way to modify field resolution. This is roughly equivalent to instrumentation and middleware, but a bit simpler.

  • Fields accept a filters: [...] option
  • Classes in the list will be initialized with new(field:)
  • Filters' resolve_field method will be called like an around_filter on the field; they must yield to continue the flow (see the sketch below)
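
For illustration, a minimal sketch of what a filter might look like under this design. Only the new(field:) initializer, the filters: [...] option, and the yielding around-filter contract come from the bullets above; the TimingFilter name, the (obj, args, ctx) signature, and the field-definition syntax are assumptions.

class TimingFilter
  def initialize(field:)
    @field = field
  end

  # Called around field resolution; `yield` continues to the next filter
  # (or the field's own resolver) and returns its result.
  def resolve_field(obj, args, ctx)
    started_at = Time.now
    result = yield
    (ctx[:timings] ||= {})[@field.name] = Time.now - started_at
    result
  end
end

# Assumed usage in a field definition (exact syntax depends on the field API in use):
field :posts, [PostType], null: false, filters: [TimingFilter]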

The goal is the same one described in the comment above: a pagination system with more fine-grained control over which fields get what.

Additionally, this could do a better job with client_mutation_id and, I think, rescuing from errors.

Any thoughts?

I won't ship this until a stable release of 1.8 hits RubyGems.

TODO

  • Maintain compatibility with the old connection API; don't remove that one test I removed
  • Add a simple opt-in for pagination, port the old connections to the new way
  • Implement a better error rescuer as a proof of concept
  • Abstract away SKIP and lazy? from user code; don't make people check them in filters' resolve_field methods

@rmosolgo mentioned this pull request Apr 10, 2018
after_lazy(obj.context.schema, nodes) do |value|
  if value.nil?
    nil
  elsif value.is_a?(GraphQL::Execution::Execute::Skip)
Review comment from a Contributor on the diff excerpt above:

It would be great if the filter base class could handle these cases somehow (and the lazy resolution case). In our filters we ended up with two methods a la before/after instead of a single yieldable resolve, so that the base class could do this logic in the middle. That said I do really prefer the yield pattern because it's easy to understand and more in line with how middleware works everywhere else...
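
For illustration, a rough sketch of that before/after shape (every name here is hypothetical, and Promise/lazy handling is left out; it only shows where a base class could centralize the nil/SKIP checks):

class BaseFilter
  def initialize(field:)
    @field = field
  end

  # The runtime calls this; subclasses override the two hooks below instead
  # of yielding themselves, so the nil/SKIP logic lives in one place.
  def resolve_field(obj, args, ctx)
    before_resolve(obj, args, ctx)
    value = yield
    if value.nil? || value.is_a?(GraphQL::Execution::Execute::Skip)
      value
    else
      after_resolve(value, args, ctx)
    end
  end

  def before_resolve(obj, args, ctx)
  end

  def after_resolve(value, args, ctx)
    value
  end
end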

Reply from @rmosolgo (Owner, Author):

I need this same thing in another branch, and I'm trying out Schema#after_lazy:

# Call the given block at the right time, either:
# - Right away, if `value` is not registered with `lazy_resolve`
# - After resolving `value`, if it's registered with `lazy_resolve` (eg, `Promise`)
# @api private
def after_lazy(value)
  if (lazy_method = lazy_method_name(value))
    GraphQL::Execution::Lazy.new do
      result = value.public_send(lazy_method)
      yield(result)
    end
  else
    yield(value)
  end
end

It's a bit odd to have it as a method on schema, but it's because each schema maintains a map of class => method pairs for resolution.
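
A hedged example of how a filter might call it; the resolve_field shape and the decorate helper are made up, the point is just that the block runs immediately for plain values and after resolution for registered lazy values:

def resolve_field(obj, args, ctx)
  maybe_lazy = yield
  # Runs the block right away for plain values, or after resolution for
  # values registered with `lazy_resolve` (eg, Promise).
  ctx.schema.after_lazy(maybe_lazy) do |value|
    next value if value.nil? || value.is_a?(GraphQL::Execution::Execute::Skip)
    decorate(value) # `decorate` is a stand-in for whatever the filter does
  end
end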

There's a crazy (🙈) easier-to-use solution which would involve adding .graphql_lazy?, .graphql_then(&block), and .graphql_sync to Object, and then letting people re-implement those methods on should-be-lazy objects (eg, Promise). So you could always use obj.graphql_then { |resolved_obj| ... }, and for most objects, it would just be yield_self.

I think I'll try it on another branch sometime.

(Funny, it's technically a feature where refinements could work, but I think we support some Rubies that never implemented refine.)

@rmosolgo (Owner, Author)

I'm coming back to this from time to time, but I'm having a hard time finding an implementation which:

  • Cleanly abstracts the pre-Promise and post-Promise steps of field resolution; AND
  • Doesn't add significant runtime overhead

For example, if the Promise-related API allocates new objects, and is applied by default, then it will add a lot of overhead to scalar fields, even when they don't return Promises.

I need something like this, but I don't have a good solution yet! 😖

@swalkinshaw (Collaborator)

@rmosolgo does your work on supporting lazy resolve in authorization unblock this at all? Or are the pre/post-Promise steps still an issue?

@rmosolgo (Owner, Author) commented Jul 9, 2018

Yes, it adds a weird API: context.schema.after_lazy(whatever_thing) do ... end, which either calls the block immediately or enqueues it to run after whatever_thing resolves.

It's such a bad API, but the thing is, you need schema to be involved because it has the registry of lazy_class => method_name for resolving those things.

If I'm going to go public with it, I'd like it to be better. I have an idea for a new way to do lazy resolution in GraphQL: Any object that wants to be waited for can implement:

  • .graphql_then(&block) to enqueue a block
  • .graphql_sync to resolve itself and return a value or raise an error

Then, you can (😲) open up Object and implement those two methods with

class Object 
  alias :graphql_then :yield_self 
  alias :graphql_sync :itself 
end 

So that all objects respond to those methods, but objects that want special behavior can provide it. Then, the GraphQL runtime could be updated to call .graphql_then on returned objects instead of looking up a schema-local method.
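
To make that concrete, a lazy object such as promise.rb's Promise might opt in roughly like this (a sketch only; the method bodies are assumptions):

class Promise
  # Enqueue a block to run with the resolved value.
  def graphql_then(&block)
    self.then(&block)
  end

  # Resolve now; return the value or raise the rejection error.
  def graphql_sync
    sync
  end
end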

I think this would also be a nice performance boost for responses with lots of values (15% of time was spent in that lookup in this benchmark: #861 (comment)). Instead of this hand-rolled lookup, we'd use Ruby's method lookup which I'm sure will be faster (and we won't have to worry about that threadsafe, inheritance-aware cache ... because that's what Ruby does! 😅 ).

But it would be a breaking change, since, for example, ::Promise would need some new methods added. So I'm not sure how to go about the migration.

Your thoughts?

@dylanahsmith (Contributor)

It sounds like the problem is that you want to do something like Promise#then on promises and Object#yield_self on regular objects without the overhead of allocating objects. The promise.rb gem already has Promise.sync and Promise.map_value for cases where you want to deal with objects that may or may not be promises. Promise.map_value only allocates an object if the passed in object is a Promise. So I don't think object allocation should be necessary to solve this problem, you could just use conditionals and methods not defined on the object itself.
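
A rough sketch of that conditional approach (the helper name is made up; per the above, Promise.map_value does essentially this):

require "promise"

# Run `block` against a value that may or may not be a Promise, without
# monkey patching Object. A new Promise is only allocated when the input
# actually is one.
def with_maybe_promise(value, &block)
  if value.is_a?(Promise)
    value.then(&block)  # block runs later; result is wrapped in a Promise
  else
    block.call(value)   # plain value: call the block immediately
  end
end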

This wouldn't be as performant as monkey patching, but it's worth considering as an alternative, since the performance improvement of monkey patching over conditionals might be negligible outside of micro benchmarks.

> It's such a bad API, but the thing is, you need schema to be involved because it has the registry of lazy_class => method_name for resolving those things.

So essentially you are trying to replace that registry with the graphql_then and graphql_sync monkey patches? If you still think monkey patching Object is the way forward, then I think the monkey patch of ::Promise should be done in the graphql-batch gem rather than the promise.rb gem, since graphql-batch is currently the one doing this mapping. ::Promise shouldn't have to know anything about graphql, so I don't plan on adding any monkey patches to that gem.

I agree that depending on these monkey patches internally in the graphql gem would be a breaking change, so its major version should be bumped when making this change. That major version bump will force the graphql-batch gem to be upgraded as well, which would avoid confusion around such an upgrade.

@rmosolgo (Owner, Author) commented Aug 9, 2018

Something struck me while I was waking up this morning, so I spent some time hacking on it again; maybe this will be fixed by #1758.
