Question: how do you flush relay store? #6
Very good question! Unfortunately, Relay 0.5 containers support only the global store, so the data is shared among requests. But if we are lucky, that might change in the next Relay release with the introduction of a local context.
Denis, thank you for the prompt reply! What are the current problems standing in the way of production use?
Are there any other problems I didn't note here? Looking at the above problems, I have an idea. We could use Relay methods to extract and compile queries from the router and then execute the GraphQL query ourselves on the server. Once we have our data, we need to populate the routes. What do you think?
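Not from the original thread, but a rough sketch of that idea: it assumes a hypothetical extractQueries(routes) helper that pulls printed query strings out of the Relay routes (the Relay internals for doing that are not shown) and uses graphql-js to execute them against the server-side schema.

```js
// Sketch only: extractQueries() is a hypothetical helper that walks the Relay
// routes and returns a printed GraphQL query string per route name.
const { graphql } = require('graphql');
const schema = require('./schema');                  // assumed server-side schema
const extractQueries = require('./extractQueries');  // hypothetical helper

function prepareRouteData(routes) {
  const queries = extractQueries(routes); // e.g. { viewer: '{ viewer { name } }' }
  const names = Object.keys(queries);
  return Promise.all(
    // graphql(schema, source) executes a query and resolves with { data, errors }
    names.map(name => graphql(schema, queries[name]))
  ).then(results => {
    // Collect the results into an object keyed by route name, ready to be
    // handed back to the routes/containers on the server.
    const data = {};
    names.forEach((name, i) => { data[name] = results[i].data; });
    return data;
  });
}

module.exports = prepareRouteData;
```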
The first two problems are really important and have to be resolved. Unfortunately, that cannot be done cleanly until the local context and the local network layer are supported by Relay. Another way is to add more dirty hacks. I don't think that the third issue poses a real problem though, and probably it is already solved by relay-local-schema.
I guess the first two make it clear that Relay wasn't really designed with isomorphism in mind, despite what they say. And it means that an isomorphic solution is the wrong objective: we don't have to change anything on the client side, because it already works. What we really have to do is create a server module which would extract queries out of the Relay routes, flatten them, execute them within a request context and then populate the routes with data.
Sorry, I don't follow that logic.
What do you mean?
I'm only getting up to speed, so I may miss some details. I wonder why we use Relay to query on the server when we are aware of the blocking problems it has. I'll try to play with Relay and the Relay router today to see what else we can do...
What blocking problems?
1 and 2 prevent you from using Relay on the server in production.
1 does not necessarily cause memory overflows. Data access collisions are also not a problem, because the Relay store is capable of handling concurrent data access by multiple Relay containers. The only serious problem with the global Relay store that I see is that it is not easy to have different data views of the same business object for different users. But it can be solved by including the user ID in each Relay node ID, so different users will access different Relay nodes (holding data specific to the user) for the same business object. Also, many applications do not require user-specific data views at all.
2 is also just an annoying inconvenience, not a mission-critical problem. It is always possible to send the user ID as a query parameter. And again, many applications do not require session-specific data at all.
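A minimal sketch of the user-scoped node ID idea mentioned above (my illustration, not code from the thread), assuming the toGlobalId/fromGlobalId helpers from graphql-relay:

```js
// Sketch: encode the user ID into each node ID so that different users
// resolve to different Relay nodes for the same business object.
const { toGlobalId, fromGlobalId } = require('graphql-relay');

function userScopedId(type, userId, objectId) {
  // Local id looks like "<userId>:<objectId>", then gets base64-wrapped as usual.
  return toGlobalId(type, `${userId}:${objectId}`);
}

function parseUserScopedId(globalId) {
  const { type, id } = fromGlobalId(globalId);
  const [userId, objectId] = id.split(':');
  return { type, userId, objectId };
}

// Usage: userScopedId('Todo', 'user42', '7') and parseUserScopedId(...) round-trip,
// so the node resolver can load the object in the context of that user.
```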
1. Imagine you have a 2 GB database and a 1 GB memory limit in Node: how would you flush old objects from the store? And how do you know you have to update the store if objects in the database were updated?
2. The session ID is not the only parameter in the request context, and you just can't handle it on the client; it's insecure.
Restart the Node script after a certain number of HTTP requests?
Currently, isomorphic-relay updates the data on each request (not all data of course, only the requested data), so the data cannot be stale.
Can you give more details, e.g. a concrete example? I don't follow yet.
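A tiny sketch of the restart idea (my illustration), assuming an Express app running under a process manager such as pm2 or forever that restarts the script when it exits:

```js
// Sketch: exit the process after N requests so the supervising process manager
// (assumed) restarts it with a fresh, empty store.
const express = require('express');

const MAX_REQUESTS = 10000; // assumption: tune to your memory budget
let served = 0;

const app = express();

app.use((req, res, next) => {
  served += 1;
  if (served >= MAX_REQUESTS) {
    // Let the current response finish, then exit; the supervisor restarts us.
    res.on('finish', () => process.exit(0));
  }
  next();
});

app.get('/', (req, res) => res.send('ok'));
app.listen(3000);
```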
No, sorry
What about concurrency?
Normally node-passport is used to handle sessions, and it has a lot of different policies. If you ignore that, I'll have to create my own authentication layer in the model. It's weird.
I also would prefer not to do so. Of course, that has to be fixed.
Writing data to the store and reading data from the store are both synchronous operations, and the data in the store is not request specific. So, there shouldn't be any concurrency problems.
Why in the model and not in the GraphQL resolvers, as usual?
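For illustration only (not from the thread): one common way to keep authentication out of the model is to let Express middleware (e.g. passport) populate req.user and pass it into GraphQL, so resolvers can apply per-user logic. A sketch assuming express-graphql and an already-defined schema:

```js
// Sketch: pass the authenticated user from the Express request into GraphQL
// via rootValue, so resolvers (not the model layer) see who is asking.
// Assumes passport session middleware has already populated req.user.
const express = require('express');
const graphqlHTTP = require('express-graphql');
const schema = require('./schema'); // assumed: top-level resolvers read root.user

const app = express();
// ... passport initialization and session middleware assumed here ...

app.use('/graphql', graphqlHTTP(req => ({
  schema,
  // rootValue is handed to top-level resolvers as their first argument,
  // so a resolver can do: resolve(root, args) { return db.todosFor(root.user); }
  rootValue: { user: req.user },
})));

app.listen(3000);
```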
You said that writes to the store and reads from the store are synchronous operations. But imagine a read from request 1 and a write from request 2 happening at the same time.
As I said, reads and writes to the store are synchronous operations. And as you probably know, JavaScript has a single-threaded execution model. So, there can't be a read and a write at the same moment in time. The next operation is possible only when the previous one has finished.
I know about the single-threaded execution model, but I don't know about the Relay store implementation. Are you really sure they use a blocking model to perform store operations? Or maybe they use events, which do not guarantee the order of operations.
Yes, all Relay store data is stored in RAM. So, there is no need for asynchronicity.
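To make the point concrete (my illustration, not Relay's actual implementation): a purely in-memory, synchronous record store never yields to the event loop between operations, so two requests can never observe a half-finished write.

```js
// Sketch: synchronous in-memory store; each get()/set() runs to completion
// before the next queued callback starts, so no locking is needed in Node.
class RecordStore {
  constructor() {
    this.records = new Map();
  }
  get(id) {
    return this.records.get(id);
  }
  set(id, record) {
    this.records.set(id, record);
  }
}

const store = new RecordStore();
store.set('todo:1', { text: 'write docs', done: false });
console.log(store.get('todo:1')); // { text: 'write docs', done: false }
```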
How is the work on 1 and 2 going? I see from Relay issue #683 that work is progressing on isolating the global data. Is there anything I can help with?
@DylanSale The fix for 1 is waiting on facebook/relay#698. And the fix for 2 was submitted as facebook/relay#704. I can't see yet how you could help here. It all depends on the promptness of Facebook. I wish they were quicker in reviewing PRs...
Since both of these have been merged into Relay master, does it mean that the next release of Relay will make this possible?
@abhishiv The second one is not merged yet. But the first and most important one is merged, so with the next release of Relay it should be possible to use a request-local Relay store.
Since version 0.5 of isomorphic-relay, the Relay store is not global but local to each request. So I'm closing this, as the main problem is solved. And I'll probably open a separate issue on passing request-specific context (e.g. cookies) to the GraphQL server.
Thumbs up!
Hello!
In the example we can see that the Relay instance lives in a persistent scope.
Does it mean that the query data will be cached in Node memory and shared between requests?
Or does this line
const storeData = RelayStoreData.getDefaultInstance();
create an empty instance for each request?
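For illustration only (my sketch, not code from isomorphic-relay): the distinction discussed in this thread, assuming `RelayStoreData.getDefaultInstance()` behaves as its name suggests and returns one shared, process-wide store, while constructing a new `RelayStoreData` per request (shown here as an assumption) would give each request its own empty store.

```js
// Sketch: shared singleton store vs. an assumed per-request store.
// RelayStoreData is a Relay-internal module; the exact require path may
// differ between Relay versions.
const RelayStoreData = require('react-relay/lib/RelayStoreData');

// Shared: every request reads from and writes to the same in-memory store.
const sharedStore = RelayStoreData.getDefaultInstance();

// Per-request (assumed constructor usage): each request starts empty,
// so no data leaks between users, and the store is garbage-collected
// once the response has been rendered.
function handleRequest(req, res) {
  const requestStore = new RelayStoreData();
  // ... resolve queries into requestStore, render, respond ...
}
```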