Hi guys,

We are looking at datastar, and the concepts and abstractions seem really nice. We have some questions about batches. Here is the scenario:

We have a REST API with users and threads resources. When we POST a user, we have to create a thread as well. The two tables that we currently have are:
- `users` with fields (`user_id`, `first_name`, `last_name`, `password`)
- `threads` with fields (`thread_id`, `subject`, `user_id`)
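We are going to define the schemas roughly like this (a sketch based on our reading of the README; the exact calls, in particular `lookupKeys`, are our assumption of how the lookup table is declared):

```js
var Datastar = require('datastar');
var datastar = new Datastar({ config: { /* cassandra connection options */ } });
var cql = datastar.schema.cql;

var User = datastar.define('user', {
  schema: datastar.schema.object({
    user_id: cql.uuid(),
    first_name: cql.text(),
    last_name: cql.text(),
    password: cql.text()
  }).partitionKey('user_id')
});

var Thread = datastar.define('thread', {
  schema: datastar.schema.object({
    thread_id: cql.uuid(),
    subject: cql.text(),
    user_id: cql.uuid()
  }).partitionKey('thread_id')
    // Our assumption: declaring user_id as a lookup key creates and
    // maintains the threads_by_user_id table under the hood.
    .lookupKeys('user_id')
});
```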
Since we've defined the threads schema above with `user_id` as a `lookupKey`, datastar will create the table `threads_by_user_id` under the hood, and when we act against the threads model/schema it will update the values in both tables using batches. Is this the case?
The question is: how can we insert/update in a batch across different models/schemas? For example, we would like to insert into the users table and the threads table (and its sub-tables) in a single batch, i.e. insert into the following tables:
- `users`
- `threads`
- `threads_by_user_id`
As we see here on the blog post, one of the next features will be batch updates on multiple models. Is it the same thing that we are asking for, and can we help somehow on that one?
Another thing that we'd like to ask about is the types. There are quite a lot of types defined, but we are using `smallint` as well. Is it enough to make the change in the joi-of-cql project, or do we need to adapt this project accordingly?
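Concretely, we would like to be able to write something like the following (`cql.smallint()` is hypothetical here; it is exactly the type that joi-of-cql does not expose today):

```js
// Hypothetical model illustrating the missing type.
var Counter = datastar.define('counter', {
  schema: datastar.schema.object({
    counter_id: cql.uuid(),
    count: cql.smallint() // does not exist yet in joi-of-cql
  }).partitionKey('counter_id')
});
```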
Many thanks and sorry for the long post!
@szavrakas apologies for the late and brief answer. The best example I have for how to batch multiple models is how I use it here. Check line 395 for usage of this method. The idea is that you build an array of functions to be executed by `async.waterfall`, which handles the batch-combining machinery (since the API is async). You then get the combined statements object passed to your last function, which you then execute. The final bits can be seen here.
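In rough pseudocode, the pattern described above looks something like this (a sketch only: the `statements` options on `create` are our assumption of how a model hands its statement collection to the next step; the `User`/`Thread` models are the ones from the question's schemas):

```js
var async = require('async');

function createUserWithThread(user, thread, callback) {
  async.waterfall([
    function (next) {
      // Assumption: this asks the model to return its statement
      // collection instead of executing it immediately.
      User.create({ entity: user, statements: true }, next);
    },
    function (statements, next) {
      // Assumption: passing the collection along lets Thread.create
      // append its own inserts (threads plus threads_by_user_id)
      // to the same collection.
      Thread.create({ entity: thread, statements: statements }, next);
    }
  ], function (err, statements) {
    if (err) return callback(err);
    // One round trip: the combined statements execute as a single batch.
    statements.execute(callback);
  });
}
```

The key design point is that nothing hits Cassandra until the final `statements.execute` call, so the user insert, the thread insert, and the lookup-table write all travel in one batch.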