Problem
In `getCollectionDataAndSendAsObject` we fetch the missing keys from SQLite storage using `Promise.all` and then also set them in the cache. This function is called for every `Onyx.connect` that has `waitForCollectionCallback: true` and for every `withOnyx` call.
Currently, the `get` method, if it doesn't have a key in `OnyxCache`, will fetch it from Storage, which in turn becomes slow when the number of keys is large.
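For illustration, the slow path can be sketched roughly as follows. This is a hypothetical mock, not Onyx's actual implementation: `getItem`, `getAllSlow`, and the `Map`-based storage/cache are stand-ins; the point is that every uncached key costs its own storage round-trip.

```javascript
// Mock backing store standing in for SQLite, and an in-memory cache.
const storage = new Map([['report_1', {id: 1}], ['report_2', {id: 2}]]);
const cache = new Map();

// Mock of a per-key Storage.getItem: one round-trip per key.
function getItem(key) {
    return Promise.resolve(storage.get(key));
}

// One promise per missing key — N missing keys mean N storage round-trips.
function getAllSlow(keys) {
    return Promise.all(keys.map((key) => {
        if (cache.has(key)) {
            return cache.get(key);
        }
        return getItem(key).then((value) => {
            cache.set(key, value);
            return value;
        });
    }));
}

getAllSlow(['report_1', 'report_2']).then((values) => {
    console.log(values.length); // 2
});
```

With SQLite each of those `getItem` calls is a separate query, which is where the startup cost comes from.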
Solution
We can improve this by leveraging `multiGet` from Storage for all the missing keys, and then doing a `cache.merge` to add the newly fetched keys to the cache:
```js
function getCollectionDataAndSendAsObject(matchingKeys, mapping) {
    // Keys that are not in the cache
    const missingKeys = [];
    // Promises for reads that are already in flight
    const pendingTasks = [];
    // Keys for the tasks that are pending
    const pendingKeys = [];
    // We combine the data from all the matching keys into a single object
    const data = {};

    /**
     * Iterate over all the matching keys and check whether the data is in the cache.
     * If it is, add it to the data object. If it is not, check whether there is a
     * pending task for the key. If there is, add its promise to the pendingTasks
     * array and the key to the pendingKeys array. If there is no pending task,
     * add the key to the missingKeys array.
     *
     * These missingKeys will later be used to multiGet the data from storage.
     */
    matchingKeys.forEach((key) => {
        const cacheValue = cache.getValue(key);
        if (cacheValue) {
            data[key] = cacheValue;
            return;
        }
        const pendingKey = `get:${key}`;
        if (cache.hasPendingTask(pendingKey)) {
            pendingTasks.push(cache.getTaskPromise(pendingKey));
            pendingKeys.push(key);
        } else {
            missingKeys.push(key);
        }
    });

    Promise.all(pendingTasks)
        // Wait for all the pending tasks to resolve, then add their values to the data object
        .then((values) => {
            values.forEach((value, index) => {
                data[pendingKeys[index]] = value;
            });
        })
        // Fetch all the missing keys in a single batched read instead of one get per key
        .then(() => {
            if (missingKeys.length === 0) {
                return Promise.resolve();
            }
            return Storage.multiGet(missingKeys);
        })
        // Add the data from the missing keys to the data object and also merge it into the cache
        .then((values) => {
            if (!values || values.length === 0) {
                return;
            }
            // The temp object lets us merge all the missing data into the cache in one call
            const temp = {};
            values.forEach(([key, value]) => {
                data[key] = value;
                temp[key] = value;
            });
            cache.merge(temp);
        })
        // Send the combined data to the subscriber
        .finally(() => {
            sendDataToConnection(mapping, data, undefined, true);
        });
}
```
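To make the data flow concrete, here is a self-contained sketch of the batched path with mock `Storage` and cache objects (all names here are hypothetical stand-ins). It assumes `multiGet` resolves to an array of `[key, value]` pairs, which is the shape the snippet above indexes into, and shows how one `cache.merge` call folds in every newly fetched key at once:

```javascript
// Mock storage: one batched query returning [key, value] pairs.
const backing = new Map([['report_3', {id: 3}], ['report_4', {id: 4}]]);
const Storage = {
    multiGet(keys) {
        return Promise.resolve(keys.map((key) => [key, backing.get(key)]));
    },
};

// Minimal cache with the getValue/merge surface used above.
const cacheStore = {};
const cache = {
    getValue: (key) => cacheStore[key],
    merge: (delta) => Object.assign(cacheStore, delta),
};

// Simplified version of the batched read: cached keys are served directly,
// all missing keys go to storage in a single multiGet.
function getCollectionData(matchingKeys) {
    const data = {};
    const missingKeys = [];
    matchingKeys.forEach((key) => {
        const cached = cache.getValue(key);
        if (cached) {
            data[key] = cached;
            return;
        }
        missingKeys.push(key);
    });
    if (missingKeys.length === 0) {
        return Promise.resolve(data);
    }
    return Storage.multiGet(missingKeys).then((pairs) => {
        const delta = {};
        pairs.forEach(([key, value]) => {
            data[key] = value;
            delta[key] = value;
        });
        cache.merge(delta); // one merge for all newly fetched keys
        return data;
    });
}

cacheStore.report_2 = {id: 2};
getCollectionData(['report_2', 'report_3', 'report_4']).then((data) => {
    console.log(Object.keys(data).length); // 3
});
```

After the call resolves, `report_3` and `report_4` live in the cache, so subsequent reads for them never touch storage again.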
This brings us a reduction of about 2 seconds in app startup time. The gain depends on the number of missing keys: the more keys are missing from the cache, the bigger the gain from this refactor. Here's the PR with the implementation details.
cc @hurali97