The get-transactions query is very slow: on some addresses it takes over 20s when also filtering by asset. The query is just very inefficient.
I tried rewriting it with explicit joins instead of the current syntax and got the execution itself fast, but the number of rows requested dominated the fetch time. Querying a single row ran in ~150ms, asking for 25 rows pushed it to about 3s, and 50 rows brought it close to 6s. A sketch of the join form I mean is below.
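Purely for illustration, this is the general shape of the join rewrite; the table and column names (lookup, transactions, entry_hash, etc.) are placeholders, not the actual schema:

```sql
-- Placeholder names for illustration only, not the real tables/columns.
-- lookup(address, entry_hash) maps an address to the batches it appears in;
-- transactions(entry_hash, tx_index, asset, amount, height, ...) holds the individual txs.
SELECT t.*
FROM lookup l
JOIN transactions t ON t.entry_hash = l.entry_hash
WHERE l.address = ?      -- e.g. the FA... address being queried
  AND t.asset = ?        -- e.g. PEG
ORDER BY t.height DESC
LIMIT 25 OFFSET 0;       -- raising LIMIT from 1 to 25 to 50 is what drove the time up
```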
I don't have much information on what exactly is making things so slow, so some investigation is needed. I have noticed that queries get slower as more tables are involved. We currently keep batches and transactions in separate tables, but it might be better to merge them into one table, along the lines of the sketch after this paragraph. The duplication of data wouldn't amount to much, since most txs are a batch of 1, and if it improves our query time the duplicated data is worth the cost.
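Something like this is what I have in mind; again, the column names are illustrative and not the real schema:

```sql
-- Illustrative sketch: fold the batch-level columns into each transaction row
-- so the history query only ever touches one table. Not the actual schema.
CREATE TABLE IF NOT EXISTS history_txs (
    entry_hash   BLOB    NOT NULL, -- the batch's entry hash, repeated for every tx in the batch
    height       INTEGER NOT NULL, -- batch-level fields duplicated onto each tx row
    timestamp    INTEGER NOT NULL,
    executed     INTEGER NOT NULL,
    tx_index     INTEGER NOT NULL, -- position of this tx within its batch
    from_address TEXT    NOT NULL,
    asset        TEXT    NOT NULL,
    amount       INTEGER NOT NULL,
    PRIMARY KEY (entry_hash, tx_index)
);
```

Since most batches hold a single tx, the duplicated batch columns cost almost nothing, and the address/asset filter for get-transactions could then be answered from one table.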
This is the longest query I could find:

```sh
curl --data-binary "{\"jsonrpc\": \"2.0\", \"id\": 0, \"method\": \"get-transactions\", \"params\": {\"address\": \"FA35VhUNxoyK5jH9N2H5MjX3TqCNmp1LwzMNCi3air9RnBLRxZ7i\", \"asset\":\"PEG\", \"desc\": true}}" -H "content-type:text/plain;" http://localhost:8070/v2
```

Time taken: 72.5855ms (subsequent executions of the same query took 15ms)