fix(nft): add log_index to history table and use in PK #1926
Conversation
@anarkiVan can you please check if this PR fixes this error KomodoPlatform/komodo-wallet#1239 (comment) or not?
@shamardy the error is gone. I was able to see NFTs and transactions using the seed from the issue.
LGTM.
```diff
@@ -63,7 +64,8 @@ fn create_tx_history_table_sql(chain: &Chain) -> MmResult<String, SqlError> {
     collection_name TEXT,
     image_url TEXT,
     token_name TEXT,
-    details_json TEXT
+    details_json TEXT,
+    PRIMARY KEY (transaction_hash, log_index, chain)
```
note: I don't know the frequency of write operations this table will get, but this PK design will definitely slow down these operations a lot, especially when there are too many rows.
I can remove this primary key altogether; we already avoid writing duplicate transfers that have the same chain/transaction_hash/log_index by using the last block number when requesting new transfers that will be saved. It will be difficult, though, to detect problems/bugs like the one that this PR fixes if I did that.
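The "last block number" dedup flow described here could be sketched roughly as follows. This is a hypothetical, std-only illustration, not the actual mm2 implementation; the `Transfer` struct and `transfers_to_save` function are invented names for the sketch:

```rust
// Hypothetical sketch of the dedup-by-last-block idea: only transfers
// above the highest block already stored are written, so the same
// (transaction_hash, log_index) rows are never re-inserted in the
// normal flow.
#[derive(Debug, PartialEq)]
struct Transfer {
    block_number: u64,
    transaction_hash: String,
    log_index: u32,
}

fn transfers_to_save(fetched: Vec<Transfer>, last_saved_block: u64) -> Vec<Transfer> {
    fetched
        .into_iter()
        .filter(|t| t.block_number > last_saved_block)
        .collect()
}

fn main() {
    let fetched = vec![
        Transfer { block_number: 100, transaction_hash: "0xaa".into(), log_index: 0 },
        Transfer { block_number: 101, transaction_hash: "0xbb".into(), log_index: 1 },
    ];
    // Block 100 was already saved, so only the block-101 transfer is new.
    let new = transfers_to_save(fetched, 100);
    assert_eq!(new.len(), 1);
    assert_eq!(new[0].transaction_hash, "0xbb");
    println!("{} new transfer(s)", new.len());
}
```

As the thread notes, this only prevents duplicates in the normal flow; without a DB-level constraint, a bug or duplicated upstream data would be saved silently.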
we already avoid writing duplicate transfers that have the same chain/transaction_hash/log_index by using the last block number when requesting new transfers that will be saved.
If it's done in a thread-safe way, seems fine, yeah.
It will be difficult though to detect problems/bugs like the one that this PR fixes if I did that.
The error seems quite clear if I am not missing something.
I have no hard feelings about removing this PK, just wanted to leave a note on it. I leave it up to you :)
KomodoPlatform/komodo-wallet#1239 (comment) seems quite clear if I am not missing something.
It was a DB error due to the unique constraint on transaction hash. If Moralis duplicates data for some reason, or there are problems in our implementation that make us try to save duplicate data, it won't be caught without some kind of constraint. Adding a constraint on these values instead of a primary key will also slow down operations, afaik.

I have no hard feelings about removing this PK, just wanted to leave a note on it. I leave it up to you :)
Let's leave it as is then and see if there are performance problems in the future :)
Adding a constraint on these values instead of a primary key will also slow down operations afaik.
The idea is not to merge multiple values into a PK/constraint, because on each write every row in the table will need to check all these fields. The best way to deal with this would be creating another field (maybe `id`?) and inserting the values with a pre-calculated `id` (which can be a combination of the values used as the PK at the moment) from mm2.
Let's leave it as is then and see if there are performance problems in the future :)
Totally fine 👍
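The reviewer's pre-calculated id suggestion could look something like the sketch below. This is hypothetical; the schema change was not made in this PR, and `transfer_row_id` is an invented helper name:

```rust
// Hypothetical sketch of the reviewer's suggestion: pre-compute a single
// synthetic id in mm2 from the three values that currently form the
// composite primary key, so the table only needs one indexed column.
fn transfer_row_id(chain: &str, transaction_hash: &str, log_index: u32) -> String {
    // Plain concatenation keeps the id collision-free; a fixed-width hash
    // could be used instead if a shorter key is preferred.
    format!("{}_{}_{}", chain, transaction_hash, log_index)
}

fn main() {
    let id = transfer_row_id("ETH", "0xabc", 3);
    assert_eq!(id, "ETH_0xabc_3");
    // Same inputs always map to the same id, so a PRIMARY KEY (or UNIQUE
    // constraint) on this one column still rejects duplicate transfers.
    assert_eq!(id, transfer_row_id("ETH", "0xabc", 3));
    println!("{}", id);
}
```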
Is the column necessary?
It's not, but I guess it was added to make this
komodo-defi-framework/mm2src/coins/nft/storage/sql_storage.rs
Lines 201 to 209 in be413ca
```rust
fn nft_from_row(row: &Row<'_>) -> Result<Nft, SqlError> {
    let json_string: String = row.get(0)?;
    json::from_str(&json_string).map_err(|e| SqlError::FromSqlConversionFailure(0, Type::Text, Box::new(e)))
}

fn transfer_history_from_row(row: &Row<'_>) -> Result<NftTransferHistory, SqlError> {
    let json_string: String = row.get(0)?;
    json::from_str(&json_string).map_err(|e| SqlError::FromSqlConversionFailure(0, Type::Text, Box::new(e)))
}
```
I can change this to `transfer_history_from_row_and_chain` and insert the chain value into the JSON object before returning it, or just leave it as is. What do you think is better?
I think rather than keeping a column where all rows have the same value, inserting the value while deserializing is better.
Inserting the value after deserialization and removing it before serialization would complicate the code a lot. Additionally, adding the `chain` field after fetching the row(s) requires parsing the JSON string into a `serde_json::Value`, modifying it, and then deserializing it into the `Nft`/`NftTransferHistory` structs. This process likely has a greater impact on performance than simply fetching the chain value directly from the database and deserializing the entire row in one operation. Ideally, the `chain` field should not have been included in the `Nft`/`NftTransferHistory` structs from the beginning, but it cannot be removed now as it is part of the responses used by GUIs.
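For illustration, the "inject the chain after fetching" alternative being argued against could be sketched like this. It is a naive, std-only string splice standing in for the `serde_json::Value` round-trip a real implementation would use; `inject_chain` is a hypothetical helper, not mm2 code:

```rust
// Naive sketch of injecting a "chain" field into the stored JSON string
// after fetching the row, before deserialization. It assumes the JSON is
// a non-empty object. A real implementation would parse into
// serde_json::Value and modify it, which is exactly the extra
// parse/modify/deserialize pass argued above to be slower than just
// reading the chain column from the database.
fn inject_chain(details_json: &str, chain: &str) -> String {
    match details_json.find('{') {
        Some(pos) => {
            let (head, tail) = details_json.split_at(pos + 1);
            format!("{}\"chain\":\"{}\",{}", head, chain, tail)
        }
        None => details_json.to_string(),
    }
}

fn main() {
    let row = r#"{"token_name":"MyNft"}"#;
    let with_chain = inject_chain(row, "ETH");
    assert_eq!(with_chain, r#"{"chain":"ETH","token_name":"MyNft"}"#);
    println!("{}", with_chain);
}
```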
I can of course change these structs
komodo-defi-framework/mm2src/coins/nft/nft_structs.rs
Lines 218 to 227 in 717e376
```rust
#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Nft {
    #[serde(flatten)]
    pub(crate) common: NftCommon,
    pub(crate) chain: Chain,
    pub(crate) block_number_minted: Option<u64>,
    pub(crate) block_number: u64,
    pub(crate) contract_type: ContractType,
    pub(crate) uri_meta: UriMeta,
}
```
komodo-defi-framework/mm2src/coins/nft/nft_structs.rs
Lines 392 to 405 in 717e376
```rust
#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct NftTransferHistory {
    #[serde(flatten)]
    pub(crate) common: NftTransferCommon,
    pub(crate) chain: Chain,
    pub(crate) block_number: u64,
    pub(crate) block_timestamp: u64,
    pub(crate) contract_type: ContractType,
    pub(crate) token_uri: Option<String>,
    pub(crate) collection_name: Option<String>,
    pub(crate) image_url: Option<String>,
    pub(crate) token_name: Option<String>,
    pub(crate) status: TransferStatus,
}
```
to make all fields other than chain part of a new struct that is used for DB operations and use `#[serde(flatten)]` for this new struct, but the `Nft`/`NftTransferHistory` structs will get complicated too. I think I would rather keep the `chain` column :)
Okay I thought it would be simple. In that case, it can stay as it is.
For future PRs, we can split the changes into multiple commits (for this PR it would be one for the renaming and one for the actual fix); it would be very useful for the reviewers. It was quite confusing for me :)
Great work, only one comment.
Yeah, sorry about that. I thought name changes were gonna be minimal and limited to DB methods when I started renaming, but it went out of control and I couldn't separate it into a different commit. I will keep that in mind in the future and try to separate commits even if I think they will be small at the start.
LGTM 🚀
LGTM, I only have some questions
```diff
@@ -63,7 +64,8 @@ fn create_tx_history_table_sql(chain: &Chain) -> MmResult<String, SqlError> {
     collection_name TEXT,
     image_url TEXT,
     token_name TEXT,
-    details_json TEXT
+    details_json TEXT,
+    PRIMARY KEY (transaction_hash, log_index, chain)
```
Isn't there a different transfer history table for each chain? Why do we need the chain as part of the primary key?
komodo-defi-framework/mm2src/coins/nft/storage/db_test_helpers.rs Lines 382 to 410 in 717e376
@laruh It's not related to this PR, but when looking at it I saw that these two tests are doing the same thing. Did you mean to do something else with the
fixes KomodoPlatform/komodo-wallet#1239 (comment)
related comment KomodoPlatform/komodo-wallet#1239 (comment)
- Added `log_index` as part of the transfers history table primary key.
- The `nft_tx_history` table is now called `nft_transfer_history`, and `tx`/`txs` are renamed to `transfer`/`transfers` throughout the NFT code, since what's added to/retrieved from the DB is NFT transfers, not transactions (multiple NFT transfers can be in one transaction). By renaming the table, there is no need for DB migrations due to the addition of the `log_index` column. Although NFT is not used in production yet, if anybody used it, transfers will be re-fetched and saved to the new DB table when using the related API methods.