I plan to use graph convolution to construct item embeddings in a recommendation system. When generating the dataset, I found that attaching all item features to the nodes makes the dataset excessively large, causing IO bottlenecks and low GPU utilization. Is there a way to generate a dataset containing only node IDs and fetch the node features dynamically during training?
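One common pattern that fits this request is to store only node IDs in the dataset and gather features from a single shared feature matrix at batch time. Below is a minimal sketch assuming plain PyTorch; the `NodeIDDataset` class and `feature_store` tensor are illustrative names, and in practice the feature matrix could be memory-mapped or replaced by an embedding table so it is never copied into each sample:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class NodeIDDataset(Dataset):
    """Dataset that stores only integer node IDs, not features."""
    def __init__(self, node_ids):
        self.node_ids = torch.as_tensor(list(node_ids), dtype=torch.long)

    def __len__(self):
        return len(self.node_ids)

    def __getitem__(self, idx):
        return self.node_ids[idx]

# One shared feature matrix for all items, kept in a single place
# (could be memory-mapped with torch.from_numpy(np.memmap(...)))
# instead of being duplicated into every stored sample.
num_items, feat_dim = 1000, 16
feature_store = torch.randn(num_items, feat_dim)

def collate(batch_ids):
    ids = torch.stack(batch_ids)
    # Complete the features dynamically: gather rows by node ID
    # only for the current batch.
    return ids, feature_store[ids]

loader = DataLoader(NodeIDDataset(range(num_items)),
                    batch_size=32, collate_fn=collate)
ids, feats = next(iter(loader))  # feats has shape (32, feat_dim)
```

With this layout the serialized dataset holds only integer IDs, and the per-batch feature gather happens in the dataloader, so IO volume is proportional to the batch rather than to the full feature width of every stored sample.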