some questions around the encoding #252
Thank you for your response. Here is the value of flattened: I have tried to encode the features of the movie example in the following link:
Hello, I am trying to use pytorch-frame with PyTorch Geometric, and I have a few questions.
1) Can I use a separate encoder for each column (column-wise), rather than one encoder per stype (stype-wise)?
2) I embed the text columns the way the examples show:
text_encoder = TextToEmbedding(model=args.model, pooling=args.pooling, device=device)
text_embedder_cfg = TextEmbedderConfig(text_embedder=text_encoder, batch_size=5)
But embedding the texts takes a long time:
Embedding texts in mini-batch: 95%|█████████▍| 1848/1949 [1:37:36<05:47, 3.44s/it]
Am I doing something wrong?
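For scale, a quick back-of-the-envelope sketch of the batch count implied by the progress bar above. The total of roughly 9745 texts is inferred from 1949 batches at batch_size=5, not stated in the original post; it shows how the number of mini-batches, and hence the runtime, shrinks with a larger batch_size:

```python
import math

# Inferred from the progress bar: 1949 batches at batch_size=5
# implies at most 1949 * 5 = 9745 texts to embed.
num_texts = 9745

def num_batches(n: int, batch_size: int) -> int:
    """Number of mini-batches needed to cover n texts."""
    return math.ceil(n / batch_size)

print(num_batches(num_texts, 5))    # 1949 batches, matching the progress bar
print(num_batches(num_texts, 256))  # 39 batches with a larger batch_size
```

If the GPU has memory to spare, raising `batch_size` in `TextEmbedderConfig` is usually the first thing to try, since each mini-batch carries fixed per-call overhead.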
3) How do I know which encoders can be used with each stype?
4) I want to build the graph before defining a neural model. In the examples, the encoding of each stype is assigned inside the model, but I would like to attach the encoded attribute values to the graph before any model is defined. Is that possible?
5) My nodes and edges have attributes of different types. Can I also use this library in situations where attributes do not need to be considered, i.e. one code path for both cases?
6) What does the following error mean?
finite_mask = np.isfinite(flattened)
TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
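For context, this TypeError typically appears when np.isfinite is applied to an object-dtype array, e.g. a DataFrame column that mixes strings with numbers. A minimal reproduction (the sample values below are made up for illustration, not taken from the actual dataset):

```python
import numpy as np

# A column mixing strings and numbers is stored with dtype=object;
# np.isfinite has no loop for object dtype and raises TypeError.
flattened = np.array(["Action|Comedy", 3.5, None], dtype=object)

try:
    np.isfinite(flattened)
except TypeError as exc:
    print("reproduced:", type(exc).__name__)  # reproduced: TypeError

# A purely numeric float array works fine:
print(np.isfinite(np.array([1.0, np.nan, np.inf])))  # [ True False False]
```

Checking the offending column's dtype (`df[col].dtype`) against the stype it was declared with is usually enough to locate the mismatch.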
I am new to this field and would appreciate your help.
Thanks.