Hey,
Would really appreciate your help here:
It seems like only 1 batch is used per timestamp.
label_t = dataset.get_label_time() # check when the first label starts
This line is used to find out at which timestamp the first labeled node appears.
After that, for each batch, we check whether the batch's last timestamp is above label_t here.
Up to this point everything is fine.
The problem is that right after this check we immediately change label_t to the current timestamp here.
In my opinion, this causes the training to consider only the first batch at every timestamp, rather than all the batches that are labeled.
Could you please provide clarification regarding this?
Thanks in advance,
Yaniv
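To make the concern concrete, here is a small self-contained toy version of the pattern described above (my own paraphrase with invented numbers; only dataset.get_label_time() appears in the actual code, and it is replaced by a constant here):

```python
# Toy simulation of the described loop, not the repository's code.
batches = [[8, 9, 11], [11, 11, 11], [11, 11, 13]]  # made-up edge timestamps per batch
label_t = 10  # stand-in for dataset.get_label_time(): time of the first labels

for i, batch in enumerate(batches):
    query_t = batch[-1]          # last timestamp in the current batch
    if query_t > label_t:
        print(f"batch {i}: label step runs with label_t={label_t}")
        label_t = query_t        # label_t jumps to the current timestamp
# Prints for batch 0 and batch 2 only: batch 1, although its edges also carry
# timestamp 11, never triggers a label step because label_t was already advanced.
```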
YanivDorGalron changed the title from "using only 1 batch per timestamp in the training examples" to "Using only 1 batch per timestamp in the training examples" on Jun 14, 2024.
I think the answer is as follows: in some datasets, the batch size is smaller than the total number of interactions in a single timestamp. In those cases, we want to evaluate only on the first batch of that timestamp; otherwise, the later batches in that timestamp would be affected by the edges of the first batch, and so on.
An alternative solution would be to process edges only after all interactions from a given timestamp have been evaluated.
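For illustration, a toy sketch of how that alternative could look (my own code, assuming interactions arrive sorted by time; none of the names come from the repository):

```python
# Group interactions by timestamp, evaluate all of them against the state
# frozen before that timestamp, and only then insert their edges.
from itertools import groupby

# made-up stream of (timestamp, edge) pairs; several edges share a timestamp
stream = [(9, "a"), (11, "b"), (11, "c"), (11, "d"), (13, "e")]

for t, group in groupby(stream, key=lambda x: x[0]):
    edges = [e for _, e in group]
    # 1) predict/evaluate every interaction at timestamp t using only
    #    information from strictly earlier timestamps
    for e in edges:
        print(f"t={t}: predict for edge {e} (state frozen before t)")
    # 2) only now insert the edges of timestamp t into the model state
    for e in edges:
        print(f"t={t}: insert edge {e}")
```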