Hi,
I really appreciate this work on sharing domain-invariant structure knowledge, but I have some concerns.
Although the ablation experiments show that sharing the structure encoder does bring some benefit, most of the performance gains seem to come from the decoupling mechanism [1].
In addition, the comparison with the baselines is somewhat unfair, since the decoupling introduces many additional parameters compared to FedAvg.
[1] Graph Neural Networks with Learnable Structural and Positional Representations
Hello, I share your concerns.
I ran some preliminary experiments imitating part of Table 2 in [1], and reached the same conclusion you did: most of the performance gains seem to come from the decoupling mechanism.
If you have a chance, I would very much appreciate the opportunity to discuss this further.
[1] Graph Neural Networks with Learnable Structural and Positional Representations