About reproducing the model's reported results #6
+1 — I also can't reproduce the paper's results when running on zhihu.
My GPU: Tesla V100.
Hi author, I'd like to know roughly how many training epochs the yc and ks datasets need before the results come close to those in the paper.
Hi, in my experiments yc took around 100 epochs, and ks should take fewer than 50.
I trained for 600 epochs, but never achieved a case where HR@20 was greater than 2.00% or NDCG@20 was greater than 0.70%, haha. 🤡
Hi, how did you end up checking the ks results?
I ran experiments on the zhihu dataset provided with the code.
Using the command given in README.md:
python -u DreamRec.py --data zhihu --timesteps 500 --lr 0.01 --beta_sche linear --w 4 --optimizer adamw --diffuser_type mlp1 --random_seed 100
the results do not reach those reported in the paper: HR@20 < 2.00%, NDCG@20 < 0.70%.
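(For clarity on what these numbers measure: with a single held-out target item per user, HR@20 and NDCG@20 reduce to the per-user computation below, averaged over all test users. This is a minimal sketch of the standard definitions, not the repo's actual evaluation code.)

```python
import numpy as np

def hr_ndcg_at_k(ranked_items, target_item, k=20):
    """HR@K / NDCG@K for one user with a single ground-truth item.
    ranked_items: items sorted by predicted score, best first.
    Sketch of the standard definitions; the repo's evaluation
    code may differ in detail."""
    topk = list(ranked_items[:k])
    if target_item not in topk:
        return 0.0, 0.0                      # miss: both metrics are 0
    rank = topk.index(target_item)           # 0-based position in the top-k
    hr = 1.0                                 # hit
    ndcg = 1.0 / np.log2(rank + 2)           # single relevant item, so IDCG = 1
    return hr, ndcg
```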
After tuning the hyperparameters with Optuna, the best NDCG@20 result:

[screenshot of best NDCG@20 tuning result]

Second-best NDCG@20 result:

[screenshot of second-best NDCG@20 tuning result]

However, none of the HR@20 values reach the performance given in the paper (in the tuning experiments, HR@20 <= 0.020380).
The tuning ranges strictly followed those in the paper:
“We leverage AdamW as the optimizer. The embedding dimension of items is fixed as 64 across all models. The learning rate is tuned in the range of [0.01, 0.005, 0.001, 0.0005, 0.0001, 0.00005]. Despite that DreamRec does not require L2 regularization, we tune the weight of L2 regularization for all baselines in the range of [1e-3, 1e-4, 1e-5, 1e-6, 1e-7]. For all baselines, we conduct negative sampling from the uniform distribution at the ratio of 1: 1, which is not conducted in DreamRec. For our DreamRec, we fix the unconditional training probability pu as 0.1 suggested by [36]. We search the total diffusion step T in the range of [50, 100, 200, 500, 1000, 2000], and the personalized guidance strength w in the range of [0, 2, 4, 6, 8, 10].”
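For reference, my Optuna search over these quoted ranges looked roughly like the sketch below. This is a minimal reconstruction, not my exact script: `run_dreamrec` is a hypothetical wrapper around DreamRec.py, and the regex that extracts NDCG@20 is an assumption about the script's log format.

```python
import re
import subprocess

import optuna

def run_dreamrec(lr, timesteps, w):
    """Hypothetical wrapper: launch DreamRec.py with the given
    hyperparameters and parse the best NDCG@20 from its stdout.
    The regex below is an assumption about the log format."""
    cmd = [
        "python", "-u", "DreamRec.py", "--data", "zhihu",
        "--timesteps", str(timesteps), "--lr", str(lr),
        "--beta_sche", "linear", "--w", str(w),
        "--optimizer", "adamw", "--diffuser_type", "mlp1",
        "--random_seed", "100",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    scores = re.findall(r"ndcg@20[:\s]+([0-9.]+)", out, flags=re.IGNORECASE)
    return max((float(s) for s in scores), default=0.0)

def objective(trial):
    # Search space copied from the ranges quoted from the paper above.
    lr = trial.suggest_categorical("lr", [0.01, 0.005, 0.001, 0.0005, 0.0001, 0.00005])
    timesteps = trial.suggest_categorical("timesteps", [50, 100, 200, 500, 1000, 2000])
    w = trial.suggest_categorical("w", [0, 2, 4, 6, 8, 10])
    return run_dreamrec(lr, timesteps, w)

study = optuna.create_study(direction="maximize")  # maximize NDCG@20
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```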
How were the hyperparameters set to obtain the model results reported in the paper? Thanks.