A GNN-based Meta-Learning Method for Sparse Portfolio Optimization #58
Comments
Dear @lerrytang, I apologize for the late response; I must have missed the notice. Thanks for referring to @LlionJ's work, I will check it in detail. Meanwhile, I have made a few updates and modified the code in the previously shared repo to perform robust optimization (by introducing behavioral diversity) in a noisy reward environment (noisy due to Monte-Carlo optimization and action noise/corruption). P.S. This essentially demonstrates the optimizer's effectiveness at quality-diversity optimization in a noisy/stochastic environment. I also created another repo with examples of solving 100k-parameter problems (as well as a 500k-parameter MovieLens 1M matrix factorization). Please let me know if you have any questions. Have a great week. Sincerely, Kamer
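As an illustrative sketch only (not the repo's actual code), the noisy-reward setup described above can be imitated with Monte-Carlo averaging over a corrupted evaluation; the toy objective, noise scales, and rollout count below are all assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_reward(solution, action_noise=0.1, reward_noise=0.05):
    # Corrupt the actions (here, the solution vector itself) and perturb
    # the reward, imitating a stochastic evaluation environment.
    corrupted = solution + rng.normal(0.0, action_noise, size=solution.shape)
    base = -np.sum(corrupted ** 2)  # toy objective: maximize -||x||^2
    return base + rng.normal(0.0, reward_noise)

def monte_carlo_fitness(solution, n_rollouts=32):
    # Average several noisy rollouts to estimate the underlying fitness.
    return np.mean([noisy_reward(solution) for _ in range(n_rollouts)])

x = np.zeros(5)
est = monte_carlo_fitness(x)  # close to 0, the true fitness at the optimum
```

Averaging rollouts trades evaluation budget for variance reduction, which is what makes a quality-diversity search workable when single evaluations are unreliable.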
@LlionJ's weight visualization is similar to my meta-learning method (the heartbeats are exploitation-exploration episodes).
Hello,
Let me start by saying that I am a fan of your work here. I have recently open-sourced my GNN-based meta-learning method for optimization. I applied it to a real-world sparse index-tracking problem (after initial benchmarking on the Schwefel function), and it seems to outperform Fast CMA-ES significantly, both in producing robust solutions on the blind test set and in time (total duration and iterations) and space complexity. I include the link to my repository here, in case you would consider adding the method or the benchmarking problem to your repository. Note: the GNN, which learns to generate a population of solutions at each iteration, is trained using gradients retrieved from the loss function, as opposed to black-box estimates.
Sincerely, K
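For readers unfamiliar with the benchmark mentioned above, here is a minimal sketch of the Schwefel function with its analytic gradient, illustrating in a far simpler setting than a GNN how true loss gradients, rather than black-box estimates, can drive the search; the step size and starting point are arbitrary choices, not values from the repository:

```python
import numpy as np

def schwefel(x):
    # Schwefel benchmark: global minimum ~0 at x_i ~ 420.9687, domain [-500, 500].
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))

def schwefel_grad(x):
    # Analytic gradient, so the search can use true loss gradients
    # instead of black-box (score-function) estimates.
    s = np.sqrt(np.abs(x))
    return -(np.sin(s) + 0.5 * s * np.cos(s))

# Plain gradient descent started inside the global basin, as a sanity check.
x = np.full(10, 400.0)
for _ in range(200):
    x -= 0.5 * schwefel_grad(x)
print(float(schwefel(x)))  # close to 0
```

Note that Schwefel is highly multimodal, so naive gradient descent only works from within the right basin; the point of a learned generator is to propose whole populations that locate that basin in the first place.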