
[Feat] Implementing GLOP #253

Merged — 21 commits merged, Mar 3, 2025
Conversation

Furffico (Member)

Description

Following #182, this PR includes the implementation of Global and Local Optimization Policies (GLOP) (Ye et al., 2023), along with these new features for reproducing the results:

  • New model: GLOP (based on the implementation in [Feat] Adding GLOP model #182)
  • New environments:
  • New embeddings: Init/Edge embeddings based on polar coordinates.
  • Two adapters for decomposing and composing TSP/CVRP solutions.
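As a rough illustration of the polar-coordinate embedding idea mentioned above, the sketch below maps Cartesian node coordinates to (radius, angle) features relative to the depot. This is a minimal, hypothetical reconstruction for intuition only; the function name and feature layout are assumptions, not the actual interface added in this PR.

```python
import torch


def polar_init_embedding(locs: torch.Tensor, depot: torch.Tensor) -> torch.Tensor:
    """Map Cartesian node coordinates to polar features (r, theta)
    relative to the depot, as a basis for a GLOP-style init embedding.

    locs:  [batch, n, 2] node coordinates
    depot: [batch, 2] depot coordinate
    returns: [batch, n, 2] tensor of (radius, angle) per node
    """
    delta = locs - depot.unsqueeze(1)                 # offsets from the depot
    r = delta.norm(dim=-1, keepdim=True)              # radial distance
    theta = torch.atan2(delta[..., 1], delta[..., 0]) # angle in [-pi, pi]
    return torch.cat([r, theta.unsqueeze(-1)], dim=-1)
```

In practice such features would be projected by a learned linear layer before entering the encoder; the point here is only the coordinate transform.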

Types of changes

  • New feature (non-breaking change which adds core functionality)
  • Documentation (update in the documentation)

Checklist

  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

CC: @henry-yeh @fedebotu

@Furffico Furffico linked an issue Feb 27, 2025 that may be closed by this pull request
Furffico (Member, Author)

Despite all these efforts, the current version of the GLOP model still refuses to learn 😞. However, I obtained a working checkpoint in Oct. 2024; since then, I've tried numerous times to replicate that success, but with no luck so far. It seems that getting it to learn requires a bit of magic 🤔. At least this shows that the basic code logic works as expected.

In theory, this checkpoint should work with this version of code. I’ll upload the checkpoint soon after some tests!
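For anyone wanting to try the checkpoint once it lands, the sketch below shows one generic way to restore weights from a Lightning-style `.ckpt` file. The function name and checkpoint path are placeholders (the actual checkpoint had not been uploaded at the time of this comment), so treat this as an assumption-laden sketch rather than the project's documented loading API.

```python
import torch
import torch.nn as nn


def load_glop_weights(model: nn.Module, ckpt_path: str) -> nn.Module:
    """Load weights from a Lightning-style checkpoint into a model.

    Lightning checkpoints nest weights under a "state_dict" key;
    a raw state dict saved with torch.save works too.
    """
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    model.load_state_dict(state_dict, strict=True)
    return model
```

`strict=True` will surface any mismatch between the checkpoint and the current model definition, which is useful here given the code has evolved since the checkpoint was trained.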

@Furffico Furffico requested a review from fedebotu February 27, 2025 17:32
fedebotu (Member)

> Despite all these efforts, the current version of the GLOP model still refuses to learn 😞. However, I've obtained a working checkpoint in Oct. 2024. Since then, I've tried numerous times to replicate the success, but no luck so far. It seems like getting it to learn requires a bit of magic 🤔. At least this proves that the basic code logic is working as expected.
>
> In theory, this checkpoint should work with this version of code. I'll upload the checkpoint soon after some tests!

Great job @Furffico !

I think it may also be an issue with the software environment when reproducing the main paper's results, as seen here. Perhaps downgrading to e.g. PyTorch 2.2, or switching to FP32, could do the trick?

That said, since the code itself is correct, and this does indeed seem to be a matter of environment, I would be in favor of merging!
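On the FP32 suggestion: one environment knob that commonly causes reproduction gaps is that on Ampere and newer GPUs, recent PyTorch versions run float32 matmuls in TF32 by default, which reduces precision. A minimal sketch of forcing full-precision FP32 (an assumption about the cause, not a confirmed fix for this PR):

```python
import torch

# Force full-precision float32 matrix multiplications
# ("highest" disables the TF32 fast path on Ampere+ GPUs).
torch.set_float32_matmul_precision("highest")

# Also disable TF32 for cuDNN convolutions.
torch.backends.cudnn.allow_tf32 = False
```

These settings are global and should be applied before model training starts; the equivalent in Lightning would be passing `precision="32-true"` to the `Trainer`.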

@fedebotu fedebotu marked this pull request as ready for review February 28, 2025 10:40
@Furffico Furffico merged commit cb72927 into ai4co:main Mar 3, 2025
9 checks passed
@Furffico Furffico added the feature New Feature label Mar 3, 2025
@fedebotu fedebotu mentioned this pull request Mar 3, 2025
11 tasks
Successfully merging this pull request may close these issues: Code for running GLOP