
Memory-and-disk-friendly data processing for extremely large dataset #29

Merged · 5 commits · Sep 6, 2024

Conversation

Benzoin96485 (Owner) commented:

  1. Compress the integral data fields into a single datapoint where possible
  2. Option for an on-the-fly neighbor list
  3. Memory-friendly data preprocessing and storage
  4. Progress bar for all time-consuming data preprocessing
  5. Option to cap maximum memory usage
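For item 2, the idea is to compute neighbor pairs per structure at load time instead of persisting the full pairwise list on disk. A minimal sketch of what that might look like is below; the function name, array shapes, and cutoff handling are illustrative assumptions, not code from this repository.

```python
import numpy as np

def on_the_fly_neighbor_list(coords, cutoff):
    """Build a neighbor list on demand for one structure.

    coords: (N, 3) array of atomic positions.
    cutoff: pairs with distance < cutoff are neighbors.
    Returns an (M, 2) array of index pairs (i, j) with i < j,
    so nothing pairwise ever needs to be stored on disk.
    """
    # Pairwise displacement vectors, shape (N, N, 3)
    diff = coords[:, None, :] - coords[None, :, :]
    # Pairwise distances, shape (N, N)
    dist = np.linalg.norm(diff, axis=-1)
    # Keep only the upper triangle (i < j) to avoid duplicate pairs
    upper = np.triu(np.ones(dist.shape, dtype=bool), k=1)
    i, j = np.where((dist < cutoff) & upper)
    return np.stack([i, j], axis=1)

# Three collinear atoms spaced 1.0 apart; with cutoff 1.5 only the
# adjacent pairs (0, 1) and (1, 2) are neighbors.
coords = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [2.0, 0.0, 0.0]])
pairs = on_the_fly_neighbor_list(coords, cutoff=1.5)
```

Items 3–5 would then wrap a loop over such per-structure calls in chunked I/O with a progress bar (e.g. `tqdm`) and a user-supplied memory limit, rather than materializing everything at once.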

@Benzoin96485 added the "bug" and "enhancement" labels on Sep 5, 2024
@Benzoin96485 linked an issue on Sep 5, 2024 that may be closed by this pull request
@Benzoin96485 merged commit 06fe1a9 into devel on Sep 6, 2024
Successfully merging this pull request may close these issues.

Optimize the storage of the full neighbor list