Memory management in BreakingCondition #493

Open
CaratheodoryRR opened this issue Aug 6, 2024 · 8 comments · May be fixed by #502

@CaratheodoryRR (Author)

For context, I ran several simulations inside a loop, varying mass composition to see the energy spectrum at the Earth. I noticed that the RAM consumption was exceedingly high after some time. Further inspection revealed that RAM usage was steadily increasing during my 3D simulations, and the only way to free up that memory was by closing the program.

After debugging, I think the problem is related to the breaking conditions I used, which are MinimumEnergy and MaximumTrajectoryLength. I came up with a minimal reproducible example by modifying the 1D example from the documentation.

To Reproduce

from crpropa import *

# simulation setup
sim = ModuleList()
sim.add( SimplePropagation(1*Mpc, 100*Mpc) )
sim.add( Redshift() )
sim.add( MaximumTrajectoryLength( 2000 * Mpc ) ) # Important part

# observer and output
obs = Observer()
obs.add( Observer1D() )
sim.add( obs )

# source
emissionDirection = Vector3d(-1, 0, 0) # Change to (1, 0, 0) to trigger MaximumTrajectoryLength
source = Source()
source.add( SourceUniform1D(1 * Mpc, 1000 * Mpc) )
source.add( SourceDirection( emissionDirection ) )
source.add( SourceRedshift1D() )

# power-law spectrum (index -3) with a charge-dependent (rigidity-limited) maximum energy
# elements: H, He, N, Fe with equal abundances at constant energy per nucleon
composition = SourceComposition(1 * EeV, 10 * EeV, -3)
composition.add(1,  1,  1)  # H
composition.add(4,  2,  1)  # He-4
composition.add(14, 7,  1)  # N-14
composition.add(56, 26, 1)  # Fe-56
source.add( composition )

# run simulation
sim.setShowProgress(True)
sim.run(source, 100_000_000, True)

In this example, I noticed two things. First, the Observer object correctly manages inactive particles, since running this simulation as it is didn't increase my RAM usage significantly, as shown in the following htop command output:
[screenshot: htop output showing stable RAM usage]

But if I trigger the breaking condition by changing the source emission direction to Vector3d(1, 0, 0), the RAM usage increases and there is no obvious way for me to decrease it:
[screenshot: htop output showing steadily increasing RAM usage]

Expected behavior
Based on what I showed earlier, I expected the breaking conditions to behave similarly to the Observer object, i.e. that particles meeting the condition would be made inactive and their memory freed.

Additional context
This is a problem for me because, at some point, I am using ParticleCollector to rerun simulations from saved files, but after some time the outputs of these extra simulations look like this:

[screenshot: output of the rerun simulations]

which gave me the hint that this was a memory-related problem.

@lukasmerten (Member) commented Aug 7, 2024

Hi @CaratheodoryRR
I can confirm this problem with an even simpler setup (removing the source composition and the redshift). I don't yet have any idea where exactly this is coming from. Maybe the dereferencing for break conditions is handled differently than for observers...
I'll keep you updated on any progress.

Edit:

  • Adding an output for the break condition does not help.
  • A similar behaviour is found for other break conditions, e.g., SphericalBoundary(Vector3d(0), 2000 * Mpc). So I guess this is connected to the underlying class AbstractCondition.

@lukasmerten (Member)

A hacky solution would of course be to cut the simulation into smaller pieces (reducing the number of Candidates per run) so that the available memory is sufficient.

If anyone knows how the memory handling differs between Observer and AbstractCondition, it might help to understand why one works and the other does not.

@CaratheodoryRR (Author) commented Aug 7, 2024

> A hacky solution would of course be to cut the simulation into smaller pieces (reducing the number of Candidates per run) so that the available memory is sufficient.
>
> If anyone knows how the memory handling differs between Observer and AbstractCondition, it might help to understand why one works and the other does not.

Did you test it? I already do that in my program and I am facing the same problem: the RAM usage keeps growing monotonically over time, and the only way to free the memory is to terminate the program.

PS: I accidentally closed this issue, sorry about that.

@JulienDoerner (Member)

Hey,
I have also seen this strange memory consumption, and I have no idea what causes it.
A workaround would be switching to observer-based detection of the maximum trajectory length.
You can either use ObserverTimeEvolution or write a simple observer feature for this (see https://github.com/JulienDoerner/CRPropa3/tree/issue_493). In that case I verified that the memory does not increase.
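
To sketch what such an observer feature might look like in Python (a rough illustration only; it assumes ObserverFeature can be subclassed from Python via the SWIG directors and that the DetectionState constants DETECTED/NOTHING are exposed at module level; the class name is made up and the implementation in the branch above may differ):

from crpropa import *

class MaxTrajectoryLengthFeature(ObserverFeature):
    # flag candidates once their trajectory length exceeds a limit
    def __init__(self, maxLength):
        ObserverFeature.__init__(self)
        self.maxLength = maxLength

    def checkDetection(self, candidate):
        if candidate.getTrajectoryLength() >= self.maxLength:
            return DETECTED
        return NOTHING

# use a separate Observer so this does not interfere with the Observer1D output
maxLengthFeature = MaxTrajectoryLengthFeature(2000 * Mpc)  # keep a Python reference alive
lengthObserver = Observer()
lengthObserver.add(maxLengthFeature)
lengthObserver.setDeactivateOnDetection(True)  # deactivate candidates instead of flagging them
sim.add(lengthObserver)  # 'sim' is the ModuleList from the example above

The same pattern should also work for the minimum-energy condition by checking candidate.current.getEnergy() against a threshold.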

@lukasmerten (Member)

> Did you test it? I already do that in my program and I am facing the same problem: the RAM usage keeps growing monotonically over time, and the only way to free the memory is to terminate the program.

Yes, the RAM will still grow over time, but if the simulation finishes before the RAM is full the results should be fine, shouldn't they?
This is by no means a nice solution but might help in cases where you cannot use an observer as described by @JulienDoerner.
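
If one goes this route, a variant that actually returns the memory between pieces is to run every piece in its own Python process, since so far terminating the process is the only way to get the memory back. A rough sketch along the lines of the example above (batch size, file names and the simplified source are placeholders, not a tested recipe):

from multiprocessing import get_context
from crpropa import *

def run_batch(batch_index, n_candidates):
    # build the whole simulation inside the child process
    sim = ModuleList()
    sim.add(SimplePropagation(1 * Mpc, 100 * Mpc))
    sim.add(MaximumTrajectoryLength(2000 * Mpc))

    obs = Observer()
    obs.add(Observer1D())
    obs.onDetection(TextOutput('batch_%i.txt' % batch_index, Output.Event1D))
    sim.add(obs)

    source = Source()
    source.add(SourceUniform1D(1 * Mpc, 1000 * Mpc))
    source.add(SourceDirection(Vector3d(1, 0, 0)))
    source.add(SourceParticleType(nucleusId(1, 1)))
    source.add(SourceEnergy(10 * EeV))

    sim.run(source, n_candidates, True)

if __name__ == '__main__':
    ctx = get_context('spawn')  # fresh interpreter per batch, avoids fork/OpenMP issues
    for i in range(10):
        p = ctx.Process(target=run_batch, args=(i, 10_000_000))
        p.start()
        p.join()  # whatever memory the batch accumulated is released when the child exits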

@CaratheodoryRR (Author)

> Hey, I have also seen this strange memory consumption, and I have no idea what causes it. A workaround would be switching to observer-based detection of the maximum trajectory length. You can either use ObserverTimeEvolution or write a simple observer feature for this (see https://github.com/JulienDoerner/CRPropa3/tree/issue_493). In that case I verified that the memory does not increase.

Hello @JulienDoerner, thanks to you and @lukasmerten for your replies. This observer-based approach solved my memory issue. I also added some code for the minimum energy condition as an observer feature, and it works!

This memory problem haunted me for a couple of months and it prevented me from making progress in my research. I understand this is a temporary solution though, so any updates regarding the memory handling of the AbstractCondition class would be gladly received.

@lukasmerten added the bug label on Aug 9, 2024
@JulienDoerner (Member)

I reproduced the bug by running a simpler version using only C++:

#include "CRPropa.h"

using namespace crpropa;

int main(void) {
    ModuleList sim;

    sim.add(new SimplePropagation(1 * Mpc, 100 * Mpc));
    sim.add(new Redshift());
    sim.add(new MaximumTrajectoryLength(2000 * Mpc));

    ref_ptr<Observer> obs = new Observer();
    obs->add(new Observer1D());
    sim.add(obs);

    Source source;
    source.add( new SourceUniform1D(1 * Mpc, 1000 * Mpc) );
    source.add( new SourceDirection(Vector3d(1, 0, 0)) );
    source.add( new SourceRedshift1D() );

    // run 
    sim.setShowProgress(true);
    sim.run(&source, 100000000, true);
}

The memory issue seems not to be related to the SWIG interface but rather to the C++ module handling.

@JulienDoerner (Member)

I traced the issue down to the rejectFlag, which is set on each deactivation:

candidate->setProperty(rejectFlagKey, rejectFlagValue);

By default the key is "Rejected", but no rejectFlagValue is set.

One can solve the issue by setting the rejectFlagValue directly:

sim = ModuleList() 
max_tra = MaximumTrajectoryLength(1 * Gpc) 
max_tra.setRejectFlag("Rejected", "maxTra")
sim.add(max_tra)

I would suggest adding a default value for the flag based on the module name. This should solve the problem for all AbstractCondition subclasses.
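
Until such a default exists, the same workaround applied to the two conditions from the original report would look roughly like this (assuming setRejectFlag is available on every AbstractCondition subclass, as used above for MaximumTrajectoryLength; the 1 EeV threshold is only a placeholder):

from crpropa import *

sim = ModuleList()
sim.add(SimplePropagation(1 * Mpc, 100 * Mpc))

# give each break condition an explicit reject-flag value
maxTra = MaximumTrajectoryLength(2000 * Mpc)
maxTra.setRejectFlag("Rejected", "maxTra")
sim.add(maxTra)

minE = MinimumEnergy(1 * EeV)
minE.setRejectFlag("Rejected", "minE")
sim.add(minE)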

@JulienDoerner linked a pull request (#502) on Aug 28, 2024 that will close this issue