
Panics when more than 16 lasers are added #55

Closed
damienBloch opened this issue Jun 4, 2021 · 14 comments

Comments

@damienBloch

I was trying to increase the number of cooling lasers to replicate sidebands.
Every time I add more than 16 lasers to the world, a thread panics.

Any thoughts on what could be causing the problem?

@ElliotB256
Collaborator

Currently the laser samplers store per-laser quantities in a fixed-size array of length 16, see laser::COOLING_BEAM_LIMIT. We originally used a variable-length (and heap-allocated) Vec<> for laser samplers, but moving to fixed-size arrays gave a performance increase. You can see more about the performance tweaks here.

Short-term fix: you could raise the cap (just modify COOLING_BEAM_LIMIT). Longer term, we can return to whether it's worth using Vec<> again (or some other, faster container).
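To make the trade-off concrete, here is a minimal sketch of the fixed-size sampler pattern described above. The names `COOLING_BEAM_LIMIT`, `LaserSampler`, and `LaserSamplers` follow the discussion but are illustrative, not the crate's actual definitions:

```rust
// Illustrative sketch: per-laser samples stored in a fixed-size,
// stack-allocated array rather than a heap-allocated Vec.
const COOLING_BEAM_LIMIT: usize = 16;

#[derive(Clone, Copy, Default)]
struct LaserSampler {
    intensity: f64,
}

struct LaserSamplers {
    // Fast to copy and iterate, but any laser beyond the cap
    // has nowhere to go, hence the panic past 16 beams.
    contents: [LaserSampler; COOLING_BEAM_LIMIT],
}

impl LaserSamplers {
    fn new() -> Self {
        Self {
            contents: [LaserSampler::default(); COOLING_BEAM_LIMIT],
        }
    }
}
```

Raising the cap here just means changing the constant, at the cost of shuffling a larger array around per atom even when few beams are in use.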

@ElliotB256
Collaborator

Note that the fixed arrays were faster, but we could debate whether the speedup is worth losing the flexibility. A few posts down, it says:

17.7s compared to 22.5s using vec!

@damienBloch
Author

Ok, thanks! I'll have a look.

It's just that the limit of 16 seems a bit low. A simple MOT already has 6 beams, so it's fairly easy to go above 16.

@ElliotB256
Collaborator

Yeah agreed, especially when you also have e.g. sidebands in some setups.

@ElliotB256
Collaborator

Could you profile the benchmark simulation as a function of the array size?

@damienBloch
Author

I simply measured the run time of the benchmark example on an i7-1165G7 @ 2.80GHz.
Not sure how to do a detailed profiling, though.

[benchmark plot: run time as a function of the array size]

I went a bit high for the upper limit because I want to have a MOT with many sidebands but I imagine it's an extreme case.

I had to remove #[derive(Serialize, Deserialize)] for ExpectedPhotonsScatteredVector and ActualPhotonsScatteredVector for this test, because Rust doesn't automatically implement these traits for arrays of size > 32.

@ElliotB256
Collaborator

Thanks Damien, there's obviously an overhead from shuffling larger arrays around in memory (as expected). It looks like 32 would be a safe short-term extension (which preserves the Serialize/Deserialize derives), and we can return to whether a Vec approach would be more suitable in other cases (I'd be interested to see how the overhead scales for Vec versus a larger COOLING_BEAM_LIMIT).

@damienBloch
Author

Also, maybe a more explicit error message would be helpful?
Otherwise it's unclear why the number of entities with CoolingLight components is limited.

I also just noticed that the lasers are cached with a LASER_CACHE_SIZE in some files, but I didn't change this value in the previous runs.

@ElliotB256
Collaborator

The LASER_CACHE_SIZE plays a different role. It defines a fixed size slice which we iterate over. It was more important in the original vec implementation, because it allowed us to process chunks of the vec in batches of a size known at compile time (and so enable SIMD floating point ops). Since moving to fixed size arrays, it's possible any performance gain is no longer worth the extra maintenance/complexity. However, let's bear it in mind when we return to profiling whether a fixed size/vec approach is worth it.
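The batching idea described above can be sketched as follows. This is not the crate's implementation, just a minimal illustration, using hypothetical names, of why a compile-time chunk size helps: the inner loop runs over a fixed-length array, so the compiler can unroll and vectorize it.

```rust
// Illustrative batch size; the real value lives in several atomecs modules.
const LASER_CACHE_SIZE: usize = 4;

/// Sum per-beam intensities by processing fixed-size batches,
/// then handling the remainder with a plain loop.
fn total_intensity(samples: &[f64]) -> f64 {
    let mut sum = 0.0;
    let mut chunks = samples.chunks_exact(LASER_CACHE_SIZE);
    for chunk in &mut chunks {
        // Copy into an array whose length is known at compile time,
        // giving the optimizer a fixed-trip-count inner loop (SIMD-friendly).
        let batch: [f64; LASER_CACHE_SIZE] = chunk.try_into().unwrap();
        for v in batch {
            sum += v;
        }
    }
    // Leftover beams that don't fill a whole batch.
    for v in chunks.remainder() {
        sum += v;
    }
    sum
}
```

With fixed-size sampler arrays the whole container already has a compile-time length, which is why the separate cache/batch step may no longer pay for its complexity.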

@minghuaw
Contributor

Have you considered using const generics to give the user control over the maximum number of lasers while still using a fixed-length array?

@ElliotB256
Collaborator

That's a great idea. There will be a rewrite at some point in the future to support other cooling transitions/models of scattering rates, and we should add the change then.

@zpbaldo

zpbaldo commented Dec 21, 2024

Hi,
I can't seem to add an arbitrary number of beams without also changing LASER_CACHE_SIZE to the desired beam count in every file that contains it, but I'm not sure it's meant to be done this way.
What is the proper way to set an arbitrary number of beams? Thanks!

@ElliotB256
Collaborator

@zpbaldo I hadn't actually added constant generic support yet, although I do think minghuaw's suggestion is a good one. My intention has been to return to atomecs once bevy has stabilised a bit more (I was working on #3 after specs got deprecated; there's good benching data in there and to be fair it has now been a bit of time and bevy has matured, so I should return to this).

But this does also sound like a bug, as in #55 (comment) this should just refer to a batch size for looping through a more generalisable number of lasers. What specific setup are you encountering this on?

@zpbaldo

zpbaldo commented Dec 28, 2024

@ElliotB256 Yes, I read that LASER_CACHE_SIZE is the batch size, but I found that modifying it allowed me to add an arbitrary number of beams to the simulation.
To reproduce: run any example (the 1D MOT, for instance) and loop over a certain number of beams, say 128. It will panic with something like "thread '' panicked at 'range end index 32 out of range for slice of length 16'".
Changing LASER_CACHE_SIZE in the files 'intensity.rs', 'doppler.rs', 'force.rs', and 'sampler.rs' allows adding beams up to the set value.
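The panic message quoted above is consistent with a cache-sized batch being sliced out of a sampler array shorter than the batch. A hypothetical reproduction (the constants and function are illustrative, not the crate's code):

```rust
// Illustrative values matching the reported message:
// a 16-element sampler array sliced with a 32-element batch.
const COOLING_BEAM_LIMIT: usize = 16;
const LASER_CACHE_SIZE: usize = 32;

/// Takes the first cache-sized batch of samplers. Panics with
/// "range end index 32 out of range for slice of length 16"
/// whenever samplers.len() < LASER_CACHE_SIZE.
fn take_first_batch(samplers: &[f64]) -> &[f64] {
    &samplers[0..LASER_CACHE_SIZE]
}
```

This is why, if the slicing code assumed the two constants agree, raising one without the other would panic rather than produce a clear error.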
