TruthParticle Update and Efficiency Measurements, main branch (2024.05.08.) #582
Conversation
(force-pushed from ee05523 to b6612b7)
So, the code is now producing some track finding efficiency results. But let's start at the beginning. I switched to doing the truth measurement <-> reco measurement matching a little differently. Instead of comparing their properties "percentage-wise", I now check whether the measurements are within a certain absolute distance of each other. (On top of being on the same surface, of course.) That is simply more justified in this case: we're not testing the floating-point precision of our code here, but its physics performance, for which absolute-value comparisons seem more reasonable. You can absolutely disagree with the 1 millimeter distance that I currently allow in the code. Unfortunately, when I go to a smaller value, the final efficiencies drop drastically. So we'll have to review the measurement creation code a bit... 🤔 With this code, on a large-statistics 100 GeV muon sample, I get: [efficiency plot] We will have plenty to understand... 🤔 But at least we now have a sense of what the ODD reconstruction is doing. 😉
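The absolute-distance matching described above can be sketched roughly like this. Note that `Measurement`, `surface_id`, and the `local0`/`local1` coordinate names are illustrative stand-ins, not the actual traccc types:

```cpp
// Hedged sketch of absolute-distance truth <-> reco measurement matching.
// All type and member names here are illustrative, not traccc's real API.
#include <cmath>

struct Measurement {
    unsigned int surface_id;  // identifier of the detector surface
    float local0, local1;     // local coordinates on the surface [mm]
};

// Two measurements "match" if they sit on the same surface and their
// local coordinates agree within an absolute tolerance (1 mm by default,
// mirroring the value discussed above).
inline bool match(const Measurement& truth, const Measurement& reco,
                  float tol_mm = 1.0f) {
    if (truth.surface_id != reco.surface_id) {
        return false;
    }
    return (std::abs(truth.local0 - reco.local0) < tol_mm) &&
           (std::abs(truth.local1 - reco.local1) < tol_mm);
}
```

Tightening `tol_mm` is exactly the knob whose reduction makes the efficiencies drop, per the discussion above.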
Also note that when I run:
on the files that we have in our v7 data, I get an even weirder plot. 😕 So something is a bit fishy around the simulations as well. (Since this plot should just be lower-statistics compared to the previous one.) Though now that I think about it, it's probably because of a mismatch in the digitization parameters used. 🤔 (The measurement matching code requires the uncertainties on the measurements to match as well, which only happens if the digitization parameters are used consistently between Acts's simulation and this project's reconstruction.) But it seems that some of the strip measurements just don't match up in this latter plot. 😕
(force-pushed from e1ea7f7 to 5251c34)
Great! I think it would also be a very interesting topic for an …
Thanks for the ping, by the way :)
(force-pushed from 5251c34 to e176791)
So... good news and bad news.
I'll split this PR into digestible PRs in the coming days, so that it can be merged in. Then we can start with the optimisation of the seeding options. 😉
(force-pushed from 2133906 to 6d3a7b6)
- It performs the matching based on the measurements associated with the truth and reconstructed particles.
- It loosens the requirement on the variance of the measurements, so that they would still be considered the same measurement. (The "true measurements" and the ones reconstructed by our code tend to be just a little different for the pixels.)
- It changes how exactly the truth<->reco track matching is done by the class. It also removes the currently ineffective tools from traccc_seq_example.
- It makes it easier to introduce additional performance measurements in the next step(s).
- It overlays the seed-finding and track-finding efficiencies on top of each other, in a super hard-coded way.
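The loosened variance requirement mentioned in the list could look something like the following sketch, where positions still have to match tightly but the variances only need to agree within a relative tolerance. The function name and the 10 % tolerance are assumptions for illustration, not traccc's actual code:

```cpp
// Hedged sketch of a loosened variance comparison for measurement matching.
// The 10% relative tolerance is an illustrative assumption.
#include <cmath>

// Returns true if two measurement variances agree within rel_tol,
// relative to the larger of the two.
inline bool variances_compatible(float var_truth, float var_reco,
                                 float rel_tol = 0.1f) {
    const float larger = std::fmax(std::abs(var_truth), std::abs(var_reco));
    if (larger == 0.0f) {
        return true;  // both exactly zero -> trivially compatible
    }
    return std::abs(var_truth - var_reco) <= rel_tol * larger;
}
```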
(force-pushed from 947a010 to 2637ab0)
With the traccc integration into Acts providing far better performance monitoring than what we should put into this project itself, let's abandon this update. Eventually we should probably remove most of the physics performance measuring code from the repository; it's enough to maintain that sort of thing in Acts...
This is the thing that I was teasing @beomki-yeo and @SylvainJoube about for a few days now. 🤔
Basically, I believe that what we desperately need is a well-defined reconstruction and truth EDM in the project. With those in place, doing the physics performance studies should become much easier. Now... the ultimate form of the EDM will be an SoA one; that's not up for debate. But for now, as a quick hack, I want to introduce `traccc::particle_container_types` as:

```cpp
using particle_container_types = container_types<particle, measurement>;
```

So that the container would neatly associate (true) measurements with the particles in a "container form".
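To illustrate the idea, here is a minimal stand-in for what such a header/items association looks like, assuming `container_types<H, I>` pairs one header object with a jagged list of items. The `simple_container` template and the member names are assumptions for this sketch, not traccc's real container implementation:

```cpp
// Minimal analogue of a header/items container association.
// simple_container and the struct members are illustrative stand-ins,
// not traccc's actual container_types template.
#include <cstddef>
#include <utility>
#include <vector>

struct particle { int pdg_id; };
struct measurement { float local0, local1; };

// Each entry pairs one "header" (a particle) with its "items"
// (the truth measurements associated with that particle).
template <typename HEADER, typename ITEM>
using simple_container = std::vector<std::pair<HEADER, std::vector<ITEM>>>;

using particle_container = simple_container<particle, measurement>;

// Count all truth measurements associated with all particles.
inline std::size_t total_measurements(const particle_container& pc) {
    std::size_t n = 0;
    for (const auto& entry : pc) {
        n += entry.second.size();
    }
    return n;
}
```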
With this setup, `traccc::track_candidate_container_types` and this `traccc::particle_container_types` container become directly comparable. And this is what this PR's toy class (`traccc::performance::track_finding_analysis`) does. Right now it only does some silly stuff that I used for debugging purposes, but a class with that sort of API should be able to perform some reconstruction efficiency measurements. 🤔

Eventually, of course, I'll want to have a "truth EDM" that is: …
Note that we have all the information in our CSV files for building this sort of EDM in memory. 🤔 We shouldn't be running clusterization as part of some helper classes, like we do currently in: https://github.com/acts-project/traccc/blob/main/io/src/mapper.cpp#L227-L230 Instead, we'll just need to write a relatively simple piece of code that establishes the particle->measurement->cell connections, based on the information that we have in the CSV files. 🤔
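Building those particle->measurement->cell links from already-parsed CSV rows could be as simple as the sketch below. The `hit_row`/`cell_row` structs are hypothetical stand-ins for the actual CSV record types, which carry more fields than shown here:

```cpp
// Hedged sketch: establishing particle -> measurement -> cell connections
// from parsed CSV rows, without re-running clusterization.
// hit_row and cell_row are illustrative stand-ins for the CSV record types.
#include <cstdint>
#include <map>
#include <vector>

struct hit_row  { std::uint64_t particle_id;    std::uint64_t measurement_id; };
struct cell_row { std::uint64_t measurement_id; std::uint64_t cell_id; };

struct truth_links {
    std::map<std::uint64_t, std::vector<std::uint64_t>> particle_to_meas;
    std::map<std::uint64_t, std::vector<std::uint64_t>> meas_to_cells;
};

// One pass over each CSV table is enough to build both association maps.
inline truth_links build_links(const std::vector<hit_row>& hits,
                               const std::vector<cell_row>& cells) {
    truth_links links;
    for (const auto& h : hits) {
        links.particle_to_meas[h.particle_id].push_back(h.measurement_id);
    }
    for (const auto& c : cells) {
        links.meas_to_cells[c.measurement_id].push_back(c.cell_id);
    }
    return links;
}
```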
I intend to keep this PR in a draft status for a bit. I hope to be able to produce some simple efficiency plots with this sort of code by next Thursday. And then, once I'm back from the HSF workshop, we can see about doing all of this in a properly clean way. 🤔