
implement axes logscales #328

Merged (54 commits) on Nov 2, 2023
Conversation

@lukashergt (Collaborator) commented Aug 9, 2023

This PR implements logarithmic scaling of selected plotting axes in triangle/rectangle plots. Effectively this means that the quantity log(x) is plotted, while the axis labels show x with logarithmically spaced tick labels.

NOTE: This has to be set up at axes creation. A simple re-scaling of the x- or y-axis after the fact (e.g. ax.set_xscale('log')) won't work and would lead to distorted plots. To compute histogram bins or KDEs correctly, the computation has to be performed in log-space from the start.
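Why binning must happen in log-space can be illustrated with a minimal sketch (plain numpy, not anesthetic code; the data here are hypothetical log-uniform samples): linear bins pile almost all of the data into the first bin, which is what an after-the-fact ax.set_xscale('log') would display, whereas binning log10(x) resolves every decade evenly.

```python
# Sketch: linear-space vs log-space binning of log-uniform data.
import numpy as np

rng = np.random.default_rng(0)
x = 10 ** rng.uniform(-3, 0, size=10_000)   # spans three decades

# Linear-space bins: almost everything lands in the first bin,
# so the histogram is distorted when shown on a log axis.
linear_counts, _ = np.histogram(x, bins=10)

# Log-space bins: computed on log10(x), so each decade is resolved.
log_counts, _ = np.histogram(np.log10(x), bins=10)

print(linear_counts.max() / linear_counts.sum())  # dominated by one bin
print(log_counts.max() / log_counts.sum())        # roughly even (~0.1)
```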

The way I have set this up here is through a logx kwarg for make_1d_axes or plot_1d, and through a logxy kwarg for make_2d_axes or plot_2d. This mirrors the pandas kwargs logx and logy, e.g. in pandas.DataFrame.plot.

Example usage:

```python
from anesthetic import read_chains
samples = read_chains("./tests/example_data/pc")
params = ['x0', 'x1', 'x2', 'x3', 'x4']
samples.plot_2d(params, logxy=['x2'])
```

To see the added documentation:
https://anesthetic.readthedocs.io/en/logscale/plotting.html#log-scale

This PR was motivated by a discussion with @williamjameshandley and might be useful for some upcoming Gambit runs.

Checklist:

  • I have performed a self-review of my own code
  • My code is PEP8 compliant (flake8 anesthetic tests)
  • My code contains compliant docstrings (pydocstyle --convention=numpy anesthetic)
  • New and existing unit tests pass locally with my changes (python -m pytest)
  • I have added tests that prove my fix is effective or that my feature works
  • I have appropriately incremented the semantic version number in both README.rst and anesthetic/_version.py

@lukashergt added the enhancement (New feature or request) label and self-assigned this on Aug 9, 2023
codecov bot commented Aug 9, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (2c01d13) 100.00% compared to head (4fd8726) 100.00%.

Additional details and impacted files
@@            Coverage Diff             @@
##            master      #328    +/-   ##
==========================================
  Coverage   100.00%   100.00%            
==========================================
  Files           34        34            
  Lines         2868      2969   +101     
==========================================
+ Hits          2868      2969   +101     
Files Coverage Δ
anesthetic/_version.py 100.00% <100.00%> (ø)
anesthetic/plot.py 100.00% <100.00%> (ø)
anesthetic/plotting/_matplotlib/hist.py 100.00% <100.00%> (ø)
anesthetic/samples.py 100.00% <100.00%> (ø)


(Review threads on anesthetic/plot.py and docs/source/plotting.rst were resolved.)
@lukashergt (Collaborator, Author) commented:

Outcome is excellent. I have some inline comments on the API/implementation that are related.

@williamjameshandley, anything else that needs addressing here?

@AdamOrmondroyd (Collaborator) commented:

Just had a chance to read this properly; stunning work, @lukashergt!

@AdamOrmondroyd (Collaborator) commented Oct 3, 2023

There's something odd going on with fastkde; look at the figures produced by tests/test_samples.py::test_plot_logscale:

[Screenshot: "kde" output]
[Screenshot: "fastkde" output]

The "fastkde" case is responsible for the remaining new warnings about setting limits, which is why I spotted this. This is why I'm allergic to warnings!

@AdamOrmondroyd (Collaborator) commented:

Could this be because kde_contour_plot_2d() uses cut_and_normalise_gaussian() to give the hard cutoff at x=0, which fastkde_contour_plot_2d() does not?
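The hard-cutoff idea referenced here can be sketched in a few lines (a hypothetical illustration of the truncate-and-renormalise technique, not anesthetic's actual cut_and_normalise_gaussian; the function name and grid below are made up): zero the density beyond the boundary and renormalise so no probability mass leaks into the forbidden region.

```python
# Hypothetical sketch: truncate a density at a hard lower bound and
# renormalise to unit integral (uniform grid assumed).
import numpy as np

def cut_and_normalise(x, pdf, lower=0.0):
    """Zero the density below `lower`, then renormalise to integrate to 1."""
    pdf = np.where(x < lower, 0.0, pdf)
    dx = x[1] - x[0]                 # uniform grid spacing
    return pdf / (pdf.sum() * dx)    # simple Riemann-sum normalisation

x = np.linspace(-1, 3, 1001)
pdf = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)   # standard normal
cut = cut_and_normalise(x, pdf)

print((cut * (x[1] - x[0])).sum())   # ~1.0 after renormalisation
print(cut[x < 0].max())              # 0.0: hard cutoff below x=0
```

Without such a step, a KDE smooths mass across the boundary, which would explain the mismatch between the "kde" and "fastkde" panels.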

@AdamOrmondroyd (Collaborator) commented:

Now I'm wondering if the x and y limits should actually be the same if only one of them is logarithmic.

… bounds, but they ruin Gaussian results; this is very different from gaussian_kde, so for fastkde we would actually need to provide prior bounds, and it is not good enough to infer bounds from the data…
@lukashergt (Collaborator, Author) commented Oct 19, 2023

There's something odd going on with fastkde, look at the figures produced by tests/test_samples.py::test_plot_logscale:

Thanks for catching this. This was a subtle issue with set_xlim, which we propagate to both horizontally and vertically shared plots; we shouldn't do that if only one of them is logarithmic and the other is not.

More concretely, the problem came from the non-logarithmic plots with data close to zero: after accounting for a margin, the limits extended to negative numbers, which are invalid for the logarithmic plots.
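That failure mode can be reproduced with a minimal sketch (hypothetical numbers, not the anesthetic implementation): a symmetric margin around data that sit close to zero pushes the lower limit negative, and a negative limit has no representation on a log-scaled axis.

```python
# Sketch: margins around near-zero data produce a negative lower limit,
# which is invalid when shared with a log-scaled axis.
import numpy as np

data_min, data_max = 0.01, 1.0            # hypothetical data range near zero
margin = 0.05 * (data_max - data_min)     # typical 5% axis margin

# Limits as a linear axis would compute them:
lo, hi = data_min - margin, data_max + margin
print(lo)  # negative: fine on a linear axis, invalid on a log axis

# Sharing these limits with a log axis fails, since log10(lo) is undefined:
with np.errstate(invalid="ignore"):
    print(np.log10(lo))  # nan
```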

I've added tests in 18209f8.

Now I'm wondering if the x and y limits should actually be the same if only one of them is logarithmic

I have decoupled them now.

@AdamOrmondroyd (Collaborator) commented:

lgtm. Great work!

@williamjameshandley left a review comment:

I agree this looks great. I will test it out in the wild for my GAMBIT runs in the near future and get back to you if I think anything needs adjusting.

@lukashergt lukashergt merged commit 10f032f into handley-lab:master Nov 2, 2023
22 checks passed
@lukashergt (Collaborator, Author) commented:

Sorry, missed the approval. Thanks for the reviews.
