
Adding Boost::multiprecision as a DGtal::BigInteger model #1749

Open

dcoeurjo wants to merge 9 commits into master
Conversation

dcoeurjo (Member) commented Dec 5, 2024

PR Description

Adding boost::multiprecision as a backend for DGtal::BigInteger.

Checklist

  • Unit-test of your feature with Catch.
  • Doxygen documentation of the code completed (classes, methods, types, members...)
  • Documentation module page added or updated.
  • New entry in the ChangeLog.md added.
  • No warning raised in Debug mode.
  • All continuous integration tests pass (GitHub Actions).

dcoeurjo (Member, Author) commented Dec 5, 2024

ping @JacquesOlivierLachaud @troussil

dcoeurjo (Member, Author) commented Dec 6, 2024

Hi @rolanddenis , I may need your help with this PR (cc @JacquesOlivierLachaud )

This PR aims to include boost::multiprecision as a (header-only) alternative to GMP for multiprecision integer computations. DGtal::BigInteger is now enabled by default, and the backend can be switched to GMP if the user prefers.

I'm fixing a few issues related to WITH_GMP usage, and I'm facing a problem in PointVector with the conversion mechanism you wrote for explicit casts.

On this PR, if you have a look at

    auto p = p1 - p2;

the operator-() is not defined for DGtal::BigInteger. It seems to be related to the ArithmeticConversionTraits you wrote a while ago.

Could you please help us here?

thx
