
Fast exponentiation for tensor product #809

Merged 3 commits into vprusso:master on Oct 11, 2024
Conversation

@tnemoz (Contributor) commented Sep 30, 2024

Description

Implements the fast exponentiation algorithm to compute the tensor powers of a matrix. Fixes #803.

Changes

  • Implements the fast exponentiation algorithm to compute the tensor powers of a matrix.
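The change replaces a naive loop of repeated Kronecker products with exponentiation by squaring. A minimal, self-contained sketch of the idea (the name `tensor_power` is hypothetical; the PR modifies toqito's existing `tensor` function rather than adding this helper):

```python
import numpy as np


def tensor_power(mat: np.ndarray, n: int) -> np.ndarray:
    """Return the n-th tensor (Kronecker) power of `mat`.

    Uses exponentiation by squaring, so only O(log n) Kronecker
    products are computed instead of n - 1.
    """
    if n < 1:
        raise ValueError("n must be a positive integer.")
    result = None  # accumulates the product over the set bits of n
    base = mat     # holds mat^(tensor 2^k) at step k
    while True:
        if n & 1:
            result = base if result is None else np.kron(result, base)
        n >>= 1
        if n == 0:
            return result
        base = np.kron(base, base)
```

Because every factor is the same matrix, its tensor powers commute with one another, so accumulating over the bits of `n` gives the same result as the naive left-to-right product.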

Checklist

Before marking your PR ready for review, make sure you have checked the following locally. If this is your first PR, you might be notified of some workflow failures after a maintainer has approved the workflow jobs to be run on your PR.

Additional information is available in the documentation.

  • Run ruff to check for errors related to code style and formatting.
  • Verify that all previous and newly added unit tests pass in pytest.
  • Check that the documentation build does not lead to any failures; the Sphinx build can be run locally to catch failures related to your PR.
  • Run linkcheck to check for broken links in the documentation.
  • Run doctest to verify that the examples in the function docstrings work as expected.

codecov bot commented Sep 30, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.8%. Comparing base (14549b5) to head (658634f).
Report is 9 commits behind head on master.

Additional details and impacted files
@@          Coverage Diff           @@
##           master    #809   +/-   ##
======================================
  Coverage    97.8%   97.8%           
======================================
  Files         168     168           
  Lines        3259    3263    +4     
  Branches      800     800           
======================================
+ Hits         3189    3193    +4     
  Misses         46      46           
  Partials       24      24           


@vprusso (Owner) commented Oct 2, 2024

Nice. Thanks for those changes, @tnemoz!

That looks good to me, but I'll also defer to @purva-thakre for any additional comments she may have. Thanks for your quick turnaround, @tnemoz. Great work as always!

@purva-thakre (Collaborator) left a comment

LGTM too!

I just have one suggested change for something you were not responsible for. Would it be possible to change *args in the user input field? In the docstring, we state that we allow 3 different types of inputs. Mentioning those types using | would make it easier to read the API.

You will also have to make changes to :param args:.

@tnemoz (Contributor, Author) commented Oct 7, 2024

Would it be possible to change *args in the user input field? In the docstring, we state that we allow 3 different types of inputs. Mentioning those types using | would make it easier to read the API.

I'm unsure how that would work, to be fair. From what I know, if *args is type-hinted, then all additional positional arguments must be of that type. Under this rule, I don't see how it would be possible to assign a type to args here, since it could be either a collection of np.ndarray or an np.ndarray accompanied by an int. So this implementation:

def tensor(*args: np.ndarray) -> np.ndarray:

doesn't work, since the possibility of passing an int isn't represented. This one fixes that problem:

def tensor(*args: np.ndarray | int | list[np.ndarray]) -> np.ndarray:

but it also somehow indicates that something like tensor(2, 2) is valid, when it doesn't really make sense. It also indicates that something like tensor([e0, e1], [e0, e1]) should be accepted, which isn't the case.

All in all, I'm not sure adding a type hint here would be beneficial, as it could suggest that several cases are valid when they actually aren't.

That being said, I'm no expert on the topic. If you want to add a type hint here, what approach should I consider?
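One option not raised in the thread is typing.overload, which can advertise each documented call pattern separately while the runtime implementation stays untyped. A hedged sketch, with simplified dispatch logic, not the actual toqito implementation:

```python
from __future__ import annotations

from functools import reduce
from typing import overload

import numpy as np


@overload
def tensor(mat: np.ndarray, n: int) -> np.ndarray: ...
@overload
def tensor(mats: list[np.ndarray]) -> np.ndarray: ...
@overload
def tensor(*mats: np.ndarray) -> np.ndarray: ...


def tensor(*args):
    """Dispatch over the three documented call patterns at runtime."""
    if len(args) == 2 and isinstance(args[1], int):
        mat, n = args
        return reduce(np.kron, [mat] * n)  # tensor(mat, n)
    if len(args) == 1 and isinstance(args[0], list):
        return reduce(np.kron, args[0])    # tensor([m1, m2, ...])
    return reduce(np.kron, args)           # tensor(m1, m2, ...)
```

With these stubs a static checker would flag calls like tensor(2, 2) or tensor([e0, e1], [e0, e1]), which addresses the concern that a single union type on *args over-promises.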

@purva-thakre (Collaborator) commented Oct 10, 2024

If you want to add the type hinting here, what is the approach that I should consider here?

So, this is a personal preference: I would rather use a named parameter than *args, as the former provides easily readable details.

Reading the docstring entry for *args, I would rename it to input_mat:

def tensor(input_mat: list[np.ndarray] | np.ndarray, int | np.ndarray, ..., np.ndarray) -> np.ndarray:

Maybe you have to sandwich the last two allowed inputs in parentheses, or maybe not.

We have a couple of functions like this and, of course, I can't remember which ones right now to link them here. lol.
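For comparison, one way to realize the named-parameter suggestion with a well-formed union type is a single input_mat plus an optional power. A hypothetical sketch under that assumption, not what the PR ships:

```python
from __future__ import annotations

from functools import reduce

import numpy as np


def tensor(input_mat: np.ndarray | list[np.ndarray],
           n: int | None = None) -> np.ndarray:
    """Tensor a list of matrices together, or raise a single matrix
    to an optional tensor power `n` (hypothetical signature)."""
    # A list means "tensor these together"; otherwise repeat the
    # single matrix n times (defaulting to the matrix itself).
    mats = input_mat if isinstance(input_mat, list) else [input_mat] * (n or 1)
    return reduce(np.kron, mats)
```

The trade-off is that this signature drops the variadic form tensor(m1, m2, ...), so existing call sites would need a list instead.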

@purva-thakre (Collaborator) commented
Hi @tnemoz, for future scenarios, please make sure you create a branch from the master branch of your fork before you start working on a PR. This makes it easier for us to make changes to a PR branch, if needed.

I tried to check out this branch locally, but I don't have permission to push changes to it.

@purva-thakre (Collaborator) commented
Going to go ahead and merge this. The remaining discussion can be resolved as part of #299.

@purva-thakre purva-thakre merged commit 7b15429 into vprusso:master Oct 11, 2024
18 checks passed
@tnemoz (Contributor, Author) commented Oct 11, 2024

Hey @purva-thakre! Sorry about that :(
I'm not sure I understand the problem, though: is it that I opened the PR from my master branch instead of from a branch I would have created from it (similar to my other current PR, for instance)?

Something along the lines of "since my master branch is protected, you can't push to it"? Or not at all? Just to be sure I don't repeat the same mistake!

Development

Successfully merging this pull request may close these issues:

  • Implementing tensor products of an object with itself using fast exponentiation

3 participants