Fmpz mod mpoly and nmod mpoly #164
Conversation
`fmpz_mod_mpoly.pyx` to come in a later commit.
The flint docs don't specify *when* this can fail, so I'm not sure it fits with a DomainError or the like. For now I've crammed the
That seems right to me. Since
I think that's also reasonable; currently I'm just checking the first time, storing the result, and returning that for later calls.
Either approach is reasonable, I think. Not doing it when the context is created saves the one-time cost of the check if it is never needed. In exchange we get a slight bit of overhead each time the attribute needs to be checked. I don't know whether that overhead is even measurable, though. What would be measurable, I think, is actually computing the prime check every time division happens, particularly in the case of
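A minimal sketch of the check-once-and-cache pattern being discussed here, with naive trial division standing in for Flint's primality test (the class and method names are illustrative, not the PR's actual code):

```python
class ModContext:
    """Illustrative context for Z/nZ that caches its primality check."""

    def __init__(self, modulus: int):
        self.modulus = modulus
        self._is_prime = None  # unknown until first queried

    def is_prime(self) -> bool:
        # The first call pays for the primality test; later calls return
        # the cached bool, so per-division overhead is one attribute check.
        if self._is_prime is None:
            n = self.modulus
            self._is_prime = n >= 2 and all(
                n % d for d in range(2, int(n**0.5) + 1)
            )
        return self._is_prime


ctx = ModContext(101)
assert ctx.is_prime()  # computed here
assert ctx.is_prime()  # cached from now on
```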
That sounds reasonable; is that something we'd like in this PR? I've brought nmod_mpoly up to the same spec as fmpz_mod_mpoly. I've refrained from cramming them into the other nmod_poly and fmpz_mod_poly tests respectively just because of large API differences. Happy to have a crack at it, but it would require a little more than just adding a for loop + some lambdas.

Whoops, it looks like I missed that numpy isn't required here; I was just using it for 32-bit/64-bit dependent memoryviews in place of a malloc of ulongs.
That's up to you. If it makes things easier in this PR then go ahead; otherwise we can leave that to a separate PR.

More generally, I think that we want all types to have associated contexts so that there is a uniform interface like

We would also need contexts for all types for generic rings as well. I haven't looked enough into the generic rings interfaces to see if that should affect the overall design of contexts for the non-generic types.

For arb we need contexts that can hold non-global precision. Likewise for the number of terms in a series. It isn't possible for SymPy, mpmath etc. to make use of things like

If you want to add contexts for
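For a sense of what a uniform context interface could look like, here is a hypothetical sketch built on the `*_mpoly_ctx` pattern from this PR; the `fmpz_mpoly_ctx` call matches usage shown elsewhere in this thread, while the modular variant's signature is an assumption:

```python
from flint import fmpz_mpoly_ctx, Ordering

# A context acts as a factory for elements of its ring, so every type
# can be constructed the same way regardless of the underlying ring.
ctx = fmpz_mpoly_ctx(2, Ordering.lex, ["x", "y"])
p = ctx.from_dict({(1, 0): 1, (0, 1): 2})  # x + 2*y over Z

# A modular context could expose the same interface plus a modulus:
# ctx_p = fmpz_mod_mpoly_ctx(2, Ordering.lex, ["x", "y"], 101)  # hypothetical
```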
The test failure is
I think that is a test added in gh-168, so you may need to merge master to see the same failure locally.
Yep, that was it, thanks! Seems like CI was doing something and merging the PR before testing; that definitely threw me off for a minute.
I think something like that might be best suited to its own PR; this one only touches on
This seems like it could be done as a system that builds upon the existing contexts. It would have to use Python types rather than Cython types, as
Just from a quick skim they seem like quite the beast.
Would this be replacing the global
That is usually how CI works, at least for Travis and GitHub Actions. We were using Cirrus as well before, which did not seem to work like that. When you push to a PR (or close and open it) the CI system will make a temporary branch somewhere, effectively like:

```bash
git checkout master
git checkout -b tmp_branch
git merge pr_branch  # merge the PR into temporary copy of master
<run CI checks>
```

Sometimes you need to close and open a PR to get it to merge with latest master for CI (asking GA to "rerun" a job does not trigger a new merge). There are also a bunch of complicated access rules in GitHub, so if the PR modifies the GA workflow files then it might still run with the master version of the workflows, depending on the event type that triggers the workflow.
Looks like another possible memory error. I'll hunt that down tomorrow.
These are details that need to be worked out. For

The thread-safe part could be managed by using a thread-local variable in python-flint. I don't know what the overheads would be for that in the context of

The cleanest solution is to use separate contexts with context-local precision etc. settings. Then you can do e.g.:

```python
ctx = arb_ctx(53)
result = ctx.sin(1)
```

Then SymPy and mpmath can each make their own contexts and manipulate the precision separately. Of course, if SymPy makes this context a global variable and modifies its precision then we're back to not being thread-safe etc. In the case of something like the RR domain, though, we can attach a separate context to each domain object and we never need to modify its precision. Ideally creating this

The problem with having separate arb contexts everywhere is that then you need to do e.g.

Then we could ensure that any

Now the problem is what are you going to do when you have:

```python
In [1]: import mpmath

In [2]: ctx1 = mpmath.MPContext()

In [3]: ctx2 = mpmath.MPContext()

In [4]: ctx2.dps = 50

In [5]: ctx1.mpf(1) / ctx2.mpf(3)
Out[5]: mpf('0.33333333333333331')

In [6]: ctx2.mpf(1) / ctx1.mpf(3)
Out[6]: mpf('0.33333333333333333333333333333333333333333333333333311')
```

It basically just takes the context from the left-hand operand. I think that it would be better to say something like:
There are likely two very different groups of users here:
Perhaps a way to satisfy both groups is to say that implicit coercions between contexts are allowed only when mixing the global context with a non-global context (the global context can coerce all others). Then SymPy can never use the global context, but end users who do use it can take the output of SymPy functions and mix it into their own types.

Similar considerations apply to all of the other operations, like the number of terms in a series or how exactly everything gets printed. The problem with printing is you might have something like:
Now the list's
Again the ideal solution here is probably different for different groups of users. SymPy, for example, already has its own complicated printing machinery, so being able to call
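To make the coercion rule proposed above concrete, a minimal sketch of the dispatch logic (all names are hypothetical, not the python-flint API):

```python
GLOBAL_CTX = object()  # stand-in for the process-wide global context

def common_context(ctx1, ctx2):
    """Pick the context for a mixed-context binary operation.

    Implicit coercion is only allowed when one operand uses the global
    context; two distinct non-global contexts never mix implicitly.
    """
    if ctx1 is ctx2:
        return ctx1
    if ctx1 is GLOBAL_CTX:
        return ctx2
    if ctx2 is GLOBAL_CTX:
        return ctx1
    raise TypeError("cannot implicitly mix two non-global contexts")
```

Under this rule SymPy's private contexts would never collide silently with a user's, while values created under the global context still combine with everything.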
There should be some handling of equality checking against scalars:

```python
In [26]: flint.fmpz_poly([1]) == 1
Out[26]: True

In [27]: ctx = flint.fmpz_mpoly_ctx(2, flint.Ordering.lex, ['x', 'y'])

In [28]: ctx.from_dict({(0,0):1})
Out[28]: 1

In [29]: ctx.from_dict({(0,0):1}) == 1
Out[29]: False
```
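A sketch of the kind of scalar handling being requested, lifting the scalar into a constant polynomial in the same context before comparing (the helper names here are hypothetical):

```python
def mpoly_eq(self, other):
    # Same type and same context: compare the underlying data directly.
    if type(other) is type(self) and other.context() is self.context():
        return self._data_equal(other)  # hypothetical C-level compare
    # Scalars (int, fmpz, ...): lift to a constant mpoly in this context,
    # so ctx.from_dict({(0, 0): 1}) == 1 would become True.
    if isinstance(other, int):
        zero_exp = (0,) * self.context().nvars()
        return self._data_equal(self.context().from_dict({zero_exp: other}))
    return NotImplemented
```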
The suggested solution from the Cython maintainer is to do:

That is marginally better than using

It is also apparently considered to be a Cython bug that the code for

Cython should be expected to respect the signedness of the types but otherwise not make any assumptions about the actual size or whether it really is typedef'd to

It doesn't look like we actually use
Tests are failing, but only under Python 3.12. Specifically this test:

The output is definitely quite different from that expected in the test:

It looks like subs is the problem:

If we set

(`src/flint/types/fmpz_mod_mpoly.pyx`, lines 894 to 925 at 043ca9d)
In `src/flint/test/test_all.py` (outdated):

```python
if P is flint.fmpz_mod_mpoly or P is flint.nmod_mpoly:
    if is_field:
        assert (f * g).factor() == (S(8), [(mpoly({(0, 1): 1, (1, 0): 1}), 1), (f / 4, 1)])
    else:
        assert (f * g).factor() == (S(2), [(mpoly({(0, 1): 1, (1, 0): 1}), 1), (f, 1)])
```
If factor would raise an exception for non-prime modulus then that should be tested here, so that the test shows the behaviour in all cases.

Also, can these not be written with `g` in the output?
```python
characteristic_zero = not (P is flint.fmpz_mod_mpoly or P is flint.nmod_mpoly)

if characteristic_zero:
    # Primitive factors over Z for fmpz_mpoly and fmpq_mpoly
    assert (f * g).factor() == (S(2), [(g / 2, 1), (f, 1)])
elif is_field:
    # Monic polynomials over Z/pZ for nmod_mpoly and fmpz_mod_mpoly
    assert (f * g).factor() == (S(8), [(g / 2, 1), (f / 4, 1)])
else:
    # Factorisation not allowed over Z/nZ for n not prime.
    # Flint would abort so we raise an exception instead:
    assert raises(...)
```
One thing that I hadn't noticed until now is that `fmpq_mpoly.factor` behaves differently compared to `fmpq_poly.factor`. With `fmpq_poly` factors are always monic:

```python
In [29]: f = fmpq_poly([1, 4])

In [30]: g = fmpq_poly([2, 2])

In [31]: f
Out[31]: 4*x + 1

In [32]: g
Out[32]: 2*x + 2

In [33]: (f*g).factor()
Out[33]: (8, [(x + 1/4, 1), (x + 1, 1)])
```

However `fmpq_mpoly` gives factors primitive over Z:

```python
In [35]: ctx = fmpq_mpoly_ctx(1, Ordering.lex, ["x"])

In [36]: f = fmpq_mpoly('4*x + 1', ctx)

In [37]: g = fmpq_mpoly('2*x + 2', ctx)

In [38]: (f*g).factor()
Out[38]: (2, [(x + 1, 1), (4*x + 1, 1)])
```
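For what it's worth, both conventions encode the same factorisation of the same product, 8*(x + 1/4)*(x + 1) = 2*(x + 1)*(4*x + 1) = 8*x^2 + 10*x + 2; the only difference is how the constant is split between the leading coefficient and the factors.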
I'm wondering whether it makes sense to try to make this consistent somehow. One sort of consistency is that whenever we have a field all factors can be monic, which would suggest that `fmpq_mpoly` should also return monic factors.

On the other hand, it is computationally better for Q to use primitive factors and absorb any denominator into the constant factor, as `fmpq_mpoly` does, so we could change `fmpq_poly` to return primitive factors instead:
From `src/flint/types/fmpq_poly.pyx`, lines 396 to 416 at ba1a7fe:
```python
def factor(self):
    """
    Factors *self* into irreducible polynomials. Returns (*c*, *factors*)
    where *c* is the leading coefficient and *factors* is a list of
    (*poly*, *exp*) pairs with all *poly* monic.

    >>> fmpq_poly.legendre_p(5).factor()
    (63/8, [(x, 1), (x^4 + (-10/9)*x^2 + 5/21, 1)])
    >>> (fmpq_poly([1,-1],10) ** 5 * fmpq_poly([1,2,3],7)).factor()
    (-3/700000, [(x^2 + 2/3*x + 1/3, 1), (x + (-1), 5)])
    """
    c, fac = self.numer().factor()
    c = fmpq(c)
    for i in range(len(fac)):
        base, exp = fac[i]
        lead = base[base.degree()]
        base = fmpq_poly(base, lead)
        c *= lead ** exp
        fac[i] = (base, exp)
    return c / self.denom(), fac
```
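For comparison, a sketch of what a primitive-factors variant of `fmpq_poly.factor` might look like, relying on `fmpz_poly.factor` already returning primitive integer factors (hypothetical code, not something proposed in the PR):

```python
def factor_primitive(self):
    # self == numer/denom with numer an fmpz_poly whose factors are
    # already primitive over Z, so keep them as-is and fold the
    # denominator into the constant instead of making factors monic.
    c, fac = self.numer().factor()
    return fmpq(c) / self.denom(), [(fmpq_poly(base), exp) for base, exp in fac]
```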
In SymPy it is `fmpq_poly` that is inconsistent with SymPy's behaviour and needs to be special-cased to remove precisely the denominators that python-flint inserts: https://github.com/sympy/sympy/blob/31e957f47a636df9b7e62cd3951ec0afcc266369/sympy/polys/polyclasses.py#L2285-L2304
> I'm wondering whether it makes sense to try to make this consistent somehow. One sort of consistency is that whenever we have a field all factors can be monic, which would suggest that `fmpq_mpoly` should also return monic factors.
>
> On the other hand, it is computationally better for Q to use primitive factors and absorb any denominator into the constant factor, as `fmpq_mpoly` does, so we could change `fmpq_poly` to return primitive factors instead.
In my mind I would expect python-flint to be largely transparent about the results it returns from flint: if flint is returning primitive factors then python-flint should as well, if only to be consistent. I've seen your #189 PR and will take a look now.
I had a couple of comments about the tests, but generally I think this looks good and could be merged. Is there anything outstanding to do?
I don't believe so; nothing comes to mind. I will, however, open another PR to remove the
Okay, this looks good. Let's get it in!
This is nice because it means more of the mpoly types from Flint are covered now. The only ones missing, I think, are

Certainly these are all of the mpoly types that SymPy would currently use until we add
This PR aims to add support for the `fmpz_mod_mpoly` and `nmod_mpoly` flint objects. I'm using the recently added `fmpz_mpoly` as a basis; there may be remnants of the large copy-paste that have escaped my changes.

I haven't given the docs any more attention than a quick search-replace as of yet, so while they (and all tests locally) may be passing, I'll have to give them a once-over before I start on `nmod_mpoly`.