
AttributeError: module 'mpmath' has no attribute 'rational' #26273

Closed · Tracked by #26169
ravi160822 opened this issue Feb 25, 2024 · 54 comments

@ravi160822

File "/usr/local/lib/python3.11/site-packages/torch-2.0.0-py3.11-linux-x86_64.egg/torch/_guards.py", line 14, in <module>
    import sympy  # type: ignore[import]
File "/usr/local/lib/python3.11/site-packages/sympy-1.12-py3.11.egg/sympy/__init__.py", line 30, in <module>
    from sympy.core.cache import lazy_function
File "/usr/local/lib/python3.11/site-packages/sympy-1.12-py3.11.egg/sympy/core/__init__.py", line 9, in <module>
    from .expr import Expr, AtomicExpr, UnevaluatedExpr
File "/usr/local/lib/python3.11/site-packages/sympy-1.12-py3.11.egg/sympy/core/expr.py", line 4159, in <module>
    from .mul import Mul
File "/usr/local/lib/python3.11/site-packages/sympy-1.12-py3.11.egg/sympy/core/mul.py", line 2193, in <module>
    from .numbers import Rational
File "/usr/local/lib/python3.11/site-packages/sympy-1.12-py3.11.egg/sympy/core/numbers.py", line 4567, in <module>
    _sympy_converter[type(mpmath.rational.mpq(1, 2))] = sympify_mpmath_mpq
AttributeError: module 'mpmath' has no attribute 'rational'
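The failing line registers a converter keyed on mpmath's internal `mpq` type, and the 1.4.0a0 prerelease no longer exposes it as `mpmath.rational`. A minimal sketch of a defensive registration pattern — `register_mpq_converter`, `FakeMpq`, and the simulated module layouts are illustrative stand-ins, not SymPy's actual code:

```python
from types import SimpleNamespace

def register_mpq_converter(mpmath_mod, converter):
    """Register a converter keyed on mpmath's mpq type, but tolerate
    builds (like 1.4.0a0) where the `rational` submodule is gone."""
    mpq = getattr(getattr(mpmath_mod, "rational", None), "mpq", None)
    if mpq is None:
        return False  # nothing to register on this mpmath build
    converter[type(mpq(1, 2))] = str  # stand-in for sympify_mpmath_mpq
    return True

# Simulated module layouts: old mpmath exposes rational.mpq,
# the 1.4.0a0 prerelease does not.
class FakeMpq:
    def __init__(self, p, q):
        self.p, self.q = p, q

old_mpmath = SimpleNamespace(rational=SimpleNamespace(mpq=FakeMpq))
new_mpmath = SimpleNamespace()

converters = {}
print(register_mpq_converter(old_mpmath, converters))  # True
print(register_mpq_converter(new_mpmath, converters))  # False
```

The `getattr` chain degrades gracefully instead of raising `AttributeError` at import time, which is what takes down the whole `import torch` chain above.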

@oscarbenjamin
Collaborator

What version of mpmath are you using?

@ravi160822
Author

It seems the issue appeared after mpmath made a new release on 23-Feb-2024 (https://pypi.org/project/mpmath/#history). I have pinned the version back to 1.3.0 and it is working fine now.
But by default it installed 1.4.0a0.

@oscarbenjamin
Collaborator

That release is an alpha prerelease. It should only get installed if you opt in somehow, e.g. by passing --pre to pip.

How did you end up having mpmath 1.4.0a0?
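The distinction raised here is the core of the problem: pip's resolver skips pre-release versions by default, while easy_install (which `python setup.py develop` uses) picks the newest version including prereleases. A rough illustration of the selection rule — the hand-rolled version parsing is a simplification for the sketch; real tools implement PEP 440 (e.g. via the `packaging` library):

```python
import re

def pick_best(candidates, allow_prereleases=False):
    """Pick the highest version string, skipping pre-releases
    (a/b/rc suffix) unless the caller opts in -- roughly what
    pip does without --pre vs. easy_install's behavior."""
    def is_pre(v):
        return re.search(r"(a|b|rc)\d*$", v) is not None

    pool = [v for v in candidates if allow_prereleases or not is_pre(v)]
    # naive numeric ordering; fine for this illustration
    return max(pool, key=lambda v: [int(x) for x in re.findall(r"\d+", v)])

# pip's default skips 1.4.0a0; prerelease opt-in picks it up
print(pick_best(["1.2.1", "1.3.0", "1.4.0a0"]))                          # 1.3.0
print(pick_best(["1.2.1", "1.3.0", "1.4.0a0"], allow_prereleases=True))  # 1.4.0a0
```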

@WeizhuoZhang-intel

In the PyTorch installation step, using python setup.py develop to build PyTorch will install mpmath 1.4.0a0.

#13 666.7 Searching for mpmath>=0.19
#13 666.7 Reading https://pypi.org/simple/mpmath/
#13 666.7 Downloading https://files.pythonhosted.org/packages/bf/43/28116b6ad78fcf009071a12f702aeae6c9fbc2cfd814c5eefaacda1a52aa/mpmath-1.4.0a0-py3-none-any.whl#sha256=1b60ef0e9ea0e13e11580e31dc0c47284b9a26b1384bcef577ddda5e29b2836b
#13 666.8 Best match: mpmath 1.4.0a0
#13 666.8 Processing mpmath-1.4.0a0-py3-none-any.whl
#13 666.8 Installing mpmath-1.4.0a0-py3-none-any.whl to /opt/conda/lib/python3.8/site-packages
#13 666.8 Adding mpmath 1.4.0a0 to easy-install.pth file
#13 666.8 detected new path './MarkupSafe-2.1.5-py3.8-linux-x86_64.egg'
#13 666.8 
#13 666.8 Installed /opt/conda/lib/python3.8/site-packages/mpmath-1.4.0a0-py3.8.egg
#13 666.8 Searching for typing-extensions==4.9.0
#13 666.8 Best match: typing-extensions 4.9.0
#13 666.8 Adding typing-extensions 4.9.0 to easy-install.pth file
#13 666.8 detected new path './mpmath-1.4.0a0-py3.8.egg'
#13 666.8 
#13 666.8 Using /opt/conda/lib/python3.8/site-packages
#13 666.8 Finished processing dependencies for torch==2.3.0a0+git53f5035

@ravi160822
Author

ravi160822 commented Feb 26, 2024

Yes, same here: I installed torch and did not specifically install mpmath; it took the latest one by default, and import torch failed.

Now I have pinned mpmath==1.3.0 and it is working fine, but someone else might also face the same issue.
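The pin described in this comment can be written down as a dependency constraint so other users of the same environment don't hit the breakage (a sketch; the file name and the lower bound `>=0.19`, taken from the build log above, depend on your project's setup):

```
# requirements.txt: stay on the stable mpmath line until a fixed SymPy release
mpmath>=0.19,<1.4
```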

@oscarbenjamin
Collaborator

Which version of SymPy is being used and why is python setup.py develop being used?

The modern version of python setup.py develop is pip install -e .:
https://packaging.python.org/en/latest/discussions/setup-py-deprecated/

@oscarbenjamin
Collaborator

The AttributeError was fixed in gh-26269.

@oscarbenjamin
Collaborator

The modern version of python setup.py develop is pip install -e .:

Note that this does make a difference, because develop uses easy_install, which installs prereleases, whereas pip does not install prereleases by default.

Which version of SymPy is being used

I'm asking this because I can only presume that the reason for using develop is that sympy is being installed from git rather than from the PyPI wheels. If this is using the latest commit from git then it should already be fixed after gh-26269.

@WeizhuoZhang-intel

Which version of SymPy is being used and why is python setup.py develop being used?

The modern version of python setup.py develop is pip install -e .: https://packaging.python.org/en/latest/discussions/setup-py-deprecated/

python setup.py develop is used for building PyTorch itself, not SymPy. PyTorch will then install its dependencies, such as SymPy, through pip.

@oscarbenjamin
Collaborator

Pytorch will install its dependencies through pip, such as SymPy

Then why is the mpmath pre-release being installed? It won't be installed by pip unless --pre is being passed.

SebastianAment added a commit to SebastianAment/botorch that referenced this issue Feb 26, 2024
Summary:
This commit restricts `mpmath` to versions `<=1.3`, since the latest alpha release breaks `sympy` -> `pytorch` -> `gpytorch` -> `botorch`.

See also sympy/sympy#26273 and [the commit removing `rational` from `mpmath`](mpmath/mpmath@721e257).

Reviewed By: Balandat

Differential Revision: D54214691
facebook-github-bot pushed a commit to pytorch/botorch that referenced this issue Feb 26, 2024
stefanpricopie pushed a commit to stefanpricopie/botorch that referenced this issue Feb 27, 2024
github-merge-queue bot pushed a commit to openvinotoolkit/openvino that referenced this issue Mar 21, 2024
@ezyang
Contributor

ezyang commented Mar 26, 2024

@rgommers @asmeurer can sympy make a release to work with the newest mpmath? It seems like it's affecting many projects in the DL ecosystem.

@oscarbenjamin
Copy link
Collaborator

We are planning a sympy release ASAP but there are a few issues that need to be resolved first.

Those projects in the DL ecosystem should fix their build configuration in any case though. It is not the newest mpmath but rather a prerelease of mpmath (1.4 alpha 0) that causes the breakage. If you always install the latest prereleases of all transitive dependencies then this kind of breakage should be expected.

Any projects using python setup.py develop should change to using pip install -e . which would not install the mpmath prerelease. Using python setup.py develop is deprecated:
https://packaging.python.org/en/latest/discussions/setup-py-deprecated/

@oscarbenjamin oscarbenjamin added this to the SymPy 1.13 milestone Mar 26, 2024
Shubham-Sahoo added a commit to Shubham-Sahoo/openvino that referenced this issue Mar 26, 2024
constraints openvino-dev: Limit mpmath<1.4 (openvinotoolkit#23601)
- Limit mpmath because of pytorch/pytorch#120995 and sympy/sympy#26273

precision we have to fall back to f32 in/out precision
due to lack of support for decompression with bf16 avx2_vnni_2 in oneDNN
fork.
- To cover this limitation an additional type mapping parameter in form
of std::function was introduced for disabling particular type mapping
entry using a runtime check (isa support in this case)

 - 122347
 - 136163

Merged master changes

Update src/frontends/tensorflow_common/src/op/gelu.cpp

updated approximation access
@rgommers

rgommers commented Mar 26, 2024

This does indeed look pretty disruptive, so a SymPy release soon will be quite helpful (I'm not a SymPy dev, so I'll leave that part to @oscarbenjamin and @asmeurer).

There are a couple of issues on the PyTorch side that make it way too likely that pre-releases of dependencies get pulled in. I commented on that in pytorch/pytorch#120995 (comment). Fixing that up should mitigate the problem quickly, because it's all about building from source and installing nightlies with --pre - current released versions of PyTorch don't have a problem.

@rgommers

@moorepants with all due respect, I think you misunderstand the problem just slightly here. If you add the upper bound, the only issues are hypothetical ones with some user who cannot use the versions of sympy and mpmath that were compatible at release time (why? some new feature in a dev version or the very latest mpmath that the user must really have, while at the same time an issue in latest sympy that prevents an upgrade?). Whereas adding the upper bound demonstrably prevents many real-world issues right now, as well as known incompatibilities that are certain to break `pip install sympy==1.12` with a future mpmath release, like Oscar pointed out.

I can tell you from long-time experience in managing the upper bounds that scipy puts on numpy every release: this works fine and prevents lots of problems, while causing very few issues. The reality is that if you have a transitive dependency as a package author, you have to carefully manage it. "don't use an upper bound" is an excellent default, but a "never add any bounds ever" stance is a little dogmatic and far from optimal.

@moorepants
Member

Thanks Ralf. I don't think I misunderstand this aspect, but I surely may be misunderstanding other things. I am very much concerned about the installation issues that may occur if an upper bound is added to dependencies of SymPy. My objection to having upper bound pins is specifically this class of issue: hypothetical in any given case, but regular and real in practice.

If we have patched SymPy 1.12 and SymPy current master to work with mpmath 1.4.0a0, then why would an upper bound be needed? If we haven't patched SymPy with the fix, then I can see how an upper bound could be used to ensure SymPy 1.12.1 works with mpmath 1.4.0a0 (and likely the upcoming 1.4).

Let's say mpmath 1.4 is released today. Then all prior released SymPy versions are broken when installed with mpmath 1.4.0. If I am a user of any of these prior releases of SymPy, then I would have to at least do something like `pip install sympy 'mpmath<1.4'` to have a working environment. As far as I can tell, that is what everyone who depends on pytorch nightly has already done. If SymPy never releases again, then SymPy users are stuck with mpmath <1.4 forever.

But now that mpmath 1.4 is out, SymPy can make a release that is compatible with mpmath 1.1.0 to and including 1.4. This is SymPy 1.12.1. Now as a user I can install SymPy with mpmath 1.4 (but only SymPy >= 1.12.1). We could also release this bugfix to as many prior versions of SymPy (1.11, 1.10, ...) as we desire so that users can use many versions of SymPy with mpmath 1.4. When we release SymPy 1.13, it also includes the patch, so it works with mpmath 1.1.0-1.4. As far as I understand, the issue would be solved at this point without any upper bound pins. (maybe my misunderstanding is here)

Now if our SymPy 1.12.1 has an upper bound of mpmath >=1.1.0,<=1.4 and a new mpmath 1.5 comes out, then I cannot install mpmath 1.5 with SymPy 1.12.1 even though it may work with SymPy 1.12.1. If we leave the upper bound off mpmath dependency, then this never occurs.

As far as having a tight release coupling between mpmath and sympy (like numpy and scipy), that's fine if we can arrange it, but I don't think that is currently the case. The recent changes were made to mpmath without deprecation (as far as I understand).

@oscarbenjamin
Collaborator

Again, if we are actually able to be on top of releases, including patch and backport releases, it doesn't matter a whole lot because we can fix whatever issues arise as they come up,

A new patch release cannot fix it so that pip install sympy==1.12 works because if the 1.12 release has no version caps then that cannot be changed retrospectively. Being on top of releases means that we can regularly update the version caps. If there are no version caps though then all of those releases will ultimately break once new enough versions of mpmath are available.

@moorepants
Member

A new patch release cannot fix it so that pip install sympy==1.12 works because if the 1.12 release has no version caps then that cannot be changed retrospectively.

Maybe this hits the difference in views on this. I would never expect that to work some time further in the future without carefully pinning sympy's dependencies in the install command, i.e. you have to type pip install sympy==1.12 mpmath==1.3.0.

@rgommers

Maybe this hits the difference in views on this. I would never expect that to work some time further in the future without carefully pinning sympy's dependencies in the install command, i.e. you have to type pip install sympy==1.12 mpmath==1.3.0.

I think that that is indeed the difference. I fully expect this to work, and my goal as a package author is to make sure it does. It can be done with few downsides.

Basically, the problem is: if sympy doesn't manage its runtime dependency on mpmath, then every package author and end user that depends on sympy will have to manage that dependency for themselves instead (unless they're never trying to use anything but the very latest sympy release, which isn't realistic).
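Concretely, "managing it themselves" means every downstream project carrying sympy's transitive constraint in its own pin files. A hypothetical downstream requirements.txt would look like:

```
# Hypothetical downstream requirements.txt
sympy==1.12
mpmath<1.4  # carried here because sympy 1.12's own metadata does not cap mpmath
```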

@oscarbenjamin
Collaborator

you have to type pip install sympy==1.12 mpmath==1.3.0.

How would you know in future which versions of mpmath are compatible with each sympy release if the sympy package metadata has no version caps?

Bear in mind that downstream projects might have larger dependency trees than just sympy and mpmath e.g. pytorch has something like 25 transitive dependencies.

@moorepants
Member

Basically, the problem is: if sympy doesn't manage its runtime dependency on mpmath, then every package author and end user that depends on sympy will have to do manage that dependency for themselves instead (unless they're never trying to use anything but the very latest sympy release, which isn't realistic).

Yes, absolutely correct, but only if you are trying to install very old versions. Every time I've tried to install old Python packages over the last 20 years, this is exactly what I have to do. I don't want to measure how much time I used to spend getting an old version of Plone running, even with the == pinning in zc.buildout. I would say it isn't realistic to expect old software to install without careful attention, even if the developer tried to future-proof it in some way. These upper bound pins are a relatively new phenomenon in Python land, with poetry and pushes for semantic versioning bringing them about.

How would you know in future which versions of mpmath are compatible with each sympy release if the sympy package metadata has no version caps?

You don't know this, at least not from any information in the source tarball because the code was released in real time before any future dependencies that may have broken it. You can usually deduce this from other historical artifacts and yes it is more painful if the dependency stack is deep.

I do agree that you can manage upper bound pins without the detrimental effects to users, but you really have to be on top of things because you are always leaving guaranteed-incompatible software out in the wild. Without upper bound pins you are only leaving might-be-incompatible software out in the wild. Oscar dismissed me posting Henry Schreiner's blog post earlier, but I don't really know how to articulate the rationale behind avoiding upper bounds better than what he wrote: https://iscinumpy.dev/post/bound-version-constraints/. Apologies if you'd already read it.

@oscarbenjamin
Collaborator

By the way it is possible to force install of versions that violate the dependency constraints using e.g. pip's --no-deps. For example on Python 3.8 you can do:

$ pip install --no-deps 'numpy==1.18' 'scipy==1.10'
Collecting numpy==1.18
  Using cached numpy-1.18.0-cp38-cp38-manylinux1_x86_64.whl.metadata (2.1 kB)
Collecting scipy==1.10
  Using cached scipy-1.10.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (58 kB)
Using cached numpy-1.18.0-cp38-cp38-manylinux1_x86_64.whl (20.6 MB)
Using cached scipy-1.10.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (34.5 MB)
Installing collected packages: scipy, numpy
Successfully installed numpy-1.18.0 scipy-1.10.0

Likewise uv has the concept of dependency overrides for this:
https://github.com/astral-sh/uv?tab=readme-ov-file#dependency-overrides
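For readers unfamiliar with the feature: an override lets a downstream project force a resolution that a package's dependency metadata would otherwise forbid. A sketch of what that might look like (illustrative; check the uv documentation for the current syntax):

```toml
# Hypothetical downstream pyproject.toml using uv's dependency overrides
[tool.uv]
# Force mpmath 1.3.0 even if some package's metadata caps it differently
override-dependencies = ["mpmath==1.3.0"]
```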

@moorepants
Member

The uv dependency overrides is a helpful feature. Nice idea. It helps avoid having to create things like this: https://pypi.org/project/poetry-relax/

@oscarbenjamin
Collaborator

Oscar dismissed me posting Henry Schreiner's blog post earlier, but I don't really know how to articulate the rationale behind avoiding upper bounds better than what he wrote: https://iscinumpy.dev/post/bound-version-constraints/. Apologies if you'd already read it.

I have read this excellent blog post by @henryiii before but I am not going to read the whole thing again right now because it is long. The blog post is aimed at a proliferation of speculative upper bounds as is encouraged by certain tools like poetry. That is absolutely not the situation here for us though.

Apart from Python itself SymPy has one hard dependency which is mpmath. The two libraries are tightly coupled having originally been a single codebase that was later split into two separate libraries. This is why I compare the situation to that of numpy and scipy which are also tightly coupled and as I understand it were originally a single codebase as well.

The section in the blog post titled "Upper limits are valid sometimes" is the relevant part here and a number of reasons are given for when upper bounds might be the right choice many of which apply to this situation:

  1. If a dependency is known to be broken, block out the broken version.

This is the situation for mpmath 1.4.0 alpha 0. It should not have mattered because it is a prerelease, but the pytorch situation shows that we probably still need version caps to exclude prereleases as well. Currently we have released sympy 1.12.1 alpha 1, which is compatible with mpmath 1.4.0 alpha 0 but still has no version cap (actually it has mpmath >= 0.19, which is wildly inaccurate). So the situation as it stands now is that an mpmath 1.4.0 alpha 1 release could bring back the problem that led to this issue being opened in the first place.

  2. If you know upstream is about to make a major change that is very likely to break your usage, you can cap

I am not sure if mpmath 1.4.0 would make major changes but I do see major changes on the horizon for mpmath, like mpf tuples becoming 3-tuples and everything being rebuilt around python-flint's arf type. Focusing purely on how to improve mpmath I would want to make a lot of changes that would break sympy's current use of it.

  3. You are releasing two or more libraries in sync with each other.

This is the situation with numpy and scipy and is somewhat similar to the situation with mpmath and sympy. They are not two unrelated projects and there is discussion and coordination. We will coordinate to ensure that the most recent releases of both projects are always compatible even if significantly older versions of one are not compatible with significantly newer versions of the other. We also can test supported versions of mpmath to keep any version constraints up to date.

  4. You depend on private internal details of a library.

Most of the mpmath functions that sympy uses are not part of mpmath's documented public API. For example the entire mpmath.libmp subpackage is not documented at all and all of SymPy's evalf is based on those functions rather than the public API based on the mpf type and MPContext.

  5. If you have a heavy dependency on a library, maybe cap

This is absolutely applicable to sympy and mpmath. I'm not sure if everyone understands exactly the extent to which core sympy functionality depends on mpmath because a lot of things that look like symbolic manipulation are actually based on evalf's robust numerics which depends entirely on mpmath. The mathematical definition of most sympy functions effectively lies in the mpmath codebase rather than sympy's.

It is important not to be over-zealous about something like "never use upper bound caps". It might be a good default message for many common situations but you always have to look on a case by case basis at these things. For the particular case of sympy and mpmath right now we absolutely need to have an upper version cap.
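For concreteness, the cap being argued for is a one-line constraint in the package metadata. A rough sketch for a project using pyproject.toml (illustrative only; not sympy's actual packaging files, which used setup.py at 1.12):

```toml
# Illustrative pyproject.toml fragment, not sympy's actual metadata
[project]
name = "sympy"
dependencies = [
    "mpmath >= 1.1.0, < 1.4",  # known-good range; raised with each sympy release
]
```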

It would be good for sympy to reduce its dependence on mpmath internals. Sympy and mpmath should coordinate to work out what should be considered the public stable API of mpmath going forwards and to try to ensure that sympy only depends on that (see e.g. mpmath/mpmath#704). However sympy depends on mpmath so heavily that I think we would always need upper bound version caps regardless. We got away without any formal version constraints in recent years just because mpmath was not being actively developed but that is not the case any more.

skirpichev added a commit to skirpichev/mpmath that referenced this issue Mar 30, 2024
@moorepants
Member

I don't appreciate continuing to be called dogmatic and overzealous. I do not believe I'm making personal quips in the other direction.

I understand that there may be a use for a dependency cap in some cases as per the blog post, but I don't see how it is here. If SymPy >= 1.12.1 is compatible with mpmath 1.1.0 to 1.4.0 alpha 0, then we are safe until mpmath makes new releases that includes new breaking changes without deprecation. What is the reason for the upper bound pin in this case? I described the scenario that is a negative consequence in this comment: #26273 (comment).

The fact that anyone here may be helping to develop mpmath and has intentions to make breaking changes in future releases is interesting and provides some insight. I also know that everyone here intends to really be on the ball about keeping these version pins correct and regularly updated, but proposing a new paradigm in which mpmath and SymPy have a coordinated co-release pattern and cycle is probably suitable for a SymPEP.

@oscarbenjamin
Collaborator

If SymPy >= 1.12.1 is compatible with mpmath 1.1.0 to 1.4.0 alpha 0, then we are safe until mpmath makes new releases that includes new breaking changes without deprecation. What is the reason for the upper bound pin in this case?

I have explained this repeatedly: SymPy 1.12.1 will not be compatible with all future versions of mpmath. There is a good chance that the next release of mpmath would break SymPy 1.12.1 or if not the next release then the one after that. This is not a hypothetical because we already saw significant breakage in the first alpha release and we know that changes are needed for mpmath to move forwards.

It has not been a problem in recent years just because mpmath was basically dormant and had almost no changes or releases. SymPy reaches too heavily into mpmath for anyone to expect or ask that mpmath never does anything that would break old SymPy versions. We cannot update the version constraint retroactively so if we put out any release now without version constraints then that release (pip install sympy==1.12.1) will be broken by some not too distant future mpmath release.

SymPy's version constraints should not even allow a situation in which SymPy is using deprecated functionality from mpmath because mpmath is so core to SymPy that this is like SymPy using its own deprecated functionality.

We can talk about a SymPEP for the future but if you are saying that there should be one before we add the version constraint then that is basically just blocking anything from happening. We should be putting out two SymPy releases 1.12.1 and 1.13 right now but both are now blocked if a SymPEP is needed first.

I have spent much more time on this than I wanted to and I am not going to write a SymPEP right now. I am also not going to put out any releases without the mpmath upper cap. I wouldn't have sunk time into any 1.12.x releases at all if it wasn't for this mpmath issue: we could be releasing 1.13 right now instead.

This is because of a hypothetical future scenario in which someone wants to use a new version of mpmath with an old version of sympy (which they would still be able to do anyway if they wanted). In order to support that hypothetical scenario we then break much more realistic scenarios like someone wanting to install a particular version of sympy and expecting it to work. Most importantly we leave ourselves at risk of sudden immediate breakage rather than being able to manage compatibility one release at a time.

I do not want to repeat the situation from Tuesday night where suddenly it becomes urgent to put out a new release and I sink a whole day of work at an inconvenient time: there would have been no urgent problem if the latest release of sympy had had a version cap on mpmath. I have done the necessary work to prevent this happening again in future but now others are blocking me from completing that. If this situation arises again in future then I expect others to handle it.

I am going to drop out of this discussion now. I am not putting out any releases but I am also not going to waste more time discussing this because I am repeating myself in the face of generic objections to upper caps that ignore all of the details of this situation.

@moorepants
Member

I think we just disagree about this then. Different perspectives give a different take on whether these version caps are helpful or not. Here it is helpful for you the developer, in this specific case, to put up future protections on the time you feel obliged to give if mpmath has sudden breaking changes. But a package distribution maintainer, where SymPy is one small part, may also lose a day of work because we introduce upper bound pins. I certainly have lost many days due to dealing with upper bound pins in the packaging, distribution, and maintaining work I do. We've never had upper pins on SymPy's dependencies and the approach of fixing SymPy and releasing if a dependency breaks it has served us well for many years without the negative consequences of upper bound pins.

Maybe it is time for SymPy to be a bit more formal about roles and responsibilities so that we can distribute the load among the core developers better. It should not be that Oscar has to bear the whole load with tight time pressure in such a situation.

@henryiii

The blog post is aimed at a proliferation of speculative upper bounds as is encouraged by certain tools like poetry. That is absolutely not the situation here for us though. ... The section in the blog post titled "Upper limits are valid sometimes" is the relevant part here and a number of reasons are given for when upper bounds might be the right choice many of which apply to this situation:

You are correct, the post is not meant to say you can't upper cap (except the Python version, which doesn't even work correctly if you upper cap) if you've thought about it and have a reason to. It does increase friction, but if there's a high chance the next version(s) will break you and you are willing to make releases shortly after the dependency does & keep the cap up to date, go ahead and cap.

@moorepants
Member

Thanks for the comment, Henry. I'm not fundamentally against upper pins. I personally use them when upstream code breaks my packages and I don't have time to actually fix the issue. I release a point release with a version cap, but with the intention to get rid of it as soon as I fix the issue. This seems to align with this from your post:

If you know upstream is about to make a major change that is very likely to break your usage, you can cap. But try to fix this as quickly as possible so you can remove the cap by the time they release. Possibly add development branch/release testing until this is resolved. ...

We learned of a breaking upstream change in the alpha release of mpmath (top of this issue). Oscar fixed the compatibility issue in our master branch and even released it already with SymPy 1.12.1 alpha. So this change will not break SymPy 1.12.1 or SymPy 1.13. So, if we've already fixed it, then a cap in the release of 1.12.1 does nothing but prevent future versions of mpmath from being installed alongside sympy 1.12.1.

Now as of several hours ago, mpmath has added the code back that was removed in their alpha release. So their second alpha will not even break SymPy 1.12, pointing towards mpmath 1.4 not breaking SymPy at all. We also, thanks to Oscar, have the compatibility fix in place for whenever mpmath removes the code after a deprecation cycle. Yet, it seems the intent is still to prevent people from installing mpmath>=1.4 alongside SymPy 1.12.1.

There are four points (I think) in support of upper pins on mpmath moving forward:

  • pip install sympy==<specific version number> should work "forever": if sympy is installed with the only required dependency with just those two packages in the virtual environment and we added upper pins to mpmath, then yes I agree this would work as far out as a 10 year horizon or so. But the second you need other packages in this environment, the picture starts to change, particularly if you want to install one of the 500 dependents of mpmath or work with the 100k repositories that use mpmath directly
  • mpmath may make changes that break sympy in upcoming releases: this has always been the case, but it just hasn't happened much, maybe it will happen more in the near future, but this doesn't change anything for us other than we need to be more on our toes
  • mpmath is tightly coupled with sympy: this is true but we don't control both repos or make tandem releases. that could change but it isn't the reality now
  • upper pins will reduce the chance we have to do an emergency fix like this past week: that is likely true

So it seems the tradeoff is this: either upper pins make `pip install sympy==<specific version>` more likely to keep working as the future progresses, or leaving them off lets people install specific versions of sympy alongside versions of mpmath that were released after it. If keeping the second option means that I have to type `pip install sympy==<specific version> mpmath==<compatible version>` instead of `pip install sympy==<specific version>` in the future, then that extra typing seems worth the gain of increased compatibility that no upper pins provide.

There are proponents that find the opposite preferable. I don't see a way to decide between the two scenarios because at that point it is opinion about who has to deal with any issues related to whether an upper bound pin was there. I'm of the opinion that we, the SymPy developers, should bear the burden and try to reduce it for the downstream users, library/application maintainers, and downstream packagers (which are many). I believe the introduction of upper pins puts the burden downstream of us more than not having them, as that is the problem I've encountered the most.

It's just opinion about who should deal with the issues, and I didn't do the work to fix the recent issues. My opinion lies with the issues I've had to deal with regarding upper bound pins as a user, a library developer dependent on SymPy, and a packager/distributor. So, given that it now seems like we are down to opinion, I have nothing more to say on this.

@oscarbenjamin
Collaborator

We learned of a breaking upstream change in the alpha release of mpmath (top of this issue).

We actually learned of it much earlier than that. I pushed a fix 8 months before the alpha release but it got transformed beyond recognition (#25290) so I had to fix it again a month ago (#26269). The second time I made sure to add CI checks.

dnkurek pushed a commit to dnkurek/openvino that referenced this issue Apr 8, 2024
bbielawx pushed a commit to bbielawx/openvino that referenced this issue Apr 12, 2024
alvoron pushed a commit to alvoron/openvino that referenced this issue Apr 29, 2024
@oscarbenjamin
Collaborator

@ezyang and other pytorch people is it going to cause a problem when SymPy puts out a 1.13.0rc1 release candidate?

The mpmath issue is resolved now. SymPy 1.12.1 has the mpmath < 1.4 requirement and also has the fix to work with latest mpmath master anyway. SymPy 1.13.0rc1 and master also both have the fix and 1.13.0rc1 will have the mpmath < 1.4 constraint as well.

There are a lot of changes though from SymPy 1.12 to 1.13:
https://github.com/sympy/sympy/wiki/Release-Notes-for-1.13

Normally I wouldn't worry about a prerelease breaking anything because the purpose of release candidates is to pick up early reports of downstream breakage but we don't expect them to be used in "production". The intended purpose of prereleases is that downstream dependents can have e.g. an optional CI job that tests against them or that some users/downstream will install and test out the prereleases when they are announced.

Basically I want to put out SymPy 1.13.0rc1 and then I am going on holiday and will not respond to any reports of problems for maybe a week. When prereleases are being used in the intended way that is fine because the whole idea is that you put them out and then wait some time to collect reports of problems. Here though it seems that somehow pytorch can be broken by SymPy putting out a prerelease because setup.py develop will apparently pull that prerelease in causing immediate problems for people who did not intend to test the prerelease.
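The ordering problem behind this can be sketched in a few lines: under PEP 440, 1.4.0a0 sorts above every 1.3.x release but below the final 1.4.0, so a resolver that is allowed to consider prereleases (e.g. via `--pre`) will prefer it over 1.3.0. A toy key function, not a real PEP 440 parser (real tools use the `packaging` library), illustrates the ordering:

```python
import re

# Toy sketch of PEP 440-style version ordering (handles only X.Y.Z plus
# an optional a/b/rc prerelease tag; real tools use the `packaging` library).
def version_key(v):
    m = re.fullmatch(r"(\d+(?:\.\d+)*)(?:(a|b|rc)(\d+))?", v)
    release = tuple(int(p) for p in m.group(1).split("."))
    if m.group(2):
        pre = ({"a": 0, "b": 1, "rc": 2}[m.group(2)], int(m.group(3)))
    else:
        pre = (3, 0)  # a final release sorts after any prerelease of itself
    return release + (pre,)

# 1.4.0a0 lies strictly between the last 1.3.x release and the final 1.4.0
assert version_key("1.3.0") < version_key("1.4.0a0") < version_key("1.4.0")
```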

Does pytorch still have a setup that will pull in prereleases of dependencies automatically?

Does pytorch have anything to constrain the SymPy version so that SymPy 1.13.x would not be picked up automatically?

Can someone test pytorch with current SymPy master (or the 1.13 release branch)?

I haven't put any prerelease on PyPI but you can install the 1.13 release branch from git with:

pip install git+https://github.com/sympy/sympy.git@1.13

I intend to push sympy 1.13.0rc1 to PyPI tomorrow and then I will be unavailable for at least a few days.

@oscarbenjamin
Collaborator

The mpmath issue is resolved now.

I'm going to close this issue as fixed.

Can someone test pytorch with current SymPy master (or the 1.13 release branch)?

you can install the 1.13 release branch from git with:
pip install git+https://github.com/sympy/sympy.git@1.13

Regardless of how pytorch handles prereleases the sympy 1.13 final release is also coming soon and so it would be good for someone to test with pytorch so that we can fix any issues before final release.
