
t-SNE optimization using scikit-learn-intelex #3279

Open · wants to merge 10 commits into base: main

removed hatch build (commit 8c4a56e)
Azure Pipelines / scverse.scanpy failed Oct 14, 2024 in 31m 21s

Build #20241014.5 had test failures

Details

Tests

  • Failed: 23 (0.26%)
  • Passed: 8,188 (92.89%)
  • Other: 604 (6.85%)
  • Total: 8,815
Code coverage

  • 9,603 of 12,505 lines covered (76.79%)

Annotations

Check failure on line 3708 in Build log

@azure-pipelines azure-pipelines / scverse.scanpy

Build log #L3708

Bash exited with code '1'.

Check failure on line 1 in tests/test_embedding.py::test_tsne[None]

tests/test_embedding.py::test_tsne[None]

key_added = None, key_obsm = 'X_tsne', key_uns = 'tsne'

    @pytest.mark.parametrize(
        ("key_added", "key_obsm", "key_uns"),
        [
            pytest.param(None, "X_tsne", "tsne", id="None"),
            pytest.param("custom_key", "custom_key", "custom_key", id="custom_key"),
        ],
    )
    def test_tsne(key_added: str | None, key_obsm: str, key_uns: str):
        pbmc = pbmc68k_reduced()[:200].copy()
    
>       euclidean1 = sc.tl.tsne(pbmc, metric="euclidean", copy=True)

tests/test_embedding.py:22: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/legacy_api_wrap/__init__.py:80: in fn_compatible
    return fn(*args_all, **kw)
/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/scanpy/tools/_tsne.py:167: in tsne
    from sklearnex import patch_sklearn,unpatch_sklearn
/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/sklearnex/__init__.py:20: in <module>
    from . import utils
/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/sklearnex/utils/__init__.py:17: in <module>
    from .validation import _assert_all_finite
/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/sklearnex/utils/validation.py:17: in <module>
    from daal4py.sklearn.utils.validation import _assert_all_finite
/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/daal4py/__init__.py:56: in <module>
    from . import mb, sklearn
/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/daal4py/mb/__init__.py:17: in <module>
    from .model_builders import GBTDAALBaseModel, convert_model
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    from typing import Literal, Optional
    
    import numpy as np
    
    import daal4py as d4p
    
    try:
        from pandas import DataFrame
        from pandas.core.dtypes.cast import find_common_type
    
        pandas_is_imported = True
    except (ImportError, ModuleNotFoundError):
        pandas_is_imported = False
    
>   from sklearn.utils.metaestimators import available_if
E   ImportError: cannot import name 'available_if' from 'sklearn.utils.metaestimators' (/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/sklearn/utils/metaestimators.py)

/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/daal4py/mb/model_builders.py:33: ImportError
Raw output
/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/daal4py/mb/model_builders.py:33: ImportError: cannot import name 'available_if' from 'sklearn.utils.metaestimators' (/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/sklearn/utils/metaestimators.py)
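The traceback shows that `scanpy/tools/_tsne.py:167` imports `sklearnex` unconditionally inside `tsne`, so an environment where the `sklearnex`/`daal4py` import chain breaks (as here) fails the whole call. A minimal sketch of guarding that optional dependency instead; `try_patch_sklearn` is an illustrative helper name, not scanpy's actual code:

```python
def try_patch_sklearn() -> bool:
    """Return True if sklearnex successfully patched scikit-learn,
    False if the optional accelerator cannot be imported."""
    try:
        # This is exactly the import chain that raises in the traceback above
        # when the installed scikit-learn is incompatible with daal4py.
        from sklearnex import patch_sklearn
    except ImportError:
        return False
    patch_sklearn()
    return True
```

With a guard like this, the t-SNE call could fall back to stock scikit-learn rather than raising `ImportError` mid-test.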

Check failure on line 1 in tests/test_embedding.py::test_tsne[custom_key]

tests/test_embedding.py::test_tsne[custom_key]

key_added = 'custom_key', key_obsm = 'custom_key', key_uns = 'custom_key'

    (traceback and raw output identical to test_tsne[None] above: ImportError at daal4py/mb/model_builders.py:33, cannot import name 'available_if' from 'sklearn.utils.metaestimators')
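`sklearn.utils.metaestimators.available_if` was added in scikit-learn 1.0, so this error suggests the Python 3.9 job resolved an older scikit-learn (e.g. a 0.24.x release) that daal4py cannot import against. A stdlib-only sketch of the kind of version gate that would surface the mismatch early; the helper and the 1.0 threshold are this note's assumptions, not daal4py code:

```python
def version_at_least(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically (pre-release tags ignored)."""
    def parse(v: str):
        return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())
    return parse(installed) >= parse(minimum)

# available_if exists from scikit-learn 1.0 on, so a 0.24.x install
# (plausible for this job) would fail the gate:
print(version_at_least("0.24.2", "1.0"))  # False
print(version_at_least("1.3.2", "1.0"))   # True
```

Pinning a compatible `scikit-learn >= 1.0` alongside `scikit-learn-intelex` in the optional dependency would avoid this class of CI failure.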

Check failure on line 1 in tests/test_logging.py::test_call_outputs[print_versions]

tests/test_logging.py::test_call_outputs[print_versions]

func = <function print_versions at 0x7f07396f1a60>

    @pytest.mark.parametrize(
        "func",
        [
            sc.logging.print_header,
            sc.logging.print_versions,
            sc.logging.print_version_and_date,
        ],
    )
    def test_call_outputs(func):
        """
        Tests that these functions print to stdout and don't error.
    
        Checks that https://github.com/scverse/scanpy/issues/1437 is fixed.
        """
        output_io = StringIO()
        with redirect_stdout(output_io):
>           func()

tests/test_logging.py:145: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/scanpy/logging.py:198: in print_versions
    session_info.show(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

na = True, os = True, cpu = False, jupyter = None, dependencies = True
std_lib = False, private = False, write_req_file = False, req_file_name = None
html = False
excludes = ['builtins', 'stdlib_list', 'importlib_metadata', '$coverage', '__future__', '__main__', ...]

    def show(na=True, os=True, cpu=False, jupyter=None, dependencies=None,
             std_lib=False, private=False, write_req_file=False, req_file_name=None,
             html=None, excludes=['builtins', 'stdlib_list']):
        '''
        Show version information for loaded modules in the current session,
        Python, and the OS.
    
        Parameters
        ----------
        na : bool
            Show module name even when no version number is found.
        os : bool
            Show OS information.
        cpu : bool
            Show number of logical CPU cores and info string (if available).
        jupyter : bool
            Show information about the jupyter environment. If `None`, output
            jupyter info only if inside a Jupyter notebook.
        dependencies : bool
            Show information about modules imported by the Python interpreter on
        startup and dependency modules imported via other modules. If `None`,
            dependency modules will be included in the HTML output under a
            <details> tag, and excluded from the printed output. Setting `na` to
        `False` could be helpful to reduce verbosity.
        std_lib : bool
            Show information for modules imported from the standard library.
            Tries to detect the Python version to compare with the corresponding
        standard library, falls back to Python 3.7 if the version cannot be
            detected.
        private : bool
            Show information for private modules.
        write_req_file: bool
            Create a pip-compatible text file that lists all the module versions.
            If `None`, write dependency files for Jupyter notebooks only. If the
            `notebook` module is installed, `session_info` can automatically find the
        notebook name and use it as a prefix for the requirements file. Otherwise
            it falls back to `session_info-requirements.txt`.
            This only writes explicitly imported modules.
        req_file_name : str
            Change the name of the requirements file.
        html: bool
            Format the output as HTML and collapse it in a <details> tag. If `None`,
            HTML will be used only if a Jupyter notebook environment is detected.
            Note that this will not be visible in notebooks shared on GitHub since
            they seemingly do not support the <details> tag. Requires IPython.
        excludes : list
            Do not show version information for these modules.
        '''
        # Exclude std lib packages
        if not std_lib:
            try:
                std_modules = stdlib_list(version=platform.python_version()[:-2])
            except ValueError:
                # Use 3.7 if the Python version cannot be found
                std_modules = stdlib_li
Raw output
/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/session_info/main.py:209: KeyError: 'daal4py'
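The `KeyError: 'daal4py'` indicates that `session_info` fails when it cannot resolve version metadata for the newly imported daal4py module, taking `print_versions` down with it. A hedged stdlib sketch of a metadata lookup that tolerates such misses; the function name is illustrative, not session_info's API:

```python
from importlib.metadata import PackageNotFoundError, version

def safe_version(dist_name: str) -> str:
    """Return an installed distribution's version, or 'unknown' instead of
    raising when metadata cannot be found (the failure mode hit above)."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return "unknown"

print(safe_version("definitely-not-installed-abc123"))  # unknown
```

Alternatively, per the docstring shown above, passing the problem module in `session_info.show(excludes=[...])` may be a workaround on the scanpy side.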

Check failure on line 1 in tests/test_plotting.py::test_scatterplots[pca-fn0]

tests/test_plotting.py::test_scatterplots[pca-fn0]

Test failed