
Releases: xcube-dev/xcube

1.0.3

29 Mar 10:30

Changes in 1.0.3

Some unit tests have been fixed; the regression was caused by a minor change
in the Python environment. The actual changes are therefore the same as in 1.0.2:

  • Bundled latest xcube-viewer 1.0.1.

  • xcube is now compatible with Python 3.10. (#583)

  • The Viewer.add_dataset() method of the xcube JupyterLab integration has been enhanced by two optional keyword arguments style and color_mappings that allow for a customized initial color mapping of dataset variables. The example notebook xcube-viewer-in-jl.ipynb has been updated to reflect the enhancement.
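
    A minimal usage sketch (the structure of color_mappings below is an
    assumption based on xcube viewer color-bar settings; adapt the variable
    name and value range to your data):

    import xarray as xr
    from xcube.webapi.viewer import Viewer

    dataset = xr.open_zarr("cube.zarr")  # placeholder dataset path
    viewer = Viewer()
    viewer.add_dataset(
        dataset,
        style="default",
        color_mappings={
            # variable name -> color-bar settings (assumed structure)
            "conc_chl": {"ColorBar": "viridis", "ValueRange": [0.0, 24.0]}
        },
    )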

  • Fixed an issue with the new xcube data store abfs for the Azure Blob filesystem. (#798)

Full Changelog

1.0.2

29 Mar 08:24

Changes in 1.0.2

  • Bundled latest xcube-viewer 1.0.1.

  • xcube is now compatible with Python 3.10. (#583)

  • The Viewer.add_dataset() method of the xcube JupyterLab integration has been enhanced by two optional keyword arguments style and color_mappings that allow for a customized initial color mapping of dataset variables. The example notebook xcube-viewer-in-jl.ipynb has been updated to reflect the enhancement.

  • Fixed an issue with the new xcube data store abfs for the Azure Blob filesystem. (#798)

Full Changelog: v1.0.1...v1.0.2

1.0.2.dev3

28 Mar 10:39
Pre-release

Changes in 1.0.2 (in development)

  • Includes xcube-viewer version 1.0.1-dev.2.

  • xcube is now compatible with Python 3.10. (#583)

  • The Viewer.add_dataset() method of the xcube JupyterLab integration
    has been enhanced by two optional keyword arguments style and
    color_mappings that allow for a customized initial color mapping
    of dataset variables. The example notebook
    xcube-viewer-in-jl.ipynb
    has been updated to reflect the enhancement.

  • Fixed an issue with the new xcube data store abfs
    for the Azure Blob filesystem. (#798)

Full Changelog: v1.0.2.dev2...v1.0.2.dev3

1.0.2.dev2

27 Mar 14:30
Pre-release

Changes in 1.0.2 (in development)

  • Includes xcube-viewer version 1.0.1-dev.1.

  • xcube is now compatible with Python 3.10. (#583)

  • The Viewer.add_dataset() method of the xcube JupyterLab integration
    has been enhanced by two optional keyword arguments style and
    color_mappings that allow for a customized initial color mapping
    of dataset variables. The example notebook
    xcube-viewer-in-jl.ipynb
    has been updated to reflect the enhancement.

  • Fixed an issue with the new xcube data store abfs
    for the Azure Blob filesystem. (#798)

Full Changelog: v1.0.2.dev1...v1.0.2.dev2

1.0.2.dev1

21 Mar 16:23
Pre-release

Changes in 1.0.2 (in development)

  • xcube is now compatible with Python 3.10. (#583)

  • The Viewer.add_dataset() method of the xcube JupyterLab integration
    has been enhanced by two optional keyword arguments style and
    color_mappings that allow for a customized initial color mapping
    of dataset variables. The example notebook
    xcube-viewer-in-jl.ipynb
    has been updated to reflect the enhancement.

  • Fixed an issue with the new xcube data store abfs
    for the Azure Blob filesystem. (#798)

1.0.1

15 Mar 15:54

Changes in 1.0.1

Fixes

  • Fixed a recurring issue where xcube server was unable to locate Python
    code downloaded from S3 when configuring dynamically computed datasets
    (configuration FileSystem: memory) or augmenting existing datasets
    with dynamically computed variables (configuration Augmentation). (#828)

1.0.1.dev1

15 Mar 12:49
Pre-release

Changes in 1.0.1 (in development)

Fixes

  • Fixed a recurring issue where xcube server was unable to locate Python
    code downloaded from S3 when configuring dynamically computed datasets
    (configuration FileSystem: memory) or augmenting existing datasets
    with dynamically computed variables (configuration Augmentation). (#828)

Full Changelog: v1.0.0...v1.0.1.dev1

1.0.0

10 Mar 17:13

Changes in 1.0.0

Enhancements

  • Added a catalog API compliant with STAC to
    xcube server. (#455)

    • It serves a single collection named "datacubes" whose items are the
      datasets published by the service.
    • The collection items make use of the STAC
      datacube extension.
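
    For a quick look at the new catalog (sketch only; the root path
    "/catalog" is an assumption here, check the server's generated API docs
    for the actual route):

    import requests

    catalog = requests.get("http://localhost:8080/catalog").json()
    for link in catalog.get("links", []):
        print(link.get("rel"), link.get("href"))
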
  • Simplified the cloud deployment of xcube server/viewer applications (#815).
    This has been achieved by the following new xcube server features:

    • Configuration files can now also be URLs, which allows
      provisioning from S3-compatible object storage.
      For example, it is now possible to invoke xcube server as follows:
      $ xcube serve --config s3://cyanoalert/xcube/demo.yaml ...
    • A new endpoint /viewer/config/{*path} allows
      for configuring the viewer accessible via endpoint /viewer.
      The actual source for the configuration items is configured by xcube
      server configuration using the new entry Viewer/Configuration/Path,
      for example:
      Viewer:
        Configuration:
          Path: s3://cyanoalert/xcube/viewer-config
    • A typical xcube server configuration comprises many paths, and
      relative paths of known configuration parameters are resolved against
      the base_dir configuration parameter. However, for values of
      parameters passed to user functions that represent paths in user code,
      this cannot be done automatically. For such situations, expressions
      can be used. An expression is any string between "${" and "}" in a
      configuration value. An expression can refer to the variables
      base_dir (a string) and ctx (the current server context, of type
      xcube.webapi.datasets.DatasetsContext), as well as to the function
      resolve_config_path(path), which makes a path absolute with
      respect to base_dir and normalizes it. For example:
      Augmentation:
        Path: augmentation/metadata.py
        Function: metadata:update_metadata
        InputParameters:
          bands_config: ${resolve_config_path("../common/bands.yaml")}
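
      For illustration, here is a hypothetical user function matching the
      example above; the exact signature xcube server expects is an
      assumption, as is the layout of bands.yaml:

      import xarray as xr
      import yaml

      def update_metadata(dataset: xr.Dataset,
                          bands_config: str = None) -> xr.Dataset:
          # bands_config arrives as an absolute, normalized path thanks to
          # ${resolve_config_path(...)} in the configuration above.
          with open(bands_config) as stream:
              bands = yaml.safe_load(stream) or {}
          dataset = dataset.copy()
          for var_name, attrs in bands.items():
              if var_name in dataset:
                  dataset[var_name].attrs.update(attrs)
          return dataset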
  • xcube's spatial resampling functions resample_in_space(),
    affine_transform_dataset(), and rectify_dataset() exported
    from module xcube.core.resampling now encode the target grid mapping
    into the resampled datasets. (#822)

    This new default behaviour can be switched off with the keyword argument
    encode_cf=False.
    The grid mapping name can be set with the keyword argument gm_name.
    If gm_name is not given, a grid mapping will not be encoded if
    all of the following conditions are true:

    • The target CRS is geographic;
    • The spatial dimension names are "lon" and "lat";
    • The spatial 1-D coordinate variables are named "lon" and "lat"
      and are evenly spaced.

    The encoding of the grid mapping is done according to CF conventions:

    • The CRS is encoded as attributes of a 0-D data variable named by
      gm_name.
    • All spatial data variables receive an attribute grid_mapping that is
      set to the value of gm_name.
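
    A minimal sketch of the new behaviour ("cube.zarr" is a placeholder;
    GridMapping.regular() and resample_in_space() are public xcube API):

    import xarray as xr
    from xcube.core.gridmapping import GridMapping
    from xcube.core.resampling import resample_in_space

    dataset = xr.open_zarr("cube.zarr")  # placeholder input cube
    target_gm = GridMapping.regular(size=(720, 360),
                                    xy_min=(-180.0, -90.0),
                                    xy_res=0.5,
                                    crs="EPSG:4326")

    # The target grid mapping is now encoded into the result by default;
    # pass encode_cf=False to switch this off, or gm_name to name the
    # 0-D grid mapping variable.
    resampled = resample_in_space(dataset, target_gm=target_gm, gm_name="crs")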
  • Added Notebook
    xcube-viewer-in-jl.ipynb
    that explains how xcube Viewer can now be utilised in JupyterLab
    using the new (still experimental) xcube JupyterLab extension
    xcube-jl-ext.
    The xcube-jl-ext package is also available on PyPI.

  • Updated example
    Notebook for CMEMS data store
    to reflect changes of parameter names that provide CMEMS API credentials.

  • Included support for the Azure Blob Storage filesystem by adding a new
    data store abfs. Many thanks to Ed!
    (#752)

    These changes enable access to data cubes (.zarr or .levels)
    in Azure Blob Storage, as shown here:

    from xcube.core.store import new_data_store

    store = new_data_store(
        "abfs",                    # Azure filesystem protocol
        root="my_blob_container",  # Azure blob container name
        storage_options={
            "anon": True,
            "account_name": "xxx",
            "account_key": "xxx",
            # Alternatively, use "connection_string": "xxx"
        },
    )
    store.list_data_ids()

    The equivalent configuration for xcube server:

    DataStores:
    - Identifier: siec
      StoreId: abfs
      StoreParams:
        root: my_blob_container
        max_depth: 1
        storage_options:
          anon: true
          account_name: "xxx"
          account_key: "xxx"
          # or
          # connection_string: "xxx"
      Datasets:
        - Path: "*.levels"
          Style: default
  • Added Notebook
    8_azure_blob_filesystem.ipynb.
    This notebook shows how a new data store instance can connect to and list
    Zarr files from Azure Blob Storage using the new abfs data store.

  • xcube's Dockerfile no longer creates a conda environment xcube.
    All dependencies are now installed into the base environment, making it
    easier to use the container as an executable for xcube applications.
    We are now also using a micromamba base image instead of miniconda.
    The result is a much faster build and smaller image size.

  • Added a new_cluster function to xcube.util.dask, which can create
    Dask clusters with various configuration options.
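
    Sketch only; new_cluster() takes various configuration options whose
    names are not listed in these notes, so none are shown:

    from dask.distributed import Client
    from xcube.util.dask import new_cluster

    cluster = new_cluster()   # cloud credentials/configuration may be required
    client = Client(cluster)  # run subsequent Dask computations on the cluster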

  • The xcube multi-level dataset specification has been enhanced. (#802)

    • When writing multi-level datasets (*.levels/) we now create a new
      JSON file .zlevels that contains the parameters used to create the
      dataset.
    • Added a new class xcube.core.mldataset.FsMultiLevelDataset that
      represents a multi-level dataset persisted to some filesystem, such as
      "file", "s3", or "memory". It can also write datasets to the filesystem.
  • Changed the behaviour of the class
    xcube.core.mldataset.CombinedMultiLevelDataset to do what we
    actually expect:
    if the keyword argument combiner_func is not given or None is passed,
    a copy of the first dataset is made and then subsequently updated
    by the remaining datasets using xarray.Dataset.update().
    The former default was xarray.merge(), which can eagerly load
    Dask array chunks into memory that are never released.
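
    A sketch of the new default, assuming ml_ds1 and ml_ds2 are existing
    MultiLevelDataset instances on compatible grids:

    from xcube.core.mldataset import CombinedMultiLevelDataset

    # Without combiner_func, a copy of the first dataset is updated by the
    # second via xarray.Dataset.update().
    combined = CombinedMultiLevelDataset([ml_ds1, ml_ds2])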

Fixes

  • Tiles of datasets with forward slashes in their identifiers
    (originating from nested directories) now display correctly again
    in xcube Viewer. Tile URLs had not been URL-encoded in such cases. (#817)

  • The xcube server configuration parameters url_prefix and
    reverse_url_prefix can now be absolute URLs. This fixes a problem for
    relative prefixes such as "proxy/8000" used for xcube server running
    inside JupyterLab. Here, the expected returned self-referencing URL was
    https://{host}/users/{user}/proxy/8000/{path} but we got
    http://{host}/proxy/8000/{path}. (#806)


Full Changelog: v0.13.0...v1.0.0

1.0.0.dev3

03 Mar 15:42
Pre-release

Changes in 1.0.0.dev3

Enhancements

  • Included support for the Azure Blob Storage filesystem by adding a new
    data store abfs. Many thanks to Ed!
    (#752)

    These changes enable access to data cubes (.zarr or .levels)
    in Azure Blob Storage, as shown here:

    from xcube.core.store import new_data_store

    store = new_data_store(
        "abfs",                    # Azure filesystem protocol
        root="my_blob_container",  # Azure blob container name
        storage_options={
            "anon": True,
            "account_name": "xxx",
            "account_key": "xxx",
            # Alternatively, use "connection_string": "xxx"
        },
    )
    store.list_data_ids()

    The equivalent configuration for xcube server:

    DataStores:
    - Identifier: siec
      StoreId: abfs
      StoreParams:
        root: my_blob_container
        max_depth: 1
        storage_options:
          anon: true
          account_name: "xxx"
          account_key: "xxx"
          # or
          # connection_string: "xxx"
      Datasets:
        - Path: "*.levels"
          Style: default
  • Added Notebook
    8_azure_blob_filesystem.ipynb.
    This notebook shows how a new data store instance can connect to and list
    Zarr files from Azure Blob Storage using the new abfs data store.

  • Added a catalog API compliant with STAC to
    xcube server. (#455)

    • It serves a single collection named "datacubes" whose items are the
      datasets published by the service.
    • The collection items make use of the STAC
      datacube extension.
  • Simplified the cloud deployment of xcube server/viewer applications (#815).
    This has been achieved by the following new xcube server features:

    • Configuration files can now also be URLs, which allows
      provisioning from S3-compatible object storage.
      For example, it is now possible to invoke xcube server as follows:
      $ xcube serve --config s3://cyanoalert/xcube/demo.yaml ...
    • A new endpoint /viewer/config/{*path} allows
      for configuring the viewer accessible via endpoint /viewer.
      The actual source for the configuration items is configured by xcube
      server configuration using the new entry Viewer/Configuration/Path,
      for example:
      Viewer:
        Configuration:
          Path: s3://cyanoalert/xcube/viewer/ 
    • A typical xcube server configuration comprises many paths, and
      relative paths of known configuration parameters are resolved against
      the base_dir configuration parameter. However, for values of
      parameters passed to user functions that represent paths in user code,
      this cannot be done automatically. For such situations, expressions
      can be used. An expression is any string between "${" and "}" in a
      configuration value. An expression can refer to the variables
      base_dir (a string) and ctx (the current server context, of type
      xcube.webapi.datasets.DatasetsContext), as well as to the function
      resolve_config_path(path), which makes a path absolute with
      respect to base_dir and normalizes it. For example:
      Augmentation:
        Path: augmentation/metadata.py
        Function: metadata:update_metadata
        InputParameters:
          bands_config: ${resolve_config_path("../common/bands.yaml")}
  • xcube's Dockerfile no longer creates a conda environment xcube.
    All dependencies are now installed into the base environment, making it
    easier to use the container as an executable for xcube applications.
    We are now also using a micromamba base image instead of miniconda.
    The result is a much faster build and smaller image size.

  • Added a new_cluster function to xcube.util.dask, which can create
    Dask clusters with various configuration options.

  • The xcube multi-level dataset specification has been enhanced. (#802)

    • When writing multi-level datasets (*.levels/) we now create a new
      JSON file .zlevels that contains the parameters used to create the
      dataset.
    • Added a new class xcube.core.mldataset.FsMultiLevelDataset that
      represents a multi-level dataset persisted to some filesystem, such as
      "file", "s3", or "memory". It can also write datasets to the filesystem.
  • Changed the behaviour of the class
    xcube.core.mldataset.CombinedMultiLevelDataset to do what we
    actually expect:
    if the keyword argument combiner_func is not given or None is passed,
    a copy of the first dataset is made and then subsequently updated
    by the remaining datasets using xarray.Dataset.update().
    The former default was xarray.merge(), which can eagerly load
    Dask array chunks into memory that are never released.

Fixes

  • Tiles of datasets with forward slashes in their identifiers
    (originating from nested directories) now display correctly again
    in xcube Viewer. Tile URLs had not been URL-encoded in such cases. (#817)

  • The xcube server configuration parameters url_prefix and
    reverse_url_prefix can now be absolute URLs. This fixes a problem for
    relative prefixes such as "proxy/8000" used for xcube server running
    inside JupyterLab. Here, the expected returned self-referencing URL was
    https://{host}/users/{user}/proxy/8000/{path} but we got
    http://{host}/proxy/8000/{path}. (#806)


Full Changelog: v1.0.0.dev2...v1.0.0.dev3

1.0.0.dev2

02 Mar 17:19
Pre-release

Changes in 1.0.0.dev2

Enhancements

  • Added a catalog API compliant with STAC to
    xcube server.
    It serves a single collection named "datasets" whose items are the
    datasets published by the service. (#455)

  • Simplified the cloud deployment of xcube server/viewer applications (#815).
    This has been achieved by the following new xcube server features:

    • Configuration files can now also be URLs, which allows
      provisioning from S3-compatible object storage.
      For example, it is now possible to invoke xcube server as follows:
      $ xcube serve --config s3://cyanoalert/xcube/demo.yaml ...
    • A new endpoint /viewer/config/{*path} allows
      for configuring the viewer accessible via endpoint /viewer.
      The actual source for the configuration items is configured by xcube
      server configuration using the new entry Viewer/Configuration/Path,
      for example:
      Viewer:
        Configuration:
          Path: s3://cyanoalert/xcube/viewer/ 
    • A typical xcube server configuration comprises many paths, and
      relative paths of known configuration parameters are resolved against
      the base_dir configuration parameter. However, for values of
      parameters passed to user functions that represent paths in user code,
      this cannot be done automatically. For such situations, expressions
      can be used. An expression is any string between "${" and "}" in a
      configuration value. An expression can refer to the variables
      base_dir (a string) and ctx (the current server context, of type
      xcube.webapi.datasets.DatasetsContext), as well as to the function
      resolve_config_path(path), which makes a path absolute with
      respect to base_dir and normalizes it. For example:
      Augmentation:
        Path: augmentation/metadata.py
        Function: metadata:update_metadata
        InputParameters:
          bands_config: ${resolve_config_path("../common/bands.yaml")}
  • xcube's Dockerfile no longer creates a conda environment xcube.
    All dependencies are now installed into the base environment, making it
    easier to use the container as an executable for xcube applications.
    We are now also using a micromamba base image instead of miniconda.
    The result is a much faster build and smaller image size.

  • Added a new_cluster function to xcube.util.dask, which can create
    Dask clusters with various configuration options.

  • The xcube multi-level dataset specification has been enhanced. (#802)

    • When writing multi-level datasets (*.levels/) we now create a new
      JSON file .zlevels that contains the parameters used to create the
      dataset.
    • Added a new class xcube.core.mldataset.FsMultiLevelDataset that
      represents a multi-level dataset persisted to some filesystem, such as
      "file", "s3", or "memory". It can also write datasets to the filesystem.
  • Changed the behaviour of the class
    xcube.core.mldataset.CombinedMultiLevelDataset to do what we
    actually expect:
    if the keyword argument combiner_func is not given or None is passed,
    a copy of the first dataset is made and then subsequently updated
    by the remaining datasets using xarray.Dataset.update().
    The former default was xarray.merge(), which can eagerly load
    Dask array chunks into memory that are never released.

Fixes

  • Tiles of datasets with forward slashes in their identifiers
    (originating from nested directories) now display correctly again
    in xcube Viewer. Tile URLs had not been URL-encoded in such cases. (#817)

  • The xcube server configuration parameters url_prefix and
    reverse_url_prefix can now be absolute URLs. This fixes a problem for
    relative prefixes such as "proxy/8000" used for xcube server running
    inside JupyterLab. Here, the expected returned self-referencing URL was
    https://{host}/users/{user}/proxy/8000/{path} but we got
    http://{host}/proxy/8000/{path}. (#806)

Full Changelog: v1.0.0.dev1...v1.0.0.dev2