# Releases: xcube-dev/xcube
## 1.0.3

### Changes in 1.0.3
Some unit tests have been fixed; the regression was caused by a minor change
in the Python environment. The actual changes are therefore the same as in
1.0.2:
- Bundled latest xcube-viewer 1.0.1.
- xcube is now compatible with Python 3.10. (#583)
- The `Viewer.add_dataset()` method of the xcube JupyterLab integration
  has been enhanced by two optional keyword arguments, `style` and
  `color_mappings`, to allow for a customized initial color mapping of
  dataset variables. The example notebook `xcube-viewer-in-jl.ipynb` has
  been updated to reflect the enhancement. (A usage sketch follows this
  list.)
- Fixed an issue with the new xcube data store `abfs` for the Azure Blob
  filesystem. (#798)
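
A minimal usage sketch of the enhanced method follows. The import path is
the one used by the xcube JupyterLab integration; the dataset path, the
variable name `chl`, and the exact `color_mappings` schema are illustrative
assumptions, not confirmed API details.

```python
# Hedged sketch: pass a dataset to the embedded viewer with a custom
# initial color mapping. "my-cube.zarr" and "chl" are placeholders.
import xarray as xr

from xcube.webapi.viewer import Viewer

viewer = Viewer()
dataset = xr.open_zarr("my-cube.zarr")  # placeholder dataset path

viewer.add_dataset(
    dataset,
    style="default",
    color_mappings={
        "chl": {                   # variable name (assumed)
            "ColorBar": "plasma",  # matplotlib colormap name
            "ValueRange": [0.0, 24.0],
        }
    },
)
```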
Full Changelog
## 1.0.2

### Changes in 1.0.2
- Bundled latest xcube-viewer 1.0.1.
- xcube is now compatible with Python 3.10. (#583)
- The `Viewer.add_dataset()` method of the xcube JupyterLab integration
  has been enhanced by two optional keyword arguments, `style` and
  `color_mappings`, to allow for a customized initial color mapping of
  dataset variables. The example notebook `xcube-viewer-in-jl.ipynb` has
  been updated to reflect the enhancement.
- Fixed an issue with the new xcube data store `abfs` for the Azure Blob
  filesystem. (#798)
Full Changelog: v1.0.1...v1.0.2
## 1.0.2.dev3

### Changes in 1.0.2 (in development)
- Includes xcube-viewer version 1.0.1-dev.2.
- xcube is now compatible with Python 3.10. (#583)
- The `Viewer.add_dataset()` method of the xcube JupyterLab integration
  has been enhanced by two optional keyword arguments, `style` and
  `color_mappings`, to allow for a customized initial color mapping of
  dataset variables. The example notebook `xcube-viewer-in-jl.ipynb` has
  been updated to reflect the enhancement.
- Fixed an issue with the new xcube data store `abfs` for the Azure Blob
  filesystem. (#798)
Full Changelog: v1.0.2.dev2...v1.0.2.dev3
## 1.0.2.dev2

### Changes in 1.0.2 (in development)
- Includes xcube-viewer version 1.0.1-dev.1.
- xcube is now compatible with Python 3.10. (#583)
- The `Viewer.add_dataset()` method of the xcube JupyterLab integration
  has been enhanced by two optional keyword arguments, `style` and
  `color_mappings`, to allow for a customized initial color mapping of
  dataset variables. The example notebook `xcube-viewer-in-jl.ipynb` has
  been updated to reflect the enhancement.
- Fixed an issue with the new xcube data store `abfs` for the Azure Blob
  filesystem. (#798)
Full Changelog: v1.0.2.dev1...v1.0.2.dev2
## 1.0.2.dev1

### Changes in 1.0.2 (in development)
- xcube is now compatible with Python 3.10. (#583)
- The `Viewer.add_dataset()` method of the xcube JupyterLab integration
  has been enhanced by two optional keyword arguments, `style` and
  `color_mappings`, to allow for a customized initial color mapping of
  dataset variables. The example notebook `xcube-viewer-in-jl.ipynb` has
  been updated to reflect the enhancement.
- Fixed an issue with the new xcube data store `abfs` for the Azure Blob
  filesystem. (#798)
## 1.0.1

### Changes in 1.0.1

#### Fixes
- Fixed a recurring issue where xcube server was unable to locate Python
  code downloaded from S3 when configuring dynamically computed datasets
  (configuration `FileSystem: memory`) or augmenting existing datasets
  with dynamically computed variables (configuration `Augmentation`).
  (#828) (See the configuration sketch below.)
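
For context, the two configuration shapes affected by this fix look roughly
as follows. This is a hedged sketch: all identifiers, paths, and
`module:function` references are placeholders, not values from the release
notes.

```yaml
# Hypothetical xcube server configuration fragment (placeholders only).
Datasets:
  # Dataset computed dynamically from Python code; the code may be
  # downloaded from S3 when the server starts.
  - Identifier: computed
    FileSystem: memory
    Path: compute_dataset.py            # placeholder script
    Function: compute_dataset:compute   # placeholder module:function
    InputDatasets: ["source"]
  # Existing dataset augmented by dynamically computed variables.
  - Identifier: source
    FileSystem: s3
    Path: my-bucket/my-cube.zarr        # placeholder path
    Augmentation:
      Path: augment.py                  # placeholder script
      Function: augment:add_variables   # placeholder module:function
```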
## 1.0.1.dev1

### Changes in 1.0.1 (in development)

#### Fixes
- Fixed a recurring issue where xcube server was unable to locate Python
  code downloaded from S3 when configuring dynamically computed datasets
  (configuration `FileSystem: memory`) or augmenting existing datasets
  with dynamically computed variables (configuration `Augmentation`).
  (#828)
Full Changelog: v1.0.0...v1.0.1.dev1
## 1.0.0

### Changes in 1.0.0

#### Enhancements
- Added a catalog API compliant to STAC to xcube server. (#455)
  - It serves a single collection named "datacubes" whose items are the
    datasets published by the service.
  - The collection items make use of the STAC datacube extension.
- Simplified the cloud deployment of xcube server/viewer applications
  (#815). This has been achieved by the following new xcube server
  features:
  - Configuration files can now also be URLs, which allows
    provisioning from S3-compatible object storage.
    For example, it is now possible to invoke xcube server as follows:

    ```bash
    $ xcube serve --config s3://cyanoalert/xcube/demo.yaml ...
    ```

  - A new endpoint `/viewer/config/{*path}` allows for configuring the
    viewer accessible via endpoint `/viewer`.
    The actual source for the configuration items is configured by the
    xcube server configuration using the new entry
    `Viewer/Configuration/Path`, for example:

    ```yaml
    Viewer:
      Configuration:
        Path: s3://cyanoalert/xcube/viewer-config
    ```

  - A typical xcube server configuration comprises many paths, and
    relative paths of known configuration parameters are resolved
    against the `base_dir` configuration parameter. However, for values
    of parameters passed to user functions that represent paths in user
    code, this cannot be done automatically. For such situations,
    expressions can be used. An expression is any string between `"${"`
    and `"}"` in a configuration value. An expression can contain the
    variables `base_dir` (a string) and `ctx`, the current server
    context (type `xcube.webapi.datasets.DatasetsContext`), as well as
    the function `resolve_config_path(path)`, which is used to make a
    path absolute with respect to `base_dir` and to normalize it.
    For example:

    ```yaml
    Augmentation:
      Path: augmentation/metadata.py
      Function: metadata:update_metadata
      InputParameters:
        bands_config: ${resolve_config_path("../common/bands.yaml")}
    ```
- xcube's spatial resampling functions `resample_in_space()`,
  `affine_transform_dataset()`, and `rectify_dataset()` exported from
  module `xcube.core.resampling` now encode the target grid mapping into
  the resampled datasets. (#822)
  This new default behaviour can be switched off with the keyword
  argument `encode_cf=False`. The grid mapping name can be set by the
  keyword argument `gm_name`. If `gm_name` is not given, a grid mapping
  will not be encoded if all of the following conditions are true:
  - the target CRS is geographic;
  - the spatial dimension names are "lon" and "lat";
  - the spatial 1-D coordinate variables are named "lon" and "lat"
    and are evenly spaced.

  The encoding of the grid mapping is done according to CF conventions:
  - the CRS is encoded as attributes of a 0-D data variable named by
    `gm_name`;
  - all spatial data variables receive an attribute `grid_mapping` that
    is set to the value of `gm_name`.

  (A usage sketch of the new keyword arguments follows this list.)
- Added Notebook `xcube-viewer-in-jl.ipynb`, which explains how xcube
  Viewer can now be utilised in JupyterLab using the new (still
  experimental) xcube JupyterLab extension xcube-jl-ext.
  The `xcube-jl-ext` package is also available on PyPI.
- Updated the example Notebook for the CMEMS data store to reflect
  changes of the parameter names that provide CMEMS API credentials.
- Included support for the Azure Blob Storage filesystem by adding a new
  data store `abfs`. Many thanks to Ed! (#752)
  These changes enable access to data cubes (`.zarr` or `.levels`)
  in Azure Blob Storage as shown here:

  ```python
  store = new_data_store(
      "abfs",                    # Azure filesystem protocol
      root="my_blob_container",  # Azure blob container name
      storage_options={
          'anon': True,
          # Alternatively, use 'connection_string': 'xxx'
          'account_name': 'xxx',
          'account_key': 'xxx',
      },
  )
  store.list_data_ids()
  ```

  The same configuration for xcube Server:

  ```yaml
  DataStores:
    - Identifier: siec
      StoreId: abfs
      StoreParams:
        root: my_blob_container
        max_depth: 1
        storage_options:
          anon: true
          account_name: "xxx"
          account_key: "xxx"
          # or
          # connection_string: "xxx"
      Datasets:
        - Path: "*.levels"
          Style: default
  ```
- Added Notebook `8_azure_blob_filesystem.ipynb`. This notebook shows how
  a new data store instance can connect to and list Zarr files from Azure
  Blob Storage using the new `abfs` data store.
- xcube's `Dockerfile` no longer creates a conda environment `xcube`.
  All dependencies are now installed into the `base` environment, making
  it easier to use the container as an executable for xcube applications.
  We are now also using a `micromamba` base image instead of `miniconda`.
  The result is a much faster build and a smaller image size.
- Added a `new_cluster` function to `xcube.util.dask`, which can create
  Dask clusters with various configuration options. (A usage sketch
  follows this list.)
- The xcube multi-level dataset specification has been enhanced. (#802)
  - When writing multi-level datasets (`*.levels/`), we now create a new
    JSON file `.zlevels` that contains the parameters used to create the
    dataset.
  - A new class `xcube.core.mldataset.FsMultiLevelDataset` represents
    a multi-level dataset persisted to some filesystem, like
    "file", "s3", or "memory". It can also write datasets to the
    filesystem.
- Changed the behaviour of the class
  `xcube.core.mldataset.CombinedMultiLevelDataset` to do what we
  actually expect: if the keyword argument `combiner_func` is not given
  or `None` is passed, a copy of the first dataset is made, which is
  then subsequently updated by the remaining datasets using
  `xarray.Dataset.update()`.
  The former default was `xarray.merge()`, which for some reason can
  eagerly load Dask array chunks into memory that won't be released.
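
To make the new grid-mapping encoding concrete, here is a hedged usage
sketch of `resample_in_space()`. The source dataset, the target grid
mapping, and the grid mapping name `"crs"` are assumptions for
illustration only.

```python
# Sketch: control grid-mapping encoding when resampling in space.
import xarray as xr

from xcube.core.gridmapping import GridMapping
from xcube.core.resampling import resample_in_space

source_ds = xr.open_zarr("source-cube.zarr")  # placeholder path

# Hypothetical target grid: global geographic grid at 0.5 degrees.
target_gm = GridMapping.regular(
    size=(720, 360), xy_min=(-180.0, -90.0), xy_res=0.5, crs="EPSG:4326"
)

# New default: the target grid mapping is encoded into the result.
# Passing gm_name forces encoding even for a geographic lon/lat grid.
resampled = resample_in_space(source_ds, target_gm=target_gm, gm_name="crs")

# Opt out of the new default behaviour:
plain = resample_in_space(source_ds, target_gm=target_gm, encode_cf=False)
```

Likewise, a minimal sketch of the new `new_cluster` helper; the keyword
argument shown is an assumption about the available configuration
options, not the function's confirmed signature.

```python
# Sketch: create a Dask cluster and attach a distributed client to it.
from dask.distributed import Client

from xcube.util.dask import new_cluster

cluster = new_cluster(n_workers=4)  # assumed keyword argument
client = Client(cluster)
```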
#### Fixes
- Tiles of datasets with forward slashes in their identifiers
  (originating from nested directories) now display correctly again
  in xcube Viewer. Tile URLs had not been URL-encoded in such cases.
  (#817)
- The xcube server configuration parameters `url_prefix` and
  `reverse_url_prefix` can now be absolute URLs. This fixes a problem
  with relative prefixes such as `"proxy/8000"` used for xcube server
  running inside JupyterLab. Here, the expected self-referencing URL was
  `https://{host}/users/{user}/proxy/8000/{path}`, but we got
  `http://{host}/proxy/8000/{path}`. (#806)
Full Changelog: v0.13.0...v1.0.0
## 1.0.0.dev3

### Changes in 1.0.0.dev3

#### Enhancements
- Included support for the Azure Blob Storage filesystem by adding a new
  data store `abfs`. Many thanks to Ed! (#752)
  These changes enable access to data cubes (`.zarr` or `.levels`)
  in Azure Blob Storage as shown here:

  ```python
  store = new_data_store(
      "abfs",                    # Azure filesystem protocol
      root="my_blob_container",  # Azure blob container name
      storage_options={
          'anon': True,
          # Alternatively, use 'connection_string': 'xxx'
          'account_name': 'xxx',
          'account_key': 'xxx',
      },
  )
  store.list_data_ids()
  ```

  The same configuration for xcube Server:

  ```yaml
  DataStores:
    - Identifier: siec
      StoreId: abfs
      StoreParams:
        root: my_blob_container
        max_depth: 1
        storage_options:
          anon: true
          account_name: "xxx"
          account_key: "xxx"
          # or
          # connection_string: "xxx"
      Datasets:
        - Path: "*.levels"
          Style: default
  ```
- Added Notebook `8_azure_blob_filesystem.ipynb`. This notebook shows how
  a new data store instance can connect to and list Zarr files from Azure
  Blob Storage using the new `abfs` data store.
- Added a catalog API compliant to STAC to xcube server. (#455)
  - It serves a single collection named "datacubes" whose items are the
    datasets published by the service.
  - The collection items make use of the STAC datacube extension.
- Simplified the cloud deployment of xcube server/viewer applications
  (#815). This has been achieved by the following new xcube server
  features:
  - Configuration files can now also be URLs, which allows
    provisioning from S3-compatible object storage.
    For example, it is now possible to invoke xcube server as follows:

    ```bash
    $ xcube serve --config s3://cyanoalert/xcube/demo.yaml ...
    ```

  - A new endpoint `/viewer/config/{*path}` allows for configuring the
    viewer accessible via endpoint `/viewer`.
    The actual source for the configuration items is configured by the
    xcube server configuration using the new entry
    `Viewer/Configuration/Path`, for example:

    ```yaml
    Viewer:
      Configuration:
        Path: s3://cyanoalert/xcube/viewer/
    ```

  - A typical xcube server configuration comprises many paths, and
    relative paths of known configuration parameters are resolved
    against the `base_dir` configuration parameter. However, for values
    of parameters passed to user functions that represent paths in user
    code, this cannot be done automatically. For such situations,
    expressions can be used. An expression is any string between `"${"`
    and `"}"` in a configuration value. An expression can contain the
    variables `base_dir` (a string) and `ctx`, the current server
    context (type `xcube.webapi.datasets.DatasetsContext`), as well as
    the function `resolve_config_path(path)`, which is used to make a
    path absolute with respect to `base_dir` and to normalize it.
    For example:

    ```yaml
    Augmentation:
      Path: augmentation/metadata.py
      Function: metadata:update_metadata
      InputParameters:
        bands_config: ${resolve_config_path("../common/bands.yaml")}
    ```
- xcube's `Dockerfile` no longer creates a conda environment `xcube`.
  All dependencies are now installed into the `base` environment, making
  it easier to use the container as an executable for xcube applications.
  We are now also using a `micromamba` base image instead of `miniconda`.
  The result is a much faster build and a smaller image size.
- Added a `new_cluster` function to `xcube.util.dask`, which can create
  Dask clusters with various configuration options.
- The xcube multi-level dataset specification has been enhanced. (#802)
  - When writing multi-level datasets (`*.levels/`), we now create a new
    JSON file `.zlevels` that contains the parameters used to create the
    dataset.
  - A new class `xcube.core.mldataset.FsMultiLevelDataset` represents
    a multi-level dataset persisted to some filesystem, like
    "file", "s3", or "memory". It can also write datasets to the
    filesystem.
- Changed the behaviour of the class
  `xcube.core.mldataset.CombinedMultiLevelDataset` to do what we
  actually expect: if the keyword argument `combiner_func` is not given
  or `None` is passed, a copy of the first dataset is made, which is
  then subsequently updated by the remaining datasets using
  `xarray.Dataset.update()`.
  The former default was `xarray.merge()`, which for some reason can
  eagerly load Dask array chunks into memory that won't be released.
#### Fixes
- Tiles of datasets with forward slashes in their identifiers
  (originating from nested directories) now display correctly again
  in xcube Viewer. Tile URLs had not been URL-encoded in such cases.
  (#817)
- The xcube server configuration parameters `url_prefix` and
  `reverse_url_prefix` can now be absolute URLs. This fixes a problem
  with relative prefixes such as `"proxy/8000"` used for xcube server
  running inside JupyterLab. Here, the expected self-referencing URL was
  `https://{host}/users/{user}/proxy/8000/{path}`, but we got
  `http://{host}/proxy/8000/{path}`. (#806)
Full Changelog: v1.0.0.dev2...v1.0.0.dev3
## 1.0.0.dev2

### Changes in 1.0.0.dev2

#### Enhancements
- Added a catalog API compliant to STAC to xcube server.
  It serves a single collection named "datasets" whose items are the
  datasets published by the service. (#455)
- Simplified the cloud deployment of xcube server/viewer applications
  (#815). This has been achieved by the following new xcube server
  features:
  - Configuration files can now also be URLs, which allows
    provisioning from S3-compatible object storage.
    For example, it is now possible to invoke xcube server as follows:

    ```bash
    $ xcube serve --config s3://cyanoalert/xcube/demo.yaml ...
    ```

  - A new endpoint `/viewer/config/{*path}` allows for configuring the
    viewer accessible via endpoint `/viewer`.
    The actual source for the configuration items is configured by the
    xcube server configuration using the new entry
    `Viewer/Configuration/Path`, for example:

    ```yaml
    Viewer:
      Configuration:
        Path: s3://cyanoalert/xcube/viewer/
    ```

  - A typical xcube server configuration comprises many paths, and
    relative paths of known configuration parameters are resolved
    against the `base_dir` configuration parameter. However, for values
    of parameters passed to user functions that represent paths in user
    code, this cannot be done automatically. For such situations,
    expressions can be used. An expression is any string between `"${"`
    and `"}"` in a configuration value. An expression can contain the
    variables `base_dir` (a string) and `ctx`, the current server
    context (type `xcube.webapi.datasets.DatasetsContext`), as well as
    the function `resolve_config_path(path)`, which is used to make a
    path absolute with respect to `base_dir` and to normalize it.
    For example:

    ```yaml
    Augmentation:
      Path: augmentation/metadata.py
      Function: metadata:update_metadata
      InputParameters:
        bands_config: ${resolve_config_path("../common/bands.yaml")}
    ```
- xcube's `Dockerfile` no longer creates a conda environment `xcube`.
  All dependencies are now installed into the `base` environment, making
  it easier to use the container as an executable for xcube applications.
  We are now also using a `micromamba` base image instead of `miniconda`.
  The result is a much faster build and a smaller image size.
- Added a `new_cluster` function to `xcube.util.dask`, which can create
  Dask clusters with various configuration options.
- The xcube multi-level dataset specification has been enhanced. (#802)
  - When writing multi-level datasets (`*.levels/`), we now create a new
    JSON file `.zlevels` that contains the parameters used to create the
    dataset.
  - A new class `xcube.core.mldataset.FsMultiLevelDataset` represents
    a multi-level dataset persisted to some filesystem, like
    "file", "s3", or "memory". It can also write datasets to the
    filesystem.
- Changed the behaviour of the class
  `xcube.core.mldataset.CombinedMultiLevelDataset` to do what we
  actually expect: if the keyword argument `combiner_func` is not given
  or `None` is passed, a copy of the first dataset is made, which is
  then subsequently updated by the remaining datasets using
  `xarray.Dataset.update()`.
  The former default was `xarray.merge()`, which for some reason can
  eagerly load Dask array chunks into memory that won't be released.
#### Fixes
- Tiles of datasets with forward slashes in their identifiers
  (originating from nested directories) now display correctly again
  in xcube Viewer. Tile URLs had not been URL-encoded in such cases.
  (#817)
- The xcube server configuration parameters `url_prefix` and
  `reverse_url_prefix` can now be absolute URLs. This fixes a problem
  with relative prefixes such as `"proxy/8000"` used for xcube server
  running inside JupyterLab. Here, the expected self-referencing URL was
  `https://{host}/users/{user}/proxy/8000/{path}`, but we got
  `http://{host}/proxy/8000/{path}`. (#806)
Full Changelog: v1.0.0.dev1...v1.0.0.dev2