Non-kerchunk backend for HDF5/netcdf4 files. #87

Merged: 106 commits, Nov 19, 2024
The diff below shows changes from 64 of the 106 commits.

Commits
6b7abe2
Generate chunk manifest backed variable from HDF5 dataset.
sharkinsspatial Apr 19, 2024
bca0aab
Transfer dataset attrs to variable.
sharkinsspatial Apr 19, 2024
384ff6b
Get virtual variables dict from HDF5 file.
sharkinsspatial Apr 19, 2024
4c5f9bd
Update virtual_vars_from_hdf to use fsspec and drop_variables arg.
sharkinsspatial Apr 22, 2024
1dd3370
mypy fix to use ChunkKey and empty dimensions list.
sharkinsspatial Apr 22, 2024
d92c75c
Extract attributes from hdf5 root group.
sharkinsspatial Apr 22, 2024
0ed8362
Use hdf reader for netcdf4 files.
sharkinsspatial Apr 22, 2024
f4485fa
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Apr 22, 2024
3cc1254
Merge branch 'main' into hdf5_reader
sharkinsspatial May 8, 2024
0123df7
Fix ruff complaints.
sharkinsspatial May 9, 2024
332bcaa
First steps for handling HDF5 filters.
sharkinsspatial May 10, 2024
c51e615
Initial step for hdf5plugin supported codecs.
sharkinsspatial May 13, 2024
0083f77
Small commit to check compression support in CI environment.
sharkinsspatial May 16, 2024
3c00071
Merge branch 'main' into hdf5_reader
sharkinsspatial May 18, 2024
207c4b5
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] May 19, 2024
c573800
Fix mypy complaints for hdf_filters.
sharkinsspatial May 19, 2024
ef0d7a8
Merge branch 'hdf5_reader' of https://github.com/TomNicholas/Virtuali…
sharkinsspatial May 19, 2024
588e06b
Local pre-commit fix for hdf_filters.
sharkinsspatial May 19, 2024
725333e
Use fsspec reader_options introduced in #37.
sharkinsspatial May 21, 2024
72df108
Fix incorrect zarr_v3 if block position from merge commit ef0d7a8.
sharkinsspatial May 21, 2024
d1e85cb
Fix early return from hdf _extract_attrs.
sharkinsspatial May 21, 2024
1e2b343
Test that _extract_attrs correctly handles multiple attributes.
sharkinsspatial May 21, 2024
7f1c189
Initial attempt at scale and offset via numcodecs.
sharkinsspatial May 22, 2024
908e332
Tests for cfcodec_from_dataset.
sharkinsspatial May 23, 2024
0df332d
Temporarily relax integration tests to assert_allclose.
sharkinsspatial May 24, 2024
ca6b236
Add blosc_lz4 fixture parameterization to confirm libnetcdf environment.
sharkinsspatial May 24, 2024
b7426c5
Check for compatibility with netcdf4 engine.
sharkinsspatial May 24, 2024
dac21dd
Use separate fixtures for h5netcdf and netcdf4 compression styles.
sharkinsspatial May 27, 2024
e968772
Print libhdf5 and libnetcdf4 versions to confirm compiled environment.
sharkinsspatial May 27, 2024
9a98e57
Skip netcdf4 style compression tests when libhdf5 < 1.14.
sharkinsspatial May 27, 2024
7590b87
Include imagecodecs.numcodecs to support HDF5 lzf filters.
sharkinsspatial Jun 11, 2024
e9fbc8a
Merge branch 'main' into hdf5_reader
sharkinsspatial Jun 11, 2024
14bd709
Remove test that verifies call to read_kerchunk_references_from_file.
sharkinsspatial Jun 11, 2024
acdf0d7
Add additional codec support structures for imagecodecs and numcodecs.
sharkinsspatial Jun 12, 2024
4ba323a
Add codec config test for Zstd.
sharkinsspatial Jun 12, 2024
e14e53b
Include initial cf decoding tests.
sharkinsspatial Jun 21, 2024
b808ded
Merge branch 'main' into hdf5_reader
sharkinsspatial Jun 21, 2024
b052f8c
Revert typo for scale_factor retrieval.
sharkinsspatial Jun 21, 2024
01a3980
Update reader to use new numpy manifest representation.
sharkinsspatial Jun 21, 2024
c37d9e5
Temporarily skip test until blosc netcdf4 issue is solved.
sharkinsspatial Jun 22, 2024
17b30d4
Fix Pydantic 2 migration warnings.
sharkinsspatial Jun 22, 2024
f6b596a
Include hdf5plugin and imagecodecs-numcodecs in mamba test environment.
sharkinsspatial Jun 22, 2024
eb6e24d
Mamba attempt with imagecodecs rather than imagecodecs-numcodecs.
sharkinsspatial Jun 22, 2024
c85bd16
Mamba attempt with latest imagecodecs release.
sharkinsspatial Jun 22, 2024
ca435da
Use correct iter_chunks callback function signature.
sharkinsspatial Jun 26, 2024
3017951
Include pip based imagecodecs-numcodecs until conda-forge availability.
sharkinsspatial Jun 26, 2024
ccf0b73
Merge branch 'main' into hdf5_reader
sharkinsspatial Jun 26, 2024
32ba135
Handle non-coordinate dims which are serialized to hdf as empty dataset.
sharkinsspatial Jun 27, 2024
64f446c
Use reader_options for filetype check and update failing kerchunk call.
sharkinsspatial Jun 27, 2024
1c590bb
Merge branch 'main' into hdf5_reader
sharkinsspatial Jun 27, 2024
9797346
Fix chunkmanifest shaping for chunked datasets.
sharkinsspatial Jun 30, 2024
c833e19
Handle scale_factor attribute serialization for compressed files.
sharkinsspatial Jun 30, 2024
701bcfa
Include chunked roundtrip fixture.
sharkinsspatial Jun 30, 2024
08c988e
Standardize xarray integration tests for hdf filters.
sharkinsspatial Jun 30, 2024
e6076bd
Merge branch 'hdf5_reader' of https://github.com/TomNicholas/Virtuali…
sharkinsspatial Jun 30, 2024
d684a84
Merge branch 'main' into hdf5_reader
sharkinsspatial Jun 30, 2024
4cb4bac
Update reader selection logic for new filetype determination.
sharkinsspatial Jun 30, 2024
d352104
Use decode_times for integration test.
sharkinsspatial Jun 30, 2024
3d89ea4
Standardize fixture names for hdf5 vs netcdf4 file types.
sharkinsspatial Jun 30, 2024
c9dd0d9
Handle array add_offset property for compressed data.
sharkinsspatial Jul 1, 2024
db5b421
Include h5py shuffle filter.
sharkinsspatial Jul 1, 2024
9a1da32
Make ScaleAndOffset codec last in filters list.
sharkinsspatial Jul 1, 2024
9b2b0f8
Apply ScaleAndOffset codec to _FillValue since its value is now down…
sharkinsspatial Jul 2, 2024
9ef1362
Coerce scale and add_offset values to native float for JSON serializa…
sharkinsspatial Jul 2, 2024
30005bd
Merge branch 'main' into hdf5_reader
sharkinsspatial Aug 6, 2024
14f7a99
Merge branch 'main' into hdf5_reader
sharkinsspatial Aug 6, 2024
f4f9c8f
Temporarily xfail integration tests for main
sharkinsspatial Aug 9, 2024
d257cb9
Merge branch 'main' into hdf5_reader
sharkinsspatial Oct 2, 2024
e795c2c
Merge branch 'main' into hdf5_reader
sharkinsspatial Oct 8, 2024
a9e59f2
Remove pydantic dependency as per pull/210.
sharkinsspatial Oct 8, 2024
2b33bc2
Update test for new kerchunk reader module location.
sharkinsspatial Oct 8, 2024
a57ae9e
Fix branch typing errors.
sharkinsspatial Oct 9, 2024
e21fc69
Re-include automatic file type determination.
sharkinsspatial Oct 9, 2024
df69a12
Handle various hdf flavors of _FillValue storage.
sharkinsspatial Oct 9, 2024
169337c
Include loadable variables in drop variables list.
sharkinsspatial Oct 9, 2024
bdcbfbf
Mock readers.hdf.virtual_vars_from_hdf to verify option passing.
sharkinsspatial Oct 9, 2024
77f1689
Convert numpy _FillValue to native Python for serialization support.
sharkinsspatial Oct 9, 2024
42c653a
Support groups with HDF5 reader.
sharkinsspatial Oct 10, 2024
9c86e0d
Handle empty variables with a shape.
sharkinsspatial Oct 17, 2024
001a4a7
Merge branch 'main' into hdf5_reader
sharkinsspatial Oct 23, 2024
79f9921
Merge branch 'main' into hdf5_reader
sharkinsspatial Oct 23, 2024
1589776
Import top-level version of xarray classes.
sharkinsspatial Oct 23, 2024
772c580
Add option to explicitly specify use of an experimental hdf backend.
sharkinsspatial Oct 24, 2024
3ab90c6
Include imagecodecs and hdf5plugin in all CI environments.
sharkinsspatial Oct 24, 2024
150d06d
Add test_hdf_integration tests to be skipped for non-kerchunk env.
sharkinsspatial Oct 24, 2024
8ccba34
Include imagecodecs in dependencies.
sharkinsspatial Oct 24, 2024
81874e0
Diagnose imagecodecs-numcodecs installation failures in CI.
sharkinsspatial Oct 24, 2024
f87abe2
Ignore mypy complaints for VirtualBackend.
sharkinsspatial Oct 24, 2024
70e7e29
Remove checksum assert which varies across different zstd versions.
sharkinsspatial Oct 24, 2024
43bc0e4
Temporarily xfail integration tests with coordinate inconsistency.
sharkinsspatial Oct 24, 2024
82a6321
Remove backend arg for non-hdf network file tests.
sharkinsspatial Oct 24, 2024
b34f260
Fix mypy comment moved by ruff formatting.
sharkinsspatial Oct 24, 2024
f9ead06
Make HDF reader dependencies optional.
sharkinsspatial Oct 25, 2024
5608292
Handle optional imagecodecs and hdf5plugin dependency imports for tests.
sharkinsspatial Oct 25, 2024
2fa548c
Prevent conflicts with explicit filetype and backend args.
sharkinsspatial Nov 11, 2024
bc0d925
Correctly convert root coordinate attributes to a list.
sharkinsspatial Nov 13, 2024
783df94
Clarify that method extracts attrs from any specified group.
sharkinsspatial Nov 14, 2024
16f288b
Restructure hdf reader and codec filters into a module namespace.
sharkinsspatial Nov 14, 2024
3e216dc
Improve docstrings for hdf and filter modules.
sharkinsspatial Nov 14, 2024
5b085a6
Explicitly specify HDF5VirtualBackend for test parameter.
sharkinsspatial Nov 14, 2024
83ff577
Include issue references for xfailed tests.
sharkinsspatial Nov 15, 2024
ee6fa0b
Use soft import strategy for optional dependencies see xarray/issues/…
sharkinsspatial Nov 18, 2024
44bce08
Merge branch 'main' into hdf5_reader
sharkinsspatial Nov 18, 2024
5de9d2c
Handle mypy for soft imports.
sharkinsspatial Nov 18, 2024
a8cc82f
Attempt at nested optional dependency usage.
sharkinsspatial Nov 18, 2024
65a6b14
Handle use of soft import sub modules for typing.
sharkinsspatial Nov 18, 2024
Files changed
4 changes: 4 additions & 0 deletions ci/environment.yml
@@ -14,6 +14,7 @@ dependencies:
- ujson
- packaging
- universal_pathlib
- hdf5plugin
# Testing
- codecov
- pre-commit
@@ -26,7 +27,10 @@ dependencies:
- fsspec
- s3fs
- fastparquet
- imagecodecs>=2024.6.1
# for opening tiff files
- tifffile
# for opening FITS files
- astropy
- pip:
- imagecodecs-numcodecs
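
These CI additions matter because the new reader decodes HDF5 filter chains through numcodecs: hdf5plugin supplies extra HDF5 compression filters, and imagecodecs-numcodecs exposes imagecodecs compressors (such as LZF) as numcodecs codecs. A minimal sketch of how the pieces are typically wired together (the registration calls are public hdf5plugin/imagecodecs APIs; the specific codec lookup is only an example):

import numcodecs
import hdf5plugin  # importing registers additional HDF5 filters with h5py
from imagecodecs.numcodecs import register_codecs

register_codecs()  # make imagecodecs codecs resolvable through numcodecs
zstd = numcodecs.get_codec({"id": "zstd", "level": 1})  # example lookup
print(zstd)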
2 changes: 2 additions & 0 deletions pyproject.toml
@@ -29,6 +29,7 @@ dependencies = [
"ujson",
"packaging",
"universal-pathlib",
"hdf5plugin",
]

[project.optional-dependencies]
@@ -45,6 +46,7 @@ test = [
"fsspec",
"s3fs",
"fastparquet",
"imagecodecs-numcodecs",
]


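For local development against this branch, the new runtime and test dependencies above can be installed together through the test extra defined in pyproject.toml (a typical editable install from a checkout):

pip install -e ".[test]"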
243 changes: 243 additions & 0 deletions virtualizarr/readers/hdf.py
@@ -0,0 +1,243 @@
import math
from typing import List, Mapping, Optional, Union

import h5py
import numpy as np
import xarray as xr

from virtualizarr.manifests import ChunkEntry, ChunkManifest, ManifestArray
from virtualizarr.readers.hdf_filters import cfcodec_from_dataset, codecs_from_dataset
from virtualizarr.types import ChunkKey
from virtualizarr.utils import _fsspec_openfile_from_filepath
from virtualizarr.zarr import ZArray


def _dataset_chunk_manifest(
    path: str, dataset: h5py.Dataset
) -> Optional[ChunkManifest]:
    """
    Generate a ChunkManifest for an HDF5 dataset.

    Parameters
    ----------
    path: str
        The path of the HDF5 container file.
    dataset : h5py.Dataset
        HDF5 dataset for which to create a ChunkManifest.

    Returns
    -------
    Optional[ChunkManifest]
        A VirtualiZarr ChunkManifest, or None if the dataset has no
        allocated storage.
    """
    dsid = dataset.id

    if dataset.chunks is None:
        if dsid.get_offset() is None:
            return None
        else:
            key_list = [0] * (len(dataset.shape) or 1)
            key = ".".join(map(str, key_list))
            chunk_entry = ChunkEntry(
                path=path, offset=dsid.get_offset(), length=dsid.get_storage_size()
            )
            chunk_key = ChunkKey(key)
            chunk_entries = {chunk_key: chunk_entry.dict()}
            chunk_manifest = ChunkManifest(entries=chunk_entries)
            return chunk_manifest
    else:
        num_chunks = dsid.get_num_chunks()
        if num_chunks == 0:
            raise ValueError("The dataset is chunked but contains no chunks")

        shape = tuple(math.ceil(a / b) for a, b in zip(dataset.shape, dataset.chunks))
        paths = np.empty(shape, dtype=np.dtypes.StringDType)  # type: ignore
        offsets = np.empty(shape, dtype=np.int32)
Review comment (Contributor):
After #177, these arrays will need to be uint64 instead of int32.

        lengths = np.empty(shape, dtype=np.int32)

        def get_key(blob):
            return tuple([a // b for a, b in zip(blob.chunk_offset, dataset.chunks)])

        def add_chunk_info(blob):
            key = get_key(blob)
            paths[key] = path
            offsets[key] = blob.byte_offset
            lengths[key] = blob.size

        has_chunk_iter = callable(getattr(dsid, "chunk_iter", None))
        if has_chunk_iter:
            dsid.chunk_iter(add_chunk_info)
        else:
            for index in range(num_chunks):
                add_chunk_info(dsid.get_chunk_info(index))

        chunk_manifest = ChunkManifest.from_arrays(
            paths=paths, offsets=offsets, lengths=lengths
        )
        return chunk_manifest


def _dataset_dims(dataset: h5py.Dataset) -> Union[List[str], List[None]]:
    """
    Get a list of dimension scale names attached to an input HDF5 dataset.

    This is required by the xarray package to work with Zarr arrays. Only
    one dimension scale per dataset dimension is allowed. If the dataset is
    itself a dimension scale, it is considered its own dimension.

    Parameters
    ----------
    dataset : h5py.Dataset
        HDF5 dataset.

    Returns
    -------
    list
        List with HDF5 path names of dimension scales attached to the input
        dataset.
    """
    dims = list()
    rank = len(dataset.shape)
    if rank:
        for n in range(rank):
            num_scales = len(dataset.dims[n])
            if num_scales == 1:
                dims.append(dataset.dims[n][0].name[1:])
            elif h5py.h5ds.is_scale(dataset.id):
                dims.append(dataset.name[1:])
            elif num_scales > 1:
                raise ValueError(
                    f"{dataset.name}: {len(dataset.dims[n])} "
                    f"dimension scales attached to dimension #{n}"
                )
            elif num_scales == 0:
                # Some HDF5 files do not have dimension scales.
                # If this is the case, `num_scales` will be 0.
                # In this case, we mimic netCDF4 and assign phony dimension names.
                # See https://github.com/fsspec/kerchunk/issues/41
                dims.append(f"phony_dim_{n}")
    return dims


def _extract_attrs(h5obj: Union[h5py.Dataset, h5py.Group]):
    """
    Extract attributes from an HDF5 group or dataset.

    Parameters
    ----------
    h5obj : h5py.Group or h5py.Dataset
        An HDF5 group or dataset.
    """
    _HIDDEN_ATTRS = {
        "REFERENCE_LIST",
        "CLASS",
        "DIMENSION_LIST",
        "NAME",
        "_Netcdf4Dimid",
        "_Netcdf4Coordinates",
        "_nc3_strict",
        "_NCProperties",
    }
    attrs = {}
    for n, v in h5obj.attrs.items():
        if n in _HIDDEN_ATTRS:
            continue
        # Fix some attribute values to avoid JSON encoding exceptions...
        if isinstance(v, bytes):
            v = v.decode("utf-8") or " "
        elif isinstance(v, (np.ndarray, np.number, np.bool_)):
            if v.dtype.kind == "S":
                v = v.astype(str)
            if n == "_FillValue":
                continue
            elif v.size == 1:
                v = v.flatten()[0]
                if isinstance(v, (np.ndarray, np.number, np.bool_)):
                    v = v.tolist()
            else:
                v = v.tolist()
        elif isinstance(v, h5py._hl.base.Empty):
            v = ""
        if v == "DIMENSION_SCALE":
            continue

        attrs[n] = v
    return attrs


def _dataset_to_variable(path: str, dataset: h5py.Dataset) -> Optional[xr.Variable]:
    # This chunk determination logic mirrors zarr-python's create
    # https://github.com/zarr-developers/zarr-python/blob/main/zarr/creation.py#L62-L66

    manifest = _dataset_chunk_manifest(path, dataset)
    if manifest:
        chunks = dataset.chunks if dataset.chunks else dataset.shape
        codecs = codecs_from_dataset(dataset)
Review comment (Contributor):
Leaving compressor=None causes ambiguity when roundtripping v3 metadata (ZArray -> disk -> ZArray), because we can't determine whether the filters list holds two filters or one filter plus one compressor. zlib is a compression codec and FixedScaleOffset is not, but should they both be treated as filters?

Reply from author (Collaborator):
@ghidalgo3 My rationale for describing the full codec chain in the filters property is that HDF5 internally does not distinguish compressors from filters; the entire encoding chain is represented as filters. Since we don't need to worry about v2 interoperability, I think we can focus on aligning with v3's API (which still seems to be in a state of flux). I prefer the approach proposed in zarr-developers/zarr-python#1944 (comment), but I don't know where that leaves us in the interim until a final decision is made on the v3 API path 🤔. For v3 compatibility we'll also need to track zarr-developers/numcodecs#524 so that we use numcodecs codecs compatible with the new v3 codec specification. TL;DR: we may be in flux for a while until upstream v3 decisions get made.
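
To make the ambiguity concrete, here is an illustrative sketch (codec parameters invented for this example) of the Zarr v2-style metadata the reader emits. With "compressor" always null, nothing in the serialized form records whether an entry in "filters" was conceptually a compressor:

metadata = {
    "compressor": None,
    "filters": [
        # CF scale/offset decoding step (not a compression codec)
        {"id": "fixedscaleoffset", "offset": 273.15, "scale": 100.0,
         "dtype": "<f4", "astype": "<i2"},
        # zlib is a compression codec, but it is listed as a filter here
        {"id": "zlib", "level": 4},
    ],
}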

        cfcodec = cfcodec_from_dataset(dataset)
        attrs = _extract_attrs(dataset)
        if cfcodec:
            codecs.insert(0, cfcodec["codec"])
            dtype = cfcodec["target_dtype"]
            attrs.pop("scale_factor", None)
            attrs.pop("add_offset", None)
            fill_value = cfcodec["codec"].decode(dataset.fillvalue)
        else:
            dtype = dataset.dtype
            fill_value = dataset.fillvalue
        filters = [codec.get_config() for codec in codecs]
        zarray = ZArray(
            chunks=chunks,
            compressor=None,
            dtype=dtype,
            fill_value=fill_value,
            filters=filters,
            order="C",
            shape=dataset.shape,
            zarr_format=2,
        )
        marray = ManifestArray(zarray=zarray, chunkmanifest=manifest)
        dims = _dataset_dims(dataset)
        variable = xr.Variable(data=marray, dims=dims, attrs=attrs)
    else:
        variable = None
    return variable


def virtual_vars_from_hdf(
    path: str,
    drop_variables: Optional[List[str]] = None,
    reader_options: Optional[dict] = {
        "storage_options": {"key": "", "secret": "", "anon": True}
    },
) -> Mapping[str, xr.Variable]:

Review comment (Collaborator), on the reader_options default:
The default reader_options were updated a while back to:
reader_options: Optional[dict[str, Any]] = None,

    if drop_variables is None:
        drop_variables = []
    open_file = _fsspec_openfile_from_filepath(
        filepath=path, reader_options=reader_options
    )
    f = h5py.File(open_file, mode="r")
    variables = {}
    for key in f.keys():
        if key not in drop_variables:
            if isinstance(f[key], h5py.Dataset):
                variable = _dataset_to_variable(path, f[key])
                if variable is not None:
                    variables[key] = variable
            else:
                raise NotImplementedError("Nested groups are not yet supported")

    return variables


def attrs_from_root_group(
    path: str,
    reader_options: Optional[dict] = {
        "storage_options": {"key": "", "secret": "", "anon": True}
    },
):
    open_file = _fsspec_openfile_from_filepath(
        filepath=path, reader_options=reader_options
    )
    f = h5py.File(open_file, mode="r")
    attrs = _extract_attrs(f)
    return attrs
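
A minimal usage sketch of the module above (the file name and dropped variable are invented for illustration; assembling the returned variables into an xarray.Dataset mirrors what the package does elsewhere):

import xarray as xr

from virtualizarr.readers.hdf import attrs_from_root_group, virtual_vars_from_hdf

# Build manifest-backed virtual variables without reading any array data.
variables = virtual_vars_from_hdf("air.nc", drop_variables=["time_bnds"])
attrs = attrs_from_root_group("air.nc")
vds = xr.Dataset(variables, attrs=attrs)
print(vds)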