Modification in support of eventual Zarr support. #1259

Closed
wants to merge 3 commits

Conversation

@DennisHeimbigner (Collaborator) commented Dec 17, 2018

Modification in support of eventual Zarr support.

[Revised PR Description]
This PR has two purposes:

  1. Try to centralize all the code that infers the dispatcher
    from various available pieces of information in the path,
    the mode flags, and in the dataset itself.
  2. Modify URL handling in support of using S3 URLs for S3
    reading and for Zarr. The goal is to make specific IO
    handling tags accessible inside dispatchers that support
    multiple IO access methods (like file versus Zarr) to figure
    out what internal IO handler to use.

The primary code change is to create a new set of files
libdispatch/dinfermodel.c and include/ncinfermodel.c
to centralize the dispatcher inference code.

Aside: the whole dispatcher inference mechanism needs serious fixing:
it is way too ad hoc.

Other changes:

  1. NC_HDF5_finalize was not being properly called by nc_finalize().
  2. Fix minor bug in ncgen3.l
  3. fix memory leak in nc4info.c
  4. add code to walk the .daprc triples and to replace protocol=
    fragment tag with a more general mode= tag.

This is a set of changes to the way that URLs are handled
as paths, in support of eventual Zarr support.
The primary difference is to extract IO handler information
from the URL. This will be used inside dispatchers that support
multiple IO access methods (like file versus Zarr) to figure out
what internal IO handler to use.

Specific changes:
1. move url analyzer code from dfile.c to durlmodel.c
2. add code to walk the .daprc triples
3. NC_HDF5_finalize was not being properly called by nc_finalize().
4. Fix minor bug in ncgen3.l
5. fix memory leak in nc4info.c
6. replace protocol= fragment tag with a more general mode= tag.
@DennisHeimbigner (Collaborator, Author) commented:
I am temporarily closing this until I can make further improvements.

@DennisHeimbigner (Collaborator, Author) commented:

Re-opening with the changes as described in the revised description.

@DennisHeimbigner (Collaborator, Author) commented:

Closing again permanently in favor of a later TBD pull request.

@edhartnett (Contributor) commented:

@DennisHeimbigner I noticed this go by on my Twitter feed; it may be of interest:
https://twitter.com/HammanHydro/status/1072984431814684673

"Earlier today, @rabernat gave a punchy little talk on his vision #zarr and #netcdf for cloud storage of geoscientific data."

DennisHeimbigner added a commit that referenced this pull request Jan 2, 2019
re: issue #1251

Assume that you have the URL to a remote dataset
which is a normal netcdf-3 or netcdf-4 file.

This PR allows netcdf-c to read that dataset's
contents as a netcdf file using HTTP byte ranges,
if the remote server supports byte-range access.

Originally, this PR was set up to access Amazon S3 objects,
but it can also access other remote datasets such as those
provided by a Thredds server via the HTTPServer access protocol.
It may also work for other kinds of servers.

Note that this is not intended as a true production
capability because, as is known, this kind of access
can be quite slow. In addition, the byte-range IO drivers
do not currently do any sort of optimization or caching.

An additional goal here is to gain some experience with
the Amazon S3 REST protocol.

This architecture and its use are documented in
the file docs/byterange.dox.

There are currently two test cases:

1. nc_test/tst_s3raw.c - this does a simple open, check format, close cycle
   for a remote netcdf-3 file and a remote netcdf-4 file.
2. nc_test/test_s3raw.sh - this uses ncdump to investigate some remote
   datasets.

This PR also incorporates significantly changed model inference code
(see the superseded PR #1259).

1. It centralizes the code that infers the dispatcher.
2. It adds support for byte-range URLs

Other changes:

1. NC_HDF5_finalize was not being properly called by nc_finalize().
2. Fix minor bug in ncgen3.l
3. fix memory leak in nc4info.c
4. add code to walk the .daprc triples and to replace protocol=
   fragment tag with a more general mode= tag.

Final Note:
The inference code is still way too complicated. We need to move
to the validfile() model used by netCDF-Java, where each
dispatcher is asked if it can process the file. This decentralizes
the inference code. This will be done after all the major new
dispatchers (PIO, Zarr, etc.) have been implemented.