Is your feature request related to a problem? Please describe.
The proposal is to change the usage of streams in the following way.
When stream=True, the returned object would be a Fieldlist (for GRIB data):
ds = from_source("url", "http://..../my_data.grib", stream=True)

for f in ds:
    # f is now a Field

# at this point ds consumed the stream
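Because the fields are taken directly from the stream, the resulting object would be single-pass. A minimal illustration, assuming the proposed behaviour:

for f in ds:
    # never reached: the stream was already consumed above
    pass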
Iterating in batches would be a generic option (not only stream-specific):
ds1 = from_source("file", "my_local_data.grib")
ds2 = from_source("url", "http://..../my_data.grib", stream=True)

for f in ds1.batched(2):
    # f is now a Fieldlist with 2 Fields

for f in ds2.batched(2):
    # f is now a Fieldlist with 2 Fields
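For illustration only, a generic batching helper along these lines could back batched() for both in-memory fieldlists and streams. This is just a sketch: the function name and the plain-list return type are assumptions, and a real implementation would yield Fieldlists instead of lists.

import itertools

def batched(iterable, n):
    # hypothetical helper: yield successive chunks of up to n items
    # from any iterable, consuming it lazily
    it = iter(iterable)
    while True:
        chunk = list(itertools.islice(it, n))
        if not chunk:
            return
        yield chunk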
group_by would behave in a similar way.
ds1 = from_source("file", "my_local_data.grib")
ds2 = from_source("url", "http://..../my_data.grib", stream=True)

for f in ds1.group_by("level"):
    # f is now a Fieldlist

for f in ds2.group_by("level"):
    # f is now a Fieldlist
Please note that for non-stream data group_by would be based on the metadata of the full dataset. For a stream, however, each group would simply be built by consuming GRIB messages from the stream until the values of the metadata keys specified in group_by change.
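To make the stream semantics concrete, here is a minimal sketch of how such grouping could work, assuming each field exposes a metadata(key) accessor. The helper name is hypothetical and a real implementation would yield Fieldlists rather than plain lists.

def group_by_stream(fields, keys):
    # collect consecutive fields until the values of the given
    # metadata keys change, then yield the accumulated group
    group = []
    current = None
    for f in fields:
        signature = tuple(f.metadata(k) for k in keys)
        if current is not None and signature != current:
            yield group
            group = []
        group.append(f)
        current = signature
    if group:
        yield group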
We could read the whole stream into memory with the read_all option:
ds = from_source("url", "http://..../my_data.grib", stream=True, read_all=True)

# ds is now a Fieldlist in memory, so all these work
len(ds)

r = ds.sel(param="t")

for f in ds:
    # f is now a Field

for f in ds.batched(2):
    # f is now a Fieldlist with 2 Fields