Dataset summary methods #131
Thanks for raising this as a separate issue. Yes, I agree it would be nice to add these summary methods! We can imagine DataArray methods on Datasets mapping over all variables, in a somewhat similar way to how groupby methods map over each group. These methods are very convenient for pandas.DataFrame objects, so it makes sense to have them for xray.Dataset, too. The only unfortunate aspect is that it is harder to see the values in a Dataset, because they aren't given in the standard string representation. In contrast, methods like […]
I'm not sure we need to worry about the string representation too much. The […]

To flesh out some of the desired functionality a bit more: […]
As a note on your points (1) and (2): currently, we remove all dataset and array attributes when doing any operations other than (re)indexing. This includes when reduce operations like mean are applied, because it didn't seem safe to assume that the original attributes were still descriptive. In particular, I was worried about units. I'm willing to reconsider this, but in general I would like to avoid any functionality that is metadata aware other than dimension and coordinate labels. In my experience, systems that rely on attributes become much more complex and harder to predict, so I would like to avoid that. I don't see a unit system as in scope for xray, at least not at this time.

Your solution 4(b) -- dropping coordinates rather than attempting to summarize them -- would also be my preferred approach. It is consistent with pandas (try […]).

Speaking of non-numerical data, we will need to take an approach like pandas and ignore non-numerical variables when taking the mean. It might be worth taking a look at how pandas handles this, but I imagine using a […]. If you're interested in taking a crack at an implementation, take a look at […]
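As a point of reference, here is a minimal sketch of the pandas behavior referred to above. The example data is made up, and this uses the `numeric_only` keyword of current pandas (at the time of this thread, pandas skipped non-numeric columns by default):

```python
import pandas as pd

df = pd.DataFrame({"velocity": [1.0, 2.0, 3.0], "station": ["a", "b", "c"]})

# The string column "station" is excluded from the result.
print(df.mean(numeric_only=True))
# velocity    2.0
# dtype: float64
```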
I'm willing to take a crack at it, but I'm guessing I'll be requesting some assistance along the way. Let me look into it a bit and I'll report back with how I see it going together.
A couple more thoughts. I agree that staying metadata unaware is the best course of action. However, I think you can do that but still carry the dataset and variable attributes (in the same manner that NCO and CDO do). You just want to be explicit in the documentation by saying that the attributes are from the original dataset and that xray is not attribute aware or a units system (except for the time variable, I guess).
You're right that keeping attributes fully intact under any operation is a perfectly reasonable alternative to dropping them. So what do NCO and CDO do with attributes when you calculate the variance along a dimension of a variable? The choices, as I see them, are: […]
For xray, 2 is out, because it leaves wrong metadata intact. 3 and 4 are out, because we don't want to be in the business of relying on metadata. This leaves 1 -- dropping all attributes. For consistency, if 1 is the choice we need to make for "variance", then the same rule should apply for all "reduce" operations, including apparently innocuous operations like "mean". Note that this is also consistent with how xray handles attributes in all other mathematical operations -- even adding 0 or multiplying by 1 removes all attributes.

My sense (not being a heavy user of these tools) is that NCO and CDO have a little more freedom to keep metadata around because they maintain a "history" attribute.

Loading files from disk is a little different. Notice that once variables get loaded into xray, any attributes that were used for decoding have been removed from "attributes" and moved to "encoding". The meaningful attributes only exist in files on disk (unavoidable given the limitations of NetCDF).
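To make that rule concrete, here is a small sketch of the default behavior (shown with modern xarray, which kept this convention):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(3.0), dims="x", attrs={"units": "m"})

# Both reductions and arithmetic strip attributes by default, since the
# library cannot know whether e.g. "units" is still accurate afterwards.
print(da.mean().attrs)  # {}
print((da + 0).attrs)   # {}
```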
Both NCO and CDO keep all attributes and, as you mention, maintain a history attribute, even for operations like "variance" where the units are no longer accurate. Maybe we're headed toward a user-specified option to keep the attributes around, with the default being option 1. I can see this existing at any (but probably not all) of these levels: […]

This approach would put the onus on the user to specify that they want to keep metadata around. My preference would be to apply this at the module level.
Module-wide configuration flags are generally a bad idea, because such non-local effects make it harder to predict how code works. This is less of a concern for configuration options that only change how objects are displayed, which I believe is the only way such flags are used in numpy or pandas. But I don't have any objections to adding a method option.
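For readers finding this later: xarray did eventually grow options at both of these levels. A quick sketch with current xarray:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(3.0), dims="x", attrs={"units": "m"})

# Per-method opt-in:
print(da.mean(keep_attrs=True).attrs)  # {'units': 'm'}

# Module-level opt-in, scoped with a context manager to contain the
# non-local effects discussed above:
with xr.set_options(keep_attrs=True):
    print(da.mean().attrs)  # {'units': 'm'}
```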
This might be obsolete; I just started to use xarray and also missed something like a describe function. That's what I use so far:

```python
import numpy as np
import pandas as pd
import xarray as xr


def is_numeric_dtype(da):
    # Check if the data type of the DataArray is numeric
    return np.issubdtype(da.dtype, np.number)


def ds_describe(dataset):
    data = {
        'Variable Name': [],
        'Number of Dimensions': [],
        'Number of NaNs': [],
        'Mean': [],
        'Median': [],
        'Standard Deviation': [],
        'Minimum': [],
        '25th Percentile': [],
        '75th Percentile': [],
        'Maximum': []
    }
    for var_name in dataset.variables:
        # Get the data array
        data_array = dataset[var_name]
        # Check if the data type is numeric
        if is_numeric_dtype(data_array):
            flat_data_array = data_array.values.flatten()
            # Append statistics to the data dictionary
            data['Variable Name'].append(var_name)
            data['Number of Dimensions'].append(data_array.ndim)
            data['Number of NaNs'].append(np.isnan(flat_data_array).sum())
            data['Mean'].append(np.nanmean(flat_data_array))
            data['Median'].append(np.nanmedian(flat_data_array))
            data['Standard Deviation'].append(np.nanstd(flat_data_array))
            data['Minimum'].append(np.nanmin(flat_data_array))
            data['25th Percentile'].append(np.nanpercentile(flat_data_array, 25))
            data['75th Percentile'].append(np.nanpercentile(flat_data_array, 75))
            data['Maximum'].append(np.nanmax(flat_data_array))
    # Create a pandas DataFrame from the data dictionary
    df = pd.DataFrame(data)
    return df
```
This is exactly what I needed - thank you!
You can also use dask_dataframe, with the advantage that it should be a chunked computation:

```python
import xarray as xr
from IPython.display import display


def ds_describe(dataset):
    for var_name in dataset.variables:
        # Get the data array
        data_array = dataset[var_name]
        # Note: this won't work with every variable, as some have too many dims
        df_stats = data_array.to_dask_dataframe().describe().compute()
        print(var_name)
        display(df_stats)
```
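A usage sketch for the function above, assuming dask is installed and a recent xarray where `DataArray.to_dask_dataframe` is available:

```python
import numpy as np
import xarray as xr

# ds_describe as defined above; .chunk() makes the variables dask-backed,
# so describe() runs as a chunked computation until .compute() is called.
ds = xr.Dataset(
    {"temperature": (("time", "x"), np.random.rand(100, 10))}
).chunk({"time": 50})
ds_describe(ds)
```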
Add summary methods to Dataset object. For example, it would be great if you could summarize an entire dataset in a single line:

(1) Mean of all variables in dataset.
(2) Mean of all variables in dataset along a dimension.

In the case where a dimension is specified and there are variables that don't use that dimension, I'd imagine you would just pass that variable through unchanged (see the sketch below).
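For concreteness, a sketch of the requested behavior (this is in fact how the feature eventually landed in xarray; the variable names are made up):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {
        "temperature": (("time", "x"), np.random.rand(4, 3)),
        "elevation": ("x", np.random.rand(3)),
    }
)

ds.mean()            # (1) scalar mean of every variable
ds.mean(dim="time")  # (2) "temperature" is reduced; "elevation" has no
                     #     "time" dimension and passes through unchanged
```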
Related to #122.