
BUG: len(df.groupby(..., dropna=False)) raises ValueError: Categorical categories cannot be null #35202

Open · 3 tasks done
ssche opened this issue Jul 9, 2020 · 9 comments
Labels: Bug · good first issue · Groupby · Needs Tests (unit test(s) needed to prevent regressions)
@ssche
Contributor

ssche commented Jul 9, 2020

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.
    (only in current master; dropna=False didn't exist previously)

  • (optional) I have confirmed this bug exists on the master branch of pandas.


Code Sample, a copy-pastable example

>>> df = pd.DataFrame({'a': [pd.to_datetime('2019-02-12'), pd.to_datetime('2019-02-12'), pd.to_datetime('2019-02-13'), pd.NaT], 'b': [1, 2, 3, 4]})
>>>
>>> dfg = df.groupby(['a'], dropna=False)
>>> len(dfg)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "./venv/pandas-1/lib64/python3.8/site-packages/pandas/core/groupby/groupby.py", line 539, in __len__
    return len(self.groups)
  File "./venv/pandas-1/lib64/python3.8/site-packages/pandas/core/groupby/groupby.py", line 558, in groups
    return self.grouper.groups
  File "pandas/_libs/properties.pyx", line 33, in pandas._libs.properties.CachedProperty.__get__
  File "./venv/pandas-1/lib64/python3.8/site-packages/pandas/core/groupby/ops.py", line 257, in groups
    return self.groupings[0].groups
  File "pandas/_libs/properties.pyx", line 33, in pandas._libs.properties.CachedProperty.__get__
  File "./venv/pandas-1/lib64/python3.8/site-packages/pandas/core/groupby/grouper.py", line 598, in groups
    return self.index.groupby(Categorical.from_codes(self.codes, self.group_index))
  File "./venv/pandas-1/lib64/python3.8/site-packages/pandas/core/arrays/categorical.py", line 606, in from_codes
    dtype = CategoricalDtype._from_values_or_dtype(
  File "./venv/pandas-1/lib64/python3.8/site-packages/pandas/core/dtypes/dtypes.py", line 365, in _from_values_or_dtype
    dtype = CategoricalDtype(categories, ordered)
  File "./venv/pandas-1/lib64/python3.8/site-packages/pandas/core/dtypes/dtypes.py", line 252, in __init__
    self._finalize(categories, ordered, fastpath=False)
  File "./venv/pandas-1/lib64/python3.8/site-packages/pandas/core/dtypes/dtypes.py", line 406, in _finalize
    categories = self.validate_categories(categories, fastpath=fastpath)
  File "./venv/pandas-1/lib64/python3.8/site-packages/pandas/core/dtypes/dtypes.py", line 579, in validate_categories
    raise ValueError("Categorical categories cannot be null")
ValueError: Categorical categories cannot be null

Problem description

When using groupby with dropna=False, a ValueError may be raised when len(groupby(..., dropna=False)) is evaluated. This works fine in older versions of pandas (0.25.x), which always dropped NA rows by default, and even on the current master (1.1.0.dev0+2054.gc15f08084) len(...) still works fine when dropna=True is used. The interesting bit (for me at least) is that categoricals are somehow involved even though there are no categoricals in the DataFrame to begin with. Perhaps this is a code path that is not meant to be executed.

Expected Output

No exception should be raised (there is no categorical involved, after all), and len(...) doesn't modify anything, so an exception is entirely unexpected.

Output of pd.show_versions()

>>> pd.show_versions()

INSTALLED VERSIONS
------------------
commit           : c15f0808428b5cdb4eaea290fb26a71c3534630b
python           : 3.8.3.final.0
python-bits      : 64
OS               : Linux
OS-release       : 5.7.7-200.fc32.x86_64
Version          : #1 SMP Wed Jul 1 19:53:01 UTC 2020
machine          : x86_64
processor        : x86_64
byteorder        : little
LC_ALL           : None
LANG             : en_AU.UTF-8
LOCALE           : en_AU.UTF-8

pandas           : 1.1.0.dev0+2054.gc15f08084
numpy            : 1.19.0
pytz             : 2020.1
dateutil         : 2.8.1
pip              : 20.1
setuptools       : 46.1.3
Cython           : None
pytest           : None
hypothesis       : None
sphinx           : None
blosc            : None
feather          : None
xlsxwriter       : None
lxml.etree       : None
html5lib         : None
pymysql          : None
psycopg2         : None
jinja2           : None
IPython          : None
pandas_datareader: None
bs4              : None
bottleneck       : None
fsspec           : None
fastparquet      : None
gcsfs            : None
matplotlib       : None
numexpr          : None
odfpy            : None
openpyxl         : None
pandas_gbq       : None
pyarrow          : None
pytables         : None
pyxlsb           : None
s3fs             : None
scipy            : None
sqlalchemy       : None
tables           : None
tabulate         : None
xarray           : None
xlrd             : 1.2.0
xlwt             : None
numba            : None
@ssche ssche added Bug Needs Triage Issue that has not been reviewed by a pandas team member labels Jul 9, 2020
@jorisvandenbossche jorisvandenbossche added Groupby and removed Needs Triage Issue that has not been reviewed by a pandas team member labels Jul 10, 2020
@jorisvandenbossche jorisvandenbossche added this to the 1.1 milestone Jul 10, 2020
@jorisvandenbossche
Member

@ssche thanks for the report!
(cc @charlesdong1991)

@charlesdong1991
Member

take

@jreback jreback modified the milestones: 1.1, Contributions Welcome Jul 11, 2020
agalitsyna added a commit to open2c/cooltools that referenced this issue Mar 9, 2021
… while grouping np.nans.

Pandas fails sometimes on groupby with np.nan in the dataframe:
pandas-dev/pandas#35202

With this fix, we replace NaN regions (unannotated) with empty
string, and then do the grouping.

Tests of snipping added, covering NaN in "region" column of the
windows dataframe for snipping.
nvictus added a commit to open2c/cooltools that referenced this issue Mar 10, 2021
…h NaNs (#216)

* bugfix(snipping): features with empty regions return snips filled with NaNs.

Related to 1a63791, which turned out not to be a complete fix.

* bugfix(snipping): Docstrings added, code simplified.

* bugfix+tests(snipping): Snipping adapted for pandas failing sometimes while grouping np.nans.

Pandas fails sometimes on groupby with np.nan in the dataframe:
pandas-dev/pandas#35202

With this fix, we replace NaN regions (unannotated) with empty
string, and then do the grouping.

Tests of snipping added, covering NaN in "region" column of the
windows dataframe for snipping.

* style(snipping test): improved

* formatting improved

* Update cooltools/snipping.py

Co-authored-by: Nezar Abdennur <nabdennur@gmail.com>
@HalfWhitt

Hello, may I ask if anyone has figured out how to fix this yet? I just encountered the bug myself and was rather mystified. Is the dropna=False option usable at all, perhaps with a workaround?

@ssche
Contributor Author

ssche commented Feb 28, 2022

@HalfWhitt I think you can use dfg.ngroups instead of len(dfg) (note: this is a very specific error and should not be conflated with the general use of dropna=False).
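To make that workaround concrete, here is a minimal sketch using the DataFrame from the report. The count of 3 assumes a pandas version where dropna=False is supported; in affected versions, .ngroups avoided the Categorical code path that len() hit.

```python
import pandas as pd

# DataFrame from the original report: one NaT key among the group labels.
df = pd.DataFrame(
    {
        "a": [
            pd.to_datetime("2019-02-12"),
            pd.to_datetime("2019-02-12"),
            pd.to_datetime("2019-02-13"),
            pd.NaT,
        ],
        "b": [1, 2, 3, 4],
    }
)
dfg = df.groupby(["a"], dropna=False)

# ngroups computes the group count directly, without building the
# .groups mapping (the step that raised the Categorical ValueError).
n = dfg.ngroups  # 3: 2019-02-12, 2019-02-13, and the NaT group
```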

@HalfWhitt

HalfWhitt commented Mar 1, 2022

@ssche Oh, interesting... I don't actually need len() myself, but I was trying to do list(dfg). After finding that list(dfg), dfg.groups, and (as mentioned here) len(dfg) all threw the same error, I assumed the groupby object didn't work at all with dropna=False.

But you prompted me to look at it again, and now that I've tried iterating over it and found that it works, I can just do [group for group in dfg] instead. That feels a little silly (and really makes me wonder how iteration can work while list(dfg) fails), but it works perfectly well.
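For reference, a sketch of that iteration approach with the same example DataFrame as the report (assuming a version where iterating the groupby works even when len() and .groups raise):

```python
import pandas as pd

df = pd.DataFrame(
    {
        "a": [pd.to_datetime("2019-02-12"), pd.to_datetime("2019-02-12"),
              pd.to_datetime("2019-02-13"), pd.NaT],
        "b": [1, 2, 3, 4],
    }
)
dfg = df.groupby(["a"], dropna=False)

# Iterating yields (key, sub-frame) pairs; collecting them with a
# comprehension sidesteps the calls that raised in affected versions.
pairs = [(key, frame) for key, frame in dfg]
```

Note that with dropna=False the NaT rows form their own group, so every input row lands in exactly one of the yielded sub-frames.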

@mroeschke mroeschke removed this from the Contributions Welcome milestone Oct 13, 2022
@nir-ml

nir-ml commented Apr 15, 2023

How about converting the datetime column to a string column before using groupby?

>>> df['a'] = df['a'].astype(str)
>>> dfg = df.groupby(['a'], dropna=False)
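Expanded into a self-contained sketch: after the cast, NaT becomes the literal string "NaT", so there is no null key left for groupby to trip over. Note that this changes the key dtype to string, so it is only a workaround, not a fix.

```python
import pandas as pd

df = pd.DataFrame(
    {
        "a": [pd.to_datetime("2019-02-12"), pd.to_datetime("2019-02-12"),
              pd.to_datetime("2019-02-13"), pd.NaT],
        "b": [1, 2, 3, 4],
    }
)

# Casting the datetime column to str turns NaT into the string "NaT",
# so no real NA keys remain and dropna no longer matters.
df["a"] = df["a"].astype(str)
dfg = df.groupby(["a"])

n = dfg.ngroups  # the two dates plus the "NaT" string group
```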

@rhshadrach
Member

I now get 3 in the OP, could use tests.

@rhshadrach rhshadrach added good first issue Needs Tests Unit test(s) needed to prevent regressions labels Nov 10, 2024
@ssche
Contributor Author

ssche commented Nov 11, 2024

> I now get 3 in the OP, could use tests.

Still happening in 2.1.1 and 2.2.3.

@rhshadrach
Member

@ssche - the fix is in main, will be released in 3.0.

@ssche ssche added this to the 3.0 milestone Nov 12, 2024