json_normalize skips an entry in a pymongo cursor #30323

Closed
languitar opened this issue Dec 18, 2019 · 5 comments
Labels
Bug · Compat (pandas objects compatibility with NumPy or Python functions) · IO JSON (read_json, to_json, json_normalize)

Comments

@languitar

languitar commented Dec 18, 2019

I am sorry for not being able to provide a reproducible example, but any attempt to reduce the problem to something smaller makes it disappear.

In [54]: res = client.events.api.longterm.find({'foo': 'bar'})

In [55]: res.count()
Out[55]: 76845

In [56]: len(pd.io.json.json_normalize(res))
Out[56]: 76844

In [57]: res = client.events.api.longterm.find({'foo': 'bar'})

In [58]: len(pd.io.json.json_normalize(list(res)))
Out[58]: 76845

Problem description

I have a fairly large collection of documents in MongoDB, which I query using pymongo. The resulting cursor is passed to pd.io.json.json_normalize to convert the data into a data frame. In one example, which I am unfortunately unable to reduce to something reproducible, a single one of the 76845 entries in the cursor is missing from the resulting data frame. If I convert the cursor to a list before calling json_normalize, all entries are present. The affected document itself looks completely sane, with nothing suspicious about it. In any case, json_normalize should by default raise errors rather than silently swallow rows.
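For reference, a minimal cursor-free sketch of the same symptom, assuming (as later comments confirm) that any one-shot iterator triggers it; the names and values here are illustrative:

import pandas as pd

# A plain generator stands in for the pymongo cursor; like a cursor,
# it can only be iterated once.
records = ({'foo': 'bar', 'n': i} for i in range(3))

df = pd.io.json.json_normalize(records)
len(df)  # 2 on pandas < 1.3: the first record is silently dropped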

Expected Output

All 76845 rows are present in the result when passing the cursor directly to json_normalize.

Output of pd.show_versions()


INSTALLED VERSIONS

commit : None
python : 3.8.0.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.3-arch1-1
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 0.25.3
numpy : 1.17.4
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 42.0.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.10.1
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : 3.0.2
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None

@jbrockmendel added the IO JSON (read_json, to_json, json_normalize) label Dec 18, 2019
@mroeschke
Member

Can this be reproduced without the external library? What type of object is res?

@mroeschke added the Needs Info (clarification about behavior needed to assess issue) label May 8, 2020
@languitar
Author

Unfortunately, at that time I was unable to get a reproduction without a MongoDB result.

The result type of find is a Cursor.

@mroeschke added the Bug and Compat (pandas objects compatibility with NumPy or Python functions) labels and removed the Needs Info label May 11, 2020
@alex005005

alex005005 commented Jun 11, 2020

Hi,
I can reproduce this with the following code and data.

The problem doesn't occur if I convert to a list before calling json_normalize; maybe I'm using it incorrectly and this is how it's supposed to work (see the note after the code).

#%%
import json

from pandas import json_normalize  # the original snippet relied on an implicit import
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
db = client['bikedata']
testcollection = db['test1']

# Load the attached sample data and insert it into the test collection.
with open('db_berlin_neu_aggregate_sort_.json.txt') as f:
    file_data = json.load(f)

testcollection.insert_many(file_data)

#%%
# Passing the cursor directly: the first document goes missing.
doc = testcollection.aggregate([
    {"$sort": {"date": -1}},
    {"$limit": 50},
])

df = json_normalize(doc)
df.head(50)

#%%
# Converting to a list first: all 50 documents are present.
doc = testcollection.aggregate([
    {"$sort": {"date": -1}},
    {"$limit": 50},
])

df = json_normalize(list(doc))
df.head(50)

#%%
# The with block already closed the file, so only the client needs closing.
client.close()
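For what it's worth, this does not look like a misuse: aggregate() returns a pymongo CommandCursor which, like the Cursor returned by find(), is a one-shot iterator, so converting it to a list first is the expected workaround until the underlying bug is fixed.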

db_berlin_neu_aggregate_sort_.json.txt

@miriambenvil

miriambenvil commented Mar 2, 2021

Hi all,

I have reproduced the error and identified what is going on.

Code sample

import mongomock
import pandas as pd
test_data = [
    {'_id': 1, 'name': 'Miriam'},
    {'_id': 2, 'name': 'Peter'}
]
mongo_client = mongomock.MongoClient()
mongo_client["db_test"]["col_test"].insert_many(test_data)
cursor = mongo_client["db_test"]["col_test"].find({})
df = pd.json_normalize(cursor)

Output

    _id   name
0    2   Peter

Expected output

   _id     name
0    1   Miriam
1    2    Peter

Problem description

We are using a pymongo cursor as input to json_normalize with the default record_path=None. In that case the function checks whether the input contains nested structures, and this check evaluates the first element of the data:

any([isinstance(x, dict) for x in y.values()] for y in data)

Since the input is a one-shot cursor, that first element has already been consumed by the time the records are actually read, so it is "lost".
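The consumption is easy to demonstrate without pandas. A non-empty list is truthy, so the any() above returns True after pulling exactly one record from the iterator; a minimal illustrative sketch:

# A one-shot iterator standing in for the cursor.
data = iter([{'_id': 1, 'name': 'Miriam'}, {'_id': 2, 'name': 'Peter'}])

# The check pulls the first record; [False, False] is a non-empty list
# and therefore truthy, so any() stops immediately.
any([isinstance(x, dict) for x in y.values()] for y in data)  # True

# The subsequent "real" read then starts at the second record.
list(data)  # [{'_id': 2, 'name': 'Peter'}]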

Solution

df = pd.json_normalize(list(cursor))
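Materializing the cursor with list() reads all matching documents into memory once, so json_normalize receives an ordinary list that can be inspected and iterated any number of times.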

Hope it helps!

simonjayhawkins added a commit to simonjayhawkins/pandas that referenced this issue Jun 3, 2022
@simonjayhawkins
Member

This is now fixed (in pandas 1.3) by commit 52bdfdc: BUG: Fix pd.json_normalize to not skip the first element of a generator input (#38698).

#35923 was a duplicate of this issue.

Appropriate testing was added in #38698, so this can be closed as fixed.
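In outline, the fix materializes iterator input before the nested-structure check. A paraphrased sketch (not the exact pandas source; _coerce_records is a hypothetical name):

from collections import abc

def _coerce_records(data):
    # One-shot iterables such as generators and DB cursors are turned
    # into a list up front, so the later nested-structure inspection
    # cannot consume them.
    if isinstance(data, dict):
        return [data]
    if isinstance(data, abc.Iterable) and not isinstance(data, str):
        return list(data)
    return data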
