json_normalize skips an entry in a pymongo cursor #30323
Comments
Can this be reproduced without the external library? What type of object is the cursor?
Unfortunately, at that time I was unable to get a reproduction without a MongoDB result. The result type of the query is a pymongo cursor.
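For what it's worth, the skip can be imitated without MongoDB by feeding `json_normalize` any one-shot iterator. This is a minimal sketch, assuming an affected pandas version (0.25.x here); a generator stands in for the cursor:

```python
import pandas as pd

# A generator is a one-shot iterator, just like a pymongo cursor.
records = ({'_id': i, 'name': f'row{i}'} for i in range(3))

# On affected pandas versions, the first record is silently dropped
# while json_normalize probes the input for nested dicts.
df = pd.io.json.json_normalize(records)
print(df)  # only the records with _id 1 and 2 appear; _id 0 is lost
```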
Hi, the problem doesn't occur if I transform the result to a list before `json_normalize`; maybe I'm using it incorrectly and that is how it's supposed to work.

```python
#%%
import json
from pymongo import MongoClient  # assumed import, not shown in the original

client = MongoClient('localhost', 27017)
# testcollection is a collection handle; its setup was not shown.
with open('db_berlin_neu_aggregate_sort_.json.txt') as f:
    file_data = json.load(f)  # assumed: the loading line was cut off

testcollection.insert_many(file_data)

#%%
doc = testcollection.aggregate([...])  # aggregation pipeline elided in the original
df = json_normalize(doc)  # an entry goes missing when passing the cursor directly

#%%
df = json_normalize(list(doc))  # all entries present after converting to a list
```
Hi all, I have reproduced the error and identified what is going on.

Code sample

```python
import mongomock
import pandas as pd

test_data = [
    {'_id': 1, 'name': 'Miriam'},
    {'_id': 2, 'name': 'Peter'}
]

mongo_client = mongomock.MongoClient()
mongo_client["db_test"]["col_test"].insert_many(test_data)

cursor = mongo_client["db_test"]["col_test"].find({})
df = pd.json_normalize(cursor)
```

Output

```
   _id   name
0    2  Peter
```

Expected output

```
   _id    name
0    1  Miriam
1    2   Peter
```

Problem description

We are using a cursor as input for `json_normalize`. Internally, pandas probes the records with

```python
any([isinstance(x, dict) for x in y.values()] for y in data)
```

which consumes the first element of `data`. Since `data` is a cursor, the next time we want to read from the cursor we have "lost" that element.

Solution

```python
df = pd.json_normalize(list(cursor))
```

Hope it helps!
I am sorry for not being able to provide a reproducible example, but any attempt to reduce the problem to something limited makes the problem disappear.

Problem description

I have a pretty large collection of documents in MongoDB, which I am querying using pymongo. The resulting cursor is passed to `pd.io.json.json_normalize` to convert the resulting data into a data frame. In one example, which I am unfortunately unable to reduce to something reproducible, a single element of the 76845 entries in the cursor is not present in the resulting data frame. If I convert the cursor to a list before using `json_normalize`, all entries are present. The affected document itself looks completely sane, with nothing suspicious about it. Moreover, the default for `json_normalize` should be to raise errors, not to silently swallow rows.

Expected Output

All 76845 rows are present in the result when passing the cursor directly to `json_normalize`.
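For reference, the workaround mentioned above looks like this in context. This is a sketch; `my_db` and `my_collection` are hypothetical placeholders, not names from the report:

```python
import pandas as pd
from pymongo import MongoClient

# Hypothetical connection details; adapt to your own setup.
client = MongoClient('localhost', 27017)
collection = client['my_db']['my_collection']

cursor = collection.find({})

# Materializing the cursor first means json_normalize cannot silently
# consume an element while inspecting the input.
df = pd.io.json.json_normalize(list(cursor))
```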
Output of `pd.show_versions()`

```
INSTALLED VERSIONS
commit : None
python : 3.8.0.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.3-arch1-1
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.3
numpy : 1.17.4
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 42.0.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.10.1
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : 3.0.2
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
```