BUG: Avoid precision loss for large integral Decimal values in read_sql with coerce_float=True #63532
Hi all,
This fixes a precision issue in `read_sql` and `read_sql_query` when `coerce_float=True` and a DB driver returns large IDs as `decimal.Decimal`. Those values can get coerced to `float64` and rounded once they're above `2**53`; for example, `305184080441754059` becomes `305184080441754048`.

The fix is limited to the SQL ingestion path. For object columns, if we see an integral `Decimal` that would be lossy as float, we convert it to a Python `int` before dtype inference. Fractional `Decimal` values still coerce to float as before. Added sqlite-based regression tests for both cases.
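For concreteness, here is a minimal sketch of the conversion rule in plain Python. The helper name `_maybe_decimal_to_int` and the standalone constant are illustrative, not the actual pandas internals:

```python
from decimal import Decimal

# Largest integer float64 can represent exactly.
_FLOAT64_SAFE_MAX = 2**53

def _maybe_decimal_to_int(value):
    """Return an exact Python int for integral Decimals that would be
    lossy as float64; leave every other value untouched.
    (Hypothetical helper, sketching the rule described above.)"""
    if isinstance(value, Decimal) and value == value.to_integral_value():
        as_int = int(value)
        if abs(as_int) > _FLOAT64_SAFE_MAX:
            return as_int
    return value

# The rounding the fix avoids:
big_id = Decimal("305184080441754059")
print(int(float(big_id)))             # 305184080441754048 -- precision lost
print(_maybe_decimal_to_int(big_id))  # 305184080441754059 -- exact
```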
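And a hedged sketch of the kind of sqlite-based regression check described above (not the PR's actual test code). sqlite won't hand back `Decimal` on its own, so a converter is registered for a custom declared column type:

```python
import sqlite3
from decimal import Decimal

import pandas as pd

# Make sqlite store Decimal as text and return it as Decimal again.
sqlite3.register_adapter(Decimal, str)
sqlite3.register_converter("DECTEXT", lambda b: Decimal(b.decode()))

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE ids (id DECTEXT)")
conn.execute("INSERT INTO ids VALUES (?)", (Decimal("305184080441754059"),))

df = pd.read_sql_query("SELECT id FROM ids", conn, coerce_float=True)
# With the fix applied, the large integral Decimal should survive exactly
# instead of rounding to 305184080441754048 through float64.
assert df["id"].iloc[0] == 305184080441754059
```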