String right truncation pyodbc.ProgrammingError when using fast_executemany
#380
Comments
Some more details: after reading #295, I considered that, even though a TEMP table is not the target of the insert, one is used in the statement. Changing to a global temp table (## instead of #) fixes this specific instance, but then the same error happens again on a different table and a different column, with a different set of data. The TEMP table does not use any varchars, only one integer column, and everywhere in the DB all columns are varchar, not nvarchar.
Can you try adding …
A common thread seems to be that the buffer is capped at 510 bytes, which is 2 x 255 -- suggesting fast_executemany sometimes assumes a maximum string length of 255 characters (510 bytes as UTF-16).
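A quick sanity check of that arithmetic (a minimal sketch; the 255-character default is inferred from the observed 510-byte cap, not confirmed from the pyodbc source):

```python
# Hypothetical illustration of the 510-byte cap: a 255-character default
# parameter size, encoded as UTF-16 (2 bytes per code unit in SQL Server).
DEFAULT_CHARS = 255        # assumed default when the column size is unknown
BYTES_PER_UTF16_UNIT = 2   # wide-character encoding used by the driver
buffer_bytes = DEFAULT_CHARS * BYTES_PER_UTF16_UNIT
print(buffer_bytes)  # 510
```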
I've narrowed this down to a variation of #280. When the SQL statement contains more than one statement (i.e. more complex transactions, TRY/CATCH blocks, etc.) and fast_executemany=True, the SQLDescribeParam call doesn't work, and VARCHARs are therefore allocated a default buffer size of 510 bytes. This seems to be a known behaviour of …
The following code will reproduce the error (note the PRINT statement in the SQL, but this could be TRY/CATCH blocks, etc.). Note that in the actual application the SQL is significantly more complicated; this is the simplest case that reproduces the error. This produces the output with errors:
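The actual repro needs a live SQL Server connection, but the failure mode described above can be sketched without one (pure Python; all names here are hypothetical, not pyodbc internals): when the batch contains more than one statement, SQLDescribeParam cannot describe the parameters, the binding falls back to a default 255-character buffer, and any value longer than that raises the truncation error.

```python
DEFAULT_VARCHAR_CHARS = 255  # assumed fallback when SQLDescribeParam fails

def bind_parameter(value, described_size=None):
    """Simulate the binding choice: use the described column size when
    SQLDescribeParam succeeds, otherwise fall back to the default buffer."""
    limit = described_size if described_size is not None else DEFAULT_VARCHAR_CHARS
    if len(value) > limit:
        # Mimics the driver's "String data, right truncation" (SQLSTATE 22001)
        raise ValueError("String data, right truncation")
    return value

# Single-statement SQL: describe succeeds, the real column width (500) is used.
print(len(bind_parameter("y" * 300, described_size=500)))  # 300

# Multi-statement SQL (e.g. with a leading PRINT): describe fails, the
# 255-character default applies, and the same 300-character value now fails.
try:
    bind_parameter("y" * 300)
except ValueError as e:
    print(e)  # String data, right truncation
```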
However, changing any one of the below will avoid the exception:
Note that, unlike the solution in the wiki, a temporary table is not being used, and it's unclear to me why this problem happens when … Looking at @v-chojas's …
Having read through …, I think the solution would be to add this …, although I'm still a little unclear on why it works with …
Thanks, @v-chojas, that clears things up. The problem with … Options I see: …
* Enabling setinputsizes to also work with fast_executemany. Issue #380
* Refactored the copied code into a method to override input sizes in slow and fast mode. Extended the existing code so that it is now possible to also override the column types, not only the sizes. Fixed some warnings.
* Updated doc.
I'm going to close this now that #415 has been released in 4.0.24. However, I do want to get the binding behavior of fast and regular execution the same. If someone sets an encoding or other configuration, they will expect it to work.
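For readers landing here: with setinputsizes enabled for fast_executemany (per #415, pyodbc 4.0.24), the workaround is to declare the parameter sizes up front instead of relying on SQLDescribeParam. A sketch, with the column width (2000) as an assumption; the live pyodbc calls are commented out so the snippet stands alone without a database:

```python
# ODBC type code for SQL_VARCHAR (12 in the ODBC spec); hard-coded here so
# this sketch runs without pyodbc installed -- normally use pyodbc.SQL_VARCHAR.
SQL_VARCHAR = 12

# One (type, column size, decimal digits) tuple per parameter marker.
# 2000 is an assumed width; use your actual column's width.
input_sizes = [(SQL_VARCHAR, 2000, 0)]
print(input_sizes)

# With a live connection (pyodbc >= 4.0.24) this would look like:
#   crsr = conn.cursor()
#   crsr.fast_executemany = True
#   crsr.setinputsizes(input_sizes)
#   crsr.executemany("PRINT 'x'; INSERT INTO t (col) VALUES (?);", rows)
```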
Environment
To diagnose, we usually need to know the following, including version numbers. On Windows, be sure to specify 32-bit or 64-bit Python:
Issue
When executing a parameterized UPDATE statement with fast_executemany, the following exception is thrown. The column widths in the target database have more than enough size to hold the data.
This seems equivalent to issue #337, except that in this case I'm using pyodbc 4.0.23 and MS ODBC Driver 17, which fixed the issue in that post but continues to be a problem for me.
I've trimmed down the data to just two rows and can consistently reproduce the problem with just those two rows.
I'm using SQLAlchemy to construct the statements and am using the connection.execute(stmt, [row1, row2]) method to execute. Turning fast_executemany off fixes the problem, but unacceptably impacts performance.