built-in function fetch_tuple returned a result with an error set #359
What's the CCSID of the column and the job CCSID (and the default job CCSID)?
I may be wrong, but the CCSID of my IBM i is 1140 and I thought it could be something related to CCSID, so I created a table with CCSID 1140 and still got the error. Then I tried 37 and the result is a little better, but eventually it gets the error again.
You can query the column CCSID like so:

select ordinal_position, data_type, length, ccsid
  from qsys2.syscolumns
 where system_table_schema = '<SCHEMA>'
   and column_name = '<COLUMN>'
   and table_name = '<TABLE>'
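A small helper can build that lookup from Python (a sketch; the schema, table, and column names below are placeholders). One gotcha worth noting: the catalog stores ordinary identifiers in uppercase, so lowercase input should be upper-cased before comparing:

```python
def ccsid_query(schema, table, column):
    # Catalog entries store ordinary (non-delimited) identifiers in
    # uppercase, so upper-case the inputs before comparing.
    return (
        "select ordinal_position, data_type, length, ccsid "
        "from qsys2.syscolumns "
        f"where system_table_schema = '{schema.upper()}' "
        f"and table_name = '{table.upper()}' "
        f"and column_name = '{column.upper()}'"
    )

print(ccsid_query('kadler', 'portuguese', 'c'))
```

The resulting string can then be passed to a cursor's execute() as in the scripts elsewhere in this thread.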
I think I'm able to recreate it, though the result is slightly different.

create or replace table kadler.portuguese(c char(10) ccsid 1140);
delete from kadler.portuguese where 1=1;
insert into kadler.portuguese values('ÇÇÇÇÇÇÇÇ');

create or replace table kadler.english(c char(10) ccsid 37);
delete from kadler.english where 1=1;
insert into kadler.english values('ÇÇÇÇÇÇÇÇ');

import ibm_db_dbi as db2

conn = db2.connect()
cur = conn.cursor()

cur.execute('select * from kadler.portuguese')
print(cur.fetchone())
cur.execute('select * from kadler.english')
print(cur.fetchone())

cur.callproc('qsys2.qcmdexc', ('CHGJOB CCSID(1140)',))

cur.execute('select * from kadler.portuguese')
print(cur.fetchone())
cur.execute('select * from kadler.english')
print(cur.fetchone())

('ÇÇÇÇÇ\x00\x00\x00\x00\x00\x00\x07\x00',)
('ÇÇÇÇÇ\x00\x00\x00\x00\x00\x00\x07\x00',)
('ÇÇÇÇÇ\x00\x00\x00\x00\x00\x00\x07\x00',)
('ÇÇÇÇÇ\x00\x00\x00\x00\x00\x00\x07\x00',)
Above, you can see that there's extra garbage in the fetched data. I think in your case, the garbage just happens to contain an invalid UTF-8 sequence, which throws the exception. If I change the data to a bunch of "C" characters instead, I get the expected data:

('CCCCCCCC ',)
('CCCCCCCC ',)
('CCCCCCCC ',)
('CCCCCCCC ',)

I think we're getting back bad info from the underlying CLI APIs, so I'll have to investigate further.
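The decode failure described here is easy to demonstrate without a database (a sketch, assuming the over-read splits a multi-byte character): 'Ç' occupies two bytes in UTF-8, and a buffer cut in the middle of that sequence is no longer valid UTF-8:

```python
encoded = 'Ç'.encode('utf-8')
print(encoded)  # b'\xc3\x87' — a two-byte UTF-8 sequence

# Dropping the continuation byte mimics a truncated buffer:
try:
    encoded[:1].decode('utf-8')
except UnicodeDecodeError as exc:
    print('decode failed:', exc.reason)
```

Any stray byte that starts or splits a multi-byte sequence produces the same class of error the original traceback shows.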
Yeah, but any tip on where I can investigate? I don't have a clue...
But you also have that garbage in your returned data. So does your system also have the bug, or is it just a Python bug?
The problem is that ibm_db uses the returned column size information to determine the size of the buffer to allocate on the SQLBindCol. In this case a column size of 10 will result in a 10-byte buffer being allocated. However, converting to UTF-8 can see the byte length of the column increase beyond the size of the buffer: a 'Ç' is 1 byte in single-byte EBCDIC code pages, but 2 bytes in UTF-8. If the data expands beyond the buffer size, the result will be truncated and the total size of the data will be returned in the indicator. So far, the only problem is that we have truncated data, but later the fetch code uses the returned value in the indicator as the total length of the data in the buffer, assuming that it hasn't been truncated. This leads to a buffer over-read, and garbage data is returned.
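The byte arithmetic above can be sketched with Python's own EBCDIC codecs (cp037 corresponds to CCSID 37; this only illustrates the byte counts, not the CLI buffer handling itself):

```python
text = 'Ç' * 8  # the repro data from above

ebcdic = text.encode('cp037')  # single-byte EBCDIC: one byte per character
utf8 = text.encode('utf-8')    # 'Ç' becomes two bytes in UTF-8

print(len(ebcdic))  # 8  — fits a 10-byte buffer
print(len(utf8))    # 16 — no longer fits

# What survives in a 10-byte buffer: only five complete characters,
# matching the truncated 'ÇÇÇÇÇ...' output seen in the repro.
print(utf8[:10].decode('utf-8'))
```

This is why the repro prints exactly five Ç characters followed by leftover buffer bytes.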
Is there a solution? In my case, in Portugal, we use a lot of special characters.
Regards
I'm working on a fix.
Many thanks :) and glad you are there ;)
@kadler, using your code above and running the Python script, I get this output: ('▒▒▒▒▒\x00\x00\x00\x00\x00\x00\x07\x00',)
Should be fixed with kadler@2c4cfc3. I'll work on rolling out a new RPM shortly.
Again thank you Kevin for your expertise.
RPMs are now available. You should be able to do a yum upgrade python3-ibm_db and it should work (you may have to clear the cache with yum clean metadata first).
Can't say thank you enough! It means a lot to have Python as a tool to modernize my IBM i :)
Finally it works, but I get this. Many thanks.
Kevin, forget it. I think it's an issue with my bash shell. When I send this to another app, it appears correctly there. If you have any idea, feel free to save me again :)
You need to set your locale to a UTF-8 locale (e.g. on AIX this would be all caps: PT_PT). By default it's a single-byte locale, which causes Python to convert the character to 0xC7 instead of 0xC3 0x87, and 0xC7 on its own is not valid UTF-8. I answered your Stack Overflow question similarly.
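The difference between the two locales can be shown directly (a sketch; the exact locale name is platform-specific, as noted above):

```python
import locale

# The active locale decides what Python picks as the default byte
# encoding at the str/bytes boundary (e.g. for stdout).
print(locale.getpreferredencoding())

# Under a single-byte encoding 'Ç' is one byte; under UTF-8 it is two:
print('Ç'.encode('iso8859-1'))  # b'\xc7' — garbage on a UTF-8 terminal
print('Ç'.encode('utf-8'))      # b'\xc3\x87'
```

With the terminal and locale both set to UTF-8, the two-byte form is emitted and displays correctly.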
Worked again. Your knowledge is invaluable Kevin.
Thank you for sharing :)
@ramstein74 care to close the issue?
Hi, I'm from Portugal, so we have special characters, as you can see below. When I have a query that fetches a row containing the text "PORCELANA PARA MAÇARICO DE IGNIÇ." in a table column, I get this error:
4.4:zeta@~/pyWork/Tasks/allTasks> python3 test1.py
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8e in position 38: invalid start byte

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/QOpenSys/pkgs/lib/python3.6/site-packages/ibm_db_dbi.py", line 1472, in _fetch_helper
    row = ibm_db.fetch_tuple(self.stmt_handler)
SystemError: <built-in function fetch_tuple> returned a result with an error set

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test1.py", line 15, in <module>
    for row in c1:
  File "/QOpenSys/pkgs/lib/python3.6/site-packages/ibm_db_dbi.py", line 1128, in next
    row = self.fetchone()
  File "/QOpenSys/pkgs/lib/python3.6/site-packages/ibm_db_dbi.py", line 1492, in fetchone
    row_list = self._fetch_helper(1)
  File "/QOpenSys/pkgs/lib/python3.6/site-packages/ibm_db_dbi.py", line 1476, in _fetch_helper
    raise self.messages[-1]
ibm_db_dbi.Error: ibm_db_dbi::Error: SystemError('<built-in function fetch_tuple> returned a result with an error set',)
4.4:zeta@~/pyWork/Tasks/allTasks>
All other records are parsed correctly.
Any help?
Regards
My code:

import ibm_db_dbi as db2

conn = db2.connect()
c1 = conn.cursor()
sql = "select * ..... "  # querying only the record that crashed the code
c1.execute(sql)
for row in c1:
    print(row)
There is something strange: if I change "PORCELANA PARA MAÇARICO DE IGNIÇ." to "ÇÇÇÇÇ" it works, but adding more Ç characters to the end of the string eventually leads to the error.
Any help?
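The buffer-size explanation earlier in the thread accounts for this: the failing string fits the column in a single-byte EBCDIC CCSID, but grows past the allocated buffer once converted to UTF-8, and each extra 'Ç' widens the gap. A quick check of the character vs. byte counts:

```python
s = 'PORCELANA PARA MAÇARICO DE IGNIÇ.'

print(len(s))                  # 33 characters (33 bytes in single-byte EBCDIC)
print(len(s.encode('utf-8')))  # 35 bytes — each 'Ç' costs one extra byte
```

A string of only ASCII characters never expands, which is why the all-"C" version works and why adding Ç characters eventually trips the truncation.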