Allow utf-8 encoding failures for python2 on the request body for hashing #30

Merged
3 commits merged into DavidMuller:master on Sep 26, 2017

Conversation

@bigjust (Contributor) commented Sep 25, 2017

There has been some discussion about unnecessary unicode encoding.

I've removed the encoding operation, and added a failing test that exposes the issue.

add a separate test for python 3, which does require the encoding, but handles unicode better

This reverts commit 6a7e58b.
@bigjust (Contributor, Author) commented Sep 25, 2017

Found the issue: it's handled differently for python 2 and 3. In py2, the encode() call will throw a UnicodeDecodeError, which will get ignored, and the hash will compute successfully with the original body field. In py3, it'll encode successfully.

@@ -134,7 +134,7 @@ def get_aws_request_headers(self, r, aws_access_key, aws_secret_access_key, aws_
     body = r.body if r.body else bytes()
     try:
         body = body.encode('utf-8')
-    except AttributeError:
+    except (AttributeError, UnicodeDecodeError):
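
A minimal sketch of what the patched block now does on each interpreter (the safe_encode wrapper is illustrative, not part of the library):

def safe_encode(body):
    # Illustrative wrapper around the patched try/except above.
    try:
        # py3: str -> bytes. py2: str.encode('utf-8') first performs an
        # implicit ASCII decode, which fails on non-ASCII bytes.
        return body.encode('utf-8')
    except (AttributeError, UnicodeDecodeError):
        # AttributeError: py3 bytes has no .encode().
        # UnicodeDecodeError: py2 str containing non-ASCII bytes.
        # Either way, hash the original body as-is.
        return body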
@DavidMuller (Owner) commented:
thanks for this pull request @bigjust !

Would you mind adding an in-line comment explaining what this try/except block tries to accomplish in python 2 vs python 3? Right now, I'm trying to piece together the information you've gathered in this PR with the notes from the original PR that introduced the try/except block:

Due to encoding differences between python 2 and 3, we can't just apply the encoding blindly.

Example: b'foo' is a str in python 2, and a bytes literal in python 3.
b'foo'.encode('utf-8') works in python 2, since it's a string. The b is ignored.
b'foo'.encode('utf-8') fails in python 3, since it's already a byte literal.

Maybe it's just Sunday, but I'm having trouble understanding the different execution scenarios. For example, I've had a few reports of the encoding failures, but have never been able to nail down which python version people are having the errors on? Our python 2 production system hasn't had issues (at least not yet... 😄 )
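
To make the quoted note concrete, here is each interpreter at the prompt:

# Python 2 -- the b prefix is a no-op; b'foo' is a plain str
>>> b'foo'.encode('utf-8')
'foo'

# Python 3 -- b'foo' is bytes, which has no .encode()
>>> b'foo'.encode('utf-8')
AttributeError: 'bytes' object has no attribute 'encode'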

@bigjust (Contributor, Author) replied:

The issue is non-ASCII bytes in a string, e.g.:

b'foo\xc3'.encode('utf-8')

throws UnicodeDecodeError in python2, which we can safely ignore and send the str() body to be hashed. In python3, sending a string without encoding it will throw an error.

I've seen this UnicodeDecodeError thrown using production data on python2, and either catching and ignoring, or not encoding at all, worked with large data sets. I only learned of the py3 difference when the unit test build ran.
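
The py2 failure happens because str.encode('utf-8') implicitly decodes through ASCII first; a quick transcript:

# Python 2 -- the implicit ASCII decode runs before the utf-8 encode,
# so the stray \xc3 byte fails immediately
>>> 'foo\xc3'.encode('utf-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128)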

@DavidMuller (Owner) commented Sep 26, 2017:

FWIW, it looks like we haven't upgraded our aws-requests-auth version to 0.3.1, where this encoding change was originally introduced.

@DavidMuller (Owner) commented Sep 27, 2017:

I'm noticing that elasticsearch-py also raises UnicodeDecodeError during its normal operation. I'm running python 2.7.6 and elasticsearch-py 1.9.0:

In [5]: sys.version
Out[5]: '2.7.6 (default, Mar 22 2014, 22:59:56) \n[GCC 4.8.2]'

In [6]: import elasticsearch

In [7]: elasticsearch.__version__
Out[7]: (1, 9, 0)
In [22]: search_client.search(body={'filter': {'term': {'name': b'foo=bar\xc3'}}})

/usr/....../python2.7/site-packages/elasticsearch/serializer.pyc in dumps(self, data)
     45             return json.dumps(data, default=self.default)
     46         except (ValueError, TypeError) as e:
---> 47             raise SerializationError(data, e)
     48 
     49 DEFAULT_SERIALIZERS = {

SerializationError: ({'filter': {'term': {'name': 'foo=bar\xc3'}}}, UnicodeDecodeError('utf8', 'foo=bar\xc3', 7, 8, 'unexpected end of data'))

I get that same error if I pass in 'foo=bar\xc3' as well. I also get that error when using both aws-requests-auth version 0.2.5 (before the encoding change was originally introduced) and the most recent version of aws-requests-auth (0.4.0), which includes the encoding change. I'm trying to determine if it's "correct" for aws-requests-auth to raise given 'foo=bar\xc3' and b'foo=bar\xc3' (just as elasticsearch-py is doing).

@bigjust (Contributor, Author) commented Sep 27, 2017:

Ah. I think further complicating the issue is that this behavior may have changed in ES 2.2 (http://elasticsearch-py.readthedocs.io/en/master/Changelog.html#id10), and I'm using 5.4.

@DavidMuller (Owner) replied:

Interesting, yeah, I wind up with the unicode error regardless of which version combination of elasticsearch-py and aws-requests-auth I am using. For example, I get the unicode error on elasticsearch-py 5.4.0 with both aws-requests-auth 0.2.5 (before the encoding change was originally introduced) and the most recent version of aws-requests-auth (0.4.0), which includes the encoding change:

In [6]: sys.version
Out[6]: '2.7.6 (default, Mar 22 2014, 22:59:56) \n[GCC 4.8.2]'

In [7]: elasticsearch.__version__
Out[7]: (5, 4, 0)

In [8]: search_cluster_client.search(body={'filter': {'term': {'name': b'foo=bar\xc3'}}})
....

/usr/....../python2.7/site-packages/elasticsearch/serializer.pyc in dumps(self, data)
     48             return json.dumps(data, default=self.default, ensure_ascii=False)
     49         except (ValueError, TypeError) as e:
---> 50             raise SerializationError(data, e)
     51 
     52 DEFAULT_SERIALIZERS = {

SerializationError: ({'filter': {'term': {'name': 'foo=bar\xc3'}}}, UnicodeDecodeError('utf8', 'foo=bar\xc3', 7, 8, 'unexpected end of data'))

Looks like the suggestion from elasticsearch-py is to use unicode:

if you work with non-ascii data in python 2 you must use the unicode type or have proper encoding set in your environment.
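
A minimal sketch of that workaround on python 2, reusing the payload from the tracebacks above:

# Python 2 -- a byte str with a stray non-ASCII byte fails to serialize:
>>> import json
>>> json.dumps({'name': 'foo=bar\xc3'})
UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 7: unexpected end of data

# ...but the unicode type round-trips without error:
>>> json.dumps({'name': u'foo=bar\xc3'})
'{"name": "foo=bar\\u00c3"}'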

@bigjust changed the title from "Dont Attempt utf-8 encoding on the request body for hashing" to "Allow utf-8 encoding failures for python2 on the request body for hashing" on Sep 25, 2017
@DavidMuller merged commit 141168e into DavidMuller:master on Sep 26, 2017
@DavidMuller (Owner) commented:

This change was included in version 0.4.1.
