A call to CachingCodecRegistry.codecFor for HeapByteBuffer throws a CodecNotFoundException #1555

Closed
wants to merge 1 commit into from

Conversation

QuentinFAIDIDE

Please forgive my lack of Java proficiency; I'm not a Java dev.
It seems that updating our Spark job to Spark 3 raised some issues with the Cassandra driver for Scala (see graphsense/graphsense-transformation#38).
After some research, the error was narrowed down to the CachingCodecRegistry and BlobCodec classes.
In the former, in the codecFor function, primitiveCodec.accepts was passed a JavaType rather than a Class object, causing the ByteBuffer.class.equals(javaClass) test to fail, which in turn led to a CodecNotFoundException being thrown.
In the BlobCodec class, only ByteBuffer.class.equals(javaClass) was used, so child classes were not accepted.
I think this issue may require a fix.
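
As a small illustration (plain JDK only, nothing driver-specific; the class name HeapByteBufferCheck is just for this sketch): the runtime class of a wrapped buffer is java.nio.HeapByteBuffer, so a strict equality test against ByteBuffer.class fails while a subclass-aware test passes.

    import java.nio.ByteBuffer;

    public class HeapByteBufferCheck {
      public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap(new byte[] {1, 2, 3});
        // The concrete runtime class is the package-private java.nio.HeapByteBuffer.
        System.out.println(buf.getClass().getName());                          // java.nio.HeapByteBuffer
        // A strict equality test against ByteBuffer.class therefore fails,
        System.out.println(ByteBuffer.class.equals(buf.getClass()));           // false
        // while an assignability (subclass-aware) test succeeds.
        System.out.println(ByteBuffer.class.isAssignableFrom(buf.getClass())); // true
      }
    }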

  • Quentin.

…pe accepts function did not test for child class
@QuentinFAIDIDE QuentinFAIDIDE changed the title fix: a classType was not sent to pritiveCodec.accept(), and the blob was not testing for class child of ByteBuffer fix: a class type was not sent to pritiveCodec.accept() in CachingCodecRegistry, and the BlobCodec accept function was not testing for class child of ByteBuffer Jun 5, 2021
@adutra
Contributor

adutra commented Jun 5, 2021

Hi @QuentinFAIDIDE, I appreciate your involvement, but unfortunately your fix is not correct. The TypeCodec interface has three accepts methods:

  • accepts(GenericType<?> javaType)
  • accepts(Class<?> javaClass)
  • accepts(Object value)

They have different contracts and usages.
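
To make the difference concrete, here is a rough sketch against the driver's 4.x public API (TypeCodecs.BLOB, GenericType); the expected results follow the contracts described above, so treat them as illustrative rather than authoritative:

    import java.nio.ByteBuffer;
    import com.datastax.oss.driver.api.core.type.codec.TypeCodec;
    import com.datastax.oss.driver.api.core.type.codec.TypeCodecs;
    import com.datastax.oss.driver.api.core.type.reflect.GenericType;

    public class AcceptsOverloads {
      public static void main(String[] args) {
        TypeCodec<ByteBuffer> blob = TypeCodecs.BLOB;
        ByteBuffer value = ByteBuffer.wrap(new byte[] {1, 2, 3}); // runtime class: HeapByteBuffer

        // The generic-type and class overloads are strict: they match the declared type only.
        System.out.println(blob.accepts(GenericType.of(ByteBuffer.class))); // true
        System.out.println(blob.accepts(ByteBuffer.class));                 // true
        System.out.println(blob.accepts(value.getClass()));                 // false: HeapByteBuffer is not ByteBuffer

        // The value-based overload inspects the runtime value and accepts subclasses.
        System.out.println(blob.accepts(value));                            // true
      }
    }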

I think your original problem may not reside in the driver, but in a layer above. The error you are seeing is:

CodecNotFoundException: Codec not found for requested operation: [BLOB <-> java.nio.HeapByteBuffer]

This means that someone is calling CachingCodecRegistry.codecFor(DataType, Class<?>) with the wrong parameters.

If you are looking for a BLOB codec, the right call is CachingCodecRegistry.codecFor(DataType.BLOB, ByteBuffer.class). Indeed, there is a default BLOB codec registered in the driver that converts BLOB columns to ByteBuffer objects.

But, as explained in the docs, this method does not accept covariant classes. So if you call, say, CachingCodecRegistry.codecFor(DataType.BLOB, HeapByteBuffer.class), the method will throw CodecNotFoundException, even though HeapByteBuffer is a subtype of ByteBuffer.

Please check the code making use of the CodecRegistry API and verify that all calls are correct.

Also: there are several codecFor methods; please read their javadocs carefully. A possible explanation is that the calling code is calling the wrong codecFor method.
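
For example, here is a sketch of the lookups discussed here, assuming the 4.x driver API (the session parameter and the byte array are placeholders):

    import java.nio.ByteBuffer;
    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.type.DataTypes;
    import com.datastax.oss.driver.api.core.type.codec.TypeCodec;
    import com.datastax.oss.driver.api.core.type.codec.registry.CodecRegistry;

    public class CodecLookups {
      static void lookups(CqlSession session) {
        CodecRegistry registry = session.getContext().getCodecRegistry();
        ByteBuffer buffer = ByteBuffer.wrap(new byte[] {1, 2, 3}); // runtime class: HeapByteBuffer

        // Correct: look up by the type the default BLOB codec is registered for.
        TypeCodec<ByteBuffer> ok = registry.codecFor(DataTypes.BLOB, ByteBuffer.class);

        // Incorrect: the class-based overload is not covariant, so passing the concrete
        // runtime class would throw CodecNotFoundException.
        // registry.codecFor(DataTypes.BLOB, buffer.getClass());

        // Alternative: the value-based overload inspects the runtime value and resolves
        // the BLOB codec even though the value is a HeapByteBuffer.
        TypeCodec<ByteBuffer> fromValue = registry.codecFor(DataTypes.BLOB, buffer);
      }
    }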

@QuentinFAIDIDE
Author

OK, thank you for the fast response. I was indeed not sure where to look for the documentation of these classes, and did not check the accepts method overloads or the classes' javadocs before making my dirty fix 😬.
I will try, when I have time, to dig deeper into the documentation and code and find where the guilty codecFor call lies.
Thank you

@QuentinFAIDIDE QuentinFAIDIDE changed the title fix: a class type was not sent to pritiveCodec.accept() in CachingCodecRegistry, and the BlobCodec accept function was not testing for class child of ByteBuffer Unable to use the ByteBuffer coded for a HeapByteBuffer class instance Jun 5, 2021
@QuentinFAIDIDE QuentinFAIDIDE changed the title Unable to use the ByteBuffer coded for a HeapByteBuffer class instance A call to CachingCodecRegistry.codecFor for HeapByteBuffer throws a CodecNotFoundException Jun 5, 2021
@adutra
Contributor

adutra commented Jun 11, 2021

Hi @QuentinFAIDIDE, I'm going to close this since we can't merge it; I hope you are OK with that.
