HBASE-27065 Build against Hadoop 3.3.3 #4467
Conversation
When building against Hadoop 3.3.3, or any future Hadoop version that incorporates reload4j, the new Enforcer rule we have active in branch-2.5 and up, which bans logging frameworks other than log4j2, will trigger. We need to add exclusions to prevent that from happening so the build will succeed. Also exclude leveldbjni-all to avoid a LICENSE file generation error.
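For illustration, here is a minimal sketch of the kind of exclusions involved, shown on a hypothetical hadoop-common dependency; the coordinates are the well-known ones for these artifacts, but the actual patch edits the real Hadoop dependency declarations across the HBase POMs:

```xml
<!-- Sketch only: exclude the reload4j backend and bridge so the
     log4j2-only Enforcer rule passes, and exclude leveldbjni-all
     so LICENSE file generation does not trip on its metadata. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>3.3.3</version>
  <exclusions>
    <exclusion>
      <groupId>ch.qos.reload4j</groupId>
      <artifactId>reload4j</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-reload4j</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.fusesource.leveldbjni</groupId>
      <artifactId>leveldbjni-all</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```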
Hm. Other than reload4j and slf4j-reload4j, the rest of the transitive dependencies aren't new. Not sure what else we changed in Hadoop. :(
I'm not sure why leveldbjni is an issue now either @jojochuang. Perhaps there was a version increment, and in any case there is an architectural difference: on x86_64 the groupId is fusesource, while on aarch64 it is openlabtesting. The license metadata in the POMs of these arch-specific releases could vary. My build and test hosts are aarch64, so I may be one of the first to notice.
🎊 +1 overall
💔 -1 overall
💔 -1 overall
Trying to init minidfscluster with:

java.lang.NoClassDefFoundError: io/netty/handler/codec/http/HttpRequest
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
    at java.lang.Class.getDeclaredMethod(Class.java:2128)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.getFilterHandlers(DatanodeHttpServer.java:279)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:149)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:977)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1412)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:507)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2828)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2734)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1755)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:969)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:864)
💔 -1 overall
💔 -1 overall
💔 -1 overall
When building against Hadoop 3.3.3, or any future Hadoop version that incorporates reload4j, the new Enforcer rule we have active in branch-2.5 and up, which bans logging frameworks other than log4j2, will trigger. We need to add exclusions to prevent that from happening so the build will succeed. Also exclude leveldbjni-all to avoid a LICENSE file generation error. Add netty-all to the hadoop-hdfs test context... to fix tests failing trying to init minidfscluster.

Co-authored-by: stack <stack@apache.org>
Signed-off-by: Sean Busbey <busbey@apache.org>
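As a rough sketch of the netty-all change, assuming it sits next to the existing hadoop-hdfs test dependency and that the version is managed by the Hadoop 3 profile's dependencyManagement rather than declared here:

```xml
<!-- Sketch only: DatanodeHttpServer loads Netty HTTP classes at runtime,
     so netty-all must be on the classpath wherever tests start a
     minidfscluster. Version assumed to come from dependencyManagement. -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-all</artifactId>
  <scope>test</scope>
</dependency>
```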
This reverts commit 9936c39.
When building against Hadoop 3.3.3, or any future Hadoop version that incorporates reload4j, the new Enforcer rule we have active in branch-2.5 and up, which bans logging frameworks other than log4j2, will trigger. We need to add exclusions to prevent that from happening so the build will succeed.

Also exclude leveldbjni-all to avoid a LICENSE file generation error. hadoop-hdfs and hadoop-mapreduce are messy and export it along with findbugs and other clutter. In any case, it is better to exclude something we do not require than to add an unnecessary supplemental model.
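For context, the Enforcer rule in question is a bannedDependencies check along these lines; this is a hedged reconstruction of the pattern, not the exact rule text from branch-2.5, and the specific excludes listed here are assumptions:

```xml
<!-- Sketch of a bannedDependencies rule that rejects logging backends
     other than log4j2; the actual excludes in branch-2.5 may differ. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>banned-logging-frameworks</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <bannedDependencies>
            <excludes>
              <exclude>log4j:log4j</exclude>
              <exclude>ch.qos.reload4j:reload4j</exclude>
              <exclude>org.slf4j:slf4j-reload4j</exclude>
              <exclude>org.slf4j:slf4j-log4j12</exclude>
              <exclude>ch.qos.logback:logback-classic</exclude>
            </excludes>
          </bannedDependencies>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

This is why a Hadoop version that pulls in reload4j breaks the build: the banned artifacts arrive transitively, so the fix is to cut them off at the Hadoop dependency declarations rather than relax the rule.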