Describe what's wrong

We ran the following query
select * from parquet.`/github/gluten-docker/packages/tpch-data/lineitem/` as l1 left join parquet.`/github/gluten-docker/packages/tpch-data/lineitem/` as l2 on l1.l_orderkey = l2.l_orderkey limit 10;
and got an exception
java.lang.RuntimeException: Not found column l_orderkey#345 in block. There are only columns: l_orderkey, l_partkey, l_suppkey, l_linenumber, l_quantity, l_extendedprice, l_discount, l_tax, l_returnflag, l_linestatus, l_shipdate, l_commitdate, l_receiptdate, l_shipinstruct, l_shipmode, l_comment
0. ./output2/../contrib/libcxx/include/exception:133: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x15c3c833 in /github/ClickHouse-gluten-bigo/output2/utils/local-engine/libch.so
1. ./output2/../src/Common/Exception.cpp:58: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xc64d7da in /github/ClickHouse-gluten-bigo/output2/utils/local-engine/libch.so
2. ./output2/../src/Core/Block.cpp:0: DB::Block::getByName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0x11e3e53e in /github/ClickHouse-gluten-bigo/output2/utils/local-engine/libch.so
3. ./output2/../contrib/libcxx/include/vector:1682: local_engine::HashNativeSplitter::computePartitionId(DB::Block&) @ 0xa7220c5 in /github/ClickHouse-gluten-bigo/output2/utils/local-engine/libch.so
4. ./output2/../src/Common/PODArray.h:104: local_engine::NativeSplitter::split(DB::Block&) @ 0xa720777 in /github/ClickHouse-gluten-bigo/output2/utils/local-engine/libch.so
5. ./output2/../contrib/libcxx/include/deque:1403: local_engine::NativeSplitter::hasNext() @ 0xa721aeb in /github/ClickHouse-gluten-bigo/output2/utils/local-engine/libch.so
6. ./output2/../utils/local-engine/local_engine_jni.cpp:890: Java_io_glutenproject_vectorized_BlockSplitIterator_nativeHasNext @ 0xa73d6d7 in /github/ClickHouse-gluten-bigo/output2/utils/local-engine/libch.so
at io.glutenproject.vectorized.BlockSplitIterator.nativeHasNext(Native Method)
at io.glutenproject.vectorized.BlockSplitIterator.hasNext(BlockSplitIterator.java:43)
at org.apache.spark.sql.execution.utils.CHExecUtil$$anon$1.hasNext(CHExecUtil.scala:135)
at io.glutenproject.vectorized.CloseablePartitionedBlockIterator.hasNext(CloseablePartitionedBlockIterator.scala:34)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1491)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
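For reference, the same self-join can also be expressed through the DataFrame API. The following is only a minimal, hypothetical sketch of that form, assuming a spark-shell session in which the Gluten plugin with the ClickHouse backend is already enabled (plugin and backend settings are omitted); it has not been confirmed to fail in exactly the same way.

import org.apache.spark.sql.functions.col

// TPC-H lineitem data referenced in the SQL query above
val lineitem = spark.read.parquet("/github/gluten-docker/packages/tpch-data/lineitem/")

// Self-join on l_orderkey: each side gets its own attribute ID in the Spark plan
// (e.g. l_orderkey#345), while the native block only carries the plain parquet
// column names listed in the error message.
val joined = lineitem.alias("l1")
  .join(lineitem.alias("l2"), col("l1.l_orderkey") === col("l2.l_orderkey"), "left")

joined.limit(10).show()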
Does it reproduce on recent release?

The list of releases

Enable crash reporting

If possible, change "enabled" to true in "send_crash_reports" section in config.xml:
<send_crash_reports>
    <!-- Changing <enabled> to true allows sending crash reports to -->
    <!-- the ClickHouse core developers team via Sentry https://sentry.io -->
    <enabled>false</enabled>
</send_crash_reports>
How to reproduce
CREATE TABLE statements for all tables involved

Expected behavior
Error message and/or stacktrace
Additional context