Search before asking
I have searched in the issues and found no similar issues.
Describe the proposal
Currently, Kyuubi supports the Thrift-based HiveServer2 (HS2) protocol, but its result transmission is not efficient enough.
For the Spark engine, the main pain points are:
The driver has high memory pressure because it needs to collect the RDD as InternalRow, convert it to Row, and then convert it to TRow (row-based or column-based, depending on the client protocol) before sending the results back to the Kyuubi server; this typically consumes several times more memory than the same data stored in a Parquet file (see the sketch after this list).
The data conversion happens on the driver side, consuming significant CPU time as well.
The protocol does not support compression, even though compression is quite helpful in network-bandwidth-limited scenarios.
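For illustration, here is a minimal, hypothetical sketch (not Kyuubi's actual code) of the row-based path described above: the driver materializes every result row, then converts each field into the thrift TRow representation from hive-service-rpc. The DataFrame and its columns are invented for the example.

```scala
import org.apache.hive.service.rpc.thrift.{TColumnValue, TI32Value, TRow, TStringValue}
import org.apache.spark.sql.{Row, SparkSession}

object RowBasedPath {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("demo").getOrCreate()
    val df = spark.range(0, 3)
      .selectExpr("cast(id as int) as id", "concat('row-', id) as name")

    // 1) Full materialization on the driver: InternalRow -> Row.
    val rows: Array[Row] = df.collect()

    // 2) Per-row, per-field conversion to the thrift representation.
    val tRows: Array[TRow] = rows.map { row =>
      val tRow = new TRow()
      tRow.addToColVals(TColumnValue.i32Val(new TI32Value().setValue(row.getInt(0))))
      tRow.addToColVals(TColumnValue.stringVal(new TStringValue().setValue(row.getString(1))))
      tRow
    }
    println(s"converted ${tRows.length} rows")
    spark.stop()
  }
}
```

Every field passes through InternalRow, Row, and TRow on a single JVM, which is where both the memory amplification and the driver-side CPU cost come from.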
Apache Arrow is a columnar format that is more efficient for data transmission. It is already adopted by PySpark as the data serialization format between the JVM and the Python process, and it will be adopted by the ongoing Spark Connect. Kyuubi can support fetching results in Arrow format to improve transmission efficiency.
The core ideas are:
convert (encode as Arrow, with optional compression) the data on the executor side before collecting it to the driver
the driver collects the Arrow results, encodes the Arrow data as thrift binary data, sets a flag to indicate that the client should decode the results in Arrow format, and then sends them back to the server directly
the client is updated to support decoding and decompressing the Arrow format (a round-trip sketch follows this list)
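As a self-contained sketch of the encode/decode idea, the following uses the plain Arrow Java IPC API (not Kyuubi's or Spark's internals); the schema and data are invented for the example:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import java.util.Arrays

import org.apache.arrow.memory.RootAllocator
import org.apache.arrow.vector.{IntVector, VarCharVector, VectorSchemaRoot}
import org.apache.arrow.vector.ipc.{ArrowStreamReader, ArrowStreamWriter}
import org.apache.arrow.vector.types.pojo.{ArrowType, Field, FieldType, Schema}

object ArrowRoundTrip {
  def main(args: Array[String]): Unit = {
    val allocator = new RootAllocator()
    val schema = new Schema(Arrays.asList(
      new Field("id", FieldType.nullable(new ArrowType.Int(32, true)), null),
      new Field("name", FieldType.nullable(ArrowType.Utf8.INSTANCE), null)))

    // "Executor side": fill one batch and serialize it to Arrow IPC stream bytes.
    val root = VectorSchemaRoot.create(schema, allocator)
    val id = root.getVector("id").asInstanceOf[IntVector]
    val name = root.getVector("name").asInstanceOf[VarCharVector]
    id.allocateNew(2)
    name.allocateNew()
    id.setSafe(0, 1); name.setSafe(0, "a".getBytes("UTF-8"))
    id.setSafe(1, 2); name.setSafe(1, "b".getBytes("UTF-8"))
    root.setRowCount(2)

    val out = new ByteArrayOutputStream()
    val writer = new ArrowStreamWriter(root, null, out)
    writer.start(); writer.writeBatch(); writer.end()
    // These bytes are what the proposal would ship through thrift as binary data.
    val payload = out.toByteArray

    // "Client side": decode the bytes back into Arrow record batches.
    val reader = new ArrowStreamReader(new ByteArrayInputStream(payload), allocator)
    while (reader.loadNextBatch()) {
      println(s"decoded batch with ${reader.getVectorSchemaRoot.getRowCount} rows")
    }
    reader.close(); writer.close(); root.close(); allocator.close()
  }
}
```

In the proposal, the payload bytes produced on each executor would travel through the thrift channel as binary data. Arrow's IPC format also supports optional buffer compression (e.g. LZ4 or ZSTD), which addresses the compression pain point above.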
Task list
Add kyuubi.operation.result.codec to KyuubiConf #3866 (a configuration sketch follows)
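A hypothetical sketch of what such an entry could look like, following the builder pattern KyuubiConf already uses elsewhere; the key is taken from the task above, while the doc text, value set, and default are assumptions, not the merged implementation:

```scala
import org.apache.kyuubi.config.ConfigEntry
import org.apache.kyuubi.config.KyuubiConf.buildConf

// Hypothetical config entry; 'simple' (the current thrift row-based codec)
// and 'arrow' are assumed values, not necessarily the merged ones.
val OPERATION_RESULT_CODEC: ConfigEntry[String] =
  buildConf("kyuubi.operation.result.codec")
    .doc("The codec of the operation result: 'simple' keeps the existing " +
      "thrift row-based encoding, 'arrow' serializes results in Arrow format.")
    .version("1.7.0")
    .stringConf
    .checkValues(Set("simple", "arrow"))
    .createWithDefault("simple")
```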
Are you willing to submit PR?