feat(frontend): change query_handle to return data stream #5556

Merged · 4 commits · Sep 28, 2022
Changes from 1 commit
26 changes: 18 additions & 8 deletions src/frontend/src/handler/query.rs
@@ -12,10 +12,13 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use futures::stream::BoxStream;
use futures::StreamExt;
use pgwire::pg_field_descriptor::PgFieldDescriptor;
use pgwire::pg_response::{PgResponse, StatementType};
use pgwire::pg_server::BoxedError;
use pgwire::types::Row;
use risingwave_common::error::Result;
use risingwave_common::error::{ErrorCode, Result, RwError};
use risingwave_common::session_config::QueryMode;
use risingwave_sqlparser::ast::Statement;
use tracing::debug;
@@ -29,7 +32,7 @@ use crate::scheduler::{
};
use crate::session::{OptimizerContext, SessionImpl};

pub type QueryResultSet = Vec<Row>;
pub type QueryResultSet = BoxStream<'static, std::result::Result<Row, BoxedError>>;
Member:

Can we use a generic to avoid this Box? There could be millions of rows in the result set, and I'm not sure whether this impacts the performance a lot.

Contributor Author:

Indeed, it may impact the performance. I tried to use a generic, but it makes so many places full of generic params; I think it's not appropriate to use a generic param (at least the way I used it).
I think there are other ways to solve this problem (see the sketch after this list):

  • use a concrete type that implements the Stream trait to avoid the generic param
  • use Vec<Row> instead of Row to decrease the number of next() calls:
    BoxStream<'static, std::result::Result<Vec<Row>, BoxedError>>
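
A minimal, self-contained sketch of the second option, with stand-in Row and BoxedError types so it compiles on its own; these names and the helper function are illustrative, not the PR's final code:

```rust
use futures::stream::{self, BoxStream, StreamExt};

// Stand-ins for pgwire's `Row` and `BoxedError`, just to keep the sketch self-contained.
type Row = Vec<Option<String>>;
type BoxedError = Box<dyn std::error::Error + Send + Sync>;

// Each `next()` now yields a whole chunk of rows, so the dynamic-dispatch cost
// is paid once per chunk instead of once per row.
type BatchedResultSet = BoxStream<'static, Result<Vec<Row>, BoxedError>>;

fn batched(chunks: Vec<Vec<Row>>) -> BatchedResultSet {
    stream::iter(chunks.into_iter().map(Ok)).boxed()
}
```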

Member:

> Indeed, it may impact the performance.

Could you elaborate this point? 👀

Contributor Author:

BoxStream is a Box<dyn Stream>, i.e. it's dynamically dispatched. So I guess every time we call stream.next() to get a row, the call goes through some mechanism like a vtable. Hence if there are millions of rows in the result set, we may need to go through the vtable millions of times.
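
To make the trade-off concrete, a rough self-contained comparison of the two call paths under discussion; the function names and stand-in types are hypothetical, not part of the PR:

```rust
use futures::stream::{BoxStream, Stream, StreamExt};

type Row = Vec<Option<String>>;
type BoxedError = Box<dyn std::error::Error + Send + Sync>;

// Dynamic dispatch: each `next()` goes through the `dyn Stream` vtable.
async fn drain_boxed(mut rows: BoxStream<'static, Result<Row, BoxedError>>) -> usize {
    let mut n = 0;
    while let Some(Ok(_row)) = rows.next().await {
        n += 1;
    }
    n
}

// Static dispatch: the concrete stream type is known at the call site, so the
// compiler can inline `next()`, at the cost of the generic parameter spreading
// to every function that touches the result set.
async fn drain_generic<S>(mut rows: S) -> usize
where
    S: Stream<Item = Result<Row, BoxedError>> + Unpin,
{
    let mut n = 0;
    while let Some(Ok(_row)) = rows.next().await {
        n += 1;
    }
    n
}
```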

Member (@xxchan, Sep 27, 2022):

Oh yeees, Sorry I misread it as "may not impact" 😄😄😄

Member:

> use a concrete type that implements the Stream trait to avoid the generic param

Yes. We may use Type Alias Impl Trait (TAIT) here.
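
Roughly what that could look like — a nightly-only sketch using the unstable type_alias_impl_trait feature, with stand-in types; not code from this PR, and subject to how the unstable feature currently behaves:

```rust
// Nightly-only sketch: the alias names one concrete (but opaque) stream type,
// so callers avoid both the Box and an explicit generic parameter.
#![feature(type_alias_impl_trait)]

use futures::stream::Stream;

type Row = Vec<Option<String>>;
type BoxedError = Box<dyn std::error::Error + Send + Sync>;

pub type QueryResultSet = impl Stream<Item = Result<Row, BoxedError>>;

// Defining use: the returned concrete stream type is what the alias resolves to.
pub fn stream_rows(rows: Vec<Row>) -> QueryResultSet {
    futures::stream::iter(rows.into_iter().map(Ok))
}
```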

Contributor Author:

> use Vec<Row> instead of Row to decrease the number of next() calls:
> BoxStream<'static, std::result::Result<Vec<Row>, BoxedError>>

I have used this approach to decrease the number of stream.next() calls.
And I will create a new PR later that introduces a new stream type to avoid BoxStream.

Member:

Please create an issue for that


pub async fn handle_query(
context: OptimizerContext,
@@ -55,7 +58,7 @@ pub async fn handle_query(
};
debug!("query_mode:{:?}", query_mode);

let (rows, pg_descs) = match query_mode {
let (mut row_stream, pg_descs) = match query_mode {
QueryMode::Local => {
if stmt_type.is_dml() {
// DML do not support local mode yet.
@@ -69,10 +72,15 @@
};

let rows_count = match stmt_type {
StatementType::SELECT => rows.len() as i32,
StatementType::SELECT => 0_i32,
Contributor:

Why is this 0?

Contributor Author:

For query statements, rows_count is meaningless because it's stream mode; we will calculate the row count as we receive the data.
I have added a related comment on the rows_count field of PgResponse.

// row count of effected row. Used for INSERT, UPDATE, DELETE, COPY, and other statements that

Member:

It would be better if we use Option here.
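
Purely to illustrate that suggestion, a hypothetical response shape (not the actual pgwire PgResponse) where the count is an Option instead of a magic 0:

```rust
use futures::stream::BoxStream;

type Row = Vec<Option<String>>;
type BoxedError = Box<dyn std::error::Error + Send + Sync>;

// Hypothetical shape only: `None` means "no meaningful count (streamed SELECT)",
// `Some(n)` carries the affected-row count for INSERT/UPDATE/DELETE.
pub struct StreamingResponse {
    pub affected_rows: Option<i32>,
    pub row_stream: BoxStream<'static, Result<Row, BoxedError>>,
}
```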

StatementType::INSERT | StatementType::DELETE | StatementType::UPDATE => {
let first_row = rows[0].values();
let affected_rows_str = first_row[0]
// Get the row from the row_stream.
let first_row = row_stream
.next()
.await
.expect("compute node should return affected rows in output")
.map_err(|err| RwError::from(ErrorCode::InternalError(format!("{}", err))))?;
let affected_rows_str = first_row.values()[0]
.as_ref()
.expect("compute node should return affected rows in output");
String::from_utf8(affected_rows_str.to_vec())
@@ -88,7 +96,9 @@
flush_for_write(&session, stmt_type).await?;
}

Ok(PgResponse::new(stmt_type, rows_count, rows, pg_descs))
Ok(PgResponse::new_for_stream(
stmt_type, rows_count, row_stream, pg_descs,
))
}

fn to_statement_type(stmt: &Statement) -> StatementType {
@@ -195,7 +205,7 @@ async fn local_execute(
// TODO: Passing sql here
let execution =
LocalQueryExecution::new(query, front_env.clone(), "", epoch, session.auth_context());
let rsp = Ok((execution.collect_rows(format).await?, pg_descs));
let rsp = Ok((execution.stream_rows(format), pg_descs));

// Release hummock snapshot for local execution.
hummock_snapshot_manager.release(epoch, &query_id).await;
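As a side note on how such a stream-shaped result set gets consumed, here is a minimal runnable sketch that drains rows and counts them only as they arrive; all names and the tokio/futures setup are stand-ins, not the frontend's actual code:

```rust
use futures::stream::{self, BoxStream, StreamExt};

type Row = Vec<Option<String>>;
type BoxedError = Box<dyn std::error::Error + Send + Sync>;
type QueryResultSet = BoxStream<'static, Result<Row, BoxedError>>;

// Drain the stream row by row, the way a pgwire-style writer would;
// the total row count is only known once the stream is exhausted.
async fn drain(mut rows: QueryResultSet) -> Result<usize, BoxedError> {
    let mut count = 0;
    while let Some(row) = rows.next().await {
        let _row = row?; // a real server would encode and send the row here
        count += 1;
    }
    Ok(count)
}

#[tokio::main]
async fn main() -> Result<(), BoxedError> {
    let rows: Vec<Result<Row, BoxedError>> =
        vec![Ok(vec![Some("1".into())]), Ok(vec![Some("2".into())])];
    println!("streamed {} rows", drain(stream::iter(rows).boxed()).await?);
    Ok(())
}
```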
19 changes: 13 additions & 6 deletions src/frontend/src/scheduler/distributed/query_manager.rs
@@ -18,7 +18,8 @@ use std::sync::Arc;

use futures::StreamExt;
use futures_async_stream::try_stream;
use pgwire::pg_server::{Session, SessionId};
use pgwire::pg_server::{BoxedError, Session, SessionId};
use pgwire::types::Row;
use risingwave_batch::executor::BoxedDataChunkStream;
use risingwave_common::array::DataChunk;
use risingwave_common::error::RwError;
@@ -122,7 +123,7 @@ impl QueryManager {

// TODO: Clean up queries status when ends. This should be done lazily.

query_result_fetcher.collect_rows_from_channel(format).await
Ok(query_result_fetcher.stream_from_channel(format))
}

pub fn cancel_queries_in_session(&self, session_id: SessionId) {
@@ -190,13 +191,19 @@ impl QueryResultFetcher {
Box::pin(self.run_inner())
}

async fn collect_rows_from_channel(mut self, format: bool) -> SchedulerResult<QueryResultSet> {
let mut result_sets = vec![];
#[try_stream(ok = Row, error = BoxedError)]
async fn stream_from_channel_inner(mut self, format: bool) {
while let Some(chunk_inner) = self.chunk_rx.recv().await {
let chunk = chunk_inner?;
result_sets.extend(to_pg_rows(chunk, format));
let rows = to_pg_rows(chunk, format);
for row in rows {
yield row;
}
}
Ok(result_sets)
}

fn stream_from_channel(self, format: bool) -> QueryResultSet {
Box::pin(self.stream_from_channel_inner(format))
}
}

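For comparison, a macro-free sketch of the same idea as stream_from_channel_inner above: turn a channel of chunk results into a flat stream of row results with unfold and flat_map. The channel and type names are illustrative, not the PR's:

```rust
use futures::stream::{self, BoxStream, StreamExt};
use tokio::sync::mpsc;

type Row = Vec<Option<String>>;
type BoxedError = Box<dyn std::error::Error + Send + Sync>;

// Pull chunks off the channel until it closes, then flatten each chunk
// (already converted to rows here) into individual `Row` items.
fn rows_from_channel(
    rx: mpsc::Receiver<Result<Vec<Row>, BoxedError>>,
) -> BoxStream<'static, Result<Row, BoxedError>> {
    stream::unfold(rx, |mut rx| async move {
        rx.recv().await.map(|chunk| (chunk, rx))
    })
    .flat_map(|chunk| match chunk {
        Ok(rows) => stream::iter(rows.into_iter().map(Ok)).boxed(),
        Err(e) => stream::once(async move { Err(e) }).boxed(),
    })
    .boxed()
}
```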
18 changes: 12 additions & 6 deletions src/frontend/src/scheduler/local.rs
@@ -16,8 +16,10 @@
use std::collections::HashMap;
use std::sync::Arc;

use futures_async_stream::{for_await, try_stream};
use futures_async_stream::try_stream;
use itertools::Itertools;
use pgwire::pg_server::BoxedError;
use pgwire::types::Row;
use risingwave_batch::executor::{BoxedDataChunkStream, ExecutorBuilder};
use risingwave_batch::task::TaskId;
use risingwave_common::array::DataChunk;
@@ -99,15 +101,19 @@ impl LocalQueryExecution {
Box::pin(self.run_inner())
}

pub async fn collect_rows(self, format: bool) -> SchedulerResult<QueryResultSet> {
let data_stream = self.run();
let mut rows = vec![];
#[try_stream(ok = Row, error = BoxedError)]
async fn stream_row_inner(data_stream: BoxedDataChunkStream, format: bool) {
#[for_await]
for chunk in data_stream {
rows.extend(to_pg_rows(chunk?, format));
let rows = to_pg_rows(chunk?, format);
for row in rows {
yield row;
}
}
}

Ok(rows)
pub fn stream_rows(self, format: bool) -> QueryResultSet {
Box::pin(Self::stream_row_inner(self.run(), format))
}

/// Convert query to plan fragment.