[SPARK-43009][SQL] Parameterized sql() with Any constants
### What changes were proposed in this pull request?
In this PR, I propose to change the API of parameterized SQL and replace the type of argument values from strings to `Any` in Scala/Java/Python, and to `Expression.Literal` in the protobuf API. The language APIs can accept `Any` objects from which literal expressions can be constructed.

#### Scala/Java:

```scala
  def sql(sqlText: String, args: Map[String, Any]): DataFrame
```
Values of the `args` map are wrapped by the `lit()` function, which leaves a `Column` as is and creates a literal from any other Java/Scala object (for more details, see the `Scala` tab at https://spark.apache.org/docs/latest/sql-ref-datatypes.html).
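
For illustration, a minimal sketch of the new Scala signature in action (not taken from this PR; it assumes an existing `SparkSession` bound to `spark` with this change applied):

```scala
import java.time.LocalDate
import org.apache.spark.sql.functions.lit

// Plain Scala objects are wrapped by lit(); an explicit Column literal is passed through as is.
val df = spark.sql(
  "SELECT * FROM VALUES (1, 'Steven', DATE'2023-03-21') AS t(rank, name, birthdate) " +
    "WHERE rank = :rank AND name = :name AND birthdate <= :birthdate",
  Map("rank" -> 1, "name" -> "Steven", "birthdate" -> lit(LocalDate.of(2023, 3, 21))))
df.show()
```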

#### Python:

```python
def sql(self, sqlQuery: str, args: Optional[Dict[str, Any]] = None, **kwargs: Any) -> DataFrame:
```
Similarly to the Scala/Java `sql`, Python's `sql()` accepts Python objects as values of the `args` dictionary (see https://spark.apache.org/docs/latest/sql-ref-datatypes.html for details about acceptable Python objects). `sql()` converts the dictionary values to `Column` literal expressions via `lit()`.
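
A corresponding minimal Python sketch (not from this PR; assumes a running `SparkSession` bound to `spark`):

```python
from datetime import date

# Plain Python objects in `args` are converted to literal Columns via lit().
df = spark.sql(
    "SELECT * FROM VALUES (1, 'Steven', DATE'2023-03-21') AS t(rank, name, birthdate) "
    "WHERE rank = :rank AND birthdate <= :birthdate",
    args={"rank": 1, "birthdate": date(2023, 3, 21)},
)
df.show()
```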

#### Protobuf:

```proto
message SqlCommand {
  // (Required) SQL Query.
  string sql = 1;

  // (Optional) A map of parameter names to literal expressions.
  map<string, Expression.Literal> args = 2;
}
```

For example:
```scala
scala> val sqlText = """SELECT s FROM VALUES ('Jeff /*__*/ Green'), ('E\'Twaun Moore') AS t(s) WHERE s = :player_name"""
sqlText: String = SELECT s FROM VALUES ('Jeff /*__*/ Green'), ('E\'Twaun Moore') AS t(s) WHERE s = :player_name

scala> sql(sqlText, args = Map("player_name" -> lit("E'Twaun Moore"))).show(false)
+-------------+
|s            |
+-------------+
|E'Twaun Moore|
+-------------+
```
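
Since plain values are wrapped by `lit()`, the same query can presumably also be run without constructing a `Column` explicitly — a hedged variation of the example above:

```scala
// The plain Scala string is converted to a literal by lit(); no SQL-level escaping is needed.
sql(sqlText, args = Map("player_name" -> "E'Twaun Moore")).show(false)
```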

### Why are the changes needed?
The current implementation of the parameterized `sql()` requires arguments as string values that are parsed to SQL literal expressions, which causes the following issues:
1. SQL comments are skipped while parsing, so some fragments of the input might be lost. For example, in `'Europe -- Amsterdam'`, the `-- Amsterdam` part is excluded from the input.
2. Special characters in string values must be escaped, for instance `'E\'Twaun Moore'`.

### Does this PR introduce _any_ user-facing change?
No, since the parameterized SQL feature (#38864) hasn't been released yet.

### How was this patch tested?
By running the affected tests:
```
$ build/sbt "test:testOnly *ParametersSuite"
$ python/run-tests --parallelism=1 --testnames 'pyspark.sql.tests.connect.test_connect_basic SparkConnectBasicTests.test_sql_with_args'
$ python/run-tests --parallelism=1 --testnames 'pyspark.sql.session SparkSession.sql'
```

Closes #40623 from MaxGekk/parameterized-sql-any.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
MaxGekk authored and cloud-fan committed Apr 4, 2023
1 parent b283c6a commit 156a12e
Showing 14 changed files with 293 additions and 233 deletions.
@@ -35,6 +35,7 @@ import org.apache.spark.sql.catalyst.encoders.{AgnosticEncoder, RowEncoder}
import org.apache.spark.sql.catalyst.encoders.AgnosticEncoders.{BoxedLongEncoder, UnboundRowEncoder}
import org.apache.spark.sql.connect.client.{SparkConnectClient, SparkResult}
import org.apache.spark.sql.connect.client.util.{Cleaner, ConvertToArrow}
import org.apache.spark.sql.connect.common.LiteralValueProtoConverter.toLiteralProto
import org.apache.spark.sql.internal.CatalogImpl
import org.apache.spark.sql.types.StructType

@@ -215,15 +216,16 @@ class SparkSession private[sql] (
* @param sqlText
* A SQL statement with named parameters to execute.
* @param args
* A map of parameter names to string values that are parsed as SQL literal expressions. For
* example, map keys: "rank", "name", "birthdate"; map values: "1", "'Steven'",
* "DATE'2023-03-21'". The fragments of string values belonged to SQL comments are skipped
* while parsing.
* A map of parameter names to Java/Scala objects that can be converted to SQL literal
* expressions. See <a href="https://spark.apache.org/docs/latest/sql-ref-datatypes.html">
* Supported Data Types</a> for supported value types in Scala/Java. For example, map keys:
* "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). Map value
* can be also a `Column` of literal expression, in that case it is taken as is.
*
* @since 3.4.0
*/
@Experimental
def sql(sqlText: String, args: Map[String, String]): DataFrame = {
def sql(sqlText: String, args: Map[String, Any]): DataFrame = {
sql(sqlText, args.asJava)
}

@@ -234,19 +236,24 @@
* @param sqlText
* A SQL statement with named parameters to execute.
* @param args
* A map of parameter names to string values that are parsed as SQL literal expressions. For
* example, map keys: "rank", "name", "birthdate"; map values: "1", "'Steven'",
* "DATE'2023-03-21'". The fragments of string values belonged to SQL comments are skipped
* while parsing.
* A map of parameter names to Java/Scala objects that can be converted to SQL literal
* expressions. See <a href="https://spark.apache.org/docs/latest/sql-ref-datatypes.html">
* Supported Data Types</a> for supported value types in Scala/Java. For example, map keys:
* "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). Map value
* can be also a `Column` of literal expression, in that case it is taken as is.
*
* @since 3.4.0
*/
@Experimental
def sql(sqlText: String, args: java.util.Map[String, String]): DataFrame = newDataFrame {
def sql(sqlText: String, args: java.util.Map[String, Any]): DataFrame = newDataFrame {
builder =>
// Send the SQL once to the server and then check the output.
val cmd = newCommand(b =>
b.setSqlCommand(proto.SqlCommand.newBuilder().setSql(sqlText).putAllArgs(args)))
b.setSqlCommand(
proto.SqlCommand
.newBuilder()
.setSql(sqlText)
.putAllArgs(args.asScala.mapValues(toLiteralProto).toMap.asJava)))
val plan = proto.Plan.newBuilder().setCommand(cmd)
val responseIter = client.execute(plan.build())

@@ -53,11 +53,8 @@ message SqlCommand {
// (Required) SQL Query.
string sql = 1;

// (Optional) A map of parameter names to string values that are parsed as
// SQL literal expressions. For example, map keys: "rank", "name", "birthdate";
// map values: "1", "'Steven'", "DATE'2023-03-21'". The fragments of string values
// belonged to SQL comments are skipped while parsing.
map<string, string> args = 2;
// (Optional) A map of parameter names to literal expressions.
map<string, Expression.Literal> args = 2;
}

// A command that can create DataFrame global temp view or local temp view.
@@ -108,11 +108,8 @@ message SQL {
// (Required) The SQL query.
string query = 1;

// (Optional) A map of parameter names to string values that are parsed as
// SQL literal expressions. For example, map keys: "rank", "name", "birthdate";
// map values: "1", "'Steven'", "DATE'2023-03-21'". The fragments of string values
// belonged to SQL comments are skipped while parsing.
map<string, string> args = 2;
// (Optional) A map of parameter names to literal expressions.
map<string, Expression.Literal> args = 2;
}

// Relation that reads from a file / table or other data source. Does not have additional
@@ -214,11 +214,11 @@ class SparkConnectPlanner(val session: SparkSession) {
}

private def transformSql(sql: proto.SQL): LogicalPlan = {
val args = sql.getArgsMap.asScala.toMap
val args = sql.getArgsMap
val parser = session.sessionState.sqlParser
val parsedPlan = parser.parsePlan(sql.getQuery)
if (args.nonEmpty) {
ParameterizedQuery(parsedPlan, args.mapValues(parser.parseExpression).toMap)
if (!args.isEmpty) {
ParameterizedQuery(parsedPlan, args.asScala.mapValues(transformLiteral).toMap)
} else {
parsedPlan
}
@@ -1690,7 +1690,9 @@ class SparkConnectPlanner(val session: SparkSession) {
sessionId: String,
responseObserver: StreamObserver[ExecutePlanResponse]): Unit = {
// Eagerly execute commands of the provided SQL string.
val df = session.sql(getSqlCommand.getSql, getSqlCommand.getArgsMap)
val df = session.sql(
getSqlCommand.getSql,
getSqlCommand.getArgsMap.asScala.mapValues(transformLiteral).toMap)
// Check if commands have been executed.
val isCommand = df.queryExecution.commandExecuted.isInstanceOf[CommandResult]
val rows = df.logicalPlan match {
9 changes: 5 additions & 4 deletions python/pyspark/sql/connect/plan.py
@@ -945,13 +945,12 @@ def plan(self, session: "SparkConnectClient") -> proto.Relation:


class SQL(LogicalPlan):
def __init__(self, query: str, args: Optional[Dict[str, str]] = None) -> None:
def __init__(self, query: str, args: Optional[Dict[str, Any]] = None) -> None:
super().__init__(None)

if args is not None:
for k, v in args.items():
assert isinstance(k, str)
assert isinstance(v, str)

self._query = query
self._args = args
@@ -962,7 +961,7 @@ def plan(self, session: "SparkConnectClient") -> proto.Relation:

if self._args is not None and len(self._args) > 0:
for k, v in self._args.items():
plan.sql.args[k] = v
plan.sql.args[k].CopyFrom(LiteralExpression._from_value(v).to_plan(session).literal)

return plan

@@ -971,7 +970,9 @@ def command(self, session: "SparkConnectClient") -> proto.Command:
cmd.sql_command.sql = self._query
if self._args is not None and len(self._args) > 0:
for k, v in self._args.items():
cmd.sql_command.args[k] = v
cmd.sql_command.args[k].CopyFrom(
LiteralExpression._from_value(v).to_plan(session).literal
)
return cmd


50 changes: 25 additions & 25 deletions python/pyspark/sql/connect/proto/commands_pb2.py
@@ -35,7 +35,7 @@


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(
b'\n\x1cspark/connect/commands.proto\x12\rspark.connect\x1a\x19google/protobuf/any.proto\x1a\x1fspark/connect/expressions.proto\x1a\x1dspark/connect/relations.proto"\xe9\x03\n\x07\x43ommand\x12]\n\x11register_function\x18\x01 \x01(\x0b\x32..spark.connect.CommonInlineUserDefinedFunctionH\x00R\x10registerFunction\x12H\n\x0fwrite_operation\x18\x02 \x01(\x0b\x32\x1d.spark.connect.WriteOperationH\x00R\x0ewriteOperation\x12_\n\x15\x63reate_dataframe_view\x18\x03 \x01(\x0b\x32).spark.connect.CreateDataFrameViewCommandH\x00R\x13\x63reateDataframeView\x12O\n\x12write_operation_v2\x18\x04 \x01(\x0b\x32\x1f.spark.connect.WriteOperationV2H\x00R\x10writeOperationV2\x12<\n\x0bsql_command\x18\x05 \x01(\x0b\x32\x19.spark.connect.SqlCommandH\x00R\nsqlCommand\x12\x35\n\textension\x18\xe7\x07 \x01(\x0b\x32\x14.google.protobuf.AnyH\x00R\textensionB\x0e\n\x0c\x63ommand_type"\x90\x01\n\nSqlCommand\x12\x10\n\x03sql\x18\x01 \x01(\tR\x03sql\x12\x37\n\x04\x61rgs\x18\x02 \x03(\x0b\x32#.spark.connect.SqlCommand.ArgsEntryR\x04\x61rgs\x1a\x37\n\tArgsEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n\x05value\x18\x02 \x01(\tR\x05value:\x02\x38\x01"\x96\x01\n\x1a\x43reateDataFrameViewCommand\x12-\n\x05input\x18\x01 \x01(\x0b\x32\x17.spark.connect.RelationR\x05input\x12\x12\n\x04name\x18\x02 \x01(\tR\x04name\x12\x1b\n\tis_global\x18\x03 \x01(\x08R\x08isGlobal\x12\x18\n\x07replace\x18\x04 \x01(\x08R\x07replace"\x9b\x08\n\x0eWriteOperation\x12-\n\x05input\x18\x01 \x01(\x0b\x32\x17.spark.connect.RelationR\x05input\x12\x1b\n\x06source\x18\x02 \x01(\tH\x01R\x06source\x88\x01\x01\x12\x14\n\x04path\x18\x03 \x01(\tH\x00R\x04path\x12?\n\x05table\x18\x04 \x01(\x0b\x32\'.spark.connect.WriteOperation.SaveTableH\x00R\x05table\x12:\n\x04mode\x18\x05 \x01(\x0e\x32&.spark.connect.WriteOperation.SaveModeR\x04mode\x12*\n\x11sort_column_names\x18\x06 \x03(\tR\x0fsortColumnNames\x12\x31\n\x14partitioning_columns\x18\x07 \x03(\tR\x13partitioningColumns\x12\x43\n\tbucket_by\x18\x08 \x01(\x0b\x32&.spark.connect.WriteOperation.BucketByR\x08\x62ucketBy\x12\x44\n\x07options\x18\t \x03(\x0b\x32*.spark.connect.WriteOperation.OptionsEntryR\x07options\x1a:\n\x0cOptionsEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n\x05value\x18\x02 \x01(\tR\x05value:\x02\x38\x01\x1a\x82\x02\n\tSaveTable\x12\x1d\n\ntable_name\x18\x01 \x01(\tR\ttableName\x12X\n\x0bsave_method\x18\x02 \x01(\x0e\x32\x37.spark.connect.WriteOperation.SaveTable.TableSaveMethodR\nsaveMethod"|\n\x0fTableSaveMethod\x12!\n\x1dTABLE_SAVE_METHOD_UNSPECIFIED\x10\x00\x12#\n\x1fTABLE_SAVE_METHOD_SAVE_AS_TABLE\x10\x01\x12!\n\x1dTABLE_SAVE_METHOD_INSERT_INTO\x10\x02\x1a[\n\x08\x42ucketBy\x12.\n\x13\x62ucket_column_names\x18\x01 \x03(\tR\x11\x62ucketColumnNames\x12\x1f\n\x0bnum_buckets\x18\x02 \x01(\x05R\nnumBuckets"\x89\x01\n\x08SaveMode\x12\x19\n\x15SAVE_MODE_UNSPECIFIED\x10\x00\x12\x14\n\x10SAVE_MODE_APPEND\x10\x01\x12\x17\n\x13SAVE_MODE_OVERWRITE\x10\x02\x12\x1d\n\x19SAVE_MODE_ERROR_IF_EXISTS\x10\x03\x12\x14\n\x10SAVE_MODE_IGNORE\x10\x04\x42\x0b\n\tsave_typeB\t\n\x07_source"\xad\x06\n\x10WriteOperationV2\x12-\n\x05input\x18\x01 \x01(\x0b\x32\x17.spark.connect.RelationR\x05input\x12\x1d\n\ntable_name\x18\x02 \x01(\tR\ttableName\x12\x1f\n\x08provider\x18\x03 \x01(\tH\x00R\x08provider\x88\x01\x01\x12L\n\x14partitioning_columns\x18\x04 \x03(\x0b\x32\x19.spark.connect.ExpressionR\x13partitioningColumns\x12\x46\n\x07options\x18\x05 \x03(\x0b\x32,.spark.connect.WriteOperationV2.OptionsEntryR\x07options\x12_\n\x10table_properties\x18\x06 
\x03(\x0b\x32\x34.spark.connect.WriteOperationV2.TablePropertiesEntryR\x0ftableProperties\x12\x38\n\x04mode\x18\x07 \x01(\x0e\x32$.spark.connect.WriteOperationV2.ModeR\x04mode\x12J\n\x13overwrite_condition\x18\x08 \x01(\x0b\x32\x19.spark.connect.ExpressionR\x12overwriteCondition\x1a:\n\x0cOptionsEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n\x05value\x18\x02 \x01(\tR\x05value:\x02\x38\x01\x1a\x42\n\x14TablePropertiesEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n\x05value\x18\x02 \x01(\tR\x05value:\x02\x38\x01"\x9f\x01\n\x04Mode\x12\x14\n\x10MODE_UNSPECIFIED\x10\x00\x12\x0f\n\x0bMODE_CREATE\x10\x01\x12\x12\n\x0eMODE_OVERWRITE\x10\x02\x12\x1d\n\x19MODE_OVERWRITE_PARTITIONS\x10\x03\x12\x0f\n\x0bMODE_APPEND\x10\x04\x12\x10\n\x0cMODE_REPLACE\x10\x05\x12\x1a\n\x16MODE_CREATE_OR_REPLACE\x10\x06\x42\x0b\n\t_providerB"\n\x1eorg.apache.spark.connect.protoP\x01\x62\x06proto3'
b'\n\x1cspark/connect/commands.proto\x12\rspark.connect\x1a\x19google/protobuf/any.proto\x1a\x1fspark/connect/expressions.proto\x1a\x1dspark/connect/relations.proto"\xe9\x03\n\x07\x43ommand\x12]\n\x11register_function\x18\x01 \x01(\x0b\x32..spark.connect.CommonInlineUserDefinedFunctionH\x00R\x10registerFunction\x12H\n\x0fwrite_operation\x18\x02 \x01(\x0b\x32\x1d.spark.connect.WriteOperationH\x00R\x0ewriteOperation\x12_\n\x15\x63reate_dataframe_view\x18\x03 \x01(\x0b\x32).spark.connect.CreateDataFrameViewCommandH\x00R\x13\x63reateDataframeView\x12O\n\x12write_operation_v2\x18\x04 \x01(\x0b\x32\x1f.spark.connect.WriteOperationV2H\x00R\x10writeOperationV2\x12<\n\x0bsql_command\x18\x05 \x01(\x0b\x32\x19.spark.connect.SqlCommandH\x00R\nsqlCommand\x12\x35\n\textension\x18\xe7\x07 \x01(\x0b\x32\x14.google.protobuf.AnyH\x00R\textensionB\x0e\n\x0c\x63ommand_type"\xb3\x01\n\nSqlCommand\x12\x10\n\x03sql\x18\x01 \x01(\tR\x03sql\x12\x37\n\x04\x61rgs\x18\x02 \x03(\x0b\x32#.spark.connect.SqlCommand.ArgsEntryR\x04\x61rgs\x1aZ\n\tArgsEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x12\x37\n\x05value\x18\x02 \x01(\x0b\x32!.spark.connect.Expression.LiteralR\x05value:\x02\x38\x01"\x96\x01\n\x1a\x43reateDataFrameViewCommand\x12-\n\x05input\x18\x01 \x01(\x0b\x32\x17.spark.connect.RelationR\x05input\x12\x12\n\x04name\x18\x02 \x01(\tR\x04name\x12\x1b\n\tis_global\x18\x03 \x01(\x08R\x08isGlobal\x12\x18\n\x07replace\x18\x04 \x01(\x08R\x07replace"\x9b\x08\n\x0eWriteOperation\x12-\n\x05input\x18\x01 \x01(\x0b\x32\x17.spark.connect.RelationR\x05input\x12\x1b\n\x06source\x18\x02 \x01(\tH\x01R\x06source\x88\x01\x01\x12\x14\n\x04path\x18\x03 \x01(\tH\x00R\x04path\x12?\n\x05table\x18\x04 \x01(\x0b\x32\'.spark.connect.WriteOperation.SaveTableH\x00R\x05table\x12:\n\x04mode\x18\x05 \x01(\x0e\x32&.spark.connect.WriteOperation.SaveModeR\x04mode\x12*\n\x11sort_column_names\x18\x06 \x03(\tR\x0fsortColumnNames\x12\x31\n\x14partitioning_columns\x18\x07 \x03(\tR\x13partitioningColumns\x12\x43\n\tbucket_by\x18\x08 \x01(\x0b\x32&.spark.connect.WriteOperation.BucketByR\x08\x62ucketBy\x12\x44\n\x07options\x18\t \x03(\x0b\x32*.spark.connect.WriteOperation.OptionsEntryR\x07options\x1a:\n\x0cOptionsEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n\x05value\x18\x02 \x01(\tR\x05value:\x02\x38\x01\x1a\x82\x02\n\tSaveTable\x12\x1d\n\ntable_name\x18\x01 \x01(\tR\ttableName\x12X\n\x0bsave_method\x18\x02 \x01(\x0e\x32\x37.spark.connect.WriteOperation.SaveTable.TableSaveMethodR\nsaveMethod"|\n\x0fTableSaveMethod\x12!\n\x1dTABLE_SAVE_METHOD_UNSPECIFIED\x10\x00\x12#\n\x1fTABLE_SAVE_METHOD_SAVE_AS_TABLE\x10\x01\x12!\n\x1dTABLE_SAVE_METHOD_INSERT_INTO\x10\x02\x1a[\n\x08\x42ucketBy\x12.\n\x13\x62ucket_column_names\x18\x01 \x03(\tR\x11\x62ucketColumnNames\x12\x1f\n\x0bnum_buckets\x18\x02 \x01(\x05R\nnumBuckets"\x89\x01\n\x08SaveMode\x12\x19\n\x15SAVE_MODE_UNSPECIFIED\x10\x00\x12\x14\n\x10SAVE_MODE_APPEND\x10\x01\x12\x17\n\x13SAVE_MODE_OVERWRITE\x10\x02\x12\x1d\n\x19SAVE_MODE_ERROR_IF_EXISTS\x10\x03\x12\x14\n\x10SAVE_MODE_IGNORE\x10\x04\x42\x0b\n\tsave_typeB\t\n\x07_source"\xad\x06\n\x10WriteOperationV2\x12-\n\x05input\x18\x01 \x01(\x0b\x32\x17.spark.connect.RelationR\x05input\x12\x1d\n\ntable_name\x18\x02 \x01(\tR\ttableName\x12\x1f\n\x08provider\x18\x03 \x01(\tH\x00R\x08provider\x88\x01\x01\x12L\n\x14partitioning_columns\x18\x04 \x03(\x0b\x32\x19.spark.connect.ExpressionR\x13partitioningColumns\x12\x46\n\x07options\x18\x05 \x03(\x0b\x32,.spark.connect.WriteOperationV2.OptionsEntryR\x07options\x12_\n\x10table_properties\x18\x06 
\x03(\x0b\x32\x34.spark.connect.WriteOperationV2.TablePropertiesEntryR\x0ftableProperties\x12\x38\n\x04mode\x18\x07 \x01(\x0e\x32$.spark.connect.WriteOperationV2.ModeR\x04mode\x12J\n\x13overwrite_condition\x18\x08 \x01(\x0b\x32\x19.spark.connect.ExpressionR\x12overwriteCondition\x1a:\n\x0cOptionsEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n\x05value\x18\x02 \x01(\tR\x05value:\x02\x38\x01\x1a\x42\n\x14TablePropertiesEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n\x05value\x18\x02 \x01(\tR\x05value:\x02\x38\x01"\x9f\x01\n\x04Mode\x12\x14\n\x10MODE_UNSPECIFIED\x10\x00\x12\x0f\n\x0bMODE_CREATE\x10\x01\x12\x12\n\x0eMODE_OVERWRITE\x10\x02\x12\x1d\n\x19MODE_OVERWRITE_PARTITIONS\x10\x03\x12\x0f\n\x0bMODE_APPEND\x10\x04\x12\x10\n\x0cMODE_REPLACE\x10\x05\x12\x1a\n\x16MODE_CREATE_OR_REPLACE\x10\x06\x42\x0b\n\t_providerB"\n\x1eorg.apache.spark.connect.protoP\x01\x62\x06proto3'
)


@@ -187,29 +187,29 @@
_COMMAND._serialized_start = 139
_COMMAND._serialized_end = 628
_SQLCOMMAND._serialized_start = 631
_SQLCOMMAND._serialized_end = 775
_SQLCOMMAND._serialized_end = 810
_SQLCOMMAND_ARGSENTRY._serialized_start = 720
_SQLCOMMAND_ARGSENTRY._serialized_end = 775
_CREATEDATAFRAMEVIEWCOMMAND._serialized_start = 778
_CREATEDATAFRAMEVIEWCOMMAND._serialized_end = 928
_WRITEOPERATION._serialized_start = 931
_WRITEOPERATION._serialized_end = 1982
_WRITEOPERATION_OPTIONSENTRY._serialized_start = 1406
_WRITEOPERATION_OPTIONSENTRY._serialized_end = 1464
_WRITEOPERATION_SAVETABLE._serialized_start = 1467
_WRITEOPERATION_SAVETABLE._serialized_end = 1725
_WRITEOPERATION_SAVETABLE_TABLESAVEMETHOD._serialized_start = 1601
_WRITEOPERATION_SAVETABLE_TABLESAVEMETHOD._serialized_end = 1725
_WRITEOPERATION_BUCKETBY._serialized_start = 1727
_WRITEOPERATION_BUCKETBY._serialized_end = 1818
_WRITEOPERATION_SAVEMODE._serialized_start = 1821
_WRITEOPERATION_SAVEMODE._serialized_end = 1958
_WRITEOPERATIONV2._serialized_start = 1985
_WRITEOPERATIONV2._serialized_end = 2798
_WRITEOPERATIONV2_OPTIONSENTRY._serialized_start = 1406
_WRITEOPERATIONV2_OPTIONSENTRY._serialized_end = 1464
_WRITEOPERATIONV2_TABLEPROPERTIESENTRY._serialized_start = 2557
_WRITEOPERATIONV2_TABLEPROPERTIESENTRY._serialized_end = 2623
_WRITEOPERATIONV2_MODE._serialized_start = 2626
_WRITEOPERATIONV2_MODE._serialized_end = 2785
_SQLCOMMAND_ARGSENTRY._serialized_end = 810
_CREATEDATAFRAMEVIEWCOMMAND._serialized_start = 813
_CREATEDATAFRAMEVIEWCOMMAND._serialized_end = 963
_WRITEOPERATION._serialized_start = 966
_WRITEOPERATION._serialized_end = 2017
_WRITEOPERATION_OPTIONSENTRY._serialized_start = 1441
_WRITEOPERATION_OPTIONSENTRY._serialized_end = 1499
_WRITEOPERATION_SAVETABLE._serialized_start = 1502
_WRITEOPERATION_SAVETABLE._serialized_end = 1760
_WRITEOPERATION_SAVETABLE_TABLESAVEMETHOD._serialized_start = 1636
_WRITEOPERATION_SAVETABLE_TABLESAVEMETHOD._serialized_end = 1760
_WRITEOPERATION_BUCKETBY._serialized_start = 1762
_WRITEOPERATION_BUCKETBY._serialized_end = 1853
_WRITEOPERATION_SAVEMODE._serialized_start = 1856
_WRITEOPERATION_SAVEMODE._serialized_end = 1993
_WRITEOPERATIONV2._serialized_start = 2020
_WRITEOPERATIONV2._serialized_end = 2833
_WRITEOPERATIONV2_OPTIONSENTRY._serialized_start = 1441
_WRITEOPERATIONV2_OPTIONSENTRY._serialized_end = 1499
_WRITEOPERATIONV2_TABLEPROPERTIESENTRY._serialized_start = 2592
_WRITEOPERATIONV2_TABLEPROPERTIESENTRY._serialized_end = 2658
_WRITEOPERATIONV2_MODE._serialized_start = 2661
_WRITEOPERATIONV2_MODE._serialized_end = 2820
# @@protoc_insertion_point(module_scope)
