29 commits
9a21551
Atomic CTAS
mccheah Jun 4, 2019
266784e
Wire together the rest of replace table logical plans.
mccheah Jun 4, 2019
baeabc8
Remove redundant code
mccheah Jun 4, 2019
bc8d3b5
Some unit tests
mccheah Jun 5, 2019
6c958b9
DDL parser tests for replace table
mccheah Jun 5, 2019
8c0270f
Merge remote-tracking branch 'origin/master' into spark-27724
mccheah Jun 5, 2019
95ba92d
Merge remote-tracking branch 'origin/master' into spark-27724
mccheah Jun 7, 2019
08f115e
Fix merge conflicts
mccheah Jun 7, 2019
89aea5e
Address comments
mccheah Jun 8, 2019
a9142e9
Fix javadoc
mccheah Jun 8, 2019
842886e
Address comments
mccheah Jun 15, 2019
b68df7c
Merge remote-tracking branch 'origin/master' into spark-27724
mccheah Jun 15, 2019
2bf4b5f
Fix merge conflicts
mccheah Jun 15, 2019
80dc0cc
Address comments
mccheah Jun 25, 2019
71f29ed
Merge remote-tracking branch 'origin/master' into spark-27724
mccheah Jun 25, 2019
a333df8
Merge remote-tracking branch 'origin/master' into spark-27724
mccheah Jul 3, 2019
4011a8b
Resolve conflict
mccheah Jul 3, 2019
aebf767
Newline
mccheah Jul 3, 2019
6231f6a
Add support for CREATE OR REPLACE TABLE
mccheah Jul 8, 2019
96b6db6
Name more boolean parameters
mccheah Jul 8, 2019
295ff75
Remove REPLACE...TEMPORARY
mccheah Jul 11, 2019
d7e1e67
Merge remote-tracking branch 'origin/master' into spark-27724
mccheah Jul 11, 2019
deaf255
Address comments
mccheah Jul 12, 2019
0b5c029
Merge remote-tracking branch 'origin/master' into spark-27724
mccheah Jul 16, 2019
581dba2
Address comments
mccheah Jul 16, 2019
be04476
Address comments
mccheah Jul 18, 2019
609eb9c
Add stageCreateOrReplace to indicate to the staging catalog the appro…
mccheah Jul 19, 2019
2f6e0b6
Revert droppedTables stuff
mccheah Jul 19, 2019
05a827d
Address comments
mccheah Jul 19, 2019
@@ -113,6 +113,14 @@ statement
(AS? query)? #createHiveTable
| CREATE TABLE (IF NOT EXISTS)? target=tableIdentifier
LIKE source=tableIdentifier locationSpec? #createTableLike
| replaceTableHeader ('(' colTypeList ')')? tableProvider
Contributor Author:
Are there other flavors of REPLACE TABLE that we need to support?

Contributor:
I'm not sure that we should support all of what's already here, at least not to begin with.

I think that the main use of REPLACE TABLE as an atomic operation is REPLACE TABLE ... AS SELECT. That's because the replacement should only happen if the write succeeds and the write could easily fail for a lot of reasons. Without a write, this is just syntactic sugar for a combined drop and create.

I think the initial PR should focus on just the RTAS case. That simplifies this because it no longer needs the type list. What do you think?

Contributor:
I think this should support the USING clause that is used to pass the provider name.

Contributor Author:
Because of the tableProvider field at the end, I think USING is still supported, right? As mentioned elsewhere, this is copied from CTAS.

Contributor:
Is there a test for it?

((OPTIONS options=tablePropertyList) |
Contributor:
Should OPTIONS be supported in v2? Right now, we copy options into table properties because v2 has no separate options. I also think it is confusing to users that there are table properties and options.

Contributor Author:
In general I copied this entirely from the equivalent create table statement. How does the syntax for REPLACE TABLE differ from that of the existing CREATE TABLE? My understanding is REPLACE TABLE is exactly equivalent to CREATE TABLE with the exception of not having an IF NOT EXISTS option.

Contributor:
True, it should be the same as CREATE TABLE. That's a good reason to carry this forward.

(PARTITIONED BY partitioning=transformList) |
bucketSpec |
Contributor:
Should bucketing be added using BUCKET BY? Or should we rely on bucket as a transform in the PARTITIONED BY clause?

Contributor Author:
Same as #24798 (comment) - this is copied from the create table spec.

locationSpec |
(COMMENT comment=STRING) |
(TBLPROPERTIES tableProps=tablePropertyList))*
(AS? query)? #replaceTable
| ANALYZE TABLE tableIdentifier partitionSpec? COMPUTE STATISTICS
(identifier | FOR COLUMNS identifierSeq | FOR ALL COLUMNS)? #analyze
| ALTER TABLE multipartIdentifier
@@ -261,6 +269,10 @@ createTableHeader
: CREATE TEMPORARY? EXTERNAL? TABLE (IF NOT EXISTS)? multipartIdentifier
;

replaceTableHeader
Contributor:
I'm worried about creating new SQL syntax in Spark. AFAIK a similar syntax is CREATE OR REPLACE TABLE, which is implemented in DB2 and Google BigQuery.

This is not standard SQL syntax, so it's not surprising that Oracle doesn't support it. If Spark wants an API for replacing tables, I think it's more reasonable to follow DB2 and BigQuery here.

Contributor:
I'm fine using [CREATE OR] REPLACE TABLE instead of REPLACE TABLE [IF NOT EXISTS]. I think that is better syntax.

Contributor Author:
So do we remove the existing IF NOT EXISTS support from the SQL syntax? I think we'd want to avoid breaking existing SQL queries even in the 3.0 release.

Or do we support:

CREATE (OR REPLACE)? TABLE ... (IF NOT EXISTS)?

and then throw AnalysisException if: CREATE OR REPLACE TABLE ... IF NOT EXISTS?

Contributor Author:
I'm fine using [CREATE OR] REPLACE TABLE instead of REPLACE TABLE [IF NOT EXISTS]. I think that is better syntax.

We don't support REPLACE... IF NOT EXISTS in this PR. It's only either CREATE TABLE IF NOT EXISTS or REPLACE TABLE. I don't see any reason to include CREATE OR REPLACE TABLE if we don't want to replace the IF NOT EXISTS parameter in existing Spark SQL, but removing support for IF NOT EXISTS risks breaking existing SQL workflows as I've described above.

Contributor:
I'd say let's remove IF NOT EXISTS from the parser, but just for REPLACE TABLE.

Then, we should add an option to prepend CREATE OR, which would set an orCreate flag to true. If that flag is true and the table doesn't exist, then create the table instead of replacing it. If it is false, then throw an exception that the table doesn't exist.

Contributor Author:
I think that makes sense, though the current implementation doesn't have a rule supporting REPLACE TABLE IF NOT EXISTS.

Contributor:
If it isn't supported, then we can throw an exception.

Contributor Author:
Since that clause isn't supported in the SqlBase.g4 file itself, I'd expect the parser to throw an exception for us.

: (CREATE OR)? REPLACE TABLE multipartIdentifier
;
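To make the semantics agreed on in the thread above concrete, here is a small hedged illustration; the catalog name testcat, the provider foo, and the table names are made up, and a SparkSession `spark` with a v2 catalog configured is assumed (shell-style sketch, not part of this diff).

// Illustrative only; `testcat` and `foo` are assumed names.
spark.sql("CREATE OR REPLACE TABLE testcat.db.t USING foo AS SELECT 1 AS id")
// orCreate = true: creates testcat.db.t if it does not exist, otherwise replaces it.

spark.sql("REPLACE TABLE testcat.db.t USING foo AS SELECT 2 AS id")
// orCreate = false: fails if testcat.db.t does not exist.

spark.sql("REPLACE TABLE testcat.db.t IF NOT EXISTS USING foo AS SELECT 3 AS id")
// Not part of the grammar above, so the parser rejects it.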

bucketSpec
: CLUSTERED BY identifierList
(SORTED BY orderedIdentifierList)?
@@ -0,0 +1,142 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.sql.catalog.v2;

import java.util.Map;

import org.apache.spark.sql.catalog.v2.expressions.Transform;
import org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException;
import org.apache.spark.sql.catalyst.analysis.NoSuchTableException;
import org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException;
import org.apache.spark.sql.sources.v2.StagedTable;
import org.apache.spark.sql.sources.v2.SupportsWrite;
import org.apache.spark.sql.sources.v2.writer.BatchWrite;
import org.apache.spark.sql.sources.v2.writer.WriterCommitMessage;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;

/**
* An optional mix-in for implementations of {@link TableCatalog} that support staging the creation
* of a table before committing the table's metadata along with its contents in CREATE TABLE AS
* SELECT or REPLACE TABLE AS SELECT operations.
* <p>
* It is highly recommended to implement this trait whenever possible so that CREATE TABLE AS
* SELECT and REPLACE TABLE AS SELECT operations are atomic. For example, when one runs a REPLACE
* TABLE AS SELECT operation, if the catalog does not implement this trait, the planner will first
* drop the table via {@link TableCatalog#dropTable(Identifier)}, then create the table via
* {@link TableCatalog#createTable(Identifier, StructType, Transform[], Map)}, and then perform
* the write via {@link SupportsWrite#newWriteBuilder(CaseInsensitiveStringMap)}. However, if the
* write operation fails, the catalog will have already dropped the table, and the planner cannot
* roll back the dropping of the table.
* <p>
* If the catalog implements this plugin, the catalog can implement the methods to "stage" the
* creation and the replacement of a table. After the table's
Contributor:
Looks like this doc is unfinished.

Contributor Author:
It's not unfinished; the comment continues with the links on the following line.

Contributor:
Oops, I misread it. Thanks!

* {@link BatchWrite#commit(WriterCommitMessage[])} is called,
* {@link StagedTable#commitStagedChanges()} is called, at which point the staged table can
* complete both the data write and the metadata swap operation atomically.
*/
public interface StagingTableCatalog extends TableCatalog {
Contributor (@cloud-fan, Jul 11, 2019):
Can we move this API out to a new PR and implement atomic CTAS? I think REPLACE TABLE is not a blocker to this API and we don't have to do them together. This can also help us move forward faster, since designing new SQL syntax (REPLACE TABLE) usually needs more time to reach consensus.

Contributor Author:
I would think that REPLACE TABLE and REPLACE TABLE AS SELECT are about as simple as one can get for this kind of feature. Can we stick with that for now? I'd really like to see REPLACE get in soon; it's a blocker for some of my workflows.

Contributor Author:
Also, thinking about this a bit more: perhaps the reason REPLACE TABLE isn't standard in other database systems is that those systems support transactional operations. So users typically open a transaction, run the table creation, and then commit the operation.

Since we don't have support for specifically opening and closing transactions, we have to support this use case via an atomic REPLACE keyword.


/**
* Stage the creation of a table, preparing it to be committed into the metastore.
* <p>
* When the table is committed, the contents of any writes performed by the Spark planner are
* committed along with the metadata about the table passed into this method's arguments. If the
* table exists when this method is called, the method should throw an exception accordingly. If
* another process concurrently creates the table before this table's staged changes are
* committed, an exception should be thrown by {@link StagedTable#commitStagedChanges()}.
*
* @param ident a table identifier
* @param schema the schema of the new table, as a struct type
* @param partitions transforms to use for partitioning data in the table
* @param properties a string map of table properties
* @return metadata for the new table
* @throws TableAlreadyExistsException If a table or view already exists for the identifier
* @throws UnsupportedOperationException If a requested partition transform is not supported
* @throws NoSuchNamespaceException If the identifier namespace does not exist (optional)
*/
StagedTable stageCreate(
Identifier ident,
StructType schema,
Transform[] partitions,
Map<String, String> properties) throws TableAlreadyExistsException, NoSuchNamespaceException;

/**
* Stage the replacement of a table, preparing it to be committed into the metastore when the
* returned table's {@link StagedTable#commitStagedChanges()} is called.
* <p>
* When the table is committed, the contents of any writes performed by the Spark planner are
* committed along with the metadata about the table passed into this method's arguments. If the
* table exists, the metadata and the contents of this table replace the metadata and contents of
* the existing table. If a concurrent process commits changes to the table's data or metadata
* while the write is being performed but before the staged changes are committed, the catalog
* can decide whether to move forward with the table replacement anyways or abort the commit
* operation.
* <p>
* If the table does not exist, committing the staged changes should fail with
* {@link NoSuchTableException}. This differs from the semantics of
* {@link #stageCreateOrReplace(Identifier, StructType, Transform[], Map)}, which should create
* the table in the data source if the table does not exist at the time of committing the
* operation.
*
* @param ident a table identifier
* @param schema the schema of the new table, as a struct type
* @param partitions transforms to use for partitioning data in the table
* @param properties a string map of table properties
* @return metadata for the new table
* @throws UnsupportedOperationException If a requested partition transform is not supported
* @throws NoSuchNamespaceException If the identifier namespace does not exist (optional)
Contributor:
I think the implementation should throw TableNotFoundException if the table to replace doesn't exist.

* @throws NoSuchTableException If the table does not exist
*/
StagedTable stageReplace(
Identifier ident,
StructType schema,
Transform[] partitions,
Map<String, String> properties) throws NoSuchNamespaceException, NoSuchTableException;

/**
* Stage the creation or replacement of a table, preparing it to be committed into the metastore
* when the returned table's {@link StagedTable#commitStagedChanges()} is called.
* <p>
* When the table is committed, the contents of any writes performed by the Spark planner are
* committed along with the metadata about the table passed into this method's arguments. If the
* table exists, the metadata and the contents of this table replace the metadata and contents of
* the existing table. If a concurrent process commits changes to the table's data or metadata
* while the write is being performed but before the staged changes are committed, the catalog
* can decide whether to move forward with the table replacement anyways or abort the commit
* operation.
* <p>
* If the table does not exist when the changes are committed, the table should be created in the
* backing data source. This differs from the expected semantics of
* {@link #stageReplace(Identifier, StructType, Transform[], Map)}, which should fail when
* the staged changes are committed but the table doesn't exist at commit time.
*
* @param ident a table identifier
* @param schema the schema of the new table, as a struct type
* @param partitions transforms to use for partitioning data in the table
* @param properties a string map of table properties
* @return metadata for the new table
* @throws UnsupportedOperationException If a requested partition transform is not supported
* @throws NoSuchNamespaceException If the identifier namespace does not exist (optional)
*/
StagedTable stageCreateOrReplace(
Identifier ident,
StructType schema,
Transform[] partitions,
Map<String, String> properties) throws NoSuchNamespaceException;
}
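As a rough sketch of the commit protocol the interface above implies: the planner stages the replacement, writes the query output through the staged table, and only then commits the staged changes, aborting on failure. The writeQueryTo helper below is hypothetical and stands in for the v2 WriteBuilder/BatchWrite machinery, which is elided; this is a sketch, not part of this diff.

import java.util.Collections

import org.apache.spark.sql.catalog.v2.{Identifier, StagingTableCatalog}
import org.apache.spark.sql.catalog.v2.expressions.Transform
import org.apache.spark.sql.sources.v2.StagedTable
import org.apache.spark.sql.types.StructType

// Hedged sketch of how an atomic RTAS is expected to drive this API.
def atomicReplaceAsSelect(
    catalog: StagingTableCatalog,
    ident: Identifier,
    schema: StructType,
    partitions: Array[Transform],
    writeQueryTo: StagedTable => Unit): Unit = {
  val staged = catalog.stageReplace(
    ident, schema, partitions, Collections.emptyMap[String, String]())
  try {
    // Write the query output through the staged table, then make the data and
    // the metadata swap visible together.
    writeQueryTo(staged)
    staged.commitStagedChanges()
  } catch {
    case t: Throwable =>
      // Nothing becomes visible if the write or the commit fails.
      staged.abortStagedChanges()
      throw t
  }
}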
@@ -0,0 +1,52 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.sql.sources.v2;

import java.util.Map;
import org.apache.spark.sql.catalog.v2.Identifier;
import org.apache.spark.sql.catalog.v2.StagingTableCatalog;
import org.apache.spark.sql.catalog.v2.expressions.Transform;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;

/**
* Represents a table which is staged for being committed to the metastore.
* <p>
* This is used to implement atomic CREATE TABLE AS SELECT and REPLACE TABLE AS SELECT queries. The
* planner will create one of these via
* {@link StagingTableCatalog#stageCreate(Identifier, StructType, Transform[], Map)} or
* {@link StagingTableCatalog#stageReplace(Identifier, StructType, Transform[], Map)} to prepare the
* table for being written to. This table should usually implement {@link SupportsWrite}. A new
* writer will be constructed via {@link SupportsWrite#newWriteBuilder(CaseInsensitiveStringMap)},
* and the write will be committed. The job concludes with a call to {@link #commitStagedChanges()},
* at which point implementations are expected to commit the table's metadata into the metastore
* along with the data that was written by the writes from the write builder this table created.
*/
public interface StagedTable extends Table {

/**
* Finalize the creation or replacement of this table.
*/
void commitStagedChanges();
Contributor Author:
It's not immediately obvious if this API belongs in StagedTable, or if it should be tied to the BatchWrite's commit() operation. The idea I had with tying it to StagedTable is:

  1. Make the atomic swap part more explicit from the perspective of the physical plan execution, and
  2. Allow both StagedTable and Table to share the same WriteBuilder and BatchWrite implementations that persist the rows, and decouple the atomic swap in this module only.

If we wanted to move the swap implementation behind the BatchWrite#commit and BatchWrite#abort APIs, then it's worth asking if we need the StagedTable interface at all - so TransactionalTableCatalog would return plain Table objects.

Contributor:
I like this. So the write's commit stashes changes in the staged table, which can finish or roll back.

Contributor:
This also solves the problem of where to document how to complete the changes staged in a StagedTable. Can you add docs that describe what these methods should do, and for the StagedTable interface?


/**
* Abort the changes that were staged, both in metadata and from temporary outputs of this
* table's writers.
*/
void abortStagedChanges();
}
@@ -0,0 +1,29 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/


package org.apache.spark.sql.catalyst.analysis

import org.apache.spark.sql.AnalysisException
import org.apache.spark.sql.catalog.v2.Identifier

class CannotReplaceMissingTableException(
tableIdentifier: Identifier,
cause: Option[Throwable] = None)
extends AnalysisException(
s"Table $tableIdentifier cannot be replaced as it did not exist." +
s" Use CREATE OR REPLACE TABLE to create the table.", cause = cause)
@@ -38,7 +38,7 @@ import org.apache.spark.sql.catalyst.expressions.aggregate.{First, Last}
import org.apache.spark.sql.catalyst.parser.SqlBaseParser._
import org.apache.spark.sql.catalyst.plans._
import org.apache.spark.sql.catalyst.plans.logical._
import org.apache.spark.sql.catalyst.plans.logical.sql.{AlterTableAddColumnsStatement, AlterTableAlterColumnStatement, AlterTableDropColumnsStatement, AlterTableRenameColumnStatement, AlterTableSetLocationStatement, AlterTableSetPropertiesStatement, AlterTableUnsetPropertiesStatement, AlterViewSetPropertiesStatement, AlterViewUnsetPropertiesStatement, CreateTableAsSelectStatement, CreateTableStatement, DropTableStatement, DropViewStatement, QualifiedColType}
import org.apache.spark.sql.catalyst.plans.logical.sql.{AlterTableAddColumnsStatement, AlterTableAlterColumnStatement, AlterTableDropColumnsStatement, AlterTableRenameColumnStatement, AlterTableSetLocationStatement, AlterTableSetPropertiesStatement, AlterTableUnsetPropertiesStatement, AlterViewSetPropertiesStatement, AlterViewUnsetPropertiesStatement, CreateTableAsSelectStatement, CreateTableStatement, DropTableStatement, DropViewStatement, QualifiedColType, ReplaceTableAsSelectStatement, ReplaceTableStatement}
import org.apache.spark.sql.catalyst.util.DateTimeUtils.{getZoneId, stringToDate, stringToTimestamp}
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.types._
@@ -2127,6 +2127,15 @@ class AstBuilder(conf: SQLConf) extends SqlBaseBaseVisitor[AnyRef] with Logging
(multipartIdentifier, temporary, ifNotExists, ctx.EXTERNAL != null)
}

/**
* Validate a replace table statement and return the [[TableIdentifier]].
*/
override def visitReplaceTableHeader(
ctx: ReplaceTableHeaderContext): TableHeader = withOrigin(ctx) {
val multipartIdentifier = ctx.multipartIdentifier.parts.asScala.map(_.getText)
(multipartIdentifier, false, false, false)
}

/**
* Parse a qualified name to a multipart name.
*/
@@ -2294,6 +2303,69 @@ class AstBuilder(conf: SQLConf) extends SqlBaseBaseVisitor[AnyRef] with Logging
}
}

/**
* Replace a table, returning a [[ReplaceTableStatement]] logical plan.
*
* Expected format:
* {{{
* [CREATE OR] REPLACE TABLE [db_name.]table_name
* USING table_provider
* replace_table_clauses
* [[AS] select_statement];
*
* replace_table_clauses (order insensitive):
* [OPTIONS table_property_list]
* [PARTITIONED BY (col_name, transform(col_name), transform(constant, col_name), ...)]
* [CLUSTERED BY (col_name, col_name, ...)
* [SORTED BY (col_name [ASC|DESC], ...)]
* INTO num_buckets BUCKETS
* ]
* [LOCATION path]
* [COMMENT table_comment]
* [TBLPROPERTIES (property_name=property_value, ...)]
* }}}
*/
override def visitReplaceTable(ctx: ReplaceTableContext): LogicalPlan = withOrigin(ctx) {
val (table, _, ifNotExists, external) = visitReplaceTableHeader(ctx.replaceTableHeader)
if (external) {
operationNotAllowed("REPLACE EXTERNAL TABLE ... USING", ctx)
}

checkDuplicateClauses(ctx.TBLPROPERTIES, "TBLPROPERTIES", ctx)
checkDuplicateClauses(ctx.OPTIONS, "OPTIONS", ctx)
checkDuplicateClauses(ctx.PARTITIONED, "PARTITIONED BY", ctx)
checkDuplicateClauses(ctx.COMMENT, "COMMENT", ctx)
checkDuplicateClauses(ctx.bucketSpec(), "CLUSTERED BY", ctx)
checkDuplicateClauses(ctx.locationSpec, "LOCATION", ctx)

val schema = Option(ctx.colTypeList()).map(createSchema)
val partitioning: Seq[Transform] =
Option(ctx.partitioning).map(visitTransformList).getOrElse(Nil)
val bucketSpec = ctx.bucketSpec().asScala.headOption.map(visitBucketSpec)
val properties = Option(ctx.tableProps).map(visitPropertyKeyValues).getOrElse(Map.empty)
val options = Option(ctx.options).map(visitPropertyKeyValues).getOrElse(Map.empty)

val provider = ctx.tableProvider.qualifiedName.getText
val location = ctx.locationSpec.asScala.headOption.map(visitLocationSpec)
val comment = Option(ctx.comment).map(string)
val orCreate = ctx.replaceTableHeader().CREATE() != null

Option(ctx.query).map(plan) match {
case Some(_) if schema.isDefined =>
operationNotAllowed(
"Schema may not be specified in a Replace Table As Select (RTAS) statement",
ctx)

case Some(query) =>
ReplaceTableAsSelectStatement(table, query, partitioning, bucketSpec, properties,
provider, options, location, comment, orCreate = orCreate)

case _ =>
ReplaceTableStatement(table, schema.getOrElse(new StructType), partitioning,
bucketSpec, properties, provider, options, location, comment, orCreate = orCreate)
}
}
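For reference, a hedged sketch of what this rule is expected to produce, modeled on the DDL parser tests mentioned in the commit list; testcat and foo are illustrative names, a SparkSession `spark` is assumed, and the expected plan shapes in the comments follow the code above.

// Illustrative only: parse the new statements and inspect the resulting plans.
val parser = spark.sessionState.sqlParser

val replacePlan = parser.parsePlan(
  "REPLACE TABLE testcat.db.t (id bigint) USING foo")
// Expected: ReplaceTableStatement(..., orCreate = false)

val rtasPlan = parser.parsePlan(
  "CREATE OR REPLACE TABLE testcat.db.t USING foo AS SELECT 1 AS id")
// Expected: ReplaceTableAsSelectStatement(..., orCreate = true)

parser.parsePlan(
  "REPLACE TABLE testcat.db.t (id bigint) USING foo AS SELECT 1 AS id")
// Expected to fail: a schema may not be specified in an RTAS statement.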

/**
* Create a [[DropTableStatement]] command.
*/
@@ -441,6 +441,47 @@ case class CreateTableAsSelect(
}
}

/**
* Replace a table with a v2 catalog.
*
* If the table does not exist, and orCreate is true, then it will be created.
* If the table does not exist, and orCreate is false, then an exception will be thrown.
*
* The persisted table will have no contents as a result of this operation.
*/
case class ReplaceTable(
catalog: TableCatalog,
tableName: Identifier,
tableSchema: StructType,
partitioning: Seq[Transform],
properties: Map[String, String],
orCreate: Boolean) extends Command

/**
* Replaces a table from a select query with a v2 catalog.
*
* If the table does not exist, and orCreate is true, then it will be created.
* If the table does not exist, and orCreate is false, then an exception will be thrown.
*/
case class ReplaceTableAsSelect(
Contributor:
What is the behavior if the query depends on the table being replaced?

Contributor Author:
Great question. I'm fairly certain that:

  • If a non-atomic catalog is being used, the table will have been dropped, so loading the table for the query will result in an error.
  • If an atomic catalog is being used, the drop part of the replace isn't committed yet, so the catalog should be able to load the table's contents before the write is committed.

@rdblue - curious to hear your thoughts on the first situation. It makes a stronger case for trying to ban replacing tables without an atomic catalog.

Contributor:
I agree. If the drop happens first and is not atomic, then the create will fail.

We should be able to add a rule to check for this and fail the query in analysis if the table doesn't support atomic updates. @mccheah, can you open an issue for adding a rule like that?

Contributor (@cloud-fan, Jul 17, 2019):
+1, I think we should fail the query if atomic RTAS is not supported. Accessing the table being replaced from the RTAS query is a valid use case; Spark shouldn't throw a table-not-found exception in that case, which would be quite confusing.

Contributor:
@cloud-fan, we discussed this in the DSv2 sync and decided that, in general, RTAS should be supported even if the source is not atomic. Just to clarify, this would be a rule to fail RTAS if it is not atomic when the query depends on the table being replaced. We don't want to drop a table and then find that we can't replace it because it was dropped.

catalog: TableCatalog,
tableName: Identifier,
partitioning: Seq[Transform],
query: LogicalPlan,
properties: Map[String, String],
writeOptions: Map[String, String],
orCreate: Boolean) extends Command {

override def children: Seq[LogicalPlan] = Seq(query)

override lazy val resolved: Boolean = {
// the table schema is created from the query schema, so the only resolution needed is to check
// that the columns referenced by the table's partitioning exist in the query schema
val references = partitioning.flatMap(_.references).toSet
references.map(_.fieldNames).forall(query.schema.findNestedField(_).isDefined)
}
}
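Picking up the thread above about failing non-atomic RTAS when the query reads the replacement target, here is a hedged sketch of the kind of analysis-time check being discussed. The rule name, the readsTarget helper, and the simplified name matching are hypothetical and not part of this diff.

import org.apache.spark.sql.AnalysisException
import org.apache.spark.sql.catalog.v2.StagingTableCatalog
import org.apache.spark.sql.catalyst.analysis.NamedRelation
import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, ReplaceTableAsSelect}
import org.apache.spark.sql.catalyst.rules.Rule

// Hypothetical rule: fail RTAS during analysis when the catalog cannot stage the
// replacement atomically and the query reads the table that is being replaced.
object FailNonAtomicReplaceReadingTarget extends Rule[LogicalPlan] {

  override def apply(plan: LogicalPlan): LogicalPlan = {
    plan.foreach {
      case rtas: ReplaceTableAsSelect
          if !rtas.catalog.isInstanceOf[StagingTableCatalog] &&
            readsTarget(rtas.query, rtas.tableName.toString) =>
        throw new AnalysisException(
          s"REPLACE TABLE AS SELECT cannot read from ${rtas.tableName} because catalog " +
            s"${rtas.catalog.name} does not support atomic table replacement")
      case _ =>
    }
    plan
  }

  // Simplified, hypothetical check: treat any named relation whose name matches
  // the target identifier as a read of the table being replaced.
  private def readsTarget(query: LogicalPlan, target: String): Boolean = {
    query.collectFirst {
      case r: NamedRelation if r.name == target => r
    }.isDefined
  }
}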

/**
* Append data to an existing table.
*/