Commit 78fade3

MaxGekk authored and HyukjinKwon committed
[SPARK-33670][SQL][2.4] Verify the partition provider is Hive in v1 SHOW TABLE EXTENDED
### What changes were proposed in this pull request?

Invoke the check `DDLUtils.verifyPartitionProviderIsHive()` from the V1 implementation of `SHOW TABLE EXTENDED` when partition specs are specified. This PR is a kind of follow-up to #16373 and #15515.

### Why are the changes needed?

To output a user-friendly error with a recommendation like

> ... partition metadata is not stored in the Hive metastore. To import this information into the metastore, run `msck repair table tableName`

instead of silently returning an empty result.

### Does this PR introduce _any_ user-facing change?

Yes.

### How was this patch tested?

By running the affected test suites, in particular:

```
$ build/sbt -Phive-2.3 -Phive-thriftserver "test:testOnly *HiveCatalogedDDLSuite"
$ build/sbt -Phive-2.3 -Phive-thriftserver "hive/test:testOnly *PartitionProviderCompatibilitySuite"
```

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
(cherry picked from commit 29096a8)
Signed-off-by: Max Gekk <max.gekk@gmail.com>

Closes #30641 from MaxGekk/show-table-extended-verifyPartitionProviderIsHive-2.4.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
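To illustrate the behavior the commit message describes, here is a minimal, self-contained Scala sketch of the kind of guard `DDLUtils.verifyPartitionProviderIsHive()` performs. This is not Spark's actual source: `CatalogTableSketch`, the stand-in `AnalysisException`, and the exact message wording are assumptions for illustration; the real check inspects the table's partition provider in the session catalog.

```scala
// Hypothetical stand-ins, NOT Spark internals: a table descriptor that only
// records whether partition metadata lives in the Hive metastore, and a
// local AnalysisException used in place of Spark's.
case class CatalogTableSketch(name: String, tracksPartitionsInCatalog: Boolean)

class AnalysisException(message: String) extends Exception(message)

object PartitionProviderCheck {
  // Fail fast with an actionable message instead of silently returning an
  // empty result when partition metadata is not in the Hive metastore.
  def verifyPartitionProviderIsHive(table: CatalogTableSketch, action: String): Unit = {
    if (!table.tracksPartitionsInCatalog) {
      throw new AnalysisException(
        s"$action is not allowed on `${table.name}`: its partition metadata is not " +
        s"stored in the Hive metastore. To import this information into the " +
        s"metastore, run `msck repair table ${table.name}`")
    }
  }

  // Convenience wrapper returning the error message, if any.
  def errorFor(table: CatalogTableSketch, action: String): Option[String] =
    try { verifyPartitionProviderIsHive(table, action); None }
    catch { case e: AnalysisException => Some(e.getMessage) }
}

object Demo extends App {
  // A datasource table whose partitions are not tracked in the metastore.
  val t = CatalogTableSketch("part_datasrc", tracksPartitionsInCatalog = false)
  // The check rejects the command and recommends `msck repair table`.
  println(PartitionProviderCheck.errorFor(t, "SHOW TABLE EXTENDED").get)
}
```

With this wiring, `SHOW TABLE EXTENDED ... PARTITION(...)` on such a table surfaces the `msck repair table` hint rather than an empty result, which is the user-facing change the PR makes.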
1 parent: 9ca324a

3 files changed: +23 additions, -4 deletions

sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala (3 additions, 0 deletions)

```diff
@@ -823,6 +823,9 @@ case class ShowTablesCommand(
         // Note: tableIdentifierPattern should be non-empty, otherwise a [[ParseException]]
         // should have been thrown by the sql parser.
         val table = catalog.getTableMetadata(TableIdentifier(tableIdentifierPattern.get, Some(db)))
+
+        DDLUtils.verifyPartitionProviderIsHive(sparkSession, table, "SHOW TABLE EXTENDED")
+
         val tableIdent = table.identifier
         val normalizedSpec = PartitioningUtils.normalizePartitionSpec(
           partitionSpec.get,
```

sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala (10 additions, 0 deletions)

```diff
@@ -2888,6 +2888,16 @@ abstract class DDLSuite extends QueryTest with SQLTestUtils {
       }
     }
   }
+
+  test("SPARK-33670: show partitions from a datasource table") {
+    import testImplicits._
+    val t = "part_datasrc"
+    withTable(t) {
+      val df = (1 to 3).map(i => (i, s"val_$i", i * 2)).toDF("a", "b", "c")
+      df.write.partitionBy("a").format("parquet").mode(SaveMode.Overwrite).saveAsTable(t)
+      assert(sql(s"SHOW TABLE EXTENDED LIKE '$t' PARTITION(a = 1)").count() === 1)
+    }
+  }
 }

 object FakeLocalFsFileSystem {
```

sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionProviderCompatibilitySuite.scala (10 additions, 4 deletions)

```diff
@@ -53,7 +53,8 @@ class PartitionProviderCompatibilitySuite
       s"ALTER TABLE $tableName PARTITION (partCol=1) SET LOCATION '/foo'",
       s"ALTER TABLE $tableName DROP PARTITION (partCol=1)",
       s"DESCRIBE $tableName PARTITION (partCol=1)",
-      s"SHOW PARTITIONS $tableName")
+      s"SHOW PARTITIONS $tableName",
+      s"SHOW TABLE EXTENDED LIKE '$tableName' PARTITION (partCol=1)")

     withSQLConf(SQLConf.HIVE_MANAGE_FILESOURCE_PARTITIONS.key -> "true") {
       for (cmd <- unsupportedCommands) {
@@ -124,10 +125,15 @@ class PartitionProviderCompatibilitySuite
       }
       // disabled
       withSQLConf(SQLConf.HIVE_MANAGE_FILESOURCE_PARTITIONS.key -> "false") {
-        val e = intercept[AnalysisException] {
-          spark.sql(s"show partitions test")
+        Seq(
+          "SHOW PARTITIONS test",
+          "SHOW TABLE EXTENDED LIKE 'test' PARTITION (partCol=1)"
+        ).foreach { showPartitions =>
+          val e = intercept[AnalysisException] {
+            spark.sql(showPartitions)
+          }
+          assert(e.getMessage.contains("filesource partition management is disabled"))
         }
-        assert(e.getMessage.contains("filesource partition management is disabled"))
         spark.sql("refresh table test")
         assert(spark.sql("select * from test").count() == 5)
       }
```
