
Conversation


@budde budde commented Feb 15, 2017

EDIT: accidentally broke this PR with a force push. Opened #16944 to replace it.

Replaces #16797. See the discussion in this PR for more details/justification for this change.

Summary of changes

JIRA for this change

  • Add spark.sql.hive.schemaInferenceMode param to SQLConf (a rough sketch of the entry follows this list)
  • Add schemaFromTableProps field to CatalogTable (set to true when schema is
    successfully read from table props)
  • Perform schema inference in HiveMetastoreCatalog if schemaFromTableProps is
    false, depending on spark.sql.hive.schemaInferenceMode.
  • Update table metadata properties in HiveExternalCatalog.alterTable()
  • Add HiveSchemaInferenceSuite tests
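
For concreteness, a rough sketch of how the SQLConf entry could look, following the builder chain visible in the review snippets further down. The constant name, doc text, and the buildConf entry point are placeholders/assumptions, not the final implementation:

    // Sketch only: constant name, doc text, and the builder entry point are assumptions;
    // the .stringConf/.checkValues/.createWithDefault chain matches the diff in this PR.
    val HIVE_SCHEMA_INFERENCE_MODE = buildConf("spark.sql.hive.schemaInferenceMode")
      .doc("Action to take when a case-sensitive schema cannot be read from a Hive table's " +
        "properties: INFER_AND_SAVE (infer from the data files and write the result back to " +
        "the table properties), INFER_ONLY (infer but do not persist), or NEVER_INFER " +
        "(fall back to the case-insensitive metastore schema).")
      .stringConf
      .transform(_.toUpperCase())
      .checkValues(Set("INFER_AND_SAVE", "INFER_ONLY", "NEVER_INFER"))
      .createWithDefault("INFER_AND_SAVE")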

How was this patch tested?

The tests in HiveSchemaInferenceSuite should verify that schema inference is working as expected.
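
The tests roughly follow the pattern below (an illustrative sketch only; the table setup and assertion are stand-ins for the actual suite contents):

    // Illustrative sketch: the real suite builds a table whose metastore schema is all
    // lowercase while the underlying data files use mixed-case field names.
    test("Queries against case-sensitive tables with no schema in table properties should work " +
        "when schema inference is enabled") {
      withSQLConf("spark.sql.hive.schemaInferenceMode" -> "INFER_AND_SAVE") {
        withTable("external_table") {  // hypothetical table name
          // ... create the table and write mixed-case Parquet data here ...
          // With INFER_AND_SAVE, the case-sensitive schema is inferred from the files and
          // persisted to the table properties, so mixed-case projections resolve correctly.
          assert(sql("SELECT fieldOne FROM external_table").collect().nonEmpty)
        }
      }
    }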

Open issues

  • The option values for spark.sql.hive.schemaInferenceMode (e.g. "INFER_AND_SAVE", "INFER_ONLY", "NEVER_INFER") should be made into constants or an enum (see the sketch after this list). I couldn't find a sensible object to place them in that doesn't introduce a dependency between sql/core and sql/hive, though.
  • Should "INFER_AND_SAVE" be the default behavior? This restores the out-of-the-box compatibility that was present prior to 2.1.0 but changes the behavior of 2.1.0 (which is essentially "NEVER_INFER").
  • Is HiveExternalCatalog.alterTable() the appropriate place to write back the table metadata properties outside of createTable()? Should a new external catalog method like updateTableMetadata() be introduced?
  • All partition columns will still be treated as case-insensitive even after inferring. As far as I remember, this has always been the case with schema inference prior to Spark 2.1.0, and I haven't made any attempt to reconcile it since it doesn't cause the same problems that case-sensitive data fields do. Should we attempt to restore case sensitivity by inspecting file paths, or leave this as-is?
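
For the first open issue, one possible shape for the constants is sketched below; where such an object should live is exactly the unresolved question:

    // Illustrative only; the open question is which module/object this should live in
    // without coupling sql/core to sql/hive.
    object HiveSchemaInferenceMode extends Enumeration {
      val INFER_AND_SAVE, INFER_ONLY, NEVER_INFER = Value
    }

    // The conf entry could then validate against the enum rather than string literals:
    //   .checkValues(HiveSchemaInferenceMode.values.map(_.toString))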


budde commented Feb 15, 2017

Pinging participants from #16797: @gatorsmile, @viirya, @ericl, @mallman and @cloud-fan


test("Queries against case-sensitive tables with no schema in table properties should work " +
"when schema inference is enabled") {
withSQLConf("spark.sql.hive.schemaInferenceMode" -> "INFER_AND_SAVE") {

@budde budde Feb 15, 2017


Will change this to reference the key via the constant in SQLConf rather than "spark.sql.hive.schemaInferenceMode".
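
Something along these lines, assuming the constant ends up named HIVE_SCHEMA_INFERENCE_MODE in SQLConf (the name here is a placeholder):

    // Placeholder constant name; the point is referencing .key instead of the raw string.
    withSQLConf(SQLConf.HIVE_SCHEMA_INFERENCE_MODE.key -> "INFER_AND_SAVE") {
      // ...
    }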

"NEVER_INFER (fallback to using the case-insensitive metastore schema instead of inferring).")
.stringConf
.transform(_.toUpperCase())
.checkValues(Set("INFER_AND_SAVE", "INFER_ONLY", "NEVER_INFER"))

As mentioned in the PR, I'm looking for a good place to store these values as constants or an enum.

.stringConf
.transform(_.toUpperCase())
.checkValues(Set("INFER_AND_SAVE", "INFER_ONLY", "NEVER_INFER"))
.createWithDefault("INFER_AND_SAVE")

I'm open for discussion on whether or not this should be the default behavior.
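
Whatever the default ends up being, users could opt into a specific mode per session. A minimal example using the key as proposed in this PR (the final merged name may differ):

    // Override the inference mode for the current SparkSession (key as proposed here).
    spark.conf.set("spark.sql.hive.schemaInferenceMode", "NEVER_INFER")

    // Or at submit time:
    //   spark-submit --conf spark.sql.hive.schemaInferenceMode=INFER_ONLY ...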


SparkQA commented Feb 15, 2017

Test build #72947 has finished for PR 16942 at commit ced9c4d.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.


budde commented Feb 15, 2017

Tests appear to be failing due to the following error:

[info] Exception encountered when attempting to run a suite with class name: org.apache.spark.sql.streaming.FileStreamSourceSuite *** ABORTED *** (0 milliseconds)
[info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
  org.apache.spark.sql.execution.SQLExecutionSuite$$anonfun$3.apply(SQLExecutionSuite.scala:107)
...

I don't think anything in this PR should've changed the behavior of core SQL tests, but I'll look into this.
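
For reference, the workaround named in the error message itself looks like the snippet below, though it only masks a lingering SparkContext rather than fixing the underlying test interaction:

    // Workaround suggested by the error message; it hides the symptom (a second SparkContext
    // in the same JVM) rather than addressing why one is still running.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("repro")          // hypothetical app name
      .setMaster("local[2]")
      .set("spark.driver.allowMultipleContexts", "true")
    val sc = new SparkContext(conf)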

@budde budde closed this Feb 15, 2017

budde commented Feb 15, 2017

Accidentally did a force-push to my branch for this issue. Looks like I'll have to open a new PR.

UPDATE: replaced by #16944


mallman commented Feb 15, 2017

Force pushing your branch shouldn't close the PR. You didn't close it manually?


budde commented Feb 15, 2017

@mallman If I did close it then it was by mistake. The "Reopen and comment" button was disabled with a message about the PR being closed by a force push when I hovered over it. Afraid I'm a bit of a n00b on GitHub PRs :/


mallman commented Feb 15, 2017

Weird. I think I've seen that behavior once before. But I think the only time I force push on a PR is to rebase. Maybe that's the only kind of force push allowed for GitHub PRs.
