[SPARK-16311][SQL] Improve metadata refresh #13989
Changes from all commits
```diff
@@ -265,6 +265,11 @@ abstract class LogicalPlan extends QueryPlan[LogicalPlan] with Logging {
         s"Reference '$name' is ambiguous, could be: $referenceNames.")
     }
   }
 
+  /**
+   * Invalidates any metadata cached in the plan recursively.
+   */
+  def refresh(): Unit = children.foreach(_.refresh())
```
Member
use

Contributor (Author)
This is not a tailrec?

Member
You need to mark it.

Contributor (Author)
But this function is not tail recursive.

Member
I think we want to avoid a recursive implementation if possible. It is too expensive for a large tree.

Contributor (Author)
I don't get it. Why would this be more expensive than any other recursive calls that happen in logical plans?

Member
This is the comment I got when I used a recursive implementation.

Contributor (Author)
Can you point me to it? The entire TreeNode library I saw had a lot of recursive calls.

Member
Let us wait for comments from the committers. They might be OK with it for your scenario. Normally, a recursive implementation is not encouraged.
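For context on the exchange above: `children.foreach(_.refresh())` cannot be annotated with Scala's `@tailrec`, because the recursive call is made from inside the closure passed to `foreach` rather than in tail position. The sketch below shows what a non-recursive traversal could look like; `Node` and `refreshSelf` are stand-ins for illustration, not Spark's `TreeNode` API.

```scala
// Stand-in node type for illustration only; not Spark's TreeNode.
trait Node {
  def children: Seq[Node]
  def refreshSelf(): Unit = ()  // hypothetical node-local invalidation hook
}

// Same effect as the recursive children.foreach(_.refresh()), but driven by
// an explicit worklist, so tree depth does not grow the JVM call stack.
def refreshIteratively(root: Node): Unit = {
  var stack: List[Node] = root :: Nil
  while (stack.nonEmpty) {
    val node = stack.head
    stack = node.children.toList ::: stack.tail  // visit children next
    node.refreshSelf()                           // invalidate this node's own cached state
  }
}
```

Whether the distinction matters depends on plan depth; logical plan trees are usually shallow, so the recursive form matches the rest of the TreeNode code.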
```diff
@@ -2306,6 +2306,19 @@ class Dataset[T] private[sql](
    */
   def distinct(): Dataset[T] = dropDuplicates()
 
+  /**
+   * Refreshes the metadata and data cached in Spark for data associated with this Dataset.
+   * An example use case is to invalidate the file system metadata cached by Spark, when the
+   * underlying files have been updated by an external process.
+   *
+   * @group action
+   * @since 2.0.0
+   */
+  def refresh(): Unit = {
+    unpersist(false)
```
Member
It will remove the cached data. This is different from what the JIRA describes. CC @rxin

Contributor (Author)
Other refresh methods also remove cached data, so I thought this is better.

Member
This new API has different behavior from spark/sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala (lines 349 to 374 in 02a029d). IMO, if we are using the word

Contributor (Author)
ah ic - we can't unpersist.

Contributor
We can unpersist, but we should persist it again immediately.

Contributor
Actually we can and should call
```diff
+    logicalPlan.refresh()
+  }
+
   /**
    * Persist this Dataset with the default storage level (`MEMORY_AND_DISK`).
    *
```
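On the unpersist discussion: the suggestion to "persist it again immediately" can be sketched against the public Dataset API as below. This is an illustration, not the code in this PR, and it assumes `Dataset.storageLevel`, which appeared in later Spark versions.

```scala
import org.apache.spark.sql.Dataset
import org.apache.spark.storage.StorageLevel

// Hypothetical helper: drop possibly stale cached blocks, then re-register
// the Dataset for caching at its previous storage level.
def refreshKeepingCache[T](ds: Dataset[T]): Unit = {
  val level = ds.storageLevel        // StorageLevel.NONE if not cached
  ds.unpersist(blocking = false)     // remove the possibly stale cached data
  if (level != StorageLevel.NONE) {
    ds.persist(level)                // cache is repopulated lazily on the next action
  }
}
```

The re-persist is cheap by itself; the actual recomputation happens lazily the next time an action runs over the Dataset.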
```diff
@@ -85,5 +85,10 @@ case class LogicalRelation(
     expectedOutputAttributes,
     metastoreTableIdentifier).asInstanceOf[this.type]
 
+  override def refresh(): Unit = relation match {
+    case fs: HadoopFsRelation => fs.refresh()
```
Member
How about the other leaf nodes?

Member
You have to document the reason why only

Contributor (Author)
What do you mean? Other leaf nodes don't keep state, do they?

Member
I know, but we need to write the comments for the code readers.

Contributor (Author)
I don't agree on this one. LogicalRelation might not be the only one that needs to override this in the future. There can certainly be other logical plans in the future that keep some state and need to implement refresh. The definition of "refresh" itself with a default implementation also means only plans that need to refresh anything should override it. I'm going to update refresh in LogicalPlan to make this more clear.

Contributor
Can it be refreshed?

Member
Yeah, we can invalidate the cache entry in

Member
Sorry, yeah, it cannot be refreshed.

Member
When we call
```diff
+    case _ => // Do nothing.
+  }
 
   override def simpleString: String = s"Relation[${Utils.truncatedString(output, ",")}] $relation"
 }
```
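To make the design point above concrete: with a default `refresh()` that only recurses, a node is a no-op unless it holds invalidatable state, and any future stateful plan just overrides the method. The sketch below uses stand-in types (`Plan`, `ListingLeaf`), not Spark's classes.

```scala
// Stand-in plan hierarchy for illustration; not Spark's LogicalPlan.
trait Plan {
  def children: Seq[Plan]
  def refresh(): Unit = children.foreach(_.refresh())  // default: just recurse
}

// A hypothetical leaf that memoizes a file listing for `path`;
// refresh() drops the memoized state so it is recomputed on next use.
class ListingLeaf(path: String) extends Plan {
  override def children: Seq[Plan] = Nil
  private var cachedListing: Option[Seq[String]] = None

  override def refresh(): Unit = {
    cachedListing = None
  }
}
```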
```diff
@@ -166,8 +166,8 @@ private[sql] class SessionState(sparkSession: SparkSession) {
 
   def executePlan(plan: LogicalPlan): QueryExecution = new QueryExecution(sparkSession, plan)
 
-  def invalidateTable(tableName: String): Unit = {
-    catalog.invalidateTable(sqlParser.parseTableIdentifier(tableName))
+  def refreshTable(tableName: String): Unit = {
```
Member
To be honest, I still think

Contributor (Author)
I just picked the one that was exposed to users (refresh in catalog and in SQL).
```diff
+    catalog.refreshTable(sqlParser.parseTableIdentifier(tableName))
   }
 
   def addJar(path: String): Unit = {
```
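For reference, the rename is internal; the user-facing spellings that reach this method look roughly like this (a sketch assuming a `SparkSession` named `spark` and an existing table `my_table`):

```scala
// Both routes end up refreshing the table's cached metadata.
spark.catalog.refreshTable("my_table")  // Catalog API
spark.sql("REFRESH TABLE my_table")     // SQL equivalent
```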
```diff
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql
+
+import java.io.File
+
+import org.apache.spark.SparkException
+import org.apache.spark.sql.test.SharedSQLContext
+
+class MetadataCacheSuite extends QueryTest with SharedSQLContext {
+
+  /** Removes one data file in the given directory. */
+  private def deleteOneFileInDirectory(dir: File): Unit = {
+    assert(dir.isDirectory)
+    val oneFile = dir.listFiles().find { file =>
+      !file.getName.startsWith("_") && !file.getName.startsWith(".")
+    }
+    assert(oneFile.isDefined)
+    oneFile.foreach(_.delete())
+  }
+
+  test("Dataset.refresh()") {
+    withTempPath { (location: File) =>
+      // Create a Parquet directory
+      spark.range(start = 0, end = 100, step = 1, numPartitions = 3)
+        .write.parquet(location.getAbsolutePath)
+
+      // Read the directory in
+      val df = spark.read.parquet(location.getAbsolutePath)
+      assert(df.count() == 100)
+
+      // Delete a file
+      deleteOneFileInDirectory(location)
+
+      // Read it again and now we should see a FileNotFoundException
+      val e = intercept[SparkException] {
+        df.count()
+      }
+      assert(e.getMessage.contains("FileNotFoundException"))
+      assert(e.getMessage.contains("refresh()"))
+
+      // Refresh and we should be able to read it again.
+      df.refresh()
+      assert(df.count() > 0 && df.count() < 100)
+    }
+  }
+
+  test("temporary view refresh") {
+    withTempPath { (location: File) =>
+      // Create a Parquet directory
+      spark.range(start = 0, end = 100, step = 1, numPartitions = 3)
+        .write.parquet(location.getAbsolutePath)
+
+      // Read the directory in
+      spark.read.parquet(location.getAbsolutePath).createOrReplaceTempView("view_refresh")
+      assert(sql("select count(*) from view_refresh").first().getLong(0) == 100)
+
+      // Delete a file
+      deleteOneFileInDirectory(location)
+
+      // Read it again and now we should see a FileNotFoundException
+      val e = intercept[SparkException] {
+        sql("select count(*) from view_refresh").first()
+      }
+      assert(e.getMessage.contains("FileNotFoundException"))
+      assert(e.getMessage.contains("refresh()"))
+
+      // Refresh and we should be able to read it again.
+      spark.catalog.refreshTable("view_refresh")
+      val newCount = sql("select count(*) from view_refresh").first().getLong(0)
+      assert(newCount > 0 && newCount < 100)
+    }
+  }
+}
```
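The suite above can also be walked through by hand; a rough spark-shell version of the same scenario follows (the path is made up, and `df.refresh()` is the API added in this PR's diff):

```scala
val path = "/tmp/refresh-demo"                 // hypothetical scratch directory
spark.range(0, 100, 1, 3).write.parquet(path)  // writes three part files

val df = spark.read.parquet(path)
df.count()                                     // 100

// Delete one part-*.parquet file out of band (e.g. from another shell),
// then run df.count() again: the stale cached file listing surfaces as a
// FileNotFoundException wrapped in a SparkException.

df.refresh()                                   // invalidate the cached file metadata
df.count()                                     // succeeds, with fewer than 100 rows
```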
"Refreshes" instead of "Invalidates"?