@@ -43,7 +43,7 @@ import org.apache.spark.util.SerializableConfiguration
  * This allows users to give the data source alias as the format type over the fully qualified
  * class name.
  *
- * A new instance of this class with be instantiated each time a DDL call is made.
+ * A new instance of this class will be instantiated each time a DDL call is made.
  *
  * @since 1.5.0
  */
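For context, the alias mechanism described in the doc comment above can be sketched as follows. This is a hypothetical illustration, not code from the PR: `DefaultSource` and `MyFormatRelation` are invented placeholder names, and only `shortName()` and `createRelation` come from the actual Spark traits.

```scala
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.sources.{BaseRelation, DataSourceRegister, RelationProvider}

// Registering the alias "myformat" lets users write .format("myformat")
// instead of the fully qualified class name.
class DefaultSource extends DataSourceRegister with RelationProvider {

  // The short alias resolved in place of the class name when a DDL call is made.
  override def shortName(): String = "myformat"

  override def createRelation(
      sqlContext: SQLContext,
      parameters: Map[String, String]): BaseRelation = {
    // MyFormatRelation is a hypothetical BaseRelation defined elsewhere.
    new MyFormatRelation(parameters)(sqlContext)
  }
}
```

As the doc comment notes, a new instance of such a class is created on every DDL call, so implementations should stay cheap to construct.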
@@ -74,7 +74,7 @@ trait DataSourceRegister {
  * less verbose invocation. For example, 'org.apache.spark.sql.json' would resolve to the
  * data source 'org.apache.spark.sql.json.DefaultSource'
  *
- * A new instance of this class with be instantiated each time a DDL call is made.
+ * A new instance of this class will be instantiated each time a DDL call is made.
  *
  * @since 1.3.0
  */
@@ -100,7 +100,7 @@ trait RelationProvider {
  * less verbose invocation. For example, 'org.apache.spark.sql.json' would resolve to the
  * data source 'org.apache.spark.sql.json.DefaultSource'
  *
- * A new instance of this class with be instantiated each time a DDL call is made.
+ * A new instance of this class will be instantiated each time a DDL call is made.
  *
  * The difference between a [[RelationProvider]] and a [[SchemaRelationProvider]] is that
  * users need to provide a schema when using a [[SchemaRelationProvider]].
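The distinction drawn in the doc comment above can be made concrete with a hypothetical sketch: a `SchemaRelationProvider` receives the user-specified schema as an extra `createRelation` argument. `SchemaAwareSource` and `MySchemaRelation` are invented names for illustration; the method signature matches the Spark trait.

```scala
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.sources.{BaseRelation, SchemaRelationProvider}
import org.apache.spark.sql.types.StructType

class SchemaAwareSource extends SchemaRelationProvider {
  override def createRelation(
      sqlContext: SQLContext,
      parameters: Map[String, String],
      schema: StructType): BaseRelation = {
    // The relation must report exactly the schema the user supplied,
    // rather than inferring one as a plain RelationProvider would.
    new MySchemaRelation(parameters, schema)(sqlContext)
  }
}
```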
@@ -135,7 +135,7 @@ trait SchemaRelationProvider {
  * less verbose invocation. For example, 'org.apache.spark.sql.json' would resolve to the
  * data source 'org.apache.spark.sql.json.DefaultSource'
  *
- * A new instance of this class with be instantiated each time a DDL call is made.
+ * A new instance of this class will be instantiated each time a DDL call is made.
  *
  * The difference between a [[RelationProvider]] and a [[HadoopFsRelationProvider]] is
  * that users need to provide a schema and a (possibly empty) list of partition columns when
@@ -195,7 +195,7 @@ trait CreatableRelationProvider {
  * implementation should inherit from one of the descendant `Scan` classes, which define various
  * abstract methods for execution.
  *
- * BaseRelations must also define a equality function that only returns true when the two
+ * BaseRelations must also define an equality function that only returns true when the two
  * instances will return the same data. This equality function is used when determining when
  * it is safe to substitute cached results for a given relation.
  *
@@ -208,7 +208,7 @@ abstract class BaseRelation {

   /**
    * Returns an estimated size of this relation in bytes. This information is used by the planner
-   * to decided when it is safe to broadcast a relation and can be overridden by sources that
+   * to decide when it is safe to broadcast a relation and can be overridden by sources that
    * know the size ahead of time. By default, the system will assume that tables are too
    * large to broadcast. This method will be called multiple times during query planning
    * and thus should not perform expensive operations for each invocation.
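An override along the lines the doc comment describes might look like the following. This is a hypothetical sketch: `SmallFileRelation` and its single-column schema are invented for illustration; `sqlContext`, `schema`, and `sizeInBytes` are the actual `BaseRelation` members.

```scala
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.sources.BaseRelation
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// A relation backed by a single local file, whose size is known ahead of time.
class SmallFileRelation(path: String)(@transient val sqlContext: SQLContext)
  extends BaseRelation {

  override def schema: StructType =
    StructType(StructField("value", StringType) :: Nil)

  // Overriding sizeInBytes with a small, accurate value lets the planner
  // consider broadcasting this relation. Kept cheap and deterministic,
  // since the planner may call it multiple times.
  override def sizeInBytes: Long = new java.io.File(path).length()
}
```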
@@ -383,7 +383,7 @@ abstract class OutputWriter {

 /**
  * ::Experimental::
- * A [[BaseRelation]] that provides much of the common code required for formats that store their
+ * A [[BaseRelation]] that provides much of the common code required for relations that store their
  * data to an HDFS compatible filesystem.
  *
  * For the read path, similar to [[PrunedFilteredScan]], it can eliminate unneeded columns and