diff --git a/docs/sql-keywords.md b/docs/sql-keywords.md
index 34e8cfb02c9f1..b18855366bb2b 100644
--- a/docs/sql-keywords.md
+++ b/docs/sql-keywords.md
@@ -22,7 +22,7 @@ license: |
When `spark.sql.ansi.enabled` is true, Spark SQL will use the ANSI mode parser.
In this mode, Spark SQL has two kinds of keywords:
* Reserved keywords: Keywords that are reserved and can't be used as identifiers for table, view, column, function, alias, etc.
-* Non-reserved keywords: Keywords that have a special meaning only in particular contexts and can be used as identifiers in other contexts. For example, `SELECT 1 WEEK` is an interval literal, but WEEK can be used as identifiers in other places.
+* Non-reserved keywords: Keywords that have a special meaning only in particular contexts and can be used as identifiers in other contexts. For example, `EXPLAIN SELECT ...` is a command, but EXPLAIN can be used as an identifier in other places.
When the ANSI mode is disabled, Spark SQL has two kinds of keywords:
* Non-reserved keywords: Same definition as the one when the ANSI mode enabled.
@@ -88,7 +88,6 @@ Below is a list of all the keywords in Spark SQL.
| DATABASE | non-reserved | non-reserved | non-reserved |
| DATABASES | non-reserved | non-reserved | non-reserved |
| DAY | reserved | non-reserved | reserved |
- | DAYS | non-reserved | non-reserved | non-reserved |
| DBPROPERTIES | non-reserved | non-reserved | non-reserved |
| DEFINED | non-reserved | non-reserved | non-reserved |
| DELETE | non-reserved | non-reserved | reserved |
@@ -136,7 +135,6 @@ Below is a list of all the keywords in Spark SQL.
| GROUPING | non-reserved | non-reserved | reserved |
| HAVING | reserved | non-reserved | reserved |
| HOUR | reserved | non-reserved | reserved |
- | HOURS | non-reserved | non-reserved | non-reserved |
| IF | non-reserved | non-reserved | reserved |
| IGNORE | non-reserved | non-reserved | non-reserved |
| IMPORT | non-reserved | non-reserved | non-reserved |
@@ -174,15 +172,9 @@ Below is a list of all the keywords in Spark SQL.
| MAP | non-reserved | non-reserved | non-reserved |
| MATCHED | non-reserved | non-reserved | non-reserved |
| MERGE | non-reserved | non-reserved | non-reserved |
- | MICROSECOND | non-reserved | non-reserved | non-reserved |
- | MICROSECONDS | non-reserved | non-reserved | non-reserved |
- | MILLISECOND | non-reserved | non-reserved | non-reserved |
- | MILLISECONDS | non-reserved | non-reserved | non-reserved |
| MINUS | reserved | strict-non-reserved | non-reserved |
| MINUTE | reserved | non-reserved | reserved |
- | MINUTES | non-reserved | non-reserved | non-reserved |
| MONTH | reserved | non-reserved | reserved |
- | MONTHS | non-reserved | non-reserved | non-reserved |
| MSCK | non-reserved | non-reserved | non-reserved |
| NAMESPACE | non-reserved | non-reserved | non-reserved |
| NAMESPACES | non-reserved | non-reserved | non-reserved |
@@ -242,7 +234,6 @@ Below is a list of all the keywords in Spark SQL.
| ROWS | non-reserved | non-reserved | reserved |
| SCHEMA | non-reserved | non-reserved | non-reserved |
| SECOND | reserved | non-reserved | reserved |
- | SECONDS | non-reserved | non-reserved | non-reserved |
| SELECT | reserved | non-reserved | reserved |
| SEMI | reserved | strict-non-reserved | non-reserved |
| SEPARATED | non-reserved | non-reserved | non-reserved |
@@ -293,12 +284,9 @@ Below is a list of all the keywords in Spark SQL.
| USING | reserved | strict-non-reserved | reserved |
| VALUES | non-reserved | non-reserved | reserved |
| VIEW | non-reserved | non-reserved | non-reserved |
- | WEEK | non-reserved | non-reserved | non-reserved |
- | WEEKS | non-reserved | non-reserved | non-reserved |
| WHEN | reserved | non-reserved | reserved |
| WHERE | reserved | non-reserved | reserved |
| WINDOW | non-reserved | non-reserved | reserved |
| WITH | reserved | non-reserved | reserved |
| YEAR | reserved | non-reserved | reserved |
- | YEARS | non-reserved | non-reserved | non-reserved |
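With the rows above removed, plural unit spellings such as `DAYS` and `WEEKS` are ordinary identifiers in every mode, while the singular forms keep their classification from the table (e.g. `DAY` stays reserved under ANSI mode). A minimal sketch of the resulting classification, with the keyword set abbreviated for illustration (not Spark's actual implementation):

```python
# Sketch of keyword classification after this change.
# The reserved set is abbreviated; see the full table above.
RESERVED_ANSI = {"day", "hour", "minute", "month", "second", "week", "year",
                 "select", "where", "having", "when", "with"}

def can_be_identifier(word: str, ansi_enabled: bool) -> bool:
    """Plural unit spellings are never keywords now; singular units
    are reserved only when ANSI mode is enabled."""
    w = word.lower()
    if ansi_enabled and w in RESERVED_ANSI:
        return False
    return True

print(can_be_identifier("days", True))   # True  (no longer a keyword at all)
print(can_be_identifier("day", True))    # False (reserved under ANSI mode)
print(can_be_identifier("day", False))   # True  (non-reserved with ANSI off)
```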
diff --git a/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 b/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4
index b2a2b0f31a9c9..73f714ba0f0a7 100644
--- a/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4
+++ b/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4
@@ -813,7 +813,6 @@ booleanValue
interval
: INTERVAL (errorCapturingMultiUnitsInterval | errorCapturingUnitToUnitInterval)?
- | {SQL_standard_keyword_behavior}? (errorCapturingMultiUnitsInterval | errorCapturingUnitToUnitInterval)
;
errorCapturingMultiUnitsInterval
@@ -839,23 +838,12 @@ intervalValue
intervalUnit
: DAY
- | DAYS
| HOUR
- | HOURS
- | MICROSECOND
- | MICROSECONDS
- | MILLISECOND
- | MILLISECONDS
| MINUTE
- | MINUTES
| MONTH
- | MONTHS
| SECOND
- | SECONDS
- | WEEK
- | WEEKS
| YEAR
- | YEARS
+ | identifier
;
colPosition
@@ -991,7 +979,7 @@ number
// function, alias, etc.
// - Non-reserved keywords:
// Keywords that have a special meaning only in particular contexts and can be used as
-// identifiers in other contexts. For example, `SELECT 1 WEEK` is an interval literal, but WEEK
+// identifiers in other contexts. For example, `EXPLAIN SELECT ...` is a command, but EXPLAIN
// can be used as identifiers in other places.
// You can find the full keywords list by searching "Start of the keywords list" in this file.
// The non-reserved keywords are listed below. Keywords not in this list are reserved keywords.
@@ -1029,7 +1017,6 @@ ansiNonReserved
| DATA
| DATABASE
| DATABASES
- | DAYS
| DBPROPERTIES
| DEFINED
| DELETE
@@ -1060,7 +1047,6 @@ ansiNonReserved
| FUNCTIONS
| GLOBAL
| GROUPING
- | HOURS
| IF
| IGNORE
| IMPORT
@@ -1089,12 +1075,6 @@ ansiNonReserved
| MAP
| MATCHED
| MERGE
- | MICROSECOND
- | MICROSECONDS
- | MILLISECOND
- | MILLISECONDS
- | MINUTES
- | MONTHS
| MSCK
| NAMESPACE
| NAMESPACES
@@ -1141,7 +1121,6 @@ ansiNonReserved
| ROW
| ROWS
| SCHEMA
- | SECONDS
| SEPARATED
| SERDE
| SERDEPROPERTIES
@@ -1179,10 +1158,7 @@ ansiNonReserved
| USE
| VALUES
| VIEW
- | WEEK
- | WEEKS
| WINDOW
- | YEARS
;
// When `SQL_standard_keyword_behavior=false`, there are 2 kinds of keywords in Spark SQL.
@@ -1264,7 +1240,6 @@ nonReserved
| DATABASE
| DATABASES
| DAY
- | DAYS
| DBPROPERTIES
| DEFINED
| DELETE
@@ -1310,7 +1285,6 @@ nonReserved
| GROUPING
| HAVING
| HOUR
- | HOURS
| IF
| IGNORE
| IMPORT
@@ -1344,14 +1318,8 @@ nonReserved
| MAP
| MATCHED
| MERGE
- | MICROSECOND
- | MICROSECONDS
- | MILLISECOND
- | MILLISECONDS
| MINUTE
- | MINUTES
| MONTH
- | MONTHS
| MSCK
| NAMESPACE
| NAMESPACES
@@ -1408,7 +1376,6 @@ nonReserved
| ROWS
| SCHEMA
| SECOND
- | SECONDS
| SELECT
| SEPARATED
| SERDE
@@ -1457,14 +1424,11 @@ nonReserved
| USER
| VALUES
| VIEW
- | WEEK
- | WEEKS
| WHEN
| WHERE
| WINDOW
| WITH
| YEAR
- | YEARS
;
// NOTE: If you add a new token in the list below, you should update the list of keywords
@@ -1527,7 +1491,6 @@ DATA: 'DATA';
DATABASE: 'DATABASE';
DATABASES: 'DATABASES' | 'SCHEMAS';
DAY: 'DAY';
-DAYS: 'DAYS';
DBPROPERTIES: 'DBPROPERTIES';
DEFINED: 'DEFINED';
DELETE: 'DELETE';
@@ -1574,7 +1537,6 @@ GROUP: 'GROUP';
GROUPING: 'GROUPING';
HAVING: 'HAVING';
HOUR: 'HOUR';
-HOURS: 'HOURS';
IF: 'IF';
IGNORE: 'IGNORE';
IMPORT: 'IMPORT';
@@ -1612,14 +1574,8 @@ MACRO: 'MACRO';
MAP: 'MAP';
MATCHED: 'MATCHED';
MERGE: 'MERGE';
-MICROSECOND: 'MICROSECOND';
-MICROSECONDS: 'MICROSECONDS';
-MILLISECOND: 'MILLISECOND';
-MILLISECONDS: 'MILLISECONDS';
MINUTE: 'MINUTE';
-MINUTES: 'MINUTES';
MONTH: 'MONTH';
-MONTHS: 'MONTHS';
MSCK: 'MSCK';
NAMESPACE: 'NAMESPACE';
NAMESPACES: 'NAMESPACES';
@@ -1679,7 +1635,6 @@ ROW: 'ROW';
ROWS: 'ROWS';
SCHEMA: 'SCHEMA';
SECOND: 'SECOND';
-SECONDS: 'SECONDS';
SELECT: 'SELECT';
SEMI: 'SEMI';
SEPARATED: 'SEPARATED';
@@ -1732,14 +1687,11 @@ USER: 'USER';
USING: 'USING';
VALUES: 'VALUES';
VIEW: 'VIEW';
-WEEK: 'WEEK';
-WEEKS: 'WEEKS';
WHEN: 'WHEN';
WHERE: 'WHERE';
WINDOW: 'WINDOW';
WITH: 'WITH';
YEAR: 'YEAR';
-YEARS: 'YEARS';
//============================
// End of the keywords list
//============================
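Because the `intervalUnit` rule now falls through to `identifier`, unit validation moves from the lexer into interval parsing at runtime, which is why the error message changes from "no viable alternative" to "invalid unit 'nanoseconds'" in the tests below. A rough sketch of that normalization, assuming a helper that accepts an optional plural spelling and rejects unknown units (names here are illustrative, not Spark's actual `IntervalUtils` API):

```python
# Hypothetical sketch of runtime interval-unit normalization.
# Spark's real logic lives in IntervalUtils.stringToInterval.
SINGULAR_UNITS = {
    "year", "month", "week", "day",
    "hour", "minute", "second", "millisecond", "microsecond",
}

def normalize_unit(unit: str) -> str:
    """Accept singular or plural spellings of a known unit; raise otherwise."""
    u = unit.lower()
    # Plural forms are no longer lexer tokens, so strip a trailing 's' here.
    if u.endswith("s") and u[:-1] in SINGULAR_UNITS:
        u = u[:-1]
    if u not in SINGULAR_UNITS:
        raise ValueError(f"invalid unit '{unit}'")
    return u

print(normalize_unit("DAYS"))    # day
print(normalize_unit("second"))  # second
```

With this shape, `interval 10 days` still parses (DAYS is normalized to `day`), while `interval 10 nanoseconds` fails with the unit name in the message, matching the updated expected output.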
diff --git a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
index 1bea1c254c0fc..fd4288dac6ec0 100644
--- a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
+++ b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
@@ -645,11 +645,6 @@ class ExpressionParserSuite extends AnalysisTest {
"-" -> UnaryMinus(expected)
).foreach { case (sign, expectedLiteral) =>
assertEqual(s"${sign}interval $intervalValue", expectedLiteral)
-
- // SPARK-23264 Support interval values without INTERVAL clauses if ANSI SQL enabled
- withSQLConf(SQLConf.ANSI_ENABLED.key -> "true") {
- assertEqual(intervalValue, expected)
- }
}
}
@@ -687,8 +682,7 @@ class ExpressionParserSuite extends AnalysisTest {
Literal(IntervalUtils.stringToInterval("1 second 1 millisecond")))
// Non Existing unit
- intercept("interval 10 nanoseconds",
- "no viable alternative at input '10 nanoseconds'")
+ intercept("interval 10 nanoseconds", "invalid unit 'nanoseconds'")
// Year-Month intervals.
val yearMonthValues = Seq("123-10", "496-0", "-2-3", "-123-0")
@@ -732,34 +726,6 @@ class ExpressionParserSuite extends AnalysisTest {
Literal(new CalendarInterval(3, 4, 22001000L)))
}
- test("SPARK-23264 Interval Compatibility tests") {
- def checkIntervals(intervalValue: String, expected: Literal): Unit = {
- withSQLConf(SQLConf.ANSI_ENABLED.key -> "true") {
- assertEqual(intervalValue, expected)
- }
-
- // Compatibility tests: If ANSI SQL disabled, `intervalValue` should be parsed as an alias
- withSQLConf(SQLConf.ANSI_ENABLED.key -> "false") {
- val aliases = defaultParser.parseExpression(intervalValue).collect {
- case a @ Alias(_: Literal, name)
- if intervalUnits.exists { unit => name.startsWith(unit.toString) } => a
- }
- assert(aliases.size === 1)
- }
- }
- val forms = Seq("", "s")
- val values = Seq("5", "1", "-11", "8")
- intervalUnits.foreach { unit =>
- forms.foreach { form =>
- values.foreach { value =>
- val expected = intervalLiteral(unit, value)
- checkIntervals(s"$value $unit$form", expected)
- checkIntervals(s"'$value' $unit$form", expected)
- }
- }
- }
- }
-
test("composed expressions") {
assertEqual("1 + r.r As q", (Literal(1) + UnresolvedAttribute("r.r")).as("q"))
assertEqual("1 - f('o', o(bar))", Literal(1) - 'f.function("o", 'o.function('bar)))
diff --git a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/TableIdentifierParserSuite.scala b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/TableIdentifierParserSuite.scala
index 23063bbab7aa2..43244b3c0a57d 100644
--- a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/TableIdentifierParserSuite.scala
+++ b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/TableIdentifierParserSuite.scala
@@ -74,7 +74,6 @@ class TableIdentifierParserSuite extends SparkFunSuite with SQLHelper {
"date",
"datetime",
"day",
- "days",
"dbproperties",
"decimal",
"deferred",
@@ -114,7 +113,6 @@ class TableIdentifierParserSuite extends SparkFunSuite with SQLHelper {
"grouping",
"hold_ddltime",
"hour",
- "hours",
"idxproperties",
"ignore",
"import",
@@ -150,15 +148,9 @@ class TableIdentifierParserSuite extends SparkFunSuite with SQLHelper {
"mapjoin",
"materialized",
"metadata",
- "microsecond",
- "microseconds",
- "millisecond",
- "milliseconds",
"minus",
"minute",
- "minutes",
"month",
- "months",
"msck",
"no_drop",
"none",
@@ -213,7 +205,6 @@ class TableIdentifierParserSuite extends SparkFunSuite with SQLHelper {
"rows",
"schemas",
"second",
- "seconds",
"serde",
"serdeproperties",
"server",
@@ -263,14 +254,11 @@ class TableIdentifierParserSuite extends SparkFunSuite with SQLHelper {
"utctimestamp",
"values",
"view",
- "week",
- "weeks",
"while",
"with",
"work",
"write",
- "year",
- "years")
+ "year")
val hiveStrictNonReservedKeyword = Seq(
"anti",
@@ -351,7 +339,6 @@ class TableIdentifierParserSuite extends SparkFunSuite with SQLHelper {
"database",
"databases",
"day",
- "days",
"dbproperties",
"defined",
"delete",
@@ -398,7 +385,6 @@ class TableIdentifierParserSuite extends SparkFunSuite with SQLHelper {
"grouping",
"having",
"hour",
- "hours",
"if",
"ignore",
"import",
@@ -434,15 +420,9 @@ class TableIdentifierParserSuite extends SparkFunSuite with SQLHelper {
"logical",
"macro",
"map",
- "microsecond",
- "microseconds",
- "millisecond",
- "milliseconds",
"minus",
"minute",
- "minutes",
"month",
- "months",
"msck",
"namespaces",
"natural",
@@ -500,7 +480,6 @@ class TableIdentifierParserSuite extends SparkFunSuite with SQLHelper {
"rows",
"schema",
"second",
- "seconds",
"select",
"semi",
"separated",
@@ -549,14 +528,11 @@ class TableIdentifierParserSuite extends SparkFunSuite with SQLHelper {
"using",
"values",
"view",
- "week",
- "weeks",
"when",
"where",
"window",
"with",
- "year",
- "years")
+ "year")
val reservedKeywordsInAnsiMode = Set(
"all",
diff --git a/sql/core/src/test/resources/sql-tests/inputs/ansi/interval.sql b/sql/core/src/test/resources/sql-tests/inputs/ansi/interval.sql
index 087914eebb077..215ce9658e1ad 100644
--- a/sql/core/src/test/resources/sql-tests/inputs/ansi/interval.sql
+++ b/sql/core/src/test/resources/sql-tests/inputs/ansi/interval.sql
@@ -1,17 +1 @@
---IMPORT interval.sql
-
--- the `interval` keyword can be omitted with ansi mode
-select 1 year 2 days;
-select '10-9' year to month;
-select '20 15:40:32.99899999' day to second;
-select 30 day day;
-select date'2012-01-01' - '2-2' year to month;
-select 1 month - 1 day;
-
--- malformed interval literal with ansi mode
-select 1 year to month;
-select '1' year to second;
-select 1 year '2-1' year to month;
-select (-30) day;
-select (a + 1) day;
-select 30 day day day;
\ No newline at end of file
+--IMPORT interval.sql
\ No newline at end of file
diff --git a/sql/core/src/test/resources/sql-tests/results/ansi/interval.sql.out b/sql/core/src/test/resources/sql-tests/results/ansi/interval.sql.out
index 4fceb6b255b02..2a19c26389ce4 100644
--- a/sql/core/src/test/resources/sql-tests/results/ansi/interval.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/ansi/interval.sql.out
@@ -1,5 +1,5 @@
-- Automatically generated by SQLQueryTestSuite
--- Number of queries: 131
+-- Number of queries: 119
-- !query 0
@@ -582,11 +582,11 @@ struct<>
-- !query 66 output
org.apache.spark.sql.catalyst.parser.ParseException
-no viable alternative at input '10 nanoseconds'(line 1, pos 19)
+Error parsing ' 10 nanoseconds' to interval, invalid unit 'nanoseconds'(line 1, pos 16)
== SQL ==
select interval 10 nanoseconds
--------------------^^^
+----------------^^^
-- !query 67
@@ -634,11 +634,11 @@ struct<>
-- !query 71 output
org.apache.spark.sql.catalyst.parser.ParseException
-no viable alternative at input '1 fake_unit'(line 1, pos 18)
+Error parsing ' 1 fake_unit' to interval, invalid unit 'fake_unit'(line 1, pos 16)
== SQL ==
select interval 1 fake_unit
-------------------^^^
+----------------^^^
-- !query 72
@@ -1163,141 +1163,3 @@ struct<>
-- !query 118 output
java.lang.ArithmeticException
integer overflow
-
-
--- !query 119
-select 1 year 2 days
--- !query 119 schema
-struct
--- !query 119 output
-1 years 2 days
-
-
--- !query 120
-select '10-9' year to month
--- !query 120 schema
-struct
--- !query 120 output
-10 years 9 months
-
-
--- !query 121
-select '20 15:40:32.99899999' day to second
--- !query 121 schema
-struct
--- !query 121 output
-20 days 15 hours 40 minutes 32.998999 seconds
-
-
--- !query 122
-select 30 day day
--- !query 122 schema
-struct<>
--- !query 122 output
-org.apache.spark.sql.catalyst.parser.ParseException
-
-no viable alternative at input 'day'(line 1, pos 14)
-
-== SQL ==
-select 30 day day
---------------^^^
-
-
--- !query 123
-select date'2012-01-01' - '2-2' year to month
--- !query 123 schema
-struct
--- !query 123 output
-2009-11-01
-
-
--- !query 124
-select 1 month - 1 day
--- !query 124 schema
-struct
--- !query 124 output
-1 months -1 days
-
-
--- !query 125
-select 1 year to month
--- !query 125 schema
-struct<>
--- !query 125 output
-org.apache.spark.sql.catalyst.parser.ParseException
-
-The value of from-to unit must be a string(line 1, pos 7)
-
-== SQL ==
-select 1 year to month
--------^^^
-
-
--- !query 126
-select '1' year to second
--- !query 126 schema
-struct<>
--- !query 126 output
-org.apache.spark.sql.catalyst.parser.ParseException
-
-Intervals FROM year TO second are not supported.(line 1, pos 7)
-
-== SQL ==
-select '1' year to second
--------^^^
-
-
--- !query 127
-select 1 year '2-1' year to month
--- !query 127 schema
-struct<>
--- !query 127 output
-org.apache.spark.sql.catalyst.parser.ParseException
-
-Can only have a single from-to unit in the interval literal syntax(line 1, pos 14)
-
-== SQL ==
-select 1 year '2-1' year to month
---------------^^^
-
-
--- !query 128
-select (-30) day
--- !query 128 schema
-struct<>
--- !query 128 output
-org.apache.spark.sql.catalyst.parser.ParseException
-
-no viable alternative at input 'day'(line 1, pos 13)
-
-== SQL ==
-select (-30) day
--------------^^^
-
-
--- !query 129
-select (a + 1) day
--- !query 129 schema
-struct<>
--- !query 129 output
-org.apache.spark.sql.catalyst.parser.ParseException
-
-no viable alternative at input 'day'(line 1, pos 15)
-
-== SQL ==
-select (a + 1) day
----------------^^^
-
-
--- !query 130
-select 30 day day day
--- !query 130 schema
-struct<>
--- !query 130 output
-org.apache.spark.sql.catalyst.parser.ParseException
-
-no viable alternative at input 'day'(line 1, pos 14)
-
-== SQL ==
-select 30 day day day
---------------^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/interval.sql.out b/sql/core/src/test/resources/sql-tests/results/interval.sql.out
index 1c84bb4502f01..767e3957ba260 100644
--- a/sql/core/src/test/resources/sql-tests/results/interval.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/interval.sql.out
@@ -576,11 +576,11 @@ struct<>
-- !query 66 output
org.apache.spark.sql.catalyst.parser.ParseException
-no viable alternative at input '10 nanoseconds'(line 1, pos 19)
+Error parsing ' 10 nanoseconds' to interval, invalid unit 'nanoseconds'(line 1, pos 16)
== SQL ==
select interval 10 nanoseconds
--------------------^^^
+----------------^^^
-- !query 67
@@ -628,11 +628,11 @@ struct<>
-- !query 71 output
org.apache.spark.sql.catalyst.parser.ParseException
-no viable alternative at input '1 fake_unit'(line 1, pos 18)
+Error parsing ' 1 fake_unit' to interval, invalid unit 'fake_unit'(line 1, pos 16)
== SQL ==
select interval 1 fake_unit
-------------------^^^
+----------------^^^
-- !query 72