Merged
Changes from all 756 commits
1996434
[SPARK-40550][SQL] DataSource V2: Handle DELETE commands for delta-ba…
aokolnychyi Jan 19, 2023
93d6cd0
[MINOR][TESTS][DOCS] Fix instructions for running `AvroWriteBenchmark…
bersprockets Jan 20, 2023
116e07a
[SPARK-42043][CONNECT][TESTS] Fix connect jar finding issue for scala…
zhenlineo Jan 20, 2023
78d4c95
[SPARK-40817][K8S] `spark.files` should preserve remote files
antonipp Jan 20, 2023
3ff0d99
[SPARK-42112][SQL][SS] Add null check before `ContinuousWriteRDD#comp…
LuciferYang Jan 20, 2023
f8f0429
[SPARK-42096][CONNECT] Some code cleanup for `connect` module
LuciferYang Jan 20, 2023
4812df6
[MINOR] Fix typo `cheksum` to `checksum` in `BlockManager`
xiaochen-zhou Jan 20, 2023
6382a3b
[SPARK-38591][SQL] Add flatMapSortedGroups and cogroupSorted
EnricoMi Jan 20, 2023
74c043d
[SPARK-41884][CONNECT] Support naive tuple as a nested row
HyukjinKwon Jan 20, 2023
58256a5
[SPARK-42070][SQL] Change the default value of argument of Mask funct…
vinodkc Jan 20, 2023
9c53845
[SPARK-41976][SQL] Improve error message for `INDEX_NOT_FOUND`
itholic Jan 20, 2023
db06f3e
[SPARK-42105][SS][DOCS] Reflect the change of SPARK-40925 to SS guide…
HeartSaVioR Jan 20, 2023
ec424c5
[SPARK-42075][DSTREAM] Deprecate DStream API
chaoqin-li1123 Jan 20, 2023
e1c630a
[SPARK-42114][SQL][TESTS] Add uniform parquet encryption test case
ggershinsky Jan 20, 2023
83e3b06
[SPARK-42129][BUILD] Upgrade rocksdbjni to 7.9.2
LuciferYang Jan 20, 2023
0c08289
[SPARK-40303][DOCS] Deprecate old Java 8 versions prior to 8u362
wangyum Jan 20, 2023
66f6a63
[SPARK-42130][UI] Handle null string values in AccumulableInfo and Pr…
gengliangwang Jan 20, 2023
a9df2ec
[MINOR][DOCS] Remove Python 3.9 and Apache Arrow warning comment
panbingkun Jan 21, 2023
c1193b8
[SPARK-42137][CORE] Enable `spark.kryo.unsafe` by default
dongjoon-hyun Jan 21, 2023
eb4bb44
[SPARK-42099][SPARK-41845][CONNECT][PYTHON] Fix `count(*)` and `count…
zhengruifeng Jan 21, 2023
dcdcb80
[SPARK-42134][SQL] Fix getPartitionFiltersAndDataFilters() to handle …
peter-toth Jan 21, 2023
9a4804e
[SPARK-42138][UI] Handle null string values in JobData/TaskDataWrappe…
gengliangwang Jan 21, 2023
e5d23af
[SPARK-42056][SQL][PROTOBUF] Add missing options for Protobuf functions
Jan 21, 2023
f7c8ac7
[SPARK-42043][CONNECT][TEST][FOLLOWUP] Fix jar finding bug and use be…
zhenlineo Jan 21, 2023
d970c56
[SPARK-40264][ML][DOCS] Supplement docstring in pyspark.ml.functions.…
leewyang Jan 21, 2023
e38a1b7
[SPARK-41593][PYTHON][ML] Adding logging from executors
rithwik-db Jan 21, 2023
e0b09a1
[SPARK-41777][PYSPARK][ML] Integration testing for TorchDistributor
rithwik-db Jan 21, 2023
68af2fd
[SPARK-42082][SPARK-41598][PYTHON][CONNECT] Introduce `PySparkValueEr…
itholic Jan 21, 2023
e969bb2
[SPARK-41683][CORE] Fix issue of getting incorrect property numActive…
kuwii Jan 21, 2023
76a134a
[SPARK-42148][K8S][BUILD] Upgrade `kubernetes-client` to 6.4.0
dongjoon-hyun Jan 21, 2023
8806efa
[SPARK-42142][UI] Handle null string values in CachedQuantile/Executo…
gengliangwang Jan 21, 2023
4ab394f
[SPARK-42143][UI] Handle null string values in RDDStorageInfo/RDDData…
gengliangwang Jan 21, 2023
074894c
[MINOR][SHUFFLE] Include IOException in warning log of finalizeShuffl…
tedyu Jan 21, 2023
330b796
[SPARK-42140][CORE] Handle null string values in ApplicationEnvironme…
LuciferYang Jan 21, 2023
c1a5e26
[SPARK-42150][K8S][DOCS] Upgrade `Volcano` to 1.7.0
dongjoon-hyun Jan 22, 2023
26e0967
[SPARK-41629][CONNECT][FOLLOW] Enable access to SparkSession from Plugin
grundprinzip Jan 22, 2023
56c691c
[SPARK-42146][CORE] Refactor `Utils#setStringField` to make maven bui…
LuciferYang Jan 22, 2023
dd4f112
[SPARK-42149][YARN] Remove the env `SPARK_USE_CONC_INCR_GC` used to e…
LuciferYang Jan 22, 2023
73b0c71
[SPARK-42153][UI] Handle null string values in PairStrings/RDDOperati…
gengliangwang Jan 22, 2023
b614fbe
[SPARK-41283][CONNECT][PYTHON] Add `array_append` to Connect
zhengruifeng Jan 22, 2023
3c57180
[SPARK-41772][CONNECT][PYTHON] Fix incorrect column name in `withFiel…
zhengruifeng Jan 22, 2023
984cfa1
[SPARK-42139][CORE][SQL] Handle null string values in SQLExecutionUID…
LuciferYang Jan 22, 2023
0c3f4cf
[SPARK-42144][CORE][SQL] Handle null string values in StageDataWrappe…
LuciferYang Jan 22, 2023
041883f
[SPARK-42154][K8S][TESTS] Enable `Volcano` unit and integration tests…
dongjoon-hyun Jan 22, 2023
ea5be38
[SPARK-41775][PYTHON][ML] Adding support for PyTorch functions
rithwik-db Jan 23, 2023
5b9ec43
[SPARK-42152][BUILD][CORE][SQL][PYTHON][PROTOBUF] Use `_` instead of …
LuciferYang Jan 23, 2023
57d06f8
[SPARK-41712][PYTHON][CONNECT] Migrate the Spark Connect errors into …
itholic Jan 23, 2023
04d7265
[SPARK-41979][SQL] Add missing dots for error messages in error classes
itholic Jan 23, 2023
cc1674d
[SPARK-41948][SQL] Fix NPE for error classes: CANNOT_PARSE_JSON_FIELD
panbingkun Jan 23, 2023
934d14d
[SPARK-42133] Add basic Dataset API methods to Spark Connect Scala Cl…
vicennial Jan 24, 2023
7546b44
[SPARK-42164][CORE] Register partitioned-table-related classes to Kry…
dongjoon-hyun Jan 24, 2023
4d51bfa
[SPARK-42157][CORE] `spark.scheduler.mode=FAIR` should provide FAIR s…
dongjoon-hyun Jan 24, 2023
8547046
[SPARK-41413][FOLLOWUP][SQL][TESTS] More test coverage in KeyGroupedP…
sunchao Jan 24, 2023
45dbc44
[MINOR][K8S][DOCS] Add all resource managers in `Scheduling Within an…
dongjoon-hyun Jan 24, 2023
b5090f6
[SPARK-42166][K8S] Make `docker-image-tool.sh` usage message up-to-date
dongjoon-hyun Jan 24, 2023
9b9f433
[SPARK-41775][PYTHON][FOLLOWUP] Use `pyspark.cloudpickle` instead of …
dongjoon-hyun Jan 24, 2023
59d1603
[SPARK-18011] Fix SparkR NA date serialization
joveyuan-db Jan 24, 2023
95b61ab
[SPARK-42044][SQL] Fix incorrect error message for `MUST_AGGREGATE_CO…
itholic Jan 24, 2023
8b8d923
[SPARK-42167][INFRA] Improve GitHub Action `lint` job to stop on fail…
dongjoon-hyun Jan 24, 2023
91910d3
[SPARK-42171][PYSPARK][TESTS] Fix `pyspark-errors` module and enable …
dongjoon-hyun Jan 24, 2023
4ad7f50
[SPARK-36124][SQL] Support subqueries with correlation through UNION
jchen5 Jan 25, 2023
094498b
[SPARK-41775][PYTHON][FOLLOWUP] Fix stdout rerouting
rithwik-db Jan 25, 2023
941567c
[SPARK-42174][PYTHON][INFRA] Use `scikit-learn` instead of `sklearn`
dongjoon-hyun Jan 25, 2023
c43be4e
[SPARK-42119][SQL] Add built-in table-valued functions inline and inl…
allisonwang-db Jan 25, 2023
866343c
[SPARK-42176][SQL] Fix cast of a boolean value to timestamp
sadikovi Jan 25, 2023
c2b8a29
[SPARK-42180][BUILD][DOCS] Update `SCALA_VERSION` in `_config.yml` to…
LuciferYang Jan 25, 2023
7b93415
[SPARK-41677][CORE][SQL][SS][UI] Add Protobuf serializer for `Streami…
LuciferYang Jan 25, 2023
e363e9d
[SPARK-42124][PYTHON][CONNECT] Scalar Inline Python UDF in Spark Connect
xinrong-meng Jan 25, 2023
9835476
[SPARK-42181][PYTHON][TESTS] Skip `torch` tests when `torch` is not i…
dongjoon-hyun Jan 25, 2023
001408c
[SPARK-42185][INFRA] Add `branch-3.4` to `publish_snapshot` GitHub Ac…
dongjoon-hyun Jan 25, 2023
e81c2c8
[SPARK-42183][PYTHON][ML][TESTS] Exclude pyspark.ml.torch.tests in My…
HyukjinKwon Jan 25, 2023
1322c86
[SPARK-42184][BUILD] Setting version to 3.5.0-SNAPSHOT
xinrong-meng Jan 25, 2023
67651a7
[SPARK-42186][R] Make SparkR be able to stop properly when the connec…
HyukjinKwon Jan 25, 2023
be028a0
[SPARK-42178][UI] Handle remaining null string values in ui protobuf …
gengliangwang Jan 25, 2023
e9b12de
[SPARK-42123][SQL] Include column default values in DESCRIBE and SHOW…
dtenedor Jan 25, 2023
31be205
[SPARK-42182][CONNECT][TESTS] Make `ReusedConnectTestCase` to take Sp…
HyukjinKwon Jan 26, 2023
426b115
[SPARK-42190][K8S] Support `local` mode in `spark.kubernetes.driver.m…
dongjoon-hyun Jan 26, 2023
4f60ebc
[SPARK-42187][CONNECT][TESTS] Avoid using RemoteSparkSession.builder.…
HyukjinKwon Jan 26, 2023
dbd667e
[SPARK-42126][PYTHON][CONNECT] Accept return type in DDL strings for …
xinrong-meng Jan 26, 2023
b3f5f81
[SPARK-42197][CONNECT] Reuses JVM initialization, and separate config…
HyukjinKwon Jan 26, 2023
9734998
[SPARK-42195][INFRA] Add Daily Scala 2.13 Github Action Job for branc…
LuciferYang Jan 26, 2023
e30bb53
[SPARK-42173][CORE] RpcAddress equality can fail
holdenk Jan 26, 2023
66ec1eb
[SPARK-42201][BUILD] `build/sbt` should allow `SBT_OPTS` to override …
dongjoon-hyun Jan 27, 2023
2f49c1f
[SPARK-41757][CONNECT][PYTHON][FOLLOW-UP] Enable connect.functions.co…
techaddict Jan 27, 2023
170a8bc
[SPARK-42207][INFRA] Update `build_and_test.yml` to use `Ubuntu 22.04`
dongjoon-hyun Jan 27, 2023
f373df8
[SPARK-42158][SQL] Integrate `_LEGACY_ERROR_TEMP_1003` into `FIELD_NO…
itholic Jan 27, 2023
516cb7f
[SPARK-42190][K8S][FOLLOWUP] Fix to use the user-given number of threads
dongjoon-hyun Jan 27, 2023
ddd8670
[SPARK-33573][CORE][FOLLOW-UP] Enhance ignoredBlockBytes in pushMerge…
Jan 27, 2023
0ef7afe
[SPARK-41931][SQL] Better error message for incomplete complex type d…
RunyaoChen Jan 27, 2023
d9ca982
[SPARK-42168][SQL][PYTHON][FOLLOW-UP] Test FlatMapCoGroupsInPandas wi…
EnricoMi Jan 27, 2023
5a53b7f
[SPARK-42213][BUILD][CONNECT] Add `repl` test dependency to `connect-…
LuciferYang Jan 27, 2023
27d03b9
[SPARK-41849][CONNECT][PYTHON][TESTS][FOLLOWUP] Enable parity test `t…
zhengruifeng Jan 27, 2023
aeb2a13
[SPARK-41875][CONNECT][PYTHON][TESTS][FOLLOWUP] Enable parity test `t…
zhengruifeng Jan 27, 2023
1828880
[SPARK-42216][CORE][TESTS] Fix two check conditions and remove redund…
LuciferYang Jan 28, 2023
47ca7f8
[SPARK-42218][BUILD] Upgrade `netty` to version 4.1.87.Final
bjornjorgensen Jan 28, 2023
c8b1e52
[SPARK-41897][CONNECT][TESTS] Enable tests with error mismatch in con…
techaddict Jan 28, 2023
f348d4f
[SPARK-42214][INFRA] Enable infra image build for scheduled job
Yikun Jan 28, 2023
43b81b7
[SPARK-42161][BUILD] Upgrade Apache Arrow to 11.0.0
LuciferYang Jan 29, 2023
0e46106
[SPARK-42220][CONNECT][BUILD] Upgrade buf from 1.12.0 to 1.13.1
panbingkun Jan 29, 2023
2fa1d6b
[SPARK-41830][CONNECT][PYTHON][TESTS][FOLLOWUP] Enable parity test `t…
zhengruifeng Jan 29, 2023
5e9566a
[SPARK-42226][BUILD] Upgrade `versions-maven-plugin` to 2.14.2
LuciferYang Jan 29, 2023
dbdb06c
[SPARK-42224][CONNECT] Migrate `TypeError` into error framework for S…
itholic Jan 29, 2023
086c8d9
[SPARK-42194][PS] Allow `columns` parameter when creating DataFrame w…
itholic Jan 29, 2023
d2fc199
[SPARK-41489][SQL] Assign name to _LEGACY_ERROR_TEMP_2415
itholic Jan 29, 2023
2842d8a
[SPARK-42225][CONNECT] Add `SparkConnectIllegalArgumentException` to …
itholic Jan 29, 2023
2440b6b
[SPARK-42224][FOLLOWUP] Raise `PySparkTypeError` instead of `TypeError`
itholic Jan 29, 2023
7b426ac
[SPARK-42081][SQL] Improve the plan change validation
cloud-fan Jan 30, 2023
607e753
[SPARK-38591][SQL][FOLLOW-UP] Fix ambiguous references for sorted cog…
EnricoMi Jan 30, 2023
1a5d22a
[SPARK-42196][SS] Fix typo in StreamingQuery.runId
ganeshchand Jan 30, 2023
1e58474
[SPARK-42230][INFRA] Improve `lint` job by skipping PySpark and Spark…
dongjoon-hyun Jan 30, 2023
8b8a5a8
[SPARK-41735][SQL] Use MINIMAL instead of STANDARD for SparkListenerS…
ulysses-you Jan 30, 2023
8fefb62
[SPARK-42066][SQL] The DATATYPE_MISMATCH error class contains inappro…
itholic Jan 30, 2023
fe50713
[SPARK-41855][CONNECT][PYTHON][FOLLOWUP] Make `createDataFrame` accep…
zhengruifeng Jan 30, 2023
04517fc
[SPARK-41490][SQL] Assign name to _LEGACY_ERROR_TEMP_2441
itholic Jan 30, 2023
c1bee10
[SPARK-42233][SQL] Improve error message for `PIVOT_AFTER_GROUP_BY`
itholic Jan 30, 2023
3aa92fb
[SPARK-42239][SQL] Integrate `MUST_AGGREGATE_CORRELATED_SCALAR_SUBQUERY`
itholic Jan 30, 2023
1304a33
[SPARK-42230][INFRA][FOLLOWUP] Add `GITHUB_PREV_SHA` and `APACHE_SPAR…
dongjoon-hyun Jan 30, 2023
973b3d5
[SPARK-42221][SQL] Introduce a new conf for TimestampNTZ schema infer…
gengliangwang Jan 30, 2023
a1362b1
[SPARK-42192][PYTHON] Migrate the `TypeError` from `pyspark/sql/dataf…
itholic Jan 31, 2023
3887e71
[SPARK-41970][SQL][FOLLOWUP] Revert SparkPath changes to FileIndex an…
databricks-david-lewis Jan 31, 2023
a98ac25
[SPARK-42241][CONNECT][TESTS] Fix the find connect jar condition in `…
LuciferYang Jan 31, 2023
0db63df
[SPARK-42125][CONNECT][PYTHON] Pandas UDF in Spark Connect
xinrong-meng Jan 31, 2023
3a0ec35
[SPARK-42202][CONNECT][TEST] Improve the E2E test server stop logic
zhenlineo Jan 31, 2023
16cfa09
[SPARK-42163][SQL] Fix schema pruning for non-foldable array index or…
cashmand Jan 31, 2023
e593344
[SPARK-42231][SQL] Turn `MISSING_STATIC_PARTITION_COLUMN` into `inter…
itholic Jan 31, 2023
a056f69
[SPARK-41219][SQL] IntegralDivide use decimal(1, 0) to represent 0
ulysses-you Jan 31, 2023
d0f5f1d
[SPARK-42023][SPARK-42024][CONNECT][PYTHON] Make `createDataFrame` su…
zhengruifeng Jan 31, 2023
b509ad1
[SPARK-42243][SQL] Use `spark.sql.inferTimestampNTZInDataSources.enab…
gengliangwang Jan 31, 2023
068111f
[SPARK-42156][CONNECT] SparkConnectClient supports RetryPolicies now
grundprinzip Jan 31, 2023
11a7537
[SPARK-42229][CORE] Migrate `SparkCoreErrors` into error classes
itholic Jan 31, 2023
2c104c3
[SPARK-41488][SQL] Assign name to _LEGACY_ERROR_TEMP_1176 (and 1177)
itholic Jan 31, 2023
ccbc075
[SPARK-42250][PYTHON][ML] predict_batch_udf` with float fails when th…
HyukjinKwon Jan 31, 2023
720fe2f
[MINOR] Fix typo `Exlude` to `Exclude` in `HealthTracker`
cxzl25 Jan 31, 2023
6341b06
[SPARK-40086][SPARK-42049][SQL] Improve AliasAwareOutputPartitioning …
peter-toth Jan 31, 2023
93d169d
[SPARK-42245][BUILD] Upgrade scalafmt from 3.6.1 to 3.7.1
panbingkun Jan 31, 2023
1cba3b9
[SPARK-42236][SQL] Refine `NULLABLE_ARRAY_OR_MAP_ELEMENT`
itholic Jan 31, 2023
c6cb21d
[SPARK-42257][CORE] Remove unused variable external sorter
khalidmammadov Jan 31, 2023
4d37e78
[SPARK-42253][PYTHON] Add test for detecting duplicated error class
itholic Feb 1, 2023
b0ac061
[SPARK-42191][SQL] Support udf 'luhn_check'
vinodkc Feb 1, 2023
34fb408
[SPARK-42051][SQL] Codegen Support for HiveGenericUDF
yaooqinn Feb 1, 2023
65a1c16
[SPARK-42242][BUILD] Upgrade `snappy-java` to 1.1.9.1
dongjoon-hyun Feb 1, 2023
c7007b3
[SPARK-42272][CONNEC][TESTS] Use an available ephemeral port for Spar…
HyukjinKwon Feb 1, 2023
1219c84
[SPARK-42259][SQL] ResolveGroupingAnalytics should take care of Pytho…
cloud-fan Feb 1, 2023
e823ce4
[SPARK-42274][BUILD] Upgrade `compress-lzf` to 1.1.2
dongjoon-hyun Feb 1, 2023
20eb546
[SPARK-42278][SQL] DS V2 pushdown supports supports JDBC dialects com…
beliefer Feb 1, 2023
40ca27c
[SPARK-41985][SQL] Centralize more column resolution rules
cloud-fan Feb 1, 2023
48ab301
[SPARK-42228][BUILD][CONNECT] Add shade and relocation rule of grpc t…
LuciferYang Feb 1, 2023
fb9d706
[SPARK-42283][CONNECT][SCALA] Simple Scalar Scala UDFs
vicennial Feb 1, 2023
39c9945
[SPARK-42277][CORE] Use RocksDB for `spark.history.store.hybridStore.…
dongjoon-hyun Feb 1, 2023
0fe361e
[SPARK-42115][SQL] Push down limit through Python UDFs
kelvinjian-db Feb 2, 2023
10b97f8
[SPARK-42284][CONNECT] Make sure connect server assembly is built bef…
hvanhovell Feb 2, 2023
9043224
[SPARK-42279][PS][TESTS] Simplify `pyspark.pandas.tests.test_resample`
zhengruifeng Feb 2, 2023
8cdd268
[SPARK-42275][CONNECT][PYTHON] Avoid using built-in list, dict in sta…
zhengruifeng Feb 2, 2023
9148e97
[SPARK-41931][SQL][FOLLOWUP] Refine example more useful
itholic Feb 2, 2023
7c3ec12
[SPARK-42268][CONNECT][PYTHON] Add UserDefinedType in protos
zhengruifeng Feb 2, 2023
0d93bb2
[SPARK-42093][SQL] Move JavaTypeInference to AgnosticEncoders
hvanhovell Feb 2, 2023
c48b53b
[SPARK-42282][PS][TESTS] Split `pyspark.pandas.tests.test_groupby`
zhengruifeng Feb 2, 2023
461e1b1
[SPARK-42271][CONNECT][PYTHON] Reuse UDF test cases under `pyspark.sq…
xinrong-meng Feb 2, 2023
ca20e40
[SPARK-42273][CONNECT][TESTS] Skip Spark Connect tests if dependencie…
HyukjinKwon Feb 2, 2023
ae24327
[SPARK-38829][SQL] Introduce conf spark.sql.parquet.inferTimestampNTZ…
gengliangwang Feb 2, 2023
6dd88d6
[SPARK-42217][SQL] Support implicit lateral column alias in queries w…
anchovYu Feb 2, 2023
8c68fc7
[SPARK-42281][PYTHON][DOCS] Update Debugging PySpark documents to sho…
itholic Feb 2, 2023
1cae312
[SPARK-42232][SQL] Rename error class: `UNSUPPORTED_FEATURE.JDBC_TRAN…
itholic Feb 2, 2023
15971a0
[SPARK-42172][CONNECT] Scala Client Mima Compatibility Tests
zhenlineo Feb 2, 2023
d740b43
[SPARK-42295][CONNECT][TEST] Tear down the test cleanly
ueshin Feb 3, 2023
4d26c0a
[MINOR][CORE][PYTHON][SQL][PS] Fix argument name in error message
deepyaman Feb 3, 2023
ac105cc
[SPARK-42286][SQL] Fallback to previous codegen code path for complex…
RunyaoChen Feb 3, 2023
c5f72b3
[SPARK-42294][SQL] Include column default values in DESCRIBE output f…
dtenedor Feb 3, 2023
33a37e7
[SPARK-42237][SQL] Change binary to unsupported dataType in CSV format
weiyuyilia Feb 3, 2023
71154dc
[MINOR][DOCS][PYTHON][PS] Fix the `.groupby()` method docstring
deepyaman Feb 3, 2023
a3c6b6b
[MINOR][SQL] Enhance data type check error message
wangyum Feb 3, 2023
a916a05
[SPARK-42234][SQL] Rename error class: `UNSUPPORTED_FEATURE.REPEATED_…
itholic Feb 3, 2023
02b39f0
[SPARK-41985][SQL][FOLLOWUP] Remove alias in GROUP BY only when the e…
cloud-fan Feb 3, 2023
4ebfc0e
[SPARK-42333][SQL] Change log level to debug when fetching result set…
wangyum Feb 3, 2023
4760a8b
[SPARK-42296][SQL] Apply spark.sql.inferTimestampNTZInDataSources.ena…
gengliangwang Feb 3, 2023
69229a5
[SPARK-42297][SQL] Assign name to _LEGACY_ERROR_TEMP_2412
itholic Feb 4, 2023
d9c0e87
[SPARK-42238][SQL] Introduce new error class: `INCOMPATIBLE_JOIN_TYPES`
itholic Feb 4, 2023
c494154
[SPARK-41302][SQL] Assign name to _LEGACY_ERROR_TEMP_1185
NarekDW Feb 4, 2023
80d673d
[SPARK-42341][SQL][TESTS] Fix JoinSelectionHelperSuite and PlanStabil…
dongjoon-hyun Feb 4, 2023
104a546
[SPARK-42334][CONNECT][BUILD] Make sure connect client assembly and s…
LuciferYang Feb 5, 2023
67285c3
[SPARK-42343][CORE] Ignore `IOException` in `handleBlockRemovalFailur…
dongjoon-hyun Feb 5, 2023
c5c1927
[SPARK-42345][SQL] Rename TimestampNTZ inference conf as spark.sql.so…
gengliangwang Feb 5, 2023
9ac4640
[SPARK-42344][K8S] Change the default size of the CONFIG_MAP_MAXSIZE
nineinfra Feb 5, 2023
6bb68b5
[SPARK-41295][SPARK-41296][SQL] Rename the error classes
NarekDW Feb 5, 2023
188e608
[SPARK-42348][SQL] Add new SQLSTATE
itholic Feb 6, 2023
87d4eb6
[SPARK-39347][SS] Bug fix for time window calculation when event time…
WweiL Feb 6, 2023
fdcf85e
[SPARK-42336][CORE] Use `getOrElse()` instead of `contains()` in Reso…
Feb 6, 2023
fbfcd81
[SPARK-41234][SQL][PYTHON] Add `array_insert` function
Daniel-Davies Feb 6, 2023
ceccda0
[SPARK-40819][SQL] Timestamp nanos behaviour regression
awdavidson Feb 6, 2023
537c04f
[SPARK-42002][CONNECT][PYTHON] Implement DataFrameWriterV2
techaddict Feb 6, 2023
17e3ee0
[SPARK-42320][SQL] Assign name to _LEGACY_ERROR_TEMP_2188
itholic Feb 6, 2023
b6eadf0
[SPARK-42302][SQL] Assign name to _LEGACY_ERROR_TEMP_2135
itholic Feb 6, 2023
5940b98
[SPARK-42346][SQL] Rewrite distinct aggregates after subquery merge
peter-toth Feb 6, 2023
3b9d1c6
[SPARK-42255][SQL] Assign name to _LEGACY_ERROR_TEMP_2430
itholic Feb 6, 2023
97a20ed
[SPARK-41470][SQL] SPJ: Relax constraints on Storage-Partitioned-Join…
yabola Feb 6, 2023
3e40b38
[SPARK-42357][CORE] Log `exitCode` when `SparkContext.stop` starts
dongjoon-hyun Feb 6, 2023
286d336
[SPARK-40149][SQL][FOLLOWUP] Avoid adding extra Project in AddMetadat…
allisonwang-db Feb 7, 2023
9b51f54
[SPARK-42362][BUILD] Upgrade `kubernetes-client` to 6.4.1
bjornjorgensen Feb 7, 2023
b99fc5e
[SPARK-42268][CONNECT][PYTHON][TESTS][FOLLOWUP] Add `test_simple_udt`…
zhengruifeng Feb 7, 2023
eb8b97f
[SPARK-42354][BUILD] Upgrade jackson to 2.14.2
LuciferYang Feb 7, 2023
cf3c02e
[SPARK-42038][SQL] SPJ: Support partially clustered distribution
sunchao Feb 7, 2023
9fbdb2d
[SPARK-42363][CONNECT] Remove SparkSession.register_udf
HyukjinKwon Feb 7, 2023
58b6535
[SPARK-42364][PS][TESTS] Split 'pyspark.pandas.tests.test_dataframe'
zhengruifeng Feb 7, 2023
54b5cf6
[SPARK-41600][SPARK-41623][SPARK-41612][CONNECT] Implement Catalog.ca…
HyukjinKwon Feb 7, 2023
6b6bb6f
[SPARK-42306][SQL] Integrate `_LEGACY_ERROR_TEMP_1317` into `UNRESOLV…
itholic Feb 7, 2023
d6134f7
[SPARK-41962][MINOR][SQL] Update the order of imports in class Specif…
wayneguow Feb 7, 2023
8998a3b
[SPARK-42368][INFRA][TESTS] Exclude SparkRemoteFileTest from GitHub A…
dongjoon-hyun Feb 7, 2023
e49466a
[SPARK-40532][CONNECT] Add Python Version into Python UDF message
HyukjinKwon Feb 7, 2023
b30ba71
[SPARK-41716][CONNECT] Rename _catalog_to_pandas to _execute_and_fetc…
HyukjinKwon Feb 7, 2023
d5b0cb4
[SPARK-42365][PS][TESTS] Split 'pyspark.pandas.tests.test_ops_on_diff…
zhengruifeng Feb 7, 2023
56dd20f
[SPARK-41708][SQL][FOLLOWUP] Do not insert columnar to row transition…
cloud-fan Feb 7, 2023
a8d6913
[SPARK-42136] Refactor BroadcastHashJoinExec output partitioning calc…
peter-toth Feb 7, 2023
b5f96fa
[SPARK-42369][CORE] Fix constructor for java.nio.DirectByteBuffer
luhenry Feb 7, 2023
52d3694
[SPARK-42094][PS] Support `fill_value` for `ps.Series.(add|radd)`
itholic Feb 8, 2023
53c1c68
[SPARK-42352][BUILD] Upgrade maven to 3.8.7
LuciferYang Feb 8, 2023
5225a32
[SPARK-42249][SQL] Refining html link for documentation in error mess…
itholic Feb 8, 2023
320097a
[SPARK-42254][SQL] Assign name to _LEGACY_ERROR_TEMP_1117
itholic Feb 8, 2023
05ea27e
[SPARK-42301][SQL] Assign name to _LEGACY_ERROR_TEMP_1129
itholic Feb 8, 2023
839c56a
[SPARK-42244][PYTHON] Refine error classes and messages
itholic Feb 8, 2023
1126031
[SPARK-42371][CONNECT] Add scripts to start and stop Spark Connect se…
HyukjinKwon Feb 8, 2023
fe67269
[SPARK-40045][SQL] Optimize the order of filtering predicates
huaxingao Feb 8, 2023
df52c80
[SPARK-42358][CORE] Send ExecutorUpdated with the message argument in…
bozhang2820 Feb 8, 2023
3a9c867
[SPARK-42315][SQL] Assign name to _LEGACY_ERROR_TEMP_(2091|2092)
itholic Feb 8, 2023
5e1dca9
[SPARK-42372][SQL] Improve performance of HiveGenericUDTF by making i…
yaooqinn Feb 8, 2023
dbc4c62
[SPARK-42378][CONNECT][PYTHON] Make `DataFrame.select` support `a.*`
zhengruifeng Feb 8, 2023
f24ce65
[SPARK-42267][CONNECT][PYTHON] DataFrame.join` should standardize the…
zhengruifeng Feb 8, 2023
2fbf57e
[SPARK-42381][CONNECT][PYTHON] CreateDataFrame` should accept objects
zhengruifeng Feb 8, 2023
7e8e43b
[SPARK-42267][CONNECT][PYTHON][TESTS][FOLLOWUP] Enable `test_udf_in_f…
zhengruifeng Feb 8, 2023
bd34b16
[SPARK-42342][PYTHON][CONNECT] Introduce base hierarchy to exceptions
ueshin Feb 8, 2023
7dbf9f6
[SPARK-42244][PYTHON][FOLLOWUP] Fix error messages to keep the consis…
itholic Feb 8, 2023
04550ed
[SPARK-41708][SQL][TEST][FOLLOWUP] Match non-space chars in path string
cloud-fan Feb 8, 2023
d4e5df8
[SPARK-42303][SQL] Assign name to _LEGACY_ERROR_TEMP_1326
itholic Feb 8, 2023
060a2b8
[SPARK-42131][SQL] Extract the function that construct the select sta…
beliefer Feb 8, 2023
f8e06c1
[SPARK-42305][SQL] Integrate `_LEGACY_ERROR_TEMP_1229` into `DECIMAL_…
itholic Feb 8, 2023
4b50a46
[SPARK-42314][SQL] Assign name to _LEGACY_ERROR_TEMP_2127
itholic Feb 8, 2023
b11fba0
[SPARK-42318][SPARK-42319][SQL] Assign name to _LEGACY_ERROR_TEMP_(21…
itholic Feb 8, 2023
67b6f0e
[SPARK-42335][SQL] Pass the comment option through to univocity if us…
wayneguow Feb 8, 2023
a76bd9d
[SPARK-42379][SS] Use FileSystem.exists in FileSystemBasedCheckpointF…
HeartSaVioR Feb 8, 2023
409c661
[SPARK-40819][SQL][FOLLOWUP] Update SqlConf version for nanosAsLong c…
awdavidson Feb 9, 2023
bdf56c4
[SPARK-40770][PYTHON] Improved error messages for applyInPandas for s…
EnricoMi Feb 9, 2023
72f8a0c
[SPARK-42350][SQL][K8S][SS] Replcace `get().getOrElse` with `getOrElse`
LuciferYang Feb 9, 2023
89b16f2
[SPARK-42355][BUILD] Upgrade some maven-plugins
LuciferYang Feb 9, 2023
12cc49e
[SPARK-42385][BUILD] Upgrade RoaringBitmap to 0.9.39
LuciferYang Feb 9, 2023
e7eb836
[SPARK-42210][CONNECT][PYTHON] Standardize registered pickled Python …
xinrong-meng Feb 9, 2023
201a91b
[SPARK-42366][SHUFFLE] Log shuffle data corruption diagnose cause
cxzl25 Feb 9, 2023
004731e
[SPARK-42353][SS] Cleanup orphan sst and log files in RocksDB checkpo…
chaoqin-li1123 Feb 9, 2023
ced6750
[SPARK-42338][CONNECT] Add details to non-fatal errors to raise a pro…
ueshin Feb 9, 2023
c5230e4
[SPARK-40453][SPARK-41715][CONNECT] Take super class into account whe…
HyukjinKwon Feb 10, 2023
a180e67
[MINOR][SS] Fix setTimeoutTimestamp doc
viirya Feb 10, 2023
af50b47
[SPARK-42276][BUILD][CONNECT] Add `ServicesResourceTransformer` rule …
LuciferYang Feb 10, 2023
99 changes: 54 additions & 45 deletions .github/workflows/build_and_test.yml
@@ -51,15 +51,15 @@ on:
jobs:
precondition:
name: Check changes
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
env:
GITHUB_PREV_SHA: ${{ github.event.before }}
outputs:
required: ${{ steps.set-outputs.outputs.required }}
image_url: >-
${{
(inputs.branch == 'master' && steps.infra-image-outputs.outputs.image_url)
|| 'dongjoon/apache-spark-github-action-image:20220207'
((inputs.branch == 'branch-3.2' || inputs.branch == 'branch-3.3') && 'dongjoon/apache-spark-github-action-image:20220207')
|| steps.infra-image-outputs.outputs.image_url
}}
steps:
- name: Checkout Spark repository
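The new image_url expression inverts the earlier default: only the legacy branch-3.2 and branch-3.3 jobs keep the pinned 2022 image, and every other branch (master plus branch-3.4 onward) now uses the per-branch image computed by the infra-image-outputs step. Below is a minimal sketch of the (cond && a) || b ternary idiom that expression relies on; the workflow name, input, and image strings are placeholders, not values from this PR.

name: ternary-demo
on:
  workflow_dispatch:
    inputs:
      branch:
        type: string
        default: master
jobs:
  demo:
    runs-on: ubuntu-22.04
    steps:
      # (cond && a) || b acts as a ternary in GitHub Actions expressions:
      # a legacy branch selects the pinned tag, anything else falls through
      # the '||' to the computed image URL.
      - run: echo "${{ (inputs.branch == 'branch-3.2' && 'pinned:20220207') || 'computed:per-branch' }}"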
@@ -127,8 +127,7 @@ jobs:
name: "Build modules: ${{ matrix.modules }} ${{ matrix.comment }}"
needs: precondition
if: fromJson(needs.precondition.outputs.required).build == 'true'
# Ubuntu 20.04 is the latest LTS. The next LTS is 22.04.
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
strategy:
fail-fast: false
matrix:
@@ -269,12 +268,12 @@ jobs:
infra-image:
name: "Base image build"
needs: precondition
# Currently, only enable docker build from cache for `master` branch jobs
# Currently, enable docker build from cache for `master` and branch (since 3.4) jobs
if: >-
(fromJson(needs.precondition.outputs.required).pyspark == 'true' ||
fromJson(needs.precondition.outputs.required).lint == 'true' ||
fromJson(needs.precondition.outputs.required).sparkr == 'true') &&
inputs.branch == 'master'
(inputs.branch != 'branch-3.2' && inputs.branch != 'branch-3.3')
runs-on: ubuntu-latest
permissions:
packages: write
@@ -319,7 +318,7 @@ jobs:
# always run if pyspark == 'true', even infra-image is skip (such as non-master job)
if: always() && fromJson(needs.precondition.outputs.required).pyspark == 'true'
name: "Build modules: ${{ matrix.modules }}"
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
container:
image: ${{ needs.precondition.outputs.image_url }}
strategy:
@@ -337,7 +336,7 @@
- >-
pyspark-pandas-slow
- >-
pyspark-connect
pyspark-connect, pyspark-errors
env:
MODULES_TO_TEST: ${{ matrix.modules }}
HADOOP_PROFILE: ${{ inputs.hadoop }}
@@ -404,7 +403,7 @@ jobs:
export PATH=$PATH:$HOME/miniconda/bin
./dev/run-tests --parallelism 1 --modules "$MODULES_TO_TEST"
- name: Upload coverage to Codecov
if: inputs.type == 'pyspark-coverage-scheduled'
if: fromJSON(inputs.envs).PYSPARK_CODECOV == 'true'
uses: codecov/codecov-action@v2
with:
files: ./python/coverage.xml
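With this change the Codecov upload is no longer keyed to a dedicated pyspark-coverage-scheduled job type; any caller of the reusable workflow can opt in by putting PYSPARK_CODECOV in the envs JSON it passes down. A sketch of such a caller, modeled on the scheduled workflows added later in this PR; the workflow name and cron line are hypothetical.

name: "Coverage (master, PySpark)"

on:
  schedule:
    - cron: '0 10 * * *'

jobs:
  run-build:
    name: Run
    uses: ./.github/workflows/build_and_test.yml
    if: github.repository == 'apache/spark'
    with:
      java: 8
      branch: master
      hadoop: hadoop3
      envs: >-
        {
          "PYSPARK_CODECOV": "true"
        }
      jobs: >-
        {
          "pyspark": "true"
        }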
@@ -428,7 +427,7 @@
# always run if sparkr == 'true', even infra-image is skip (such as non-master job)
if: always() && fromJson(needs.precondition.outputs.required).sparkr == 'true'
name: "Build modules: sparkr"
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
container:
image: ${{ needs.precondition.outputs.image_url }}
env:
@@ -500,12 +499,13 @@ jobs:
# always run if lint == 'true', even infra-image is skip (such as non-master job)
if: always() && fromJson(needs.precondition.outputs.required).lint == 'true'
name: Linters, licenses, dependencies and documentation generation
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
env:
LC_ALL: C.UTF-8
LANG: C.UTF-8
PYSPARK_DRIVER_PYTHON: python3.9
PYSPARK_PYTHON: python3.9
GITHUB_PREV_SHA: ${{ github.event.before }}
container:
image: ${{ needs.precondition.outputs.image_url }}
steps:
@@ -521,6 +521,7 @@
- name: Sync the current branch with the latest in Apache Spark
if: github.repository != 'apache/spark'
run: |
echo "APACHE_SPARK_REF=$(git rev-parse HEAD)" >> $GITHUB_ENV
git fetch https://github.com/$GITHUB_REPOSITORY.git ${GITHUB_REF#refs/heads/}
git -c user.name='Apache Spark Test Account' -c user.email='sparktestacc@gmail.com' merge --no-commit --progress --squash FETCH_HEAD
git -c user.name='Apache Spark Test Account' -c user.email='sparktestacc@gmail.com' commit -m "Merged commit" --allow-empty
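The added APACHE_SPARK_REF line records the pre-merge HEAD before the branch is squash-merged with the latest upstream, and anything appended to $GITHUB_ENV becomes an environment variable in all later steps of the job. A hypothetical consumer step (not part of this diff) might look like:

- name: Show what changed relative to the recorded ref
  run: |
    # APACHE_SPARK_REF was exported via $GITHUB_ENV in the sync step above.
    git diff --stat "$APACHE_SPARK_REF" HEAD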
@@ -550,20 +551,44 @@ jobs:
key: docs-maven-${{ hashFiles('**/pom.xml') }}
restore-keys: |
docs-maven-
- name: Install Java 8
uses: actions/setup-java@v3
with:
distribution: temurin
java-version: 8
- name: License test
run: ./dev/check-license
- name: Dependencies test
run: ./dev/test-dependencies.sh
- name: Scala linter
run: ./dev/lint-scala
- name: Java linter
run: ./dev/lint-java
- name: Install Python linter dependencies
run: |
# TODO(SPARK-32407): Sphinx 3.1+ does not correctly index nested classes.
# See also https://github.com/sphinx-doc/sphinx/issues/7551.
# Jinja2 3.0.0+ causes error when building with Sphinx.
# See also https://issues.apache.org/jira/browse/SPARK-35375.
python3.9 -m pip install 'flake8==3.9.0' pydata_sphinx_theme 'mypy==0.920' 'pytest==7.1.3' 'pytest-mypy-plugins==1.9.3' numpydoc 'jinja2<3.0.0' 'black==22.6.0'
python3.9 -m pip install 'pandas-stubs==1.2.0.53'
python3.9 -m pip install 'pandas-stubs==1.2.0.53' ipython 'grpcio==1.48.1' 'grpc-stubs==1.24.11' 'googleapis-common-protos-stubs==2.2.0'
- name: Python linter
run: PYTHON_EXECUTABLE=python3.9 ./dev/lint-python
- name: Install dependencies for Python code generation check
run: |
# See more in "Installation" https://docs.buf.build/installation#tarball
curl -LO https://github.com/bufbuild/buf/releases/download/v1.9.0/buf-Linux-x86_64.tar.gz
curl -LO https://github.com/bufbuild/buf/releases/download/v1.13.1/buf-Linux-x86_64.tar.gz
mkdir -p $HOME/buf
tar -xvzf buf-Linux-x86_64.tar.gz -C $HOME/buf --strip-components 1
python3.9 -m pip install 'protobuf==3.19.5' 'mypy-protobuf==3.3.0'
- name: Python code generation check
run: if test -f ./dev/connect-check-protos.py; then PATH=$PATH:$HOME/buf/bin PYTHON_EXECUTABLE=python3.9 ./dev/connect-check-protos.py; fi
- name: Install JavaScript linter dependencies
run: |
apt update
apt-get install -y nodejs npm
- name: JS linter
run: ./dev/lint-js
- name: Install R linter dependencies and SparkR
run: |
apt update
@@ -573,10 +598,6 @@
Rscript -e "install.packages(c('devtools'), repos='https://cloud.r-project.org/')"
Rscript -e "devtools::install_version('lintr', version='2.0.1', repos='https://cloud.r-project.org')"
./R/install-dev.sh
- name: Install JavaScript linter dependencies
run: |
apt update
apt-get install -y nodejs npm
- name: Install dependencies for documentation generation
run: |
# pandoc is required to generate PySpark APIs as well in nbsphinx.
@@ -587,9 +608,9 @@
# See also https://issues.apache.org/jira/browse/SPARK-35375.
# Pin the MarkupSafe to 2.0.1 to resolve the CI error.
# See also https://issues.apache.org/jira/browse/SPARK-38279.
python3.9 -m pip install 'sphinx<3.1.0' mkdocs pydata_sphinx_theme ipython nbsphinx numpydoc 'jinja2<3.0.0' 'markupsafe==2.0.1' 'pyzmq<24.0.0'
python3.9 -m pip install 'sphinx<3.1.0' mkdocs pydata_sphinx_theme nbsphinx numpydoc 'jinja2<3.0.0' 'markupsafe==2.0.1' 'pyzmq<24.0.0'
python3.9 -m pip install ipython_genutils # See SPARK-38517
python3.9 -m pip install sphinx_plotly_directive 'numpy>=1.20.0' pyarrow pandas 'plotly>=4.8' 'grpcio==1.48.1' 'protobuf==3.19.5' 'mypy-protobuf==3.3.0'
python3.9 -m pip install sphinx_plotly_directive 'numpy>=1.20.0' pyarrow pandas 'plotly>=4.8'
python3.9 -m pip install 'docutils<0.18.0' # See SPARK-39421
apt-get update -y
apt-get install -y ruby ruby-dev
@@ -599,29 +620,16 @@
gem install bundler
cd docs
bundle install
- name: Install Java 8
uses: actions/setup-java@v3
with:
distribution: temurin
java-version: 8
- name: Scala linter
run: ./dev/lint-scala
- name: Java linter
run: ./dev/lint-java
- name: Python linter
run: PYTHON_EXECUTABLE=python3.9 ./dev/lint-python
- name: Python code generation check
run: if test -f ./dev/check-codegen-python.py; then PATH=$PATH:$HOME/buf/bin PYTHON_EXECUTABLE=python3.9 ./dev/check-codegen-python.py; fi
- name: R linter
run: ./dev/lint-r
- name: JS linter
run: ./dev/lint-js
- name: License test
run: ./dev/check-license
- name: Dependencies test
run: ./dev/test-dependencies.sh
- name: Run documentation build
run: |
if [ -f "./dev/is-changed.py" ]; then
# Skip PySpark and SparkR docs while keeping Scala/Java/SQL docs
pyspark_modules=`cd dev && python3.9 -c "import sparktestsupport.modules as m; print(','.join(m.name for m in m.all_modules if m.name.startswith('pyspark')))"`
if [ `./dev/is-changed.py -m $pyspark_modules` = false ]; then export SKIP_PYTHONDOC=1; fi
if [ `./dev/is-changed.py -m sparkr` = false ]; then export SKIP_RDOC=1; fi
fi
cd docs
bundle exec jekyll build

@@ -635,7 +643,7 @@ jobs:
java:
- 11
- 17
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- name: Checkout Spark repository
uses: actions/checkout@v3
@@ -685,7 +693,7 @@ jobs:
needs: precondition
if: fromJson(needs.precondition.outputs.required).scala-213 == 'true'
name: Scala 2.13 build with SBT
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- name: Checkout Spark repository
uses: actions/checkout@v3
@@ -732,6 +740,7 @@ jobs:
needs: precondition
if: fromJson(needs.precondition.outputs.required).tpcds-1g == 'true'
name: Run TPC-DS queries with SF=1
# Pin to 'Ubuntu 20.04' due to 'databricks/tpcds-kit' compilation
runs-on: ubuntu-20.04
env:
SPARK_LOCAL_IP: localhost
@@ -830,7 +839,7 @@ jobs:
needs: precondition
if: fromJson(needs.precondition.outputs.required).docker-integration-tests == 'true'
name: Run Docker integration tests
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
env:
HADOOP_PROFILE: ${{ inputs.hadoop }}
HIVE_PROFILE: hive2.3
@@ -895,7 +904,7 @@ jobs:
needs: precondition
if: fromJson(needs.precondition.outputs.required).k8s-integration-tests == 'true'
name: Run Spark on Kubernetes Integration test
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- name: Checkout Spark repository
uses: actions/checkout@v3
Expand Down Expand Up @@ -952,9 +961,9 @@ jobs:
export PVC_TESTS_VM_PATH=$PVC_TMP_DIR
minikube mount ${PVC_TESTS_HOST_PATH}:${PVC_TESTS_VM_PATH} --gid=0 --uid=185 &
kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts || true
kubectl apply -f https://raw.githubusercontent.com/volcano-sh/volcano/v1.7.0/installer/volcano-development.yaml || true
eval $(minikube docker-env)
# - Exclude Volcano test (-Pvolcano), batch jobs need more CPU resource
build/sbt -Psparkr -Pkubernetes -Pkubernetes-integration-tests -Dspark.kubernetes.test.driverRequestCores=0.5 -Dspark.kubernetes.test.executorRequestCores=0.2 "kubernetes-integration-tests/test"
build/sbt -Psparkr -Pkubernetes -Pvolcano -Pkubernetes-integration-tests -Dspark.kubernetes.test.driverRequestCores=0.5 -Dspark.kubernetes.test.executorRequestCores=0.2 -Dspark.kubernetes.test.volcanoMaxConcurrencyJobNum=1 -Dtest.exclude.tags=local "kubernetes-integration-tests/test"
- name: Upload Spark on K8S integration tests log files
if: failure()
uses: actions/upload-artifact@v3
49 changes: 49 additions & 0 deletions .github/workflows/build_branch34.yml
@@ -0,0 +1,49 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

name: "Build (branch-3.4, Scala 2.13, Hadoop 3, JDK 8)"

on:
schedule:
- cron: '0 9 * * *'

jobs:
run-build:
permissions:
packages: write
name: Run
uses: ./.github/workflows/build_and_test.yml
if: github.repository == 'apache/spark'
with:
java: 8
branch: branch-3.4
hadoop: hadoop3
envs: >-
{
"SCALA_PROFILE": "scala2.13"
}
jobs: >-
{
"build": "true",
"pyspark": "true",
"sparkr": "true",
"tpcds-1g": "true",
"docker-integration-tests": "true",
"lint" : "true"
}
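For context, the jobs JSON above feeds the precondition job in build_and_test.yml, whose required output then gates each downstream job on its own key. A condensed sketch of that mechanism, assembled from conditions visible in this diff; the precondition step is abbreviated and the echo step stands in for the real job body.

on:
  workflow_call:
    inputs:
      jobs:
        type: string
        required: false

jobs:
  precondition:
    runs-on: ubuntu-22.04
    outputs:
      required: ${{ steps.set-outputs.outputs.required }}
    steps:
      - id: set-outputs
        # Abbreviated: the real step also merges change detection into the
        # caller-supplied JSON before exporting it.
        run: echo "required=$JOBS" >> "$GITHUB_OUTPUT"
        env:
          JOBS: ${{ inputs.jobs }}

  tpcds-1g:
    needs: precondition
    # Runs only when the caller's jobs JSON contains "tpcds-1g": "true".
    if: fromJson(needs.precondition.outputs.required).tpcds-1g == 'true'
    runs-on: ubuntu-20.04
    steps:
      - run: echo "TPC-DS queries with SF=1 would run here"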
48 changes: 48 additions & 0 deletions .github/workflows/build_rockdb_as_ui_backend.yml
@@ -0,0 +1,48 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

name: "Build / RocksDB as UI Backend (master, Hadoop 3, JDK 8, Scala 2.12)"

on:
schedule:
- cron: '0 6 * * *'

jobs:
run-build:
permissions:
packages: write
name: Run
uses: ./.github/workflows/build_and_test.yml
if: github.repository == 'apache/spark'
with:
java: 8
branch: master
hadoop: hadoop3
envs: >-
{
"LIVE_UI_LOCAL_STORE_DIR": "/tmp/kvStore",
}
jobs: >-
{
"build": "true",
"pyspark": "true",
"sparkr": "true",
"tpcds-1g": "true",
"docker-integration-tests": "true"
}
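LIVE_UI_LOCAL_STORE_DIR points this scheduled build at a local directory for the disk-backed live UI store it exercises. As a user-facing sketch of the same feature, assuming it is wired to the spark.ui.store.path configuration from the RocksDB-as-UI-backend work; the step name, example app, and path are illustrative only.

- name: Run an example app with a RocksDB-backed live UI store
  env:
    LIVE_UI_LOCAL_STORE_DIR: /tmp/kvStore
  run: |
    # Live UI data is persisted under the given path instead of being kept
    # only on the driver heap.
    ./bin/spark-submit \
      --conf spark.ui.store.path="$LIVE_UI_LOCAL_STORE_DIR" \
      examples/src/main/python/pi.py 10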
2 changes: 1 addition & 1 deletion .github/workflows/publish_snapshot.yml
@@ -32,9 +32,9 @@ jobs:
matrix:
branch:
- master
- branch-3.4
- branch-3.3
- branch-3.2
- branch-3.1
steps:
- name: Checkout Spark repository
uses: actions/checkout@v3
2 changes: 1 addition & 1 deletion R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
Package: SparkR
Type: Package
Version: 3.4.0
Version: 3.5.0
Title: R Front End for 'Apache Spark'
Description: Provides an R Front end for 'Apache Spark' <https://spark.apache.org>.
Authors@R: