Changes from all commits
80 commits
40ffdef
[SPARK-50250][SQL] Assign appropriate error condition for `_LEGACY_ER…
itholic Nov 13, 2024
ede05fa
[SPARK-50248][SQL] Assign appropriate error condition for `_LEGACY_ER…
itholic Nov 13, 2024
6fb1d43
[SPARK-50246][SQL] Assign appropriate error condition for `_LEGACY_ER…
itholic Nov 13, 2024
898bff2
[SPARK-50245][SQL][TESTS] Extended CollationSuite and added tests whe…
vladanvasi-db Nov 13, 2024
bd94419
[SPARK-50226][SQL] Correct MakeDTInterval and MakeYMInterval to catch…
gotocoding-DB Nov 13, 2024
bc9b259
[SPARK-50066][SQL] Codegen Support for `SchemaOfXml` (by Invoke & Run…
panbingkun Nov 13, 2024
558fc89
[SPARK-49611][SQL][FOLLOW-UP] Make collations TVF consistent and retu…
mihailomilosevic2001 Nov 13, 2024
7b1b450
Revert [SPARK-50215][SQL] Refactored StringType pattern matching in j…
vladanvasi-db Nov 13, 2024
87ad4b4
[SPARK-50139][INFRA][SS][PYTHON] Introduce scripts to re-generate and…
LuciferYang Nov 13, 2024
05508cf
[SPARK-42838][SQL] Assign a name to the error class _LEGACY_ERROR_TEM…
mihailomilosevic2001 Nov 13, 2024
5cc60f4
[SPARK-50300][BUILD] Use mirror host instead of `archive.apache.org`
dongjoon-hyun Nov 13, 2024
33378a6
[SPARK-50304][INFRA] Remove `(any|empty).proto` from RAT exclusion
dongjoon-hyun Nov 14, 2024
891f694
[SPARK-50306][PYTHON][CONNECT] Support Python 3.13 in Spark Connect
HyukjinKwon Nov 14, 2024
2fd4702
[SPARK-49913][SQL] Add check for unique label names in nested labeled…
miland-db Nov 14, 2024
6bee268
[SPARK-50299][BUILD] Upgrade jupiter-interface to 0.13.1 and Junit5 t…
LuciferYang Nov 14, 2024
09d6b32
[SPARK-48755][DOCS][PYTHON][FOLLOWUP] Add PySpark doc for `transformW…
itholic Nov 14, 2024
0b1b676
[SPARK-50092][SQL] Fix PostgreSQL connector behaviour for multidimens…
PetarVasiljevic-DB Nov 14, 2024
aea9e87
[SPARK-50291][PYTHON] Standardize verifySchema parameter of createDat…
xinrong-meng Nov 14, 2024
c1968a1
[SPARK-50216][SQL][TESTS] Update `CollationBenchmark` to invoke `coll…
stevomitric Nov 14, 2024
0aee601
[SPARK-50153][SQL] Add `name` to `RuleExecutor` to make printing `Que…
panbingkun Nov 14, 2024
c2343f7
[SPARK-45265][SQL] Support Hive 4.0 metastore
yaooqinn Nov 14, 2024
e0a83f6
[SPARK-50317][BUILD] Upgrade ORC to 2.0.3
dongjoon-hyun Nov 14, 2024
c90efae
[SPARK-50318][SQL] Add IntervalUtils.makeYearMonthInterval to dedupli…
gotocoding-DB Nov 15, 2024
3237885
[SPARK-50312][SQL] SparkThriftServer createServer parameter passing e…
CuiYanxiang Nov 15, 2024
e615e3f
[SPARK-50049][SQL] Support custom driver metrics in writing to v2 table
cloud-fan Nov 15, 2024
3f5e846
[SPARK-50237][SQL] Assign appropriate error condition for `_LEGACY_ER…
itholic Nov 15, 2024
cf90271
[MINOR] Fix code style for if/for/while statements
exmy Nov 15, 2024
cc81ed0
[SPARK-50325][SQL] Factor out alias resolution to be reused in the si…
vladimirg-db Nov 15, 2024
d317002
[SPARK-50322][SQL] Fix parameterized identifier in a sub-query
MaxGekk Nov 15, 2024
77e006f
[SPARK-50327][SQL] Factor out function resolution to be reused in the…
vladimirg-db Nov 15, 2024
11e4706
[SPARK-50320][CORE] Make `--remote` an official option by removing `e…
dongjoon-hyun Nov 15, 2024
007c31d
[SPARK-50236][SQL] Assign appropriate error condition for `_LEGACY_ER…
itholic Nov 15, 2024
281a8e1
[SPARK-50309][DOCS] Document `SQL Pipe` Syntax
dtenedor Nov 15, 2024
b626528
[SPARK-50313][SQL][TESTS] Enable ANSI in SQL *SQLQueryTestSuite by de…
yaooqinn Nov 18, 2024
a01856d
[SPARK-50330][SQL] Add hints to Sort and Window nodes
agubichev Nov 18, 2024
8b2d032
[SPARK-45265][SQL][BUILD][FOLLOWUP] Add `-Xss64m` for Maven testing o…
LuciferYang Nov 18, 2024
05750de
[MINOR][PYTHON][DOCS] Fix the type hint of `histogram_numeric`
zhengruifeng Nov 18, 2024
400a8d3
Revert "[SPARK-49787][SQL] Cast between UDT and other types"
cloud-fan Nov 18, 2024
fa36e8b
[SPARK-50335][PYTHON][DOCS] Refine docstrings for window/aggregation …
zhengruifeng Nov 19, 2024
b61411d
[SPARK-50328][INFRA] Add a separate docker file for SparkR
zhengruifeng Nov 19, 2024
e1477a3
[SPARK-50298][PYTHON][CONNECT] Implement verifySchema parameter of cr…
xinrong-meng Nov 19, 2024
6d47981
[SPARK-50331][INFRA] Add a daily test for PySpark on MacOS
LuciferYang Nov 19, 2024
5a57efd
[SPARK-50313][SQL][TESTS][FOLLOWUP] Restore some tests in *SQLQueryTe…
yaooqinn Nov 19, 2024
b74aa8c
[SPARK-50340][SQL] Unwrap UDT in INSERT input query
cloud-fan Nov 19, 2024
87a5b37
[SPARK-50313][SQL][TESTS][FOLLOWUP] Regenerate golden files for Java 21
LuciferYang Nov 19, 2024
f1b68d8
[SPARK-50315][SQL] Support custom metrics for V1Fallback writes
olaky Nov 19, 2024
19509d0
Revert "[SPARK-49002][SQL] Consistently handle invalid locations in W…
cloud-fan Nov 19, 2024
37497e6
[SPARK-50335][PYTHON][DOCS][FOLLOW-UP] Make percentile doctests more …
zhengruifeng Nov 20, 2024
c149dcb
[SPARK-50352][PYTHON][DOCS] Refine docstrings for window/aggregation …
zhengruifeng Nov 20, 2024
8791767
[SPARK-48344][SQL] Prepare SQL Scripting for addition of Execution Fr…
miland-db Nov 20, 2024
b7cf448
[SPARK-49550][FOLLOWUP][SQL][DOC] Switch Hadoop to 3.4.1 in IsolatedC…
pan3793 Nov 20, 2024
2185f3c
[SPARK-50359][PYTHON] Upgrade PyArrow to 18.0
zhengruifeng Nov 20, 2024
0157778
[SPARK-50358][SQL][TESTS] Update postgres docker image to 17.1
panbingkun Nov 20, 2024
b582dac
[MINOR][DOCS] Fix a HTML/Markdown syntax error in sql-migration-guide.md
yaooqinn Nov 20, 2024
19b8250
[SPARK-50331][INFRA][FOLLOW-UP] Skip Torch/DeepSpeed tests in MacOS P…
zhengruifeng Nov 20, 2024
7a4f3c4
[SPARK-50345][BUILD] Upgrade Kafka to 3.9.0
panbingkun Nov 20, 2024
3151d97
[SPARK-49801][INFRA][FOLLOWUP] Sync pandas version in release environ…
yaooqinn Nov 20, 2024
23f276f
[SPARK-50353][SQL] Refactor ResolveSQLOnFile
mihailoale-db Nov 20, 2024
533b8ca
[SPARK-50363][PYTHON][DOCS] Refine the docstring for datetime functio…
zhengruifeng Nov 20, 2024
81a56df
[SPARK-50362][PYTHON][ML] Skip `CrossValidatorTests` if `torch/torche…
LuciferYang Nov 20, 2024
6ee53da
[SPARK-50258][SQL] Fix output column order changed issue after AQE op…
wangyum Nov 20, 2024
30d0b01
[SPARK-50364][SQL] Implement serialization for LocalDateTime type in …
krm95 Nov 20, 2024
ad46db4
[SPARK-50130][SQL][FOLLOWUP] Make Encoder generation lazy
ueshin Nov 20, 2024
a409199
[SPARK-50376][PYTHON][ML][TESTS] Centralize the dependency check in M…
zhengruifeng Nov 21, 2024
3bc374d
[SPARK-50333][SQL] Codegen Support for `CsvToStructs` (by Invoke & Ru…
panbingkun Nov 21, 2024
95faa02
[SPARK-49490][SQL] Add benchmarks for initCap
mrk-andreev Nov 21, 2024
ee21e6b
[SPARK-50113][CONNECT][PYTHON][TESTS] Add `@remote_only` to check the…
itholic Nov 21, 2024
0f1e410
[SPARK-50016][SQL] Assign appropriate error condition for `_LEGACY_ER…
itholic Nov 21, 2024
b05ef45
[SPARK-50175][SQL] Change collation precedence calculation
stefankandic Nov 21, 2024
fbf255e
[SPARK-50379][SQL] Fix DayTimeIntevalType handling in WindowExecBase
mihailomilosevic2001 Nov 21, 2024
cbb16b9
[MINOR][DOCS] Fix miss semicolon on create table example sql
camilesing Nov 21, 2024
f2de888
[MINOR][DOCS] Remove wrong and ambiguous default statement in datetim…
yaooqinn Nov 21, 2024
229b1b8
[SPARK-50375][BUILD] Upgrade `commons-io` to 2.18.0
panbingkun Nov 21, 2024
136c722
[SPARK-50334][SQL] Extract common logic for reading the descriptor of…
panbingkun Nov 21, 2024
2e1c3dc
[SPARK-50087] Robust handling of boolean expressions in CASE WHEN for…
cloud-fan Nov 21, 2024
2d09ef2
[SPARK-50381][CORE] Support `spark.master.rest.maxThreads`
dongjoon-hyun Nov 21, 2024
69324bd
Merge branch 'master' into pr48820
ueshin Nov 21, 2024
349df78
Fix.
ueshin Nov 21, 2024
1079339
Fix.
ueshin Nov 21, 2024
c6b0651
Fix.
ueshin Nov 22, 2024
31 changes: 29 additions & 2 deletions .github/workflows/build_and_test.yml
@@ -62,6 +62,8 @@ jobs:
image_docs_url_link: ${{ steps.infra-image-link.outputs.image_docs_url_link }}
image_lint_url: ${{ steps.infra-image-lint-outputs.outputs.image_lint_url }}
image_lint_url_link: ${{ steps.infra-image-link.outputs.image_lint_url_link }}
image_sparkr_url: ${{ steps.infra-image-sparkr-outputs.outputs.image_sparkr_url }}
image_sparkr_url_link: ${{ steps.infra-image-link.outputs.image_sparkr_url_link }}
steps:
- name: Checkout Spark repository
uses: actions/checkout@v4
@@ -154,6 +156,14 @@ jobs:
IMG_NAME="apache-spark-ci-image-lint:${{ inputs.branch }}-${{ github.run_id }}"
IMG_URL="ghcr.io/$REPO_OWNER/$IMG_NAME"
echo "image_lint_url=$IMG_URL" >> $GITHUB_OUTPUT
- name: Generate infra image URL (SparkR)
id: infra-image-sparkr-outputs
run: |
# Convert to lowercase to meet Docker repo name requirement
REPO_OWNER=$(echo "${{ github.repository_owner }}" | tr '[:upper:]' '[:lower:]')
IMG_NAME="apache-spark-ci-image-sparkr:${{ inputs.branch }}-${{ github.run_id }}"
IMG_URL="ghcr.io/$REPO_OWNER/$IMG_NAME"
echo "image_sparkr_url=$IMG_URL" >> $GITHUB_OUTPUT
- name: Link the docker images
id: infra-image-link
run: |
@@ -162,9 +172,11 @@
if [[ "${{ inputs.branch }}" == 'branch-3.5' ]]; then
echo "image_docs_url_link=${{ steps.infra-image-outputs.outputs.image_url }}" >> $GITHUB_OUTPUT
echo "image_lint_url_link=${{ steps.infra-image-outputs.outputs.image_url }}" >> $GITHUB_OUTPUT
echo "image_sparkr_url_link=${{ steps.infra-image-outputs.outputs.image_url }}" >> $GITHUB_OUTPUT
else
echo "image_docs_url_link=${{ steps.infra-image-docs-outputs.outputs.image_docs_url }}" >> $GITHUB_OUTPUT
echo "image_lint_url_link=${{ steps.infra-image-lint-outputs.outputs.image_lint_url }}" >> $GITHUB_OUTPUT
echo "image_sparkr_url_link=${{ steps.infra-image-sparkr-outputs.outputs.image_sparkr_url }}" >> $GITHUB_OUTPUT
fi

# Build: build Spark and run the tests for specified modules.
@@ -405,6 +417,17 @@ jobs:
${{ needs.precondition.outputs.image_lint_url }}
# Use the infra image cache to speed up
cache-from: type=registry,ref=ghcr.io/apache/spark/apache-spark-github-action-image-lint-cache:${{ inputs.branch }}
- name: Build and push (SparkR)
if: hashFiles('dev/spark-test-image/sparkr/Dockerfile') != ''
id: docker_build_sparkr
uses: docker/build-push-action@v6
with:
context: ./dev/spark-test-image/sparkr/
push: true
tags: |
${{ needs.precondition.outputs.image_sparkr_url }}
# Use the infra image cache to speed up
cache-from: type=registry,ref=ghcr.io/apache/spark/apache-spark-github-action-image-sparkr-cache:${{ inputs.branch }}


pyspark:
@@ -564,7 +587,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 180
container:
image: ${{ needs.precondition.outputs.image_url }}
image: ${{ needs.precondition.outputs.image_sparkr_url_link }}
env:
HADOOP_PROFILE: ${{ inputs.hadoop }}
HIVE_PROFILE: hive2.3
@@ -671,8 +694,12 @@ jobs:
run: |
python3.11 -m pip install 'black==23.9.1' 'protobuf==5.28.3' 'mypy==1.8.0' 'mypy-protobuf==3.3.0'
python3.11 -m pip list
- name: Python CodeGen check
- name: Python CodeGen check for branch-3.5
if: inputs.branch == 'branch-3.5'
run: ./dev/connect-check-protos.py
- name: Python CodeGen check
if: inputs.branch != 'branch-3.5'
run: ./dev/check-protos.py

# Static analysis
lint:
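The new SparkR steps above follow the same pattern as the existing docs and lint image steps. A minimal shell sketch of what the infra-image-sparkr-outputs step computes, assuming hypothetical owner, branch, and run-id values:

    # Docker repository names must be lowercase, hence the tr call.
    REPO_OWNER=$(echo "Apache" | tr '[:upper:]' '[:lower:]')    # -> apache
    IMG_NAME="apache-spark-ci-image-sparkr:master-1234567890"   # <name>:<branch>-<run_id>
    IMG_URL="ghcr.io/$REPO_OWNER/$IMG_NAME"
    echo "image_sparkr_url=$IMG_URL"
    # -> image_sparkr_url=ghcr.io/apache/apache-spark-ci-image-sparkr:master-1234567890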
14 changes: 14 additions & 0 deletions .github/workflows/build_infra_images_cache.yml
@@ -29,6 +29,7 @@ on:
- 'dev/infra/Dockerfile'
- 'dev/spark-test-image/docs/Dockerfile'
- 'dev/spark-test-image/lint/Dockerfile'
- 'dev/spark-test-image/sparkr/Dockerfile'
- '.github/workflows/build_infra_images_cache.yml'
# Create infra image when cutting down branches/tags
create:
@@ -88,3 +89,16 @@ jobs:
- name: Image digest (Linter)
if: hashFiles('dev/spark-test-image/lint/Dockerfile') != ''
run: echo ${{ steps.docker_build_lint.outputs.digest }}
- name: Build and push (SparkR)
if: hashFiles('dev/spark-test-image/sparkr/Dockerfile') != ''
id: docker_build_sparkr
uses: docker/build-push-action@v6
with:
context: ./dev/spark-test-image/sparkr/
push: true
tags: ghcr.io/apache/spark/apache-spark-github-action-image-sparkr-cache:${{ github.ref_name }}-static
cache-from: type=registry,ref=ghcr.io/apache/spark/apache-spark-github-action-image-sparkr-cache:${{ github.ref_name }}
cache-to: type=registry,ref=ghcr.io/apache/spark/apache-spark-github-action-image-sparkr-cache:${{ github.ref_name }},mode=max
- name: Image digest (SparkR)
if: hashFiles('dev/spark-test-image/sparkr/Dockerfile') != ''
run: echo ${{ steps.docker_build_sparkr.outputs.digest }}
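For context on the cache settings above: the build-push-action step is roughly equivalent to the following buildx invocation (a sketch, assuming the workflow runs on the master branch so github.ref_name is "master"):

    # mode=max also exports intermediate layers to the registry cache,
    # which is what keeps subsequent CI image builds fast.
    docker buildx build ./dev/spark-test-image/sparkr/ \
      --push \
      --tag ghcr.io/apache/spark/apache-spark-github-action-image-sparkr-cache:master-static \
      --cache-from type=registry,ref=ghcr.io/apache/spark/apache-spark-github-action-image-sparkr-cache:master \
      --cache-to type=registry,ref=ghcr.io/apache/spark/apache-spark-github-action-image-sparkr-cache:master,mode=max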
32 changes: 32 additions & 0 deletions .github/workflows/build_python_3.11_macos.yml
@@ -0,0 +1,32 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

name: "Build / Python-only (master, Python 3.11, MacOS)"

on:
schedule:
- cron: '0 21 * * *'

jobs:
run-build:
permissions:
packages: write
name: Run
uses: ./.github/workflows/python_macos_test.yml
if: github.repository == 'apache/spark'
162 changes: 162 additions & 0 deletions .github/workflows/python_macos_test.yml
@@ -0,0 +1,162 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

name: Build and test PySpark on macOS

on:
workflow_call:
inputs:
java:
required: false
type: string
default: 17
python:
required: false
type: string
default: 3.11
branch:
description: Branch to run the build against
required: false
type: string
default: master
hadoop:
description: Hadoop version to run with. HADOOP_PROFILE environment variable should accept it.
required: false
type: string
default: hadoop3
envs:
description: Additional environment variables to set when running the tests. Should be in JSON format.
required: false
type: string
default: '{}'
jobs:
build:
name: "PySpark test on macos: ${{ matrix.modules }}"
runs-on: macos-15
strategy:
fail-fast: false
matrix:
java:
- ${{ inputs.java }}
python:
- ${{inputs.python}}
modules:
- >-
pyspark-sql, pyspark-resource, pyspark-testing
- >-
pyspark-core, pyspark-errors, pyspark-streaming
- >-
pyspark-mllib, pyspark-ml, pyspark-ml-connect
- >-
pyspark-connect
- >-
pyspark-pandas
- >-
pyspark-pandas-slow
- >-
pyspark-pandas-connect-part0
- >-
pyspark-pandas-connect-part1
- >-
pyspark-pandas-connect-part2
- >-
pyspark-pandas-connect-part3
env:
MODULES_TO_TEST: ${{ matrix.modules }}
PYTHON_TO_TEST: python${{inputs.python}}
HADOOP_PROFILE: ${{ inputs.hadoop }}
HIVE_PROFILE: hive2.3
# GitHub Actions' default miniconda to use in pip packaging test.
CONDA_PREFIX: /usr/share/miniconda
GITHUB_PREV_SHA: ${{ github.event.before }}
SPARK_LOCAL_IP: localhost
SKIP_UNIDOC: true
SKIP_MIMA: true
SKIP_PACKAGING: true
METASPACE_SIZE: 1g
BRANCH: ${{ inputs.branch }}
steps:
- name: Checkout Spark repository
uses: actions/checkout@v4
# In order to fetch changed files
with:
fetch-depth: 0
repository: apache/spark
ref: ${{ inputs.branch }}
- name: Sync the current branch with the latest in Apache Spark
if: github.repository != 'apache/spark'
run: |
echo "APACHE_SPARK_REF=$(git rev-parse HEAD)" >> $GITHUB_ENV
git fetch https://github.com/$GITHUB_REPOSITORY.git ${GITHUB_REF#refs/heads/}
git -c user.name='Apache Spark Test Account' -c user.email='sparktestacc@gmail.com' merge --no-commit --progress --squash FETCH_HEAD
git -c user.name='Apache Spark Test Account' -c user.email='sparktestacc@gmail.com' commit -m "Merged commit" --allow-empty
# Cache local repositories. Note that GitHub Actions cache has a 10G limit.
- name: Cache SBT and Maven
uses: actions/cache@v4
with:
path: |
build/apache-maven-*
build/*.jar
~/.sbt
key: build-${{ hashFiles('**/pom.xml', 'project/build.properties', 'build/mvn', 'build/sbt', 'build/sbt-launch-lib.bash', 'build/spark-build-info') }}
restore-keys: |
build-
- name: Cache Coursier local repository
uses: actions/cache@v4
with:
path: ~/.cache/coursier
key: pyspark-coursier-${{ hashFiles('**/pom.xml', '**/plugins.sbt') }}
restore-keys: |
pyspark-coursier-
- name: Install Java ${{ matrix.java }}
uses: actions/setup-java@v4
with:
distribution: zulu
java-version: ${{ matrix.java }}
- name: Install Python packages (Python ${{matrix.python}})
run: |
python${{matrix.python}} -m pip install --ignore-installed 'blinker>=1.6.2'
python${{matrix.python}} -m pip install --ignore-installed 'six==1.16.0'
python${{matrix.python}} -m pip install numpy 'pyarrow>=15.0.0' 'six==1.16.0' 'pandas==2.2.3' scipy 'plotly>=4.8' 'mlflow>=2.8.1' coverage matplotlib openpyxl 'memory-profiler>=0.61.0' 'scikit-learn>=1.3.2' unittest-xml-reporting && \
python${{matrix.python}} -m pip install 'grpcio==1.67.0' 'grpcio-status==1.67.0' 'protobuf==5.28.3' 'googleapis-common-protos==1.65.0' 'graphviz==0.20.3' && \
python${{matrix.python}} -m pip cache purge && \
python${{matrix.python}} -m pip list
# Run the tests.
- name: Run tests
env: ${{ fromJSON(inputs.envs) }}
run: |
if [[ "$MODULES_TO_TEST" == *"pyspark-errors"* ]]; then
export SKIP_PACKAGING=false
echo "Python Packaging Tests Enabled!"
fi
./dev/run-tests --parallelism 1 --modules "$MODULES_TO_TEST" --python-executables "$PYTHON_TO_TEST"
- name: Upload test results to report
env: ${{ fromJSON(inputs.envs) }}
if: always()
uses: actions/upload-artifact@v4
with:
name: test-results-${{ matrix.modules }}--${{ matrix.java }}-${{ inputs.hadoop }}-hive2.3-${{ env.PYTHON_TO_TEST }}
path: "**/target/test-reports/*.xml"
- name: Upload unit tests log files
env: ${{ fromJSON(inputs.envs) }}
if: ${{ !success() }}
uses: actions/upload-artifact@v4
with:
name: unit-tests-log-${{ matrix.modules }}--${{ matrix.java }}-${{ inputs.hadoop }}-hive2.3-${{ env.PYTHON_TO_TEST }}
path: "**/target/unit-tests.log"
2 changes: 1 addition & 1 deletion assembly/README
@@ -9,4 +9,4 @@ This module is off by default. To activate it specify the profile in the command

If you need to build an assembly for a different version of Hadoop the
hadoop-version system property needs to be set as in this example:
-Dhadoop.version=3.4.0
-Dhadoop.version=3.4.1
2 changes: 1 addition & 1 deletion build/mvn
@@ -56,7 +56,7 @@ install_app() {
local binary="${_DIR}/$6"
local remote_tarball="${mirror_host}/${url_path}${url_query}"
local local_checksum="${local_tarball}.${checksum_suffix}"
local remote_checksum="https://archive.apache.org/dist/${url_path}.${checksum_suffix}"
local remote_checksum="${mirror_host}/${url_path}.${checksum_suffix}${url_query}"

local curl_opts="--retry 3 --silent --show-error -L"
local wget_opts="--no-verbose"
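The net effect of this change (SPARK-50300 in the commit list above) is that the checksum file is now fetched from the same mirror as the tarball rather than always from archive.apache.org. A sketch of the URL composition, with hypothetical values since build/mvn resolves the mirror host at run time:

    mirror_host="https://dlcdn.apache.org"                                  # hypothetical mirror
    url_path="maven/maven-3/3.9.9/binaries/apache-maven-3.9.9-bin.tar.gz"   # hypothetical artifact
    url_query=""
    checksum_suffix="sha512"
    remote_tarball="${mirror_host}/${url_path}${url_query}"
    remote_checksum="${mirror_host}/${url_path}.${checksum_suffix}${url_query}"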
@@ -1023,12 +1023,14 @@ protected Collation buildCollation() {

@Override
protected CollationMeta buildCollationMeta() {
String language = ICULocaleMap.get(locale).getDisplayLanguage();
String country = ICULocaleMap.get(locale).getDisplayCountry();
return new CollationMeta(
CATALOG,
SCHEMA,
normalizedCollationName(),
ICULocaleMap.get(locale).getDisplayLanguage(),
ICULocaleMap.get(locale).getDisplayCountry(),
language.isEmpty() ? null : language,
country.isEmpty() ? null : country,
VersionInfo.ICU_VERSION.toString(),
COLLATION_PAD_ATTRIBUTE,
accentSensitivity == AccentSensitivity.AS,