Draft
Changes from all commits (68 commits)
5dbb271  MDT Test framework without writing data files (Dec 23, 2025)
677aa96  MDT Test framework - using filterFileSlics for colstats (Jan 2, 2026)
7294abc  MDT Test framework - using createCommitMetadata, tagLocation and file… (Jan 2, 2026)
c688bf1  MDT Test framework - using createCommitMetadata, tagLocation and file… (Jan 2, 2026)
5ed79d6  MDT Test framework - initializing files partition before commit (Jan 2, 2026)
68a9fab  MDT Test framework - bug fixes with files partition (Jan 3, 2026)
a8f01d3  MDT Test framework - Added writing colStats to same upsertPreppedReco… (Jan 3, 2026)
328065f  MDT Test framework - Added read path changes using filterFileSlices (Jan 5, 2026)
3fc0d7c  MDT Test framework - create empty parquet data files from commit meta… (Jan 6, 2026)
718056e  MDT Test framework - disable partition stats and reconcile markers (Jan 7, 2026)
a49beee  Add spark context to the HoodieMDTStats class (vamsikarnika, Jan 7, 2026)
8ebdb0a  Modify colsToIndex config to take column names (vamsikarnika, Jan 7, 2026)
4c2a7e6  Fix Partition field config (vamsikarnika, Jan 7, 2026)
ef17392  Add md file on how to use the tool (vamsikarnika, Jan 7, 2026)
24f41a6  Fix usage file (vamsikarnika, Jan 7, 2026)
6807f9c  Add config for enabling partition stats (vamsikarnika, Jan 7, 2026)
d68824f  Creating files using engine context (vamsikarnika, Jan 7, 2026)
fe41155  parallelize the empty parquet file creation based on no of partitions (vamsikarnika, Jan 7, 2026)
5e05e80  Writes files through multiple commits (vamsikarnika, Jan 7, 2026)
8abc18d  Bulk Insert Files & Column stats (vamsikarnika, Jan 8, 2026)
edf9e77  test: Fix flaky test ITTestHoodieFlinkCompactor#testHoodieFlinkCompac… (cshuo, Dec 24, 2025)
62b982c  chore: Test Runtime Improvements: lower number of files, parallelize … (the-other-tim-brown, Dec 24, 2025)
e95b664  feat(schema): phase 17 - Remove AvroSchemaUtils usage (part 2) (#17581) (voonhous, Dec 24, 2025)
8c52833  test(ci): Add JVM tuning for Java 11+ test execution to reduce CI run… (yihua, Dec 25, 2025)
8790698  refactor: Add Lombok annotations to hudi-kafka-connect (#17715) (voonhous, Dec 26, 2025)
42a22f2  refactor: Add Lombok annotations to hudi-io module (#17685) (voonhous, Dec 26, 2025)
f4dc9f3  test: Fix flaky test `testLatestCheckpointCarryOverWithMultipleWriter… (yihua, Dec 26, 2025)
327d10f  fix: Fix ConcurrentModificationException in RocksDBDAO when accessed … (yihua, Dec 26, 2025)
1a81593  feat: Add HoodieSparkLanceReader for reading lance files to internal … (rahil-c, Dec 26, 2025)
f07d3a7  refactor: Add Lombok annotations to hudi-platform-service (#17719) (voonhous, Dec 26, 2025)
23c4ea6  chore(ci): Use non-archive repo and upgrade maven binary (#17723) (voonhous, Dec 26, 2025)
99b91d2  feat(schema): Migrate clustering operations to use HoodieSchema (#17691) (the-other-tim-brown, Dec 26, 2025)
50ced87  refactor: Add Lombok annotations to hudi-spark,hudi-spark-common (#17… (voonhous, Dec 27, 2025)
ef976ba  chore(ci): Retry spark downloads (#17732) (the-other-tim-brown, Dec 29, 2025)
180bc88  remove arg line overrides (#17741) (the-other-tim-brown, Dec 29, 2025)
7282ffa  perf: optimize rollback validation by checking lazy rollback policy b… (suryaprasanna, Dec 30, 2025)
fc1d0b8  perf: use shallow projection where applicable (#17682) (kamronis, Dec 30, 2025)
729b8d6  chore(ci): upgrade to newer plugins and new test dependencies that re… (the-other-tim-brown, Dec 30, 2025)
e812533  docs: Improve the annotation format of examples (#17749) (huangxiaopingRD, Dec 30, 2025)
4cd05b0  refactor: Replace HoodieHadoopStorage instantiation with HoodieStorag… (KiteSoar, Dec 30, 2025)
b9bd787  feat: Implement SparkColumnarFileReader for Datasource integration wi… (rahil-c, Dec 30, 2025)
f613407  chore(ci): add cache arg (#17738) (the-other-tim-brown, Dec 30, 2025)
bdb3d15  feat(schema): Migrate hudi spark client to use HoodieSchema (#17743) (rahil-c, Dec 31, 2025)
298f112  Remove HoodieAvroUtils from hudi-client-common (#17599) (voonhous, Dec 31, 2025)
282f9ae  feat: Support COW bulk-insert, insert, upsert, delete works with spar… (rahil-c, Dec 31, 2025)
65e5c46  chore(ci): remove repeated checkout (#17755) (the-other-tim-brown, Dec 31, 2025)
961e8cc  chore(ci): Hudi-utilities test improvements (#17758) (the-other-tim-brown, Jan 1, 2026)
99e6fc3  feat(schema): Migrate spark schema conversion utils to their HoodieSc… (the-other-tim-brown, Jan 2, 2026)
5ece39d  feat(schema): Migrate json and proto converters to use HoodieSchema (… (the-other-tim-brown, Jan 2, 2026)
f6a348d  refactor: Remove Builder from DynamoDbBasedLockConfig (#17780) (voonhous, Jan 5, 2026)
ce6e364  perf: Avoid unnecessary timeline loading when create `HoodieFlinkTabl… (TheR1sing3un, Jan 5, 2026)
96c6c8f  style: Correct wrong apache license (#17790) (huangxiaopingRD, Jan 6, 2026)
8fad1e7  fix(metadata): propagate timeline server config from main dataset to … (prashantwason, Jan 6, 2026)
eabaaf6  test: Fix flaky test in TestHoodieClientMultiWriter (#17793) (yihua, Jan 7, 2026)
01e8fd4  fix: Fix the timeline compaction blocked caused by the archived file … (TheR1sing3un, Jan 7, 2026)
36d57ee  fix: Handle hudi table reads when databaseName is not set during init… (vinishjail97, Jan 7, 2026)
0f0358f  feat(schema): Phase 18 - HoodieAvroUtils removal (Part 2) (#17763) (voonhous, Jan 8, 2026)
02a1abe  feat: Support splitting tasks based on file size when reading the cow… (TheR1sing3un, Jan 8, 2026)
700e99f  fix: Add complex types testing for lance (#17769) (rahil-c, Jan 8, 2026)
6f40dd6  refactor: Add Lombok annotations to hudi-timeline-service (#17742) (voonhous, Jan 8, 2026)
c3d8343  Revert "fix: Partition stats should be controlled using column stats … (Jan 8, 2026)
c4fc478  MDT Test framework - disable partition stats and reconcile markers (Jan 7, 2026)
0279763  Add config for enabling partition stats (vamsikarnika, Jan 7, 2026)
b8a48d3  MDT Test framework - rename files and disabling partition stats (Jan 8, 2026)
36e800c  MDT Test framework - rename files and disabling partition stats (Jan 8, 2026)
f08c349  MDT Test framework - removing disabling partition stats code (Jan 8, 2026)
064ee28  MDT Test framework - Setting partition stats as false in table config (Jan 8, 2026)
7dd621a  MDT Test framework - Setting partition stats as false in table config (Jan 8, 2026)
114 changes: 53 additions & 61 deletions .github/workflows/bot.yml

Large diffs are not rendered by default.

10 changes: 0 additions & 10 deletions LICENSE
@@ -358,13 +358,3 @@ Copyright (c) 2005, European Commission project OneLab under contract 034819 (ht

Home page: https://github.com/streamsets/datacollector-oss
License: http://www.apache.org/licenses/LICENSE-2.0

--------------------------------------------------------------------------------
-
-This product includes code from Apache Avro
-
-* org.apache.hudi.avro.JsonEncoder adapted from org.apache.avro.io.JsonEncoder
-
-Copyright: 2010-2019 The Apache Software Foundation
-Home page: https://avro.apache.org
-License: http://www.apache.org/licenses/LICENSE-2.0
34 changes: 18 additions & 16 deletions azure-pipelines-20230430.yml
@@ -104,8 +104,8 @@ variables:
PLUGIN_OPTS: '-Dcheckstyle.skip=true -Drat.skip=true -ntp -B -V -Pwarn-log -Dorg.slf4j.simpleLogger.log.org.apache.maven.plugins.shade=warn -Dorg.slf4j.simpleLogger.log.org.apache.maven.plugins.dependency=warn'
MVN_OPTS_INSTALL: '-T 3 -Phudi-platform-service -DskipTests $(BUILD_PROFILES) $(PLUGIN_OPTS) -Dmaven.wagon.httpconnectionManager.ttlSeconds=25 -Dmaven.wagon.http.retryHandler.count=5'
MVN_OPTS_TEST: '-fae -Pwarn-log $(BUILD_PROFILES) $(PLUGIN_OPTS)'
-JAVA_MVN_TEST_FILTER: '-DwildcardSuites=skipScalaTests -DfailIfNoTests=false'
-SCALA_MVN_TEST_FILTER: '-Dtest=skipJavaTests -DfailIfNoTests=false'
+JAVA_MVN_TEST_FILTER: '-DwildcardSuites=skipScalaTests -Dsurefire.failIfNoSpecifiedTests=false'
+SCALA_MVN_TEST_FILTER: '-Dtest=skipJavaTests -Dsurefire.failIfNoSpecifiedTests=false'
JOB3456_MODULES: ${{ join(',',parameters.job3456UTModules) }}
JAVA_FUNCTIONAL_PACKAGE_TEST_FILTER: '**/org/apache/hudi/functional/**/*'
MVN_ARG_FUNCTIONAL_PACKAGE_TEST: "-Dtest=\"$(JAVA_FUNCTIONAL_PACKAGE_TEST_FILTER)\""
@@ -126,7 +126,7 @@ stages:
jobs:
- job: UT_FT_1
displayName: UT hudi-hadoop-common & UT FT client/spark-client
-timeoutInMinutes: '90'
+timeoutInMinutes: '120'
steps:
- task: Maven@4
displayName: maven install
@@ -180,7 +180,7 @@
displayName: Top 100 long-running testcases
- job: UT_FT_2
displayName: FTA hudi-spark
-timeoutInMinutes: '90'
+timeoutInMinutes: '120'
steps:
- task: Maven@4
displayName: maven install
@@ -250,7 +250,7 @@
displayName: Top 100 long-running testcases
- job: UT_FT_4
displayName: UT spark-datasource Java Test 2
-timeoutInMinutes: '90'
+timeoutInMinutes: '120'
steps:
- task: Maven@4
displayName: maven install
@@ -285,7 +285,7 @@
displayName: Top 100 long-running testcases
- job: UT_FT_5
displayName: UT spark-datasource DML
-timeoutInMinutes: '90'
+timeoutInMinutes: '120'
steps:
- task: Maven@4
displayName: maven install
@@ -320,7 +320,7 @@
displayName: Top 100 long-running testcases
- job: UT_FT_6
displayName: UT spark-datasource DDL & Others
-timeoutInMinutes: '90'
+timeoutInMinutes: '120'
steps:
- task: Maven@4
displayName: maven install
@@ -355,7 +355,7 @@
displayName: Top 100 long-running testcases
- job: UT_FT_7
displayName: UT Hudi Streamer & FT utilities
-timeoutInMinutes: '90'
+timeoutInMinutes: '120'
steps:
- task: Docker@2
displayName: "login to docker hub"
@@ -378,10 +378,11 @@
command: 'run'
arguments: >
-v $(Build.SourcesDirectory):/hudi
+-v /var/run/docker.sock:/var/run/docker.sock
-i docker.io/apachehudi/hudi-ci-bundle-validation-base:$(Build.BuildId)
/bin/bash -c "mvn clean install $(MVN_OPTS_INSTALL) -Phudi-platform-service -Pthrift-gen-source -pl hudi-utilities -am
-&& mvn test $(MVN_OPTS_TEST) -Punit-tests $(JACOCO_AGENT_DESTFILE1_ARG) -Dtest="TestHoodie*" -DfailIfNoTests=false -DargLine="-Xmx4g" -pl hudi-utilities
-&& mvn test $(MVN_OPTS_TEST) -Pfunctional-tests $(JACOCO_AGENT_DESTFILE2_ARG) -DfailIfNoTests=false -DargLine="-Xmx4g" -pl hudi-utilities"
+&& mvn test $(MVN_OPTS_TEST) -Punit-tests $(JACOCO_AGENT_DESTFILE1_ARG) -Dtest="TestHoodie*" -Dsurefire.failIfNoSpecifiedTests=false -pl hudi-utilities
+&& mvn test $(MVN_OPTS_TEST) -Pfunctional-tests $(JACOCO_AGENT_DESTFILE2_ARG) -Dsurefire.failIfNoSpecifiedTests=false -pl hudi-utilities"
- task: PublishTestResults@2
displayName: 'Publish Test Results'
inputs:
@@ -404,7 +405,7 @@
displayName: Top 100 long-running testcases
- job: UT_FT_8
displayName: UT FT Spark and SQL (additional)
-timeoutInMinutes: '110'
+timeoutInMinutes: '120'
steps:
- task: Maven@4
displayName: maven install
@@ -457,7 +458,7 @@
displayName: Top 100 long-running testcases
- job: UT_FT_9
displayName: FT spark 2
-timeoutInMinutes: '90'
+timeoutInMinutes: '120'
steps:
- task: Maven@4
displayName: maven install
@@ -492,7 +493,7 @@
displayName: Top 100 long-running testcases
- job: UT_FT_10
displayName: UT FT common & other modules
-timeoutInMinutes: '90'
+timeoutInMinutes: '120'
steps:
- task: Docker@2
displayName: "login to docker hub"
@@ -515,11 +516,12 @@
command: 'run'
arguments: >
-v $(Build.SourcesDirectory):/hudi
+-v /var/run/docker.sock:/var/run/docker.sock
-i docker.io/apachehudi/hudi-ci-bundle-validation-base:$(Build.BuildId)
/bin/bash -c "mvn clean install $(MVN_OPTS_INSTALL) -Phudi-platform-service -Pthrift-gen-source
-&& mvn test $(MVN_OPTS_TEST) -Punit-tests -DfailIfNoTests=false -DargLine="-Xmx4g" $(JACOCO_AGENT_DESTFILE1_ARG) -pl $(JOB10_UT_MODULES)
-&& mvn test $(MVN_OPTS_TEST) -Punit-tests $(JACOCO_AGENT_DESTFILE2_ARG) -Dtest="!TestHoodie*" -DfailIfNoTests=false -DargLine="-Xmx4g" -pl hudi-utilities
-&& mvn test $(MVN_OPTS_TEST) -Pfunctional-tests -DfailIfNoTests=false -DargLine="-Xmx4g" $(JACOCO_AGENT_DESTFILE3_ARG) -pl $(JOB10_FT_MODULES)"
+&& mvn test $(MVN_OPTS_TEST) -Punit-tests -Dsurefire.failIfNoSpecifiedTests=false $(JACOCO_AGENT_DESTFILE1_ARG) -pl $(JOB10_UT_MODULES)
+&& mvn test $(MVN_OPTS_TEST) -Punit-tests $(JACOCO_AGENT_DESTFILE2_ARG) -Dtest="!TestHoodie*" -Dsurefire.failIfNoSpecifiedTests=false -pl hudi-utilities
+&& mvn test $(MVN_OPTS_TEST) -Pfunctional-tests -Dsurefire.failIfNoSpecifiedTests=false $(JACOCO_AGENT_DESTFILE3_ARG) -pl $(JOB10_FT_MODULES)"
- task: PublishTestResults@2
displayName: 'Publish Test Results'
inputs:
8 changes: 4 additions & 4 deletions hudi-aws/pom.xml
@@ -247,13 +247,13 @@
<scope>test</scope>
</dependency>
<dependency>
-  <groupId>org.mockito</groupId>
-  <artifactId>mockito-inline</artifactId>
+  <groupId>com.esotericsoftware</groupId>
+  <artifactId>kryo-shaded</artifactId>
<scope>test</scope>
</dependency>
<dependency>
-  <groupId>com.esotericsoftware</groupId>
-  <artifactId>kryo-shaded</artifactId>
+  <groupId>org.testcontainers</groupId>
+  <artifactId>localstack</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
@@ -46,8 +46,7 @@ public DynamoDBBasedLockProvider(final LockConfiguration lockConfiguration, fina

@Override
public String getDynamoDBPartitionKey(LockConfiguration lockConfiguration) {
-    DynamoDbBasedLockConfig config = new DynamoDbBasedLockConfig.Builder()
-        .fromProperties(lockConfiguration.getConfig()).build();
+    DynamoDbBasedLockConfig config = DynamoDbBasedLockConfig.from(lockConfiguration.getConfig());
ValidationUtils.checkArgument(
config.contains(DYNAMODB_LOCK_PARTITION_KEY),
"Config key is not found: " + DYNAMODB_LOCK_PARTITION_KEY.key());
@@ -80,9 +80,7 @@ public abstract class DynamoDBBasedLockProviderBase implements LockProvider<Lock
protected volatile LockItem lock;

protected DynamoDBBasedLockProviderBase(final LockConfiguration lockConfiguration, final StorageConfiguration<?> conf, DynamoDbClient dynamoDB) {
-    this.dynamoDbBasedLockConfig = new DynamoDbBasedLockConfig.Builder()
-        .fromProperties(lockConfiguration.getConfig())
-        .build();
+    this.dynamoDbBasedLockConfig = DynamoDbBasedLockConfig.from(lockConfiguration.getConfig());
this.tableName = dynamoDbBasedLockConfig.getString(DynamoDbBasedLockConfig.DYNAMODB_LOCK_TABLE_NAME);
long leaseDuration = dynamoDbBasedLockConfig.getInt(DynamoDbBasedLockConfig.LOCK_ACQUIRE_WAIT_TIMEOUT_MS_PROP_KEY);
dynamoDBPartitionKey = getDynamoDBPartitionKey(lockConfiguration);
@@ -28,6 +28,8 @@
import org.apache.hudi.common.util.Option;
import org.apache.hudi.common.util.ValidationUtils;

+import lombok.AccessLevel;
+import lombok.NoArgsConstructor;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.regions.RegionMetadata;
import software.amazon.awssdk.services.dynamodb.model.BillingMode;
@@ -41,12 +43,9 @@
description = "Configs that control DynamoDB based locking mechanisms required for concurrency control "
+ " between writers to a Hudi table. Concurrency between Hudi's own table services "
+ " are auto managed internally.")
+@NoArgsConstructor(access = AccessLevel.PACKAGE)
public class DynamoDbBasedLockConfig extends HoodieConfig {

-  public static DynamoDbBasedLockConfig.Builder newBuilder() {
-    return new DynamoDbBasedLockConfig.Builder();
-  }

// configs for DynamoDb based locks
public static final String DYNAMODB_BASED_LOCK_PROPERTY_PREFIX = LockConfiguration.LOCK_PREFIX + "dynamodb.";

Expand Down Expand Up @@ -132,31 +131,12 @@ public static DynamoDbBasedLockConfig.Builder newBuilder() {
.sinceVersion("0.10.0")
.withDocumentation("Lock Acquire Wait Timeout in milliseconds");

-  /**
-   * Builder for {@link DynamoDbBasedLockConfig}.
-   */
-  public static class Builder {
-    private final DynamoDbBasedLockConfig lockConfig = new DynamoDbBasedLockConfig();
-
-    public DynamoDbBasedLockConfig build() {
-      lockConfig.setDefaults(DynamoDbBasedLockConfig.class.getName());
-      checkRequiredProps();
-      return lockConfig;
-    }
-
-    public Builder fromProperties(TypedProperties props) {
-      lockConfig.getProps().putAll(props);
-      return this;
-    }
-
-    private void checkRequiredProps() {
-      String errorMsg = "Config key is not found: ";
-      ValidationUtils.checkArgument(
-          lockConfig.contains(DYNAMODB_LOCK_TABLE_NAME.key()),
-          errorMsg + DYNAMODB_LOCK_TABLE_NAME.key());
-      ValidationUtils.checkArgument(
-          lockConfig.contains(DYNAMODB_LOCK_REGION.key()),
-          errorMsg + DYNAMODB_LOCK_REGION.key());
-    }
-  }
+  public static DynamoDbBasedLockConfig from(TypedProperties properties) {
+    DynamoDbBasedLockConfig config = new DynamoDbBasedLockConfig();
+    config.getProps().putAll(properties);
+    config.setDefaults(DynamoDbBasedLockConfig.class.getName());
+    ValidationUtils.checkArgument(config.contains(DYNAMODB_LOCK_TABLE_NAME.key()), "Config key is not found: " + DYNAMODB_LOCK_TABLE_NAME.key());
+    ValidationUtils.checkArgument(config.contains(DYNAMODB_LOCK_REGION.key()), "Config key is not found: " + DYNAMODB_LOCK_REGION.key());
+    return config;
+  }
}
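Call sites migrate mechanically from the removed Builder to the new static factory. A minimal sketch of a caller, assuming the standard Hudi import paths and the `hoodie.write.lock.dynamodb.table`/`hoodie.write.lock.dynamodb.region` keys that the validation above requires (the key literals and values here are illustrative assumptions, not taken from this diff):

import org.apache.hudi.common.config.TypedProperties;
import org.apache.hudi.config.DynamoDbBasedLockConfig;

public class LockConfigExample {
  public static void main(String[] args) {
    TypedProperties props = new TypedProperties();
    // Both keys are mandatory; from() rejects the properties if either is missing.
    props.setProperty("hoodie.write.lock.dynamodb.table", "hudi-locks");
    props.setProperty("hoodie.write.lock.dynamodb.region", "us-east-1");
    // Before: new DynamoDbBasedLockConfig.Builder().fromProperties(props).build()
    DynamoDbBasedLockConfig config = DynamoDbBasedLockConfig.from(props);
    System.out.println(config.getString(DynamoDbBasedLockConfig.DYNAMODB_LOCK_TABLE_NAME));
  }
}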
6 changes: 4 additions & 2 deletions hudi-cli/src/main/java/org/apache/hudi/cli/HoodieCLI.java
@@ -26,7 +26,7 @@
import org.apache.hudi.hadoop.fs.HadoopFSUtils;
import org.apache.hudi.storage.HoodieStorage;
import org.apache.hudi.storage.StorageConfiguration;
-import org.apache.hudi.storage.hadoop.HoodieHadoopStorage;
+import org.apache.hudi.storage.HoodieStorageUtils;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
@@ -84,7 +84,9 @@ public static void initFS(boolean force) throws IOException {
if (storage == null || force) {
storage = (tableMetadata != null)
? tableMetadata.getStorage()
-        : new HoodieHadoopStorage(FileSystem.get(conf.unwrap()));
+        : HoodieStorageUtils.getStorage(
+            HadoopFSUtils.convertToStoragePath(FileSystem.get(conf.unwrap()).getWorkingDirectory()),
+            conf);
}
}

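The same `HoodieHadoopStorage`-to-`HoodieStorageUtils` substitution recurs across this PR. A minimal sketch of the pattern, using only the utility calls that appear in these diffs (the wrapper class, method, and path values are illustrative assumptions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hudi.hadoop.fs.HadoopFSUtils;
import org.apache.hudi.storage.HoodieStorage;
import org.apache.hudi.storage.HoodieStorageUtils;
import org.apache.hudi.storage.StorageConfiguration;
import org.apache.hudi.storage.StoragePath;

public class StorageExample {
  // Derive a StoragePath and StorageConfiguration from Hadoop types, then let
  // HoodieStorageUtils choose the storage implementation instead of
  // instantiating HoodieHadoopStorage directly.
  static HoodieStorage storageFor(String tablePath, Configuration hadoopConf) {
    StoragePath path = HadoopFSUtils.convertToStoragePath(new Path(tablePath));
    StorageConfiguration<?> conf = HadoopFSUtils.getStorageConf(hadoopConf);
    return HoodieStorageUtils.getStorage(path, conf);
  }

  public static void main(String[] args) {
    HoodieStorage storage = storageFor("/tmp/hudi_table", new Configuration());
    System.out.println(storage.getClass().getSimpleName());
  }
}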
@@ -41,7 +41,7 @@
import org.apache.hudi.hadoop.fs.HadoopFSUtils;
import org.apache.hudi.keygen.SimpleKeyGenerator;
import org.apache.hudi.storage.StorageConfiguration;
-import org.apache.hudi.storage.hadoop.HoodieHadoopStorage;
+import org.apache.hudi.storage.HoodieStorageUtils;
import org.apache.hudi.testutils.Assertions;

import org.apache.avro.generic.GenericRecord;
@@ -141,7 +141,10 @@ public void testAddPartitionMetaWithDryRun() throws IOException {
assertTrue(ShellEvaluationResultUtil.isSuccess(result));

// expected all 'No'.
-    String[][] rows = FSUtils.getAllPartitionFoldersThreeLevelsDown(new HoodieHadoopStorage(fs), tablePath)
+    String[][] rows = FSUtils.getAllPartitionFoldersThreeLevelsDown(
+        HoodieStorageUtils.getStorage(
+            HadoopFSUtils.convertToStoragePath(new Path(tablePath)),
+            HadoopFSUtils.getStorageConf(fs.getConf())), tablePath)
.stream()
.map(partition -> new String[] {partition, "No", "None"})
.toArray(String[][]::new);
@@ -171,7 +174,10 @@ public void testAddPartitionMetaWithRealRun() throws IOException {
Object result = shell.evaluate(() -> "repair addpartitionmeta --dryrun false");
assertTrue(ShellEvaluationResultUtil.isSuccess(result));

-    List<String> paths = FSUtils.getAllPartitionFoldersThreeLevelsDown(new HoodieHadoopStorage(fs), tablePath);
+    List<String> paths = FSUtils.getAllPartitionFoldersThreeLevelsDown(
+        HoodieStorageUtils.getStorage(
+            HadoopFSUtils.convertToStoragePath(new Path(tablePath)),
+            HadoopFSUtils.getStorageConf(fs.getConf())), tablePath);
// after dry run, the action will be 'Repaired'
String[][] rows = paths.stream()
.map(partition -> new String[] {partition, "No", "Repaired"})
@@ -18,7 +18,6 @@

package org.apache.hudi.cli.commands;

-import org.apache.hudi.avro.HoodieAvroUtils;
import org.apache.hudi.cli.HoodieCLI;
import org.apache.hudi.cli.functional.CLIFunctionalTestHarness;
import org.apache.hudi.cli.testutils.HoodieTestCommitMetadataGenerator;
@@ -27,6 +26,8 @@
import org.apache.hudi.common.fs.ConsistencyGuardConfig;
import org.apache.hudi.common.model.HoodieCommitMetadata;
import org.apache.hudi.common.model.HoodieTableType;
+import org.apache.hudi.common.schema.HoodieSchema;
+import org.apache.hudi.common.schema.HoodieSchemaUtils;
import org.apache.hudi.common.table.HoodieTableConfig;
import org.apache.hudi.common.table.HoodieTableMetaClient;
import org.apache.hudi.common.table.HoodieTableVersion;
@@ -36,7 +37,6 @@
import org.apache.hudi.common.util.Option;
import org.apache.hudi.storage.StoragePath;

-import org.apache.avro.Schema;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.jupiter.api.BeforeEach;
@@ -259,18 +259,18 @@ public void testFetchTableSchema() throws Exception {
assertTrue(ShellEvaluationResultUtil.isSuccess(result));

String actualSchemaStr = result.toString().substring(result.toString().indexOf("{"));
-    Schema actualSchema = new Schema.Parser().parse(actualSchemaStr);
+    HoodieSchema actualSchema = HoodieSchema.parse(actualSchemaStr);

-    Schema expectedSchema = new Schema.Parser().parse(schemaStr);
-    expectedSchema = HoodieAvroUtils.addMetadataFields(expectedSchema);
+    HoodieSchema expectedSchema = HoodieSchema.parse(schemaStr);
+    expectedSchema = HoodieSchemaUtils.addMetadataFields(expectedSchema);
assertEquals(actualSchema, expectedSchema);

File file = File.createTempFile("temp", null);
result = shell.evaluate(() -> "fetch table schema --outputFilePath " + file.getAbsolutePath());
assertTrue(ShellEvaluationResultUtil.isSuccess(result));

actualSchemaStr = getFileContent(file.getAbsolutePath());
-    actualSchema = new Schema.Parser().parse(actualSchemaStr);
+    actualSchema = HoodieSchema.parse(actualSchemaStr);
assertEquals(actualSchema, expectedSchema);
}

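This test shows the recurring Avro-to-HoodieSchema migration in the schema phases: `new Schema.Parser().parse(...)` becomes `HoodieSchema.parse(...)`, and `HoodieAvroUtils.addMetadataFields(...)` becomes `HoodieSchemaUtils.addMetadataFields(...)`. A minimal sketch of the new pattern (the schema literal is illustrative, not the one this test uses):

import org.apache.hudi.common.schema.HoodieSchema;
import org.apache.hudi.common.schema.HoodieSchemaUtils;

public class SchemaExample {
  public static void main(String[] args) {
    String schemaStr = "{\"type\":\"record\",\"name\":\"Rec\",\"fields\":"
        + "[{\"name\":\"id\",\"type\":\"string\"}]}";
    // Parse with the engine-neutral wrapper instead of Avro's Schema.Parser.
    HoodieSchema schema = HoodieSchema.parse(schemaStr);
    // Append Hudi's meta fields (_hoodie_commit_time, etc.), as the test expects.
    HoodieSchema withMeta = HoodieSchemaUtils.addMetadataFields(schema);
    System.out.println(withMeta);
  }
}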
@@ -104,7 +104,7 @@ public void init() throws Exception {
"", HoodieTableVersion.current().versionCode(),
"org.apache.hudi.common.model.HoodieAvroPayload");

-    HoodieSparkWriteableTestTable cowTable = HoodieSparkWriteableTestTable.of(HoodieCLI.getTableMetaClient(), schema.toAvroSchema());
+    HoodieSparkWriteableTestTable cowTable = HoodieSparkWriteableTestTable.of(HoodieCLI.getTableMetaClient(), schema);

cowTable.addCommit("20160401010101")
.withInserts(HoodieTestDataGenerator.DEFAULT_FIRST_PARTITION_PATH, "1", hoodieRecords1)
@@ -127,7 +127,7 @@
morTablePath, "mor_table", HoodieTableType.MERGE_ON_READ.name(),
"", HoodieTableVersion.current().versionCode(),
"org.apache.hudi.common.model.HoodieAvroPayload");
-    HoodieSparkWriteableTestTable morTable = HoodieSparkWriteableTestTable.of(HoodieCLI.getTableMetaClient(), schema.toAvroSchema());
+    HoodieSparkWriteableTestTable morTable = HoodieSparkWriteableTestTable.of(HoodieCLI.getTableMetaClient(), schema);

morTable.addDeltaCommit("20160401010101");
morTable.withInserts(HoodieTestDataGenerator.DEFAULT_FIRST_PARTITION_PATH, "1", hoodieRecords1)
@@ -151,7 +151,7 @@
"", HoodieTableVersion.current().versionCode(),
"org.apache.hudi.common.model.HoodieAvroPayload");

-    HoodieSparkWriteableTestTable cowNonPartitionedTable = HoodieSparkWriteableTestTable.of(HoodieCLI.getTableMetaClient(), schema.toAvroSchema());
+    HoodieSparkWriteableTestTable cowNonPartitionedTable = HoodieSparkWriteableTestTable.of(HoodieCLI.getTableMetaClient(), schema);

cowNonPartitionedTable.addCommit("20160401010101")
.withInserts(HoodieTestDataGenerator.NO_PARTITION_PATH, "1", hoodieRecords1)
5 changes: 0 additions & 5 deletions hudi-client/hudi-client-common/pom.xml
@@ -187,11 +187,6 @@
</exclusion>
</exclusions>
</dependency>
-    <dependency>
-      <groupId>org.mockito</groupId>
-      <artifactId>mockito-inline</artifactId>
-      <scope>test</scope>
-    </dependency>
</dependencies>

<build>