
Commit 0ddab9c

fix default value
1 parent 4e9cd2e commit 0ddab9c

File tree

2 files changed: +4 −4 lines changed

hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java

Lines changed: 1 addition & 1 deletion

@@ -432,7 +432,7 @@ interface ByteArrayManager {
   interface ECRedunency {
     String DFS_CLIENT_EC_WRITE_FAILED_BLOCKS_TOLERATED =
         "dfs.client.ec.write.failed.blocks.tolerated";
-    int DFS_CLIENT_EC_WRITE_FAILED_BLOCKS_TOLERATED_DEFAILT = Integer.MAX_VALUE;
+    int DFS_CLIENT_EC_WRITE_FAILED_BLOCKS_TOLERATED_DEFAILT = -1;
   }
 }
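A minimal sketch of how a client could interpret the new default. This is not the actual Hadoop implementation; the class and method names below are hypothetical, and the semantics are assumed from the commit's description text ("the default value is -1 which means the parity block number of the specified ec policy"):

```java
// Hypothetical sketch (not Hadoop's real code): resolving
// dfs.client.ec.write.failed.blocks.tolerated after this commit.
public class EcToleratedFailures {

    // Mirrors the new default from HdfsClientConfigKeys (assumed semantics).
    static final int DEFAULT_TOLERATED = -1;

    // Resolve the effective tolerated-failure count for an EC policy.
    static int resolve(int configured, int numParityBlocks) {
        if (configured < 0) {
            // -1 means "fall back to the parity block number of the policy".
            return numParityBlocks;
        }
        // Tolerating more failures than there are parity blocks cannot help,
        // so clamp explicit values into [0, numParityBlocks].
        return Math.min(configured, numParityBlocks);
    }

    public static void main(String[] args) {
        // RS-6-3-1024k: 6 data blocks, 3 parity blocks.
        System.out.println(resolve(DEFAULT_TOLERATED, 3)); // 3
        System.out.println(resolve(1, 3));                 // 1
        System.out.println(resolve(10, 3));                // 3
    }
}
```

With the old default of `Integer.MAX_VALUE`, a naive clamp like the one above would silently tolerate every parity block failing; a sentinel of -1 makes the "derive from the policy" case explicit.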

hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml

Lines changed: 3 additions & 3 deletions

@@ -3925,14 +3925,14 @@
 
 <property>
   <name>dfs.client.ec.write.failed.blocks.tolerated</name>
-  <value>2147483647</value>
+  <value>-1</value>
   <description>
     Provide extra tolerated failed streamer for ec policy to prevent
     the potential data loss. For example, if we use RS-6-3-1024K ec policy.
     We can write successfully when there are 3 failure streamers. But if one of the six
     replicas lost during reconstruction, we may lose the data forever.
-    It should better configured between [0, numParityBlocks], the default value is
-    the parity block number of the specified ec policy we are using.
+    It should better configured between [0, numParityBlocks], the default value is -1 which
+    means the parity block number of the specified ec policy we are using.
   </description>
 </property>
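An operator who wants a stricter setting than the policy-derived default could override the key in their own configuration. The snippet below is a hypothetical hdfs-site.xml fragment, not part of the commit; per the description above, a value in [0, numParityBlocks] is sensible (so 0 to 3 for RS-6-3-1024k):

```xml
<!-- Hypothetical hdfs-site.xml override: with RS-6-3-1024k (3 parity
     blocks), tolerate at most one failed streamer per EC write instead
     of the policy-derived default of 3. -->
<property>
  <name>dfs.client.ec.write.failed.blocks.tolerated</name>
  <value>1</value>
</property>
```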
