2 files changed, +4 −4 lines changed

- hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client
- hadoop-hdfs/src/main/resources

@@ -432,7 +432,7 @@ interface ByteArrayManager {
   interface ECRedunency {
     String DFS_CLIENT_EC_WRITE_FAILED_BLOCKS_TOLERATED =
         "dfs.client.ec.write.failed.blocks.tolerated";
-    int DFS_CLIENT_EC_WRITE_FAILED_BLOCKS_TOLERATED_DEFAILT = Integer.MAX_VALUE;
+    int DFS_CLIENT_EC_WRITE_FAILED_BLOCKS_TOLERATED_DEFAILT = -1;
   }
 }
@@ -3925,14 +3925,14 @@

 <property>
   <name>dfs.client.ec.write.failed.blocks.tolerated</name>
-  <value>2147483647</value>
+  <value>-1</value>
   <description>
     Provide extra tolerated failed streamer for ec policy to prevent
     the potential data loss. For example, if we use RS-6-3-1024K ec policy.
     We can write successfully when there are 3 failure streamers. But if one of the six
     replicas lost during reconstruction, we may lose the data forever.
-    It should better configured between [0, numParityBlocks], the default value is
-    the parity block number of the specified ec policy we are using.
+    It is best configured within [0, numParityBlocks]; the default value of -1
+    means to use the parity block number of the EC policy in use.
   </description>
 </property>
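To illustrate the semantics of the new -1 default: a minimal sketch of how a sentinel value could resolve to the parity block count of the EC policy in use, with out-of-range explicit values clamped into [0, numParityBlocks]. The class and method names here are hypothetical illustrations, not the actual HDFS implementation.

```java
// Hypothetical sketch: resolving the dfs.client.ec.write.failed.blocks.tolerated
// sentinel (-1) to the parity block count of the active EC policy.
public class ToleratedFailuresExample {
    // Sentinel meaning "use numParityBlocks of the EC policy in use".
    static final int DEFAULT_TOLERATED = -1;

    static int resolveToleratedFailures(int configured, int numParityBlocks) {
        if (configured == DEFAULT_TOLERATED) {
            // e.g. 3 for RS-6-3-1024k
            return numParityBlocks;
        }
        // Clamp explicit values into the recommended [0, numParityBlocks] range.
        return Math.max(0, Math.min(configured, numParityBlocks));
    }

    public static void main(String[] args) {
        System.out.println(resolveToleratedFailures(-1, 3)); // prints 3
        System.out.println(resolveToleratedFailures(5, 3));  // prints 3 (clamped)
        System.out.println(resolveToleratedFailures(1, 3));  // prints 1
    }
}
```

With this resolution step, the previous default of Integer.MAX_VALUE (which tolerated every failure) is replaced by a policy-aware default that never exceeds the number of parity blocks.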