[FAB-4620] Update Docker Compose config files
Change-Id: Ice318f1b9432d6bab1c686a1ddb452519b86fb14
Signed-off-by: Kostas Christidis <kostas@christidis.io>
kchristidis committed Jun 14, 2017
1 parent 0a72230 commit 56667c1
Showing 2 changed files with 24 additions and 12 deletions.
8 changes: 8 additions & 0 deletions bddtests/dc-orderer-kafka-base.yml
@@ -47,5 +47,13 @@ services:
 # overwriting the offsets that the previous leader produced, and --as a
 # result-- rewriting the blockchain that the orderers produce.
 - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
+#
+# log.retention.ms
+# Until the ordering service in Fabric adds support for pruning of the
+# Kafka logs, time-based retention should be disabled so as to prevent
+# segments from expiring. (Size-based retention -- see
+# log.retention.bytes -- is disabled by default so there is no need to set
+# it explicitly.)
+# - KAFKA_LOG_RETENTION_MS=-1
 ports:
 - '9092'
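For orientation, a minimal sketch of how the two broker settings discussed in this file would sit together in a Compose service definition. This is illustrative only and not part of the diff: the service and image names are assumptions, and KAFKA_LOG_RETENTION_MS is shown uncommented here purely to make the intent explicit (the file above keeps it commented out).

services:
  kafka:
    image: hyperledger/fabric-kafka   # assumed image name, not taken from this diff
    environment:
      # Never allow an out-of-sync replica to become leader, so a lagging
      # broker cannot overwrite committed offsets (i.e. rewrite the chain).
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      # Disable time-based retention until the orderer supports log pruning;
      # size-based retention (log.retention.bytes) is already off by default.
      - KAFKA_LOG_RETENTION_MS=-1
    ports:
      - '9092'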
28 changes: 16 additions & 12 deletions bddtests/dc-orderer-kafka.yml
@@ -62,23 +62,27 @@ services:
 # it is written to at least M replicas (which are then considered in-sync
 # and belong to the in-sync replica set, or ISR). In any other case, the
 # write operation returns an error. Then:
-# 1. if just one replica out of the N (see default.replication.factor
-# below) that the channel data is written to becomes unavailable,
+# 1. If up to N-M replicas -- out of the N (see default.replication.factor
+# below) that the channel data is written to -- become unavailable,
 # operations proceed normally.
-# 2. If N - M + 1 (or more) replicas become unavailable, Kafka cannot
-# maintain an ISR set of M, so it stops accepting writes. Reads work
-# without issues. The cluster becomes writeable again when M replicas get
-# in-sync.
+# 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
+# of M, so it stops accepting writes. Reads work without issues. The
+# channel becomes writeable again when M replicas get in-sync.
 - KAFKA_MIN_INSYNC_REPLICAS=2
 #
 # default.replication.factor
-# Let the value of this setting be M. This means that:
-# 1. Each channel will have its data replicated to N brokers. These are
-# the candidates for the ISR set for a channel. As we've noted in the
+# Let the value of this setting be N. A replication factor of N means that
+# each channel will have its data replicated to N brokers. These are the
+# candidates for the ISR set of a channel. As we noted in the
 # min.insync.replicas section above, not all of these brokers have to be
-# available all the time. We choose a default.replication.factor of N so
-# as to have the largest possible candidate set for a channel's ISR.
-# 2. Channel creations cannot go forward if less than N brokers are up.
+# available all the time. In this sample configuration we choose a
+# default.replication.factor of K-1 (where K is the total number of brokers in
+# our Kafka cluster) so as to have the largest possible candidate set for
+# a channel's ISR. We explicitly avoid setting N equal to K because
+# channel creations cannot go forward if less than N brokers are up. If N
+# were set equal to K, a single broker going down would mean that we would
+# not be able to create new channels, i.e. the crash fault tolerance of
+# the ordering service would be non-existent.
 - KAFKA_DEFAULT_REPLICATION_FACTOR=3
 #
 # zookeeper.connect
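To make the arithmetic in the comments above concrete, here is a hedged sketch of the two settings with a worked example. It assumes a cluster of K = 4 brokers, which is implied by the choice N = K-1 = 3 but is not stated explicitly in this diff:

environment:
  # Worked example (assumption: K = 4 brokers in the Kafka cluster):
  #   M = 2 (min.insync.replicas), N = 3 (default.replication.factor)
  #   - A write is committed once M = 2 in-sync replicas hold it.
  #   - Up to N - M = 1 of a channel's replicas may be down and writes proceed.
  #   - If N - M + 1 = 2 replicas are down, the ISR falls below M and the
  #     channel rejects writes (reads still work) until 2 replicas re-sync.
  #   - Channel creation needs all N = 3 brokers up; with K = 4 and N = K-1,
  #     one broker failure is tolerated, whereas N = K = 4 would tolerate none.
  - KAFKA_MIN_INSYNC_REPLICAS=2
  - KAFKA_DEFAULT_REPLICATION_FACTOR=3

Under these assumptions the ordering service keeps accepting writes and creating channels with any single broker down, which is the crash fault tolerance the comments above aim for.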
