refactor: linewrapping comments to 100 width #1274

Merged
1 change: 1 addition & 0 deletions Makefile
@@ -82,6 +82,7 @@ fmt:
@find . -name '*.go' -type f -not -path "*.git*" -not -name '*.pb.go' -not -name '*pb_test.go' | xargs gofmt -w -s
@find . -name '*.go' -type f -not -path "*.git*" -not -name '*.pb.go' -not -name '*pb_test.go' | xargs goimports -w -local github.com/celestiaorg
@go mod tidy -compat=1.17
@cfmt -w -m=100 ./...
@markdownlint --fix --quiet --config .markdownlint.yaml .
.PHONY: fmt
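For context, this is the kind of rewrap the new cfmt step performs; the -w flag presumably writes files in place and -m=100 sets the maximum column, matching the PR title. The example below is taken from the api/rpc/server.go hunk further down:

```go
// Before (overruns 100 columns):
// RegisterService registers a service onto the RPC server. All methods on the service will then be exposed over the RPC.

// After (wrapped at 100 columns):
// RegisterService registers a service onto the RPC server. All methods on the service will then be
// exposed over the RPC.
```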

4 changes: 2 additions & 2 deletions api/rpc/server.go
@@ -34,8 +34,8 @@ func NewServer(address, port string) *Server {
}
}

// RegisterService registers a service onto the RPC server. All methods on the service will then be exposed over the
// RPC.
// RegisterService registers a service onto the RPC server. All methods on the service will then be
// exposed over the RPC.
func (s *Server) RegisterService(namespace string, service interface{}) {
s.rpc.Register(namespace, service)
}
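A hypothetical usage sketch of this API; the service type, namespace, address, and port below are illustrative, not taken from this PR:

```go
package main

import "github.com/celestiaorg/celestia-node/api/rpc"

// echoService is an illustrative service; its exported methods become RPC methods.
type echoService struct{}

func (e *echoService) Echo(msg string) string { return msg }

func main() {
	srv := rpc.NewServer("localhost", "26658")
	// Expose every method of echoService under the "echo" namespace.
	srv.RegisterService("echo", &echoService{})
}
```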
4 changes: 2 additions & 2 deletions cmd/celestia/bridge.go
@@ -12,8 +12,8 @@ import (
"github.com/celestiaorg/celestia-node/nodebuilder/state"
)

// NOTE: We should always ensure that the added Flags below are parsed somewhere, like in the PersistentPreRun func on
// parent command.
// NOTE: We should always ensure that the added Flags below are parsed somewhere, like in the
// PersistentPreRun func on parent command.

func init() {
bridgeCmd.AddCommand(
4 changes: 2 additions & 2 deletions cmd/celestia/full.go
@@ -14,8 +14,8 @@ import (
"github.com/celestiaorg/celestia-node/nodebuilder/state"
)

// NOTE: We should always ensure that the added Flags below are parsed somewhere, like in the PersistentPreRun func on
// parent command.
// NOTE: We should always ensure that the added Flags below are parsed somewhere, like in the
// PersistentPreRun func on parent command.

func init() {
fullCmd.AddCommand(
4 changes: 2 additions & 2 deletions cmd/celestia/light.go
@@ -14,8 +14,8 @@ import (
"github.com/celestiaorg/celestia-node/nodebuilder/state"
)

// NOTE: We should always ensure that the added Flags below are parsed somewhere, like in the PersistentPreRun func on
// parent command.
// NOTE: We should always ensure that the added Flags below are parsed somewhere, like in the
// PersistentPreRun func on parent command.

func init() {
lightCmd.AddCommand(
15 changes: 9 additions & 6 deletions das/options.go
@@ -28,7 +28,8 @@ type Parameters struct {
// ConcurrencyLimit defines the maximum amount of sampling workers running in parallel.
ConcurrencyLimit int

// BackgroundStoreInterval is the period of time for background checkpointStore to perform a checkpoint backup.
// BackgroundStoreInterval is the period of time for background checkpointStore to perform a
// checkpoint backup.
BackgroundStoreInterval time.Duration

// PriorityQueueSize defines the size limit of the priority queue
@@ -40,7 +41,8 @@ type Parameters struct {

// DefaultParameters returns the default configuration values for the daser parameters
func DefaultParameters() Parameters {
// TODO(@derrandz): parameters needs performance testing on real network to define optimal values (#1261)
// TODO(@derrandz): parameters need performance testing on a real network to define optimal
// values (#1261)
return Parameters{
SamplingRange: 100,
ConcurrencyLimit: 16,
@@ -115,16 +117,17 @@ func WithConcurrencyLimit(concurrencyLimit int) Option {
}
}

// WithBackgroundStoreInterval is a functional option to configure the daser's `backgroundStoreInterval` parameter
// Refer to WithSamplingRange documentation to see an example of how to use this
// WithBackgroundStoreInterval is a functional option to configure the daser's
// `backgroundStoreInterval` parameter. Refer to WithSamplingRange documentation to see an example
// of how to use this.
func WithBackgroundStoreInterval(backgroundStoreInterval time.Duration) Option {
return func(d *DASer) {
d.params.BackgroundStoreInterval = backgroundStoreInterval
}
}

// WithPriorityQueueSize is a functional option to configure the daser's `priorityQueuSize` parameter
// Refer to WithSamplingRange documentation to see an example of how to use this
// WithPriorityQueueSize is a functional option to configure the daser's `priorityQueueSize`
// parameter. Refer to WithSamplingRange documentation to see an example of how to use this.
func WithPriorityQueueSize(priorityQueueSize int) Option {
return func(d *DASer) {
d.params.PriorityQueueSize = priorityQueueSize
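A minimal sketch of how these functional options compose, assuming code inside the das package and that Option is func(*DASer), as the option bodies above suggest; the values are illustrative:

```go
// Start from defaults, then apply overrides in order.
d := &DASer{params: DefaultParameters()}
for _, opt := range []Option{
	WithConcurrencyLimit(32),
	WithBackgroundStoreInterval(10 * time.Minute),
	WithPriorityQueueSize(256),
} {
	opt(d) // each Option mutates d.params
}
```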
3 changes: 2 additions & 1 deletion fraud/interface.go
@@ -24,7 +24,8 @@ type Service interface {
Getter
}

// Broadcaster is a generic interface that sends a `Proof` to all nodes subscribed on the Broadcaster's topic.
// Broadcaster is a generic interface that sends a `Proof` to all nodes subscribed on the
// Broadcaster's topic.
type Broadcaster interface {
// Broadcast takes a fraud `Proof` data structure that implements standard BinaryMarshal
// interface and broadcasts it to all subscribed peers.
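A hypothetical call sketch; the exact Broadcast signature is not shown in this hunk, so the context argument and error return are assumptions:

```go
// Assumed shape: Broadcast(context.Context, Proof) error.
var b Broadcaster = svc // svc stands in for a ProofService value
if err := b.Broadcast(ctx, proof); err != nil {
	log.Errorw("broadcasting fraud proof", "err", err)
}
```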
3 changes: 2 additions & 1 deletion fraud/proof.go
@@ -48,7 +48,8 @@ type Proof interface {
Height() uint64
// Validate check the validity of fraud proof.
// Validate throws an error if some conditions don't pass and thus fraud proof is not valid.
// NOTE: header.ExtendedHeader should pass basic validation otherwise it will panic if it's malformed.
// NOTE: header.ExtendedHeader should pass basic validation, otherwise it will panic if it's
// malformed.
Validate(*header.ExtendedHeader) error

encoding.BinaryMarshaler
4 changes: 2 additions & 2 deletions fraud/registry.go
@@ -19,8 +19,8 @@ func Register(p Proof) {
panic(fmt.Sprintf("fraud: unmarshaler for %s proof is registered", p.Type()))
}
defaultUnmarshalers[p.Type()] = func(data []byte) (Proof, error) {
// the underlying type of `p` is a pointer to a struct and assigning `p` to a new variable is not the
// case, because it could lead to data races.
// the underlying type of `p` is a pointer to a struct; assigning `p` to a new variable only
// copies the pointer, which could lead to data races.
// So, there is no easier way to create a hard copy of Proof other than using reflection.
proof := reflect.New(reflect.ValueOf(p).Elem().Type()).Interface().(Proof)
err := proof.UnmarshalBinary(data)
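The reflection-based hard copy above can be seen in isolation; a self-contained sketch with an illustrative type:

```go
package main

import (
	"fmt"
	"reflect"
)

type proof struct{ Height uint64 }

func main() {
	var proto interface{} = &proof{Height: 42}
	// reflect.New allocates a brand-new zeroed struct of the same concrete type,
	// so each unmarshal gets its own copy instead of sharing proto's pointer.
	fresh := reflect.New(reflect.ValueOf(proto).Elem().Type()).Interface().(*proof)
	fmt.Println(fresh.Height) // 0: independent of proto
}
```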
6 changes: 4 additions & 2 deletions fraud/service.go
@@ -18,7 +18,8 @@ import (
"go.opentelemetry.io/otel/trace"
)

// fraudRequests is the amount of external requests that will be tried to get fraud proofs from other peers.
// fraudRequests is the number of external requests that will be made to get fraud proofs from
// other peers.
const fraudRequests = 5

// ProofService is responsible for validating and propagating Fraud Proofs.
@@ -77,7 +78,8 @@ func (f *ProofService) registerProofTopics(proofTypes ...ProofType) error {
return nil
}

// Start joins fraud proofs topics, sets the stream handler for fraudProtocolID and starts syncing if syncer is enabled.
// Start joins fraud proofs topics, sets the stream handler for fraudProtocolID and starts syncing
// if syncer is enabled.
func (f *ProofService) Start(context.Context) error {
f.ctx, f.cancel = context.WithCancel(context.Background())
if err := f.registerProofTopics(registeredProofTypes()...); err != nil {
11 changes: 6 additions & 5 deletions header/header.go
@@ -76,7 +76,8 @@ func MakeExtendedHeader(
}

// Hash returns Hash of the wrapped RawHeader.
// NOTE: It purposely overrides Hash method of RawHeader to get it directly from Commit without recomputing.
// NOTE: It purposely overrides Hash method of RawHeader to get it directly from Commit without
// recomputing.
func (eh *ExtendedHeader) Hash() bts.HexBytes {
return eh.Commit.BlockID.Hash
}
@@ -147,8 +148,8 @@ func (eh *ExtendedHeader) UnmarshalBinary(data []byte) error {
return nil
}

// MarshalJSON marshals an ExtendedHeader to JSON. The ValidatorSet is wrapped with amino encoding, to be able to
// unmarshal the crypto.PubKey type back from JSON.
// MarshalJSON marshals an ExtendedHeader to JSON. The ValidatorSet is wrapped with amino encoding,
// to be able to unmarshal the crypto.PubKey type back from JSON.
func (eh *ExtendedHeader) MarshalJSON() ([]byte, error) {
type Alias ExtendedHeader
validatorSet, err := amino.Marshal(eh.ValidatorSet)
@@ -164,8 +165,8 @@ func (eh *ExtendedHeader) MarshalJSON() ([]byte, error) {
})
}

// UnmarshalJSON unmarshals an ExtendedHeader from JSON. The ValidatorSet is wrapped with amino encoding, to be able to
// unmarshal the crypto.PubKey type back from JSON.
// UnmarshalJSON unmarshals an ExtendedHeader from JSON. The ValidatorSet is wrapped with amino
// encoding, to be able to unmarshal the crypto.PubKey type back from JSON.
func (eh *ExtendedHeader) UnmarshalJSON(data []byte) error {
type Alias ExtendedHeader
aux := &struct {
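A round-trip sketch for the amino-wrapped JSON encoding; the surrounding flow and error handling are illustrative:

```go
raw, err := json.Marshal(eh) // invokes ExtendedHeader.MarshalJSON
if err != nil {
	return err
}
decoded := new(header.ExtendedHeader)
// The amino wrapping is what lets the ValidatorSet's crypto.PubKey values decode.
if err := json.Unmarshal(raw, decoded); err != nil {
	return err
}
```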
4 changes: 2 additions & 2 deletions header/p2p/exchange.go
@@ -220,7 +220,6 @@ func (ex *Exchange) request(
if err = stream.Close(); err != nil {
log.Errorw("closing stream", "err", err)
}
// ensure at least one header was retrieved
if len(headers) == 0 {
return nil, header.ErrNotFound
}
@@ -230,7 +229,8 @@
// bestHead chooses ExtendedHeader that matches the conditions:
// * should have max height among received;
// * should be received at least from 2 peers;
// If neither condition is met, then latest ExtendedHeader will be returned (header of the highest height).
// If neither condition is met, then the latest ExtendedHeader will be returned (the header of
// the highest height).
func bestHead(result []*header.ExtendedHeader) (*header.ExtendedHeader, error) {
if len(result) == 0 {
return nil, header.ErrNotFound
17 changes: 9 additions & 8 deletions header/p2p/subscriber.go
@@ -80,7 +80,8 @@ func (p *Subscriber) Broadcast(ctx context.Context, header *header.ExtendedHeade
}

// msgID computes an id for a pubsub message
// TODO(@Wondertan): This cause additional allocations per each recvd message in the topic. Find a way to avoid those.
// TODO(@Wondertan): This causes additional allocations per received message in the topic. Find a
// way to avoid those.
func msgID(pmsg *pb.Message) string {
mID := func(data []byte) string {
hash := blake2b.Sum256(data)
@@ -95,13 +96,13 @@ }
}

// IMPORTANT NOTE:
// Due to the nature of the Tendermint consensus, validators don't necessarily collect commit signatures from the
// entire validator set, but only the minimum required amount of them (>2/3 of voting power). In addition,
// signatures are collected asynchronously. Therefore, each validator may have a different set of signatures that
// pass the minimum required voting power threshold, causing nondeterminism in the header message gossiped over the
// network. Subsequently, this causes message duplicates as each Bridge Node, connected to a personal validator,
// sends the validator's own view of commits of effectively the same header.
//
// Due to the nature of the Tendermint consensus, validators don't necessarily collect commit
// signatures from the entire validator set, but only the minimum required amount of them (>2/3 of
// voting power). In addition, signatures are collected asynchronously. Therefore, each validator
// may have a different set of signatures that pass the minimum required voting power threshold,
// causing nondeterminism in the header message gossiped over the network. Subsequently, this
// causes message duplicates as each Bridge Node, connected to a personal validator, sends the
// validator's own view of commits of effectively the same header.
// To solve the problem above, we exclude the nondeterministic value from the message id
// calculation.
h.Commit.Signatures = nil

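Conceptually, the deterministic id reduces to hashing the header with signatures stripped; this is a sketch only, since the real msgID operates on the raw pubsub message and the marshaling step here is assumed:

```go
// Strip the nondeterministic commit signatures, then hash what remains, so
// every peer derives the same id for the same header.
h.Commit.Signatures = nil
data, err := h.MarshalBinary()
if err != nil {
	return "", err
}
hash := blake2b.Sum256(data)
return string(hash[:]), nil
```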
5 changes: 3 additions & 2 deletions header/store/height_indexer.go
@@ -10,8 +10,9 @@ import (
"github.com/celestiaorg/celestia-node/header"
)

// TODO(@Wondertan): There should be a more clever way to index heights, than just storing HeightToHash pair...
// heightIndexer simply stores and cashes mappings between header Height and Hash.
// TODO(@Wondertan): There should be a more clever way to index heights than just storing
// HeightToHash pairs...
// heightIndexer simply stores and caches mappings between header Height and Hash.
type heightIndexer struct {
ds datastore.Batching
cache *lru.ARCCache
6 changes: 4 additions & 2 deletions header/store/store.go
@@ -18,11 +18,13 @@ import (

var log = logging.Logger("header/store")

// TODO(@Wondertan): Those values must be configurable and proper defaults should be set for specific node type. (#709)
// TODO(@Wondertan): Those values must be configurable and proper defaults should be set for
// specific node type. (#709)
var (
// DefaultStoreCacheSize defines the amount of max entries allowed in the Header Store cache.
DefaultStoreCacheSize = 4096
// DefaultIndexCacheSize defines the amount of max entries allowed in the Height to Hash index cache.
// DefaultIndexCacheSize defines the amount of max entries allowed in the Height to Hash index
// cache.
DefaultIndexCacheSize = 16384
// DefaultWriteBatchSize defines the size of the batched header write.
// Headers are written in batches not to thrash the underlying Datastore with writes.
5 changes: 3 additions & 2 deletions header/sync/ranges.go
@@ -6,8 +6,9 @@ import (
"github.com/celestiaorg/celestia-node/header"
)

// ranges keeps non-overlapping and non-adjacent header ranges which are used to cache headers (in ascending order).
// This prevents unnecessary / duplicate network requests for additional headers during sync.
// ranges keeps non-overlapping and non-adjacent header ranges which are used to cache headers (in
// ascending order). This prevents unnecessary / duplicate network requests for additional headers
// during sync.
type ranges struct {
lk sync.RWMutex
ranges []*headerRange
14 changes: 9 additions & 5 deletions header/sync/sync.go
@@ -125,8 +125,9 @@ func (s State) Duration() time.Duration {
}

// State reports state of the current (if in progress), or last sync (if finished).
// Note that throughout the whole Syncer lifetime there might an initial sync and multiple catch-ups.
// All of them are treated as different syncs with different state IDs and other information.
// Note that throughout the whole Syncer lifetime there might be an initial sync and multiple
// catch-ups. All of them are treated as different syncs with different state IDs and other
// information.
func (s *Syncer) State() State {
s.stateLk.RLock()
state := s.state
@@ -227,7 +228,8 @@ func (s *Syncer) doSync(ctx context.Context, fromHead, toHead *header.ExtendedHe
return err
}

// processHeaders gets and stores headers starting at the given 'from' height up to 'to' height - [from:to]
// processHeaders gets and stores headers starting at the given 'from' height up to 'to' height -
// [from:to]
func (s *Syncer) processHeaders(ctx context.Context, from, to uint64) (int, error) {
headers, err := s.findHeaders(ctx, from, to)
if err != nil {
@@ -237,14 +239,16 @@ func (s *Syncer) processHeaders(ctx context.Context, from, to uint64) (int, erro
return s.store.Append(ctx, headers...)
}

// TODO(@Wondertan): Number of headers that can be requested at once. Either make this configurable or,
// TODO(@Wondertan): Number of headers that can be requested at once. Either make this
// configurable or find a proper rationale for the constant.
//
// TODO(@Wondertan): Make configurable
var requestSize uint64 = 512

// findHeaders gets headers from either remote peers or from local cache of headers received by PubSub - [from:to]
// findHeaders gets headers from either remote peers or from local cache of headers received by
// PubSub - [from:to]
func (s *Syncer) findHeaders(ctx context.Context, from, to uint64) ([]*header.ExtendedHeader, error) {
amount := to - from + 1 // + 1 to include 'to' height as well
if amount > requestSize {
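Because of the requestSize cap, syncing a wide range proceeds in batches; an illustrative paging loop, not the Syncer's actual control flow:

```go
// Walk [from:to] in batches of at most requestSize headers.
for from <= to {
	amount := to - from + 1
	if amount > requestSize {
		amount = requestSize
	}
	// fetch and process headers [from : from+amount-1] here
	from += amount
}
```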
12 changes: 7 additions & 5 deletions header/sync/sync_head.go
@@ -59,9 +59,9 @@ func (s *Syncer) subjectiveHead(ctx context.Context) (*header.ExtendedHeader, er
}

// networkHead returns the latest network header.
// Known subjective head is considered network head if it is recent enough(now-timestamp<=blocktime).
// Otherwise, network header is requested from a trusted peer and set as the new subjective head,
// assuming that trusted peer is always synced.
// Known subjective head is considered network head if it is recent enough
// (now-timestamp <= blocktime). Otherwise, network header is requested from a trusted peer and
// set as the new subjective head, assuming that trusted peer is always synced.
func (s *Syncer) networkHead(ctx context.Context) (*header.ExtendedHeader, error) {
sbjHead, err := s.subjectiveHead(ctx)
if err != nil {
@@ -102,7 +102,8 @@ func (s *Syncer) networkHead(ctx context.Context) (*header.ExtendedHeader, error

// incomingNetHead processes new gossiped network headers.
func (s *Syncer) incomingNetHead(ctx context.Context, netHead *header.ExtendedHeader) pubsub.ValidationResult {
// Try to short-circuit netHead with append. If not adjacent/from future - try it as new network header
// Try to short-circuit netHead with append. If not adjacent/from future, try it as a new network
// header.
_, err := s.store.Append(ctx, netHead)
if err == nil {
// a happy case where we appended maybe head directly, so accept
@@ -128,7 +129,8 @@ func (s *Syncer) incomingNetHead(ctx context.Context, netHead *header.ExtendedHe
return s.newNetHead(ctx, netHead, false)
}

// newNetHead sets the network header as the new subjective head with preceding validation(per request).
// newNetHead sets the network header as the new subjective head with preceding validation (per
// request).
func (s *Syncer) newNetHead(ctx context.Context, netHead *header.ExtendedHeader, trust bool) pubsub.ValidationResult {
// validate netHead against subjective head
if !trust {
4 changes: 2 additions & 2 deletions header/testing.go
@@ -1,5 +1,5 @@
// TODO(@Wondertan): Ideally, we should move that into subpackage, so this does not get included into binary of
// production code, but that does not matter at the moment.
// TODO(@Wondertan): Ideally, we should move that into subpackage, so this does not get included
// into binary of production code, but that does not matter at the moment.

package header

4 changes: 2 additions & 2 deletions libs/fslock/locker.go
@@ -21,8 +21,8 @@ func Lock(path string) (*Locker, error) {
}

// Locker is a simple utility meant to create lock files.
// This is to prevent multiple processes from managing the same working directory by purpose or accident.
// NOTE: Windows is not supported.
// This is to prevent multiple processes from managing the same working directory by purpose or
// accident.
// NOTE: Windows is not supported.
type Locker struct {
file *os.File
path string
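A hypothetical usage sketch; the path is illustrative and the release method is assumed, since it is not shown in this hunk:

```go
// Take an exclusive lock on the node's working directory.
locker, err := fslock.Lock("/tmp/celestia-node/.lock")
if err != nil {
	return err // another process already manages this directory
}
defer locker.Unlock() // assumed release method, not shown in this diff
```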
8 changes: 4 additions & 4 deletions libs/fxutil/fxutil.go
@@ -43,14 +43,14 @@ func InvokeIf(cond bool, function interface{}) fx.Option {
return fx.Options()
}

// ProvideAs creates an FX option that provides constructor 'cnstr' with the returned values types as 'cnstrs'
// It is as simple utility that hides away FX annotation details.
// ProvideAs creates an FX option that provides constructor 'cnstr' with the returned value types
// as 'cnstrs'. It is a simple utility that hides away FX annotation details.
func ProvideAs(cnstr interface{}, cnstrs ...interface{}) fx.Option {
return fx.Provide(fx.Annotate(cnstr, fx.As(cnstrs...)))
}

// ReplaceAs creates an FX option that substitutes types defined by constructors 'cnstrs' with the value 'val'.
// It is as simple utility that hides away FX annotation details.
// ReplaceAs creates an FX option that substitutes types defined by constructors 'cnstrs' with the
// value 'val'. It is a simple utility that hides away FX annotation details.
func ReplaceAs(val interface{}, cnstrs ...interface{}) fx.Option {
return fx.Replace(fx.Annotate(val, fx.As(cnstrs...)))
}
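A sketch of the wiring these helpers hide, using illustrative names (newBadgerStore, Store); fx.New, fx.Annotate, and fx.As are the underlying uber-go/fx APIs:

```go
// Equivalent long form: fx.Provide(fx.Annotate(newBadgerStore, fx.As(new(Store))))
app := fx.New(
	// Provide *badgerStore to the graph, but expose it as the Store interface.
	fxutil.ProvideAs(newBadgerStore, new(Store)),
)
_ = app
```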
5 changes: 3 additions & 2 deletions nodebuilder/config.go
@@ -77,9 +77,10 @@ func LoadConfig(path string) (*Config, error) {
}

// TODO(@Wondertan): We should have a description for each field written into w,
// so users can instantly understand purpose of each field. Ideally, we should have a utility program to parse comments
// from actual sources(*.go files) and generate docs from comments. Hint: use 'ast' package.
// so users can instantly understand purpose of each field. Ideally, we should have a utility
// program to parse comments from actual sources(*.go files) and generate docs from comments.
// Hint: use 'ast' package.

// Encode encodes a given Config into w.
func (cfg *Config) Encode(w io.Writer) error {
return toml.NewEncoder(w).Encode(cfg)
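A usage sketch for Encode; the file path and error handling are illustrative:

```go
// Persist the config as TOML; Encode writes into any io.Writer.
f, err := os.Create("config.toml")
if err != nil {
	return err
}
defer f.Close()
if err := cfg.Encode(f); err != nil {
	return err
}
```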