Merge pull request #9 from ccoveille-forks/typos-suggestion
Fix typos and rewrite some sentences
satmihir authored Nov 12, 2024
2 parents afa854e + fd20a49 commit d644207
Showing 7 changed files with 25 additions and 25 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -10,7 +10,7 @@ FAIR is a Go library designed to ensure fairness in the resource-constrained env

## Introduction

The core algorithm of FAIR is based on the [Stochastic Fair BLUE](https://rtcl.eecs.umich.edu/rtclweb/assets/publications/2001/feng2001fair.pdf) often used for network congestion control with a few modifications. The philosophy of FAIR is to only throttle when there's a genuine shortage of resources as opposed to the approaches like token bucket or leaky bucket which may reject requests even when the resource is still available (a creative configuration of FAIR can enable that type of behavior but we don't encourage it). Since the state is stored in a multi-level [Bloom Filter](https://medium.com/p/e25942ab6093) style data structure, the memory needed is constant and does not scale with the number of clients. When properly configured, FAIR can scale to a very large number of clients with a low probability of false positives and a near zero probability of persistent false positives thanks to the hash rotation mechanism that regularly rehashes clients to avoid any correlated behavior longer than a few minutes.
The core algorithm of FAIR is based on the [Stochastic Fair BLUE](https://rtcl.eecs.umich.edu/rtclweb/assets/publications/2001/feng2001fair.pdf) often used for network congestion control with a few modifications. The philosophy of FAIR is to only throttle when there's a genuine shortage of resources as opposed to the approaches like token bucket or leaky bucket which may reject requests even when the resource is still available (a creative configuration of FAIR can enable that type of behavior, but we don't encourage it). Since the state is stored in a multi-level [Bloom Filter](https://medium.com/p/e25942ab6093) style data structure, the memory needed is constant and does not scale with the number of clients. When properly configured, FAIR can scale to a very large number of clients with a low probability of false positives and a near zero probability of persistent false-positives. The hash rotation mechanism regularly rehashes clients to avoid any correlated behavior longer than a few minutes.

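As a rough illustration of the constant-memory point above (numbers here are arbitrary and illustrative, not taken from the library), the state size depends only on the number of levels and buckets, never on the number of clients:

```go
package main

import "fmt"

// Illustrative only: the structure's footprint is levels × bucketsPerLevel ×
// per-bucket state, independent of how many clients hash into it.
const (
	levels          = 3
	bucketsPerLevel = 1000
	bytesPerBucket  = 8 // one float64 probability per bucket (simplified)
)

func main() {
	fmt.Printf("approx. state size: %d KB for any number of clients\n",
		levels*bucketsPerLevel*bytesPerBucket/1024)
}
```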
### Key Features

@@ -23,7 +23,7 @@ The core algorithm of FAIR is based on the [Stochastic Fair BLUE](https://rtcl.e

![Evaluation](eval.png)

In this example, 20 clients are competing for a resource that regenerates at the rate of 20/s (every data point in the graph is 5s apart). 18 out of 20 clients are "well behaved" because they request a resource every second while the remaining two clients try to get a resource every 100ms which is an "unfair" rate. On the left, we see that when left unthrottled, the two unfair clients grab a disproportionately large amount of resource while the regular workloads starve and get a lot less than 1/s rate. On the right, when throttled with fair, the regular workloads stay virtually unaffected while the unfair ones get throttled. On average, even the unfair workloads get their fair share when seen over larger time periods.
In this example, 20 clients are competing for a resource that regenerates at the rate of 20/s (every data point in the graph is 5s apart). 18 out of 20 clients are "well-behaved" because they request a resource every second while the remaining two clients try to get a resource every 100ms which is an "unfair" rate. On the left, we see that when left unthrottled, the two unfair clients grab a disproportionately large amount of resource while the regular workloads starve and get a lot less than 1/s rate. On the right, when throttled with fair, the regular workloads stay virtually unaffected while the unfair ones get throttled. On average, even the unfair workloads get their fair share when seen over larger time periods.

## Installation

@@ -55,7 +55,7 @@ trk, err := trkB.Build()
defer trk.Close()
```

For every incoming request, you have to pass the flow identifier (the id over which you want to maintain fairness) into the tracker to see if it needs to be throttled. A client ID for example could be such ID to maintain resource fairness among all your clients.
For every incoming request, you have to pass the flow identifier (the identifier over which you want to maintain fairness) into the tracker to see if it needs to be throttled. A client ID for example could be such ID to maintain resource fairness among all your clients.

```go
ctx := context.Background()
@@ -90,7 +90,7 @@ trk.ReportOutcome(ctx, id, request.OutcomeSuccess)
You can use the `GenerateTunedStructureConfig` to tune the tracker without directly touching the algorithm parameters. It exposes a simple interface where you have to pass the following things based on your application logic and scaling requirements.
- `expectedClientFlows` - Number of concurrent clients you expect to your app
- `bucketsPerLevel` - Number of buckets per level in the core structure
- `tolerableBadRequestsPerBadFlow` - Number of requests we can tolerate before we fully shut down a flow
- `tolerableBadRequestsPerBadFlow` - Number of requests we can tolerate before we fully shut down a flow.

```go
conf := config.GenerateTunedStructureConfig(1000, 1000, 25)
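To round out the README excerpts above, here is a hedged end-to-end sketch of the intended request flow. The `ShouldThrottle` field on the register result is an assumption inferred from the fragments in this diff, not confirmed API, and imports are omitted as in the README snippets above.

```go
// Illustrative sketch only; ShouldThrottle is an assumed field name on
// request.RegisterRequestResult, inferred from the fragments above.
func handleRequest(ctx context.Context, trk *tracker.FairnessTracker, clientID []byte) error {
	res, err := trk.RegisterRequest(ctx, clientID)
	if err != nil {
		return err
	}
	if res.ShouldThrottle { // assumed field
		return errors.New("throttled: retry later")
	}

	// ... acquire the resource and serve the request ...

	// Report a conclusive outcome so the bucket probabilities are updated.
	_, err = trk.ReportOutcome(ctx, clientID, request.OutcomeSuccess)
	return err
}
```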
2 changes: 1 addition & 1 deletion pkg/config/tuning.go
@@ -22,7 +22,7 @@ const (
lowProbability = 0.0001
// The slowing factor from Pi to Pd (10x successes to get a flow fully exonerated)
pdSlowingFactor = 0.001
// The minimum number og levels to use despite what the calculation says
// The minimum number of levels to use despite what the calculation says
minL = 3
// The default rotation duration
defaultRotationDuration = time.Minute * 5
8 changes: 4 additions & 4 deletions pkg/data/data.go
@@ -105,16 +105,16 @@ func (s *Structure) RegisterRequest(ctx context.Context, clientIdentifier []byte
return nil
})

pfinal := s.config.FinalProbabilityFunction(bucketProbabilities)
pFinal := s.config.FinalProbabilityFunction(bucketProbabilities)

if s.includeStats {
stats.BucketProbabilities = bucketProbabilities
stats.FinalProbability = pfinal
stats.FinalProbability = pFinal
}

// Decide whether to throttle the request based on the probability
shouldThrottle := false
if rand.Float64() <= pfinal {
if rand.Float64() <= pFinal {
shouldThrottle = true
}

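As a companion to the hunk above, here is a minimal, self-contained sketch of the per-request decision it implements: hash the client into one bucket per level, collect each bucket's probability, collapse them into `pFinal`, and throttle with that probability. The min-combination and the FNV hash are illustrative assumptions standing in for the library's configurable `FinalProbabilityFunction` and seeded murmur hashing.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

// Illustrative stand-in for one level of the structure: a slice of per-bucket
// throttle probabilities.
type level struct{ probs []float64 }

// Hash the client identifier into one bucket per level and collect the
// probabilities (FNV with a per-level salt stands in for seeded murmur).
func bucketProbabilities(levels []level, clientID []byte) []float64 {
	out := make([]float64, len(levels))
	for i, lv := range levels {
		h := fnv.New64a()
		h.Write([]byte{byte(i)})
		h.Write(clientID)
		out[i] = lv.probs[h.Sum64()%uint64(len(lv.probs))]
	}
	return out
}

// Assumed combination: take the minimum across levels, so a client is only
// throttled aggressively when every bucket it maps to looks bad.
func finalProbability(probs []float64) float64 {
	p := 1.0
	for _, v := range probs {
		if v < p {
			p = v
		}
	}
	return p
}

func main() {
	levels := []level{
		{probs: make([]float64, 1024)},
		{probs: make([]float64, 1024)},
		{probs: make([]float64, 1024)},
	}
	pFinal := finalProbability(bucketProbabilities(levels, []byte("client-42")))
	shouldThrottle := rand.Float64() <= pFinal // same Bernoulli decision as above
	fmt.Println("pFinal:", pFinal, "shouldThrottle:", shouldThrottle)
}
```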
@@ -207,7 +207,7 @@ func validateStructureConfig(config *config.FairnessTrackerConfig) error {
}

// Calculate n hashes of the given input using murmur hash.
// To optimize, we only calculate a single 64 bit hash and use a technique outlined in
// To optimize, we only calculate a single 64-bit hash and use a technique outlined in
// the paper below to compute more based on them:
// https://www.eecs.harvard.edu/~michaelm/postscripts/rsa2008.pdf
func generateNHashesUsing64Bit(input []byte, n uint32, seed uint32) []uint32 {
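The comment above points at the Kirsch–Mitzenmacher trick of deriving n hash values from a single 64-bit hash, g_i(x) = h1(x) + i·h2(x). A rough, self-contained sketch of that idea, using FNV from the standard library as a stand-in for seeded murmur:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// Derive n 32-bit hash values from one 64-bit hash by splitting it into two
// halves h1, h2 and combining them as g_i = h1 + i*h2 (Kirsch–Mitzenmacher).
func nHashesFrom64Bit(input []byte, n uint32, seed uint32) []uint32 {
	h := fnv.New64a() // stand-in for murmur in this sketch
	var seedBytes [4]byte
	binary.LittleEndian.PutUint32(seedBytes[:], seed)
	h.Write(seedBytes[:])
	h.Write(input)
	sum := h.Sum64()

	h1 := uint32(sum)       // low 32 bits
	h2 := uint32(sum >> 32) // high 32 bits

	out := make([]uint32, n)
	for i := uint32(0); i < n; i++ {
		out[i] = h1 + i*h2
	}
	return out
}

func main() {
	// Three per-level hashes for one client identifier.
	fmt.Println(nHashesFrom64Bit([]byte("client-42"), 3, 7))
}
```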
6 changes: 3 additions & 3 deletions pkg/request/type.go
@@ -38,7 +38,7 @@ type ResultStats struct {
// The response object of the ReportOutcome function
type ReportOutcomeResult struct{}

// The data struecture interface
// The data structure interface
type Tracker interface {
// Return the int ID of this structure. Used for implementing moving hashes.
GetID() uint64
@@ -48,11 +48,11 @@ type Tracker interface {
// it will be used to hash and locate the corresponding buckets.
RegisterRequest(ctx context.Context, clientIdentifier []byte) (*RegisterRequestResult, error)

// Report the outcome of a requests from the given client so we can update the
// Report the outcome of a request from the given client so we can update the
// probabilities of the corresponding buckets.
// Only report the outcomes on the requests where you could either conclusively
// get the resource or not. For outcomes such as user errors or network failures
// or timeout with upstream, do NOT report any outcome or we may wrongly throttle
// or timeout with upstream, do NOT report any outcome, or we may wrongly throttle
// requests based on things not related to resource contention.
// You don't have to report an outcome to every registered request.
ReportOutcome(ctx context.Context, clientIdentifier []byte, outcome Outcome) (*ReportOutcomeResult, error)
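A short caller-side sketch of the rule in the doc comment above: report an outcome only when the request conclusively did or did not get the resource, and stay silent on user errors, network failures, or upstream timeouts. The helper is hypothetical, not part of this package, and imports are omitted.

```go
// Hypothetical helper on the caller's side; not part of pkg/request.
func reportIfConclusive(ctx context.Context, trk request.Tracker, clientID []byte, gotResource, conclusive bool) {
	if !conclusive {
		// User error, network failure, or upstream timeout: says nothing about
		// resource contention, so deliberately report nothing.
		return
	}
	outcome := request.OutcomeFailure
	if gotResource {
		outcome = request.OutcomeSuccess
	}
	_, _ = trk.ReportOutcome(ctx, clientID, outcome)
}
```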
18 changes: 9 additions & 9 deletions pkg/tracker/tracker.go
@@ -22,17 +22,17 @@ type FairnessTracker struct {
mainStructure request.Tracker
secondaryStructure request.Tracker

tikr utils.ITicker
ticker utils.ITicker

// Rotation lock to ensure that we don't rotate while updating the structures
// The act of updating is a "read" in this case since multiple updates can happen
// concurrently but none can happen while we are rotating so that's a write.
// concurrently, but none can happen while we are rotating so that's a write.
rotationLock *sync.RWMutex
stopRotation chan bool
}

// Allows passing an external ticket for simulations
func NewFairnessTrackerWithClockAndTicker(trackerConfig *config.FairnessTrackerConfig, clock utils.IClock, tikr utils.ITicker) (*FairnessTracker, error) {
func NewFairnessTrackerWithClockAndTicker(trackerConfig *config.FairnessTrackerConfig, clock utils.IClock, ticker utils.ITicker) (*FairnessTracker, error) {
st1, err := data.NewStructureWithClock(trackerConfig, 1, trackerConfig.IncludeStats, clock)
if err != nil {
return nil, NewFairnessTrackerError(err, "Failed to create a structure")
@@ -51,7 +51,7 @@ func NewFairnessTrackerWithClockAndTicker(trackerConfig *config.FairnessTrackerC
mainStructure: st1,
secondaryStructure: st2,

tikr: tikr,
ticker: ticker,

rotationLock: &sync.RWMutex{},
stopRotation: stopRotation,
@@ -65,7 +65,7 @@ func NewFairnessTrackerWithClockAndTicker(trackerConfig *config.FairnessTrackerC
select {
case <-stopRotation:
return
case <-tikr.C():
case <-ticker.C():
s, err := data.NewStructureWithClock(trackerConfig, ft.structureIdCtr, trackerConfig.IncludeStats, clock)
if err != nil {
// TODO: While this should never happen, think if we want to handle this more gracefully
@@ -86,8 +86,8 @@ func NewFairnessTrackerWithClockAndTicker(trackerConfig *config.FairnessTrackerC

func NewFairnessTracker(trackerConfig *config.FairnessTrackerConfig) (*FairnessTracker, error) {
clk := utils.NewRealClock()
tikr := utils.NewRealTicker(trackerConfig.RotationFrequency)
return NewFairnessTrackerWithClockAndTicker(trackerConfig, clk, tikr)
ticker := utils.NewRealTicker(trackerConfig.RotationFrequency)
return NewFairnessTrackerWithClockAndTicker(trackerConfig, clk, ticker)
}

func (ft *FairnessTracker) RegisterRequest(ctx context.Context, clientIdentifier []byte) (*request.RegisterRequestResult, error) {
@@ -102,7 +102,7 @@ func (ft *FairnessTracker) RegisterRequest(ctx context.Context, clientIdentifier

// To keep the bad workloads data "warm" in the rotated structure, we will update both
if _, err := ft.secondaryStructure.RegisterRequest(ctx, clientIdentifier); err != nil {
// TODO: We don't really have to fail here perhaps but I cannot think any reason this will actually fail
// TODO: We don't really have to fail here perhaps, but I cannot think any reason this will actually fail
return nil, NewFairnessTrackerError(err, "Failed updating the secondary structure")
}

Expand All @@ -129,5 +129,5 @@ func (ft *FairnessTracker) ReportOutcome(ctx context.Context, clientIdentifier [

func (ft *FairnessTracker) Close() {
close(ft.stopRotation)
ft.tikr.Stop()
ft.ticker.Stop()
}
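The rotation-lock comment and the ticker goroutine above describe an inverted reader/writer split: request updates take the read lock so they can interleave, while the periodic rotation takes the write lock and swaps the structures. A simplified, self-contained sketch of that pattern, with plain counters standing in for the real bucket structures:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// Stand-in for the real bucket structure: a counter that is safe to update
// from many goroutines at once.
type structure struct {
	id       uint64
	requests atomic.Int64
}

type rotatingTracker struct {
	mu        sync.RWMutex
	idCtr     uint64
	main      *structure
	secondary *structure
}

// Updates are "readers": many can run concurrently, but none while rotating.
func (t *rotatingTracker) registerRequest() {
	t.mu.RLock()
	defer t.mu.RUnlock()
	t.main.requests.Add(1)
	t.secondary.requests.Add(1) // keep the next structure "warm" so bad flows stay tracked
}

// Rotation is the "writer": promote the secondary and start a fresh structure,
// which is what periodically rehashes clients to new buckets.
func (t *rotatingTracker) rotate() {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.idCtr++
	t.main = t.secondary
	t.secondary = &structure{id: t.idCtr}
}

func main() {
	t := &rotatingTracker{main: &structure{id: 1}, secondary: &structure{id: 2}}
	t.idCtr = 2

	stop := make(chan struct{})
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()

	go func() {
		for {
			select {
			case <-stop:
				return
			case <-ticker.C:
				t.rotate()
			}
		}
	}()

	for i := 0; i < 200; i++ {
		t.registerRequest()
		time.Sleep(time.Millisecond)
	}
	close(stop)

	t.mu.RLock()
	fmt.Println("active structure:", t.main.id, "requests seen:", t.main.requests.Load())
	t.mu.RUnlock()
}
```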
2 changes: 1 addition & 1 deletion pkg/tracker/tracker_test.go
@@ -25,7 +25,7 @@ func TestEndToEnd(t *testing.T) {
_, err = trk.ReportOutcome(ctx, id, request.OutcomeFailure)
assert.NoError(t, err)

// 24 failures are enough but there's decay so we will add a few more
// 24 failures are enough, but there's decay so we will add a few more
for i := 0; i < 30; i++ {
_, err = trk.ReportOutcome(ctx, id, request.OutcomeFailure)
assert.NoError(t, err)
6 changes: 3 additions & 3 deletions pkg/utils/time_utils_test.go
@@ -19,12 +19,12 @@ func TestClock(t *testing.T) {
assert.True(t, clk.Now().Sub(t2) >= 10*time.Millisecond)
}

func TestTicket(t *testing.T) {
var tikr ITicker = NewRealTicker(10 * time.Millisecond)
func TestTicker(t *testing.T) {
var ticker ITicker = NewRealTicker(10 * time.Millisecond)
var found bool

select {
case <-tikr.C():
case <-ticker.C():
found = true
break
}
