[Docs] Fallback after retries #1698

Merged · 3 commits · Oct 16, 2023
2 changes: 1 addition & 1 deletion docs/migration-v8.md
@@ -186,7 +186,7 @@ ResiliencePipeline pipeline = new ResiliencePipelineBuilder()
<!-- endSnippet -->

> [!IMPORTANT]
> In v7, the policy wrap ordering is different; the policy added first was executed last (FILO). In v8, the execution order matches the order in which they were added (FIFO).
> In v7, the policy wrap ordering is different; the policy added first was executed last (FILO). In v8, the execution order matches the order in which they were added (FIFO). See [fallback after retries](strategies/fallback.md#fallback-after-retries) for an example of how the strategies are executed.

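As a minimal sketch of the v8 ordering (assuming the standard `AddTimeout` and `AddRetry` builder extensions; this is not one of the repository snippets), the strategy added first becomes the outermost one and is therefore entered first:

```cs
// Sketch only: in v8 the timeout is added first, so it wraps the retries and is entered first.
ResiliencePipeline orderedPipeline = new ResiliencePipelineBuilder()
    .AddTimeout(TimeSpan.FromSeconds(10))                        // Added first: outermost, acts as an overall time budget.
    .AddRetry(new RetryStrategyOptions { MaxRetryAttempts = 3 }) // Added second: innermost, retries run inside the timeout.
    .Build();
```
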
## Migrating retry policies

4 changes: 2 additions & 2 deletions docs/strategies/circuit-breaker.md
@@ -97,7 +97,7 @@ await manualControl.CloseAsync();
- [Circuit Breaker Pattern by Microsoft](https://msdn.microsoft.com/en-us/library/dn589784.aspx)
- [Original Circuit Breaking Article](https://web.archive.org/web/20160106203951/http://thatextramile.be/blog/2008/05/the-circuit-breaker)

## Patterns and anti-patterns
## Anti-patterns

Over the years, many developers have used Polly in various ways. Some of these recurring patterns may not be ideal. This section highlights the recommended practices and those to avoid.

@@ -263,7 +263,7 @@ circuitBreaker = new ResiliencePipelineBuilder()

✅ DO

The `CircuitBreakerStartegyOptions` currently do not support defining break durations dynamically. This may be re-evaluated in the future. For now, refer to the first example for a potential workaround. However, please use it with caution.
The `CircuitBreakerStrategyOptions` currently do not support defining break durations dynamically. This may be re-evaluated in the future. For now, refer to the first example for a potential workaround. However, please use it with caution.

### 3 - Wrapping each endpoint with a circuit breaker

49 changes: 48 additions & 1 deletion docs/strategies/fallback.md
@@ -65,7 +65,54 @@ new ResiliencePipelineBuilder<UserAvatar>()
| `FallbackAction` | `Null`, **Required** | Fallback action to be executed. |
| `OnFallback` | `null` | Event that is raised when fallback happens. |

## Patterns and anti-patterns
## Patterns

### Fallback after retries

When designing resilient systems, a common pattern is to use a fallback after multiple failed retry attempts. This approach is especially relevant when a fallback strategy can provide a sensible default value.

<!-- snippet: fallback-after-retries -->
```cs
// Define a common predicate reused by both the fallback and retry strategies
var predicateBuilder = new PredicateBuilder<HttpResponseMessage>()
.Handle<HttpRequestException>()
.HandleResult(r => r.StatusCode == HttpStatusCode.InternalServerError);

var pipeline = new ResiliencePipelineBuilder<HttpResponseMessage>()
.AddFallback(new()
{
ShouldHandle = predicateBuilder,
FallbackAction = args =>
{
// Try to resolve the fallback response
HttpResponseMessage fallbackResponse = ResolveFallbackResponse(args.Outcome);

return Outcome.FromResultAsValueTask(fallbackResponse);
}
})
.AddRetry(new()
{
ShouldHandle = predicateBuilder,
MaxRetryAttempts = 3,
})
.Build();

// Demonstrative execution that always produces an invalid result
pipeline.Execute(() => new HttpResponseMessage(HttpStatusCode.InternalServerError));
```
<!-- endSnippet -->

Here's a breakdown of the behavior when the callback either returns an `HttpStatusCode.InternalServerError` response or throws an `HttpRequestException`:

- The fallback strategy starts its execution and immediately passes the execution to the retry strategy.
- The retry strategy runs the callback, makes 3 retry attempts, and yields an outcome that represents an error.
- The fallback strategy resumes execution, assesses the outcome generated by the callback, and, if necessary, supplies the fallback value.
- The fallback strategy completes its execution (the logging sketch after the note below makes this sequence observable).

> [!NOTE]
> The preceding example also demonstrates how to re-use `ResiliencePipelineBuilder<HttpResponseMessage>` across multiple strategies.
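
To make the execution order visible at run time, here is a sketch (hypothetical console logging, not one of the repository snippets) that wires up the `OnRetry` and `OnFallback` callbacks; the three retry messages print before the fallback message:

```cs
// Sketch only: hypothetical console logging to observe the execution order.
var loggingPredicate = new PredicateBuilder<HttpResponseMessage>()
    .Handle<HttpRequestException>()
    .HandleResult(r => r.StatusCode == HttpStatusCode.InternalServerError);

var loggingPipeline = new ResiliencePipelineBuilder<HttpResponseMessage>()
    .AddFallback(new FallbackStrategyOptions<HttpResponseMessage>
    {
        ShouldHandle = loggingPredicate,
        FallbackAction = args => Outcome.FromResultAsValueTask(new HttpResponseMessage(HttpStatusCode.OK)),
        OnFallback = args =>
        {
            Console.WriteLine("Fallback: supplying the default response");
            return default;
        }
    })
    .AddRetry(new RetryStrategyOptions<HttpResponseMessage>
    {
        ShouldHandle = loggingPredicate,
        MaxRetryAttempts = 3,
        OnRetry = args =>
        {
            // Printed three times (attempts 0, 1, 2) before the fallback message above.
            Console.WriteLine($"Retry: attempt {args.AttemptNumber}");
            return default;
        }
    })
    .Build();

loggingPipeline.Execute(() => new HttpResponseMessage(HttpStatusCode.InternalServerError));
```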

## Anti-patterns

Over the years, many developers have used Polly in various ways. Some of these recurring patterns may not be ideal. This section highlights the recommended practices and those to avoid.

75 changes: 39 additions & 36 deletions docs/strategies/retry.md
@@ -104,7 +104,45 @@ new ResiliencePipelineBuilder().AddRetry(new RetryStrategyOptions
| `OnRetry` | `null` | Action executed when retry occurs. |
| `MaxDelay` | `null` | Caps the calculated retry delay to a specified maximum duration. |

## Patterns and anti-patterns
## Patterns

### Limiting the maximum delay

In some cases, you might want to set a limit on the calculated delay. This is beneficial when multiple retries are anticipated, and you wish to prevent excessive wait times between these retries.

Consider the following example of a long-running background job:

<!-- snippet: retry-pattern-max-delay -->
```cs
ResiliencePipeline pipeline = new ResiliencePipelineBuilder()
.AddRetry(new()
{
Delay = TimeSpan.FromSeconds(2),
MaxRetryAttempts = int.MaxValue,
BackoffType = DelayBackoffType.Exponential,

// Initially, we aim for an exponential backoff, but after a certain number of retries, we set a maximum delay of 15 minutes.
MaxDelay = TimeSpan.FromMinutes(15),
UseJitter = true
})
.Build();

// Background processing
while (!cancellationToken.IsCancellationRequested)
{
await pipeline.ExecuteAsync(async token =>
{
// In the event of a prolonged service outage, we can afford to wait for a successful retry since this is a background task.
await SynchronizeDataAsync(token);
},
cancellationToken);

await Task.Delay(TimeSpan.FromMinutes(30)); // The sync runs every 30 minutes.
}
```
<!-- endSnippet -->
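
To get a feel for why the cap matters, here is a rough sketch (not a repository snippet; it ignores jitter and assumes the delay roughly doubles per attempt) of how the waits grow until `MaxDelay` takes over:

```cs
// Rough sketch (ignores jitter): how the exponential delay grows until MaxDelay caps it.
TimeSpan baseDelay = TimeSpan.FromSeconds(2);
TimeSpan maxDelay = TimeSpan.FromMinutes(15);

for (int attempt = 0; attempt < 12; attempt++)
{
    TimeSpan exponential = TimeSpan.FromTicks(baseDelay.Ticks * (1L << attempt));
    TimeSpan capped = exponential < maxDelay ? exponential : maxDelay;

    // Attempt 1: 2s, attempt 2: 4s, ... attempt 10 would be ~17 minutes and is capped to 15 minutes.
    Console.WriteLine($"Attempt {attempt + 1}: ~{capped}");
}
```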

## Anti-patterns

Over the years, many developers have used Polly in various ways. Some of these recurring patterns may not be ideal. This section highlights the recommended practices and those to avoid.

@@ -480,38 +518,3 @@ var retry = new ResiliencePipelineBuilder()
**Reasoning**:

As previously mentioned, always use the designated area to define retry conditions. Re-frame your original exit conditions to specify when a retry should be initiated.

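As a minimal sketch (hypothetical conditions, not taken from the repository snippets), the exit condition "stop on success or on `NotFound`" can be re-framed as a retry condition inside `ShouldHandle`:

```cs
// Sketch: retry on transport errors and on any failure status except NotFound,
// instead of looping until an external exit condition is met.
var retryOnFailures = new ResiliencePipelineBuilder<HttpResponseMessage>()
    .AddRetry(new RetryStrategyOptions<HttpResponseMessage>
    {
        ShouldHandle = new PredicateBuilder<HttpResponseMessage>()
            .Handle<HttpRequestException>()
            .HandleResult(r => !r.IsSuccessStatusCode && r.StatusCode != HttpStatusCode.NotFound),
        MaxRetryAttempts = 3
    })
    .Build();
```
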
### Limiting the maximum delay

In some cases, you might want to set a limit on the calculated delay. This is beneficial when multiple retries are anticipated, and you wish to prevent excessive wait times between these retries.

Consider the following example of a long-running background job:

<!-- snippet: retry-pattern-max-delay -->
```cs
ResiliencePipeline pipeline = new ResiliencePipelineBuilder()
.AddRetry(new()
{
Delay = TimeSpan.FromSeconds(2),
MaxRetryAttempts = int.MaxValue,

// Initially, we aim for an exponential backoff, but after a certain number of retries, we set a maximum delay of 15 minutes.
MaxDelay = TimeSpan.FromMinutes(15),
UseJitter = true
})
.Build();

// Background processing
while (!cancellationToken.IsCancellationRequested)
{
await pipeline.ExecuteAsync(async token =>
{
// In the event of a prolonged service outage, we can afford to wait for a successful retry since this is a background task.
await SynchronizeDataAsync(token);
},
cancellationToken);

await Task.Delay(TimeSpan.FromMinutes(30)); // The sync runs every 30 minutes.
}
```
<!-- endSnippet -->
36 changes: 36 additions & 0 deletions src/Snippets/Docs/Fallback.cs
@@ -232,4 +232,40 @@ private static ValueTask<HttpResponseMessage> ActionCore()
return await pipeline.ExecuteAsync(CallExternalSystem, CancellationToken.None);
#endregion
}

public static void FallbackAfterRetries()
{
#region fallback-after-retries

// Define a common predicate reused by both the fallback and retry strategies
var predicateBuilder = new PredicateBuilder<HttpResponseMessage>()
.Handle<HttpRequestException>()
.HandleResult(r => r.StatusCode == HttpStatusCode.InternalServerError);

var pipeline = new ResiliencePipelineBuilder<HttpResponseMessage>()
.AddFallback(new()
{
ShouldHandle = predicateBuilder,
FallbackAction = args =>
{
// Try to resolve the fallback response
HttpResponseMessage fallbackResponse = ResolveFallbackResponse(args.Outcome);

return Outcome.FromResultAsValueTask(fallbackResponse);
}
})
.AddRetry(new()
{
ShouldHandle = predicateBuilder,
MaxRetryAttempts = 3,
})
.Build();

// Demonstrative execution that always produces an invalid result
pipeline.Execute(() => new HttpResponseMessage(HttpStatusCode.InternalServerError));

#endregion
}

private static HttpResponseMessage ResolveFallbackResponse(Outcome<HttpResponseMessage> outcome) => new();
}
1 change: 1 addition & 0 deletions src/Snippets/Docs/Retry.cs
@@ -120,6 +120,7 @@ public static async Task MaxDelay()
{
Delay = TimeSpan.FromSeconds(2),
MaxRetryAttempts = int.MaxValue,
BackoffType = DelayBackoffType.Exponential,

// Initially, we aim for an exponential backoff, but after a certain number of retries, we set a maximum delay of 15 minutes.
MaxDelay = TimeSpan.FromMinutes(15),