
feat: add retry mechanism #1035

Merged
merged 120 commits into master from Salka1988/retry_mech on Sep 13, 2023

Conversation

Member

@Salka1988 Salka1988 commented Jul 9, 2023

Closes #1020.

This PR implements a retry mechanism that allows an asynchronous action to be retried a certain number of times, with a configurable interval between attempts.

RetryConfig: A struct that holds the retry configuration. It contains the following fields:

  • max_attempts: The maximum number of times the action should be retried.
  • interval: Determines the backoff strategy and timing used between retry attempts, i.e. how long to wait before retrying the action after a failure.

retry<Fut, T, ShouldRetry>: This is the main function responsible for implementing the retry logic.

It takes several parameters (a simplified sketch follows the list):

  • action: A closure that performs the action you want to retry. It returns a Future of the result.
  • retry_config: A reference to a RetryConfig struct that specifies the retry behavior.
  • should_retry: A closure that takes a reference to the result of the action and returns a boolean indicating whether the action should be retried based on the result.
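
For illustration, here is a simplified sketch of how these pieces fit together. It is not the exact implementation: `interval` is reduced to a fixed `Duration` here (the real config models a backoff strategy), and `max_attempts` is treated as the total number of attempts.

    use std::{future::Future, time::Duration};

    use tokio::time::sleep;

    // Rough mirror of the described config. Field types are assumptions:
    // the real `interval` models a backoff strategy, not a fixed Duration.
    #[derive(Debug, Clone, Copy)]
    pub struct RetryConfig {
        pub max_attempts: u32,
        pub interval: Duration,
    }

    // Runs `action`, re-running it while `should_retry` says so,
    // up to `max_attempts` total attempts.
    pub async fn retry<Fut, T, ShouldRetry>(
        mut action: impl FnMut() -> Fut,
        retry_config: &RetryConfig,
        should_retry: ShouldRetry,
    ) -> T
    where
        Fut: Future<Output = T>,
        ShouldRetry: Fn(&T) -> bool,
    {
        let mut result = action().await;

        for _ in 1..retry_config.max_attempts {
            if !should_retry(&result) {
                break;
            }
            sleep(retry_config.interval).await;
            result = action().await;
        }

        result
    }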

Additional Work:
I've separated sending a transaction from getting back its value:

  let response = contract_methods.get(1, 2).submit().await?;
  let value = response.response().await?;

Added RetryableClient, a wrapper around the FuelClient that provides a convenient way to interact with a remote service. It also enables automatic retries in scenarios where network issues may occur.

#[derive(Debug, Clone)]
pub(crate) struct RetryableClient {
    client: FuelClient,
    url: String,
    retry_config: RetryConfig,
}
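
Calls made through the wrapper can then simply delegate to the retry helper. Roughly like this, building on the sketch above (the method name `retry_call` and the error handling are illustrative, not the exact API):

    use std::future::Future;

    impl RetryableClient {
        // Illustrative helper: re-runs a client call while it keeps returning Err,
        // honoring the configured retry behavior. `E` stands in for the client's
        // error type.
        pub(crate) async fn retry_call<T, E, Fut>(
            &self,
            action: impl FnMut() -> Fut,
        ) -> Result<T, E>
        where
            Fut: Future<Output = Result<T, E>>,
        {
            retry(action, &self.retry_config, |result| result.is_err()).await
        }
    }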

Checklist

  • I have linked to any relevant issues.
  • I have updated the documentation.
  • I have added necessary labels.
  • I have done my best to ensure that my PR adheres to the Fuel Labs Code Review Standards.
  • I have requested a review from the relevant team or maintainers.

@Salka1988 Salka1988 added the enhancement New feature or request label Jul 9, 2023
@Salka1988 Salka1988 self-assigned this Jul 9, 2023
@Salka1988 Salka1988 linked an issue Jul 9, 2023 that may be closed by this pull request
@Salka1988 Salka1988 marked this pull request as ready for review July 9, 2023 22:05
Contributor

@Br1ght0ne Br1ght0ne left a comment

Great UX and overall approach! I have a small consideration regarding the logic itself.

When designing a retry mechanism for anything involving a network call, I like implementing a delay between attempts: it lets the network self-regulate its throughput when congested and often means fewer attempts overall.

I usually go for exponential backoff: delays start off small and ramp up steadily. There are crates like backon and backoff with full support for sync/async functions (a rough usage sketch is at the end of this comment).

Extra considerations:

  • what are acceptable minimum/maximum delays for our use case?

Alternatives:

  • hand-roll our own delayed retrying (only if adding a dependency is a problem)
  • skip having delays altogether (I hereby request your comments on it)
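
For reference, backon usage looks roughly like this (from memory, so the exact builder API may differ between versions; the fetch function is a stand-in for a node request):

    use std::time::Duration;

    use backon::{ExponentialBuilder, Retryable};

    // Hypothetical network call standing in for a node request.
    async fn fetch_block_height() -> Result<u32, std::io::Error> {
        // ... issue the request here ...
        Ok(42)
    }

    #[tokio::main]
    async fn main() -> Result<(), std::io::Error> {
        // Small initial delay that ramps up, capped in both delay and attempt count.
        let backoff = ExponentialBuilder::default()
            .with_min_delay(Duration::from_millis(100))
            .with_max_delay(Duration::from_secs(5))
            .with_max_times(5);

        let height = fetch_block_height.retry(&backoff).await?;
        println!("block height: {height}");
        Ok(())
    }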

@Salka1988
Member Author

Salka1988 commented Jul 11, 2023

> Great UX and overall approach! I have a small consideration regarding the logic itself.
>
> When designing a retry mechanism for anything involving a network call, I like implementing a delay between attempts: it lets the network self-regulate its throughput when congested and often means fewer attempts overall.
>
> I usually go for exponential backoff: delays start off small and ramp up steadily. There are crates like backon and backoff with full support for sync/async functions.
>
> Extra considerations:
>
>   • what are acceptable minimum/maximum delays for our use case?
>
> Alternatives:
>
>   • hand-roll our own delayed retrying (only if adding a dependency is a problem)
>   • skip having delays altogether (I hereby request your comments on it)

I will use one of the crates you recommended, or something similar. If that's a problem, I'll implement exponential backoff myself. Or maybe something simple like:

    use std::time::Duration;

    use tokio::time::sleep;

    // Naive exponential backoff: wait, then double the delay before the next attempt.
    let mut delay = Duration::from_secs(1);

    for _ in 0..retries {
        // ... run the action here and break on success ...
        sleep(delay).await;
        delay *= 2;
    }

will be enough.

@Salka1988 Salka1988 requested a review from Br1ght0ne July 11, 2023 11:46
Contributor

@segfault-magnet segfault-magnet left a comment

Not all errors benefit from retrying. Usually, you want to wait out a transient error (network or some other kind) rather than offer a fully-fledged retry mechanism to the user, especially when retrying again so quickly.

For example, if the tx is out of gas, it will be reverted and retrying will most probably change nothing (and, more importantly, waste gas).

@Salka1988
Member Author

> Not all errors benefit from retrying. Usually, you want to wait out a transient error (network or some other kind) rather than offer a fully-fledged retry mechanism to the user, especially when retrying again so quickly.
>
> For example, if the tx is out of gas, it will be reverted and retrying will most probably change nothing (and, more importantly, waste gas).

You are right, I understand. But will we actually waste gas if the call fails?

@segfault-magnet
Contributor

>> Not all errors benefit from retrying. Usually, you want to wait out a transient error (network or some other kind) rather than offer a fully-fledged retry mechanism to the user, especially when retrying again so quickly.
>>
>> For example, if the tx is out of gas, it will be reverted and retrying will most probably change nothing (and, more importantly, waste gas).
>
> You are right, I understand. But will we actually waste gas if the call fails?

Yep. You lose the gas used to execute the contract up until the point of reverting. The rest will be refunded if you provided an adequate change output.


@Br1ght0ne Br1ght0ne dismissed their stale review July 11, 2023 20:31

Taking a different approach

@Br1ght0ne
Contributor

Thanks for your consideration @Salka1988! I agree with @segfault-magnet - saving on gas is important, and a proper max_attempts should be set to avoid excessive retrying. I dismissed my blocking review, and will approve whatever logic we agree on (if the code is correct 😉)

@Salka1988
Member Author

>>> Not all errors benefit from retrying. Usually, you want to wait out a transient error (network or some other kind) rather than offer a fully-fledged retry mechanism to the user, especially when retrying again so quickly.
>>>
>>> For example, if the tx is out of gas, it will be reverted and retrying will most probably change nothing (and, more importantly, waste gas).
>>
>> You are right, I understand. But will we actually waste gas if the call fails?
>
> Yep. You lose the gas used to execute the contract up until the point of reverting. The rest will be refunded if you provided an adequate change output.

Yeah! I forgot about simulate()
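
Something along these lines should catch a revert before any gas is spent (sketch only, reusing contract_methods from the example in the description):

    // Dry-run the call first: a revert surfaces here as an error without a
    // transaction being submitted, so no gas is spent.
    contract_methods.get(1, 2).simulate().await?;

    // Submit for real only once the simulation succeeds.
    let response = contract_methods.get(1, 2).call().await?;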

@Salka1988 Salka1988 added the breaking Introduces or requires breaking changes label Sep 11, 2023
iqdecay previously approved these changes Sep 12, 2023
Contributor

@segfault-magnet segfault-magnet left a comment

Reviewed and gave feedback live with @Salka1988.

@Salka1988
Member Author

> Reviewed and gave feedback live with @Salka1988.

@segfault-magnet's creativity and expertise truly elevated the outcome. Thanks!

# Conflicts:
#	Cargo.toml
#	packages/fuels-code-gen/src/program_bindings/abigen/bindings/contract.rs
#	packages/fuels-code-gen/src/program_bindings/abigen/bindings/script.rs
#	packages/fuels-core/src/codec/logs.rs
#	packages/fuels-programs/src/contract.rs
#	packages/fuels-programs/src/script_calls.rs
#	packages/fuels/tests/scripts.rs
Contributor

@hal3e hal3e left a comment

Awesome work!

@Salka1988 Salka1988 merged commit 337d0ea into master Sep 13, 2023
@Salka1988 Salka1988 deleted the Salka1988/retry_mech branch September 13, 2023 10:32