
add exponential retry mechanism for rpc requests using utility function #593

Open · wants to merge 2 commits into base: main
Conversation

@najeal (Contributor) commented Dec 12, 2024

Why this should be merged

Relates to #453

How this works

Functions that need retrying are passed to the utility function, which manages the exponential backoff and retry loop (sketched below).
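
For illustration, a minimal sketch of this pattern, assuming the cenkalti/backoff/v4 library and avalanchego's logging.Logger. The parameter order, the notify-based logging, and everything not in the diff excerpts quoted later in this review are assumptions, not the final API:

```go
package utils

import (
	"time"

	"github.com/ava-labs/avalanchego/utils/logging"
	"github.com/cenkalti/backoff/v4"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// WithMaxRetriesLog retries operation with exponential backoff until it
// succeeds or max attempts are exhausted. It optionally logs each failed
// attempt if a logger is provided. (Sketch only; see the review below for
// the signature actually under discussion.)
func WithMaxRetriesLog(
	operation backoff.Operation,
	max uint,
	logger logging.Logger,
	msg string,
	fields ...zapcore.Field,
) error {
	attempt := uint(1)
	expBackOff := backoff.WithMaxRetries(backoff.NewExponentialBackOff(), max)
	notify := func(err error, _ time.Duration) {
		if logger != nil {
			logger.Warn(msg, append(fields, zap.Uint("attempt", attempt), zap.Error(err))...)
		}
		attempt++
	}
	return backoff.RetryNotify(operation, expBackOff, notify)
}
```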

How this was tested

A unit test has been added for WithMaxRetries(), which embeds WithMaxRetriesLog().

How is this documented

@cam-schultz (Collaborator) left a comment:

I left a handful of comments, but overall the backoff utility looks good.

There are a couple of other places where exponential backoff would make sense. Can you please integrate the utility in those places as well?

// WithMaxRetriesLog runs the operation until it succeeds or max retries has been reached.
// It uses exponential back off.
// It optionally logs information if logger is set.
func WithMaxRetriesLog(
@cam-schultz (Collaborator) commented:

Rather than separate with/without log functions, I think we should instead have a single method that takes a logging.Logger and emits warning logs on retries. I can't think of any use cases where we'd want retry attempts to be logged as Warn for some operations, but not logged at all for others.
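
A minimal sketch of that single-function shape, using cenkalti/backoff/v4's RetryNotify hook for the per-attempt warning; names here are illustrative, not a proposed final API:

```go
package utils

import (
	"time"

	"github.com/ava-labs/avalanchego/utils/logging"
	"github.com/cenkalti/backoff/v4"
	"go.uber.org/zap"
)

// WithMaxRetries retries operation with exponential backoff and always
// emits a Warn log on each failed attempt; there is no silent variant.
func WithMaxRetries(operation backoff.Operation, max uint, logger logging.Logger) error {
	expBackOff := backoff.WithMaxRetries(backoff.NewExponentialBackOff(), max)
	notify := func(err error, wait time.Duration) {
		logger.Warn("operation failed, retrying",
			zap.Error(err),
			zap.Duration("retryAfter", wait),
		)
	}
	return backoff.RetryNotify(operation, expBackOff, notify)
}
```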

Comment on lines +19 to +20
msg string,
fields ...zapcore.Field,
@cam-schultz (Collaborator) commented:
I think logging msg and fields on each attempt is not worth the added complexity of having to pass them in as arguments. Rather, WithMaxRetries should emit a generic warning log on each attempt failure, and we can leave it to the caller to construct a cohesive error log in the failure case.
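
A caller-side sketch of that division of labor, reusing names from the diff context quoted further down (fetchSignature is a hypothetical stand-in for the wrapped RPC call):

```go
// The helper emits its own generic warning per failed attempt; the caller
// only constructs the cohesive, context-rich error log on final failure.
err := utils.WithMaxRetries(fetchSignature, maxRelayerQueryAttempts, r.logger)
if err != nil {
	r.logger.Error(
		"Failed to get aggregate signature from node endpoint",
		zap.Error(err),
	)
}
```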

fields ...zapcore.Field,
) error {
attempt := uint(1)
expBackOff := backoff.WithMaxRetries(backoff.NewExponentialBackOff(), max)
@cam-schultz (Collaborator) commented:

Let's use backoff.WithMaxElapsedTime instead. At present, we choose the number of retries and the delay between each to resolve to a set elapsed time before we emit an error.

@najeal (Contributor, Author) replied:

If I understand correctly, you want to use backoff.WithMaxElapsedTime instead of backoff.WithMaxRetries.

Instead of passing a maxRetries value to the utility function, I will pass a maxElapsedTime derived from the existing values (numberOfRetries * delayBetweenEach), as sketched below.
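
A sketch of that change, using the names from the comment above; backoff.WithMaxElapsedTime is available as a NewExponentialBackOff option in cenkalti/backoff v4.3.0+ (on older v4 releases, set the MaxElapsedTime field on the struct directly):

```go
// numberOfRetries and delayBetweenEach are the existing configuration
// values referenced above; operation and notify are as in the helper.
maxElapsedTime := time.Duration(numberOfRetries) * delayBetweenEach

expBackOff := backoff.NewExponentialBackOff(
	backoff.WithMaxElapsedTime(maxElapsedTime),
)
err := backoff.RetryNotify(operation, expBackOff, notify)
```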

r.logger.Warn(
"Failed to get aggregate signature from node endpoint",
zap.Int("attempts", maxRelayerQueryAttempts),
err = utils.WithMaxRetriesLog(
@cam-schultz (Collaborator) commented:

Just a heads-up, the Warp API integration is planned to be deprecated in the near future. It's still reasonable to integrate exponential backoff here for the time being.
