add exponential retry mechanism for rpc requests using utility function #593
base: main
Conversation
I left a handful of comments, but overall the backoff utility looks good.
There are a couple of other places that exponential backoff would make sense. Can you please integrate the utility in these places as well?
- p2p signature aggregation. If this proves to be non-trivial, we can defer to a later ticket.
- The existing call with retry utility and its callsites
// WithMaxRetriesLog runs the operation until it succeeds or max retries has been reached.
// It uses exponential back off.
// It optionally logs information if logger is set.
func WithMaxRetriesLog(
Rather than separate with/without log functions, I think we should instead have a single method that takes a logging.Logger and emits warning logs on retries. I can't think of any use cases where we'd want retry attempts to be logged as Warn for some operations, but not logged at all for others.
	msg string,
	fields ...zapcore.Field,
I think logging msg and fields on each attempt is not worth the added complexity of having to pass them in as arguments. Rather, WithMaxRetries should emit a generic warning log on each attempt failure, and we can leave it to the caller to construct a cohesive error log in the failure case.
	fields ...zapcore.Field,
) error {
	attempt := uint(1)
	expBackOff := backoff.WithMaxRetries(backoff.NewExponentialBackOff(), max)
Let's use backoff.WithMaxElapsedTime instead. At present, we choose the number of retries and the delay between each to resolve to a set elapsed time before we emit an error.
If I understand correctly, you want to use backoff.WithMaxElapsedTime instead of backoff.WithMaxRetries. Instead of giving a maxRetry to the utility function, I will give a maxElapsedTime derived from the existing values (numberOfRetries * delayBetweenEach).
	r.logger.Warn(
		"Failed to get aggregate signature from node endpoint",
		zap.Int("attempts", maxRelayerQueryAttempts),
	err = utils.WithMaxRetriesLog(
Just a heads-up: the Warp API integration is planned to be deprecated in the near future. It's still reasonable to integrate exponential backoff here for the time being.
Why this should be merged
Relates to #453
How this works
Functions needing retry are passed to the utility function managing the retry mechanism.
How this was tested
A unit test has been added for WithMaxRetries(), embedding WithMaxRetriesLog().
How is this documented