Delete secondary VRG before primary #1689
base: main
Conversation
Force-pushed efa29e0 to a684fe8
Errors with the first commit - without using OperationInProgress

So far it works (4 successful builds in 4 tries), but we need to avoid the noisy errors when the reconcile fails. We have 18 errors - 15 errors for the secondary VRG:

2024-12-01T19:37:37.543Z ERROR controller/controller.go:316 Reconciler error {"controller": "drplacementcontrol", "controllerGroup": "ramendr.openshift.io", "controllerKind": "DRPlacementControl", "DRPlacementControl": {"name":"subscr-deploy-rbd-busybox","namespace":"subscr-deploy-rbd-busybox"}, "namespace": "subscr-deploy-rbd-busybox", "name": "subscr-deploy-rbd-busybox", "reconcileID": "3041f0da-e705-450c-bdd1-e159ffcecef7", "error": "secondary VRG manifestwork deletion in progress"}

And 3 errors for the primary VRG:

2024-12-01T19:38:32.151Z ERROR controller/controller.go:316 Reconciler error {"controller": "drplacementcontrol", "controllerGroup": "ramendr.openshift.io", "controllerKind": "DRPlacementControl", "DRPlacementControl": {"name":"subscr-deploy-rbd-busybox","namespace":"subscr-deploy-rbd-busybox"}, "namespace": "subscr-deploy-rbd-busybox", "name": "subscr-deploy-rbd-busybox", "reconcileID": "0f352732-07e2-40d5-bd3f-a9a97181d94a", "error": "primary VRG manifestwork deletion in progress"}

Example errors for a single workload during the e2e flow:
Force-pushed 82b60e2 to ffc3b3b
Force-pushed ffc3b3b to 4bfc77f
Example run
Example deleting drpc progress logs
Force-pushed 4bfc77f to 9487e4e
// DeleteInProgress error is returned during DRPC deletion when we need to wait and try later.
type deleteInProgress struct{ string }

func (e deleteInProgress) Error() string { return e.string }
This approach works and looks elegant, but why introduce the complexity of defining a new type? Why not simply use errors.New(), similar to these? One might argue it's for errors.Is() support, but is it necessary to check for this specific error type at line 189? Instead, consider returning a custom error directly. The reconciler will eventually retry, and the error will still be logged.
> This approach works and looks elegant, but why introduce the complexity of defining a new type? Why not simply use errors.New(), similar to these?
Using a sentinel error works when there is one special error, like io.EOF. But we have multiple reasons to requeue the request during delete:
- adopting a VRG
- waiting until the secondary VRG is deleted
- waiting until the primary VRG is deleted
- VRG count > 0 (maybe not possible now)
Using a type and matching any error of the same type, we can return multiple errors that get the same handling.
We already use this in
func RequestProcessingErrorCreate(s string) RequestProcessingError { return RequestProcessingError{s} }
I started with a local error in this package, but it can move to a shared package and be used in all controllers. A minimal sketch of the idea is shown below.
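For illustration, here is a small self-contained sketch of the type-based matching (not the PR's exact code; the Is method below is an assumption about how errors.Is() can match any error of the same type):

```go
package main

import (
	"errors"
	"fmt"
)

// deleteInProgress signals that deletion is still progressing and the
// request should be requeued.
type deleteInProgress struct{ msg string }

func (e deleteInProgress) Error() string { return e.msg }

// Is makes errors.Is() match any deleteInProgress value, regardless of
// the message, so all "not ready yet" reasons share one check.
func (deleteInProgress) Is(target error) bool {
	_, ok := target.(deleteInProgress)
	return ok
}

func main() {
	// Different reasons, same type, one check.
	for _, err := range []error{
		deleteInProgress{"secondary VRG deletion in progress"},
		deleteInProgress{"primary VRG deletion in progress"},
	} {
		fmt.Println(errors.Is(err, deleteInProgress{})) // true, true
	}
}
```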
> One might argue it's for errors.Is() support, but is it necessary to check for this specific error type at line 189?
Yes, see the comment below.
> Instead, consider returning a custom error directly. The reconciler will eventually retry, and the error will still be logged.
This is what the first commit is doing, using fmt.Errorf() - the result is 18 errors with a stacktrace for every disable DR operation. This is a huge amount of noise, and it is incorrect to log a normal condition as an error.
See the errors generated by the first commit - each one has a 10-line stacktrace that I did not copy:
#1689 (comment)
> This is a huge amount of noise and it is incorrect to log a normal condition as an error
Since deleteInProgress implements the error interface, it is inherently treated as an error object. This creates a bias in how it is perceived. While it may not represent an error condition for the reconcile server, within the context of Ramen it is treated as one. Therefore, the classification as an error is intentional.
Using an error is just a limitation of the current code, which has a single error return value for the entire finalizeDRPC code path. We can change this, but it is a big change that will be hard to backport, and I don't think returning multiple values like we do in other places would be better. Using a specific error type for the common case of "not ready yet" seems better.
> This approach works and looks elegant, but why introduce the complexity of defining a new type?

Using an existing type now.
return ctrl.Result{Requeue: true}, nil
}

// Unexpected error.
return ctrl.Result{}, err
I mean, just let this be returned and remove the block starting from line 189 to 195, inclusive.
The result will be an ERROR log with a stacktrace - we see 18 of these for every delete operation. Since this is an expected condition, it is wrong to treat it as an error.
> Since this is an expected condition, it is wrong to treat this as an error.
Same as my comment above
Since we added a VRG on the secondary cluster, we have a random failure when deleting the DRPC after relocate. When this happens, we find the PVC in terminating state on the secondary cluster, and the VR and VRG are never deleted.

This change attempts to avoid this issue by deleting the secondary VRG first, and deleting the primary VRG only after the secondary VRG was deleted.

During DRPC delete we expect to see these errors one or more times:

ERROR Secondary VRG manifestwork deletion in progress
very noisy stacktrace...
ERROR Primary VRG manifestwork deletion in progress
very noisy stacktrace...

Fixes: RamenDR#1659
Signed-off-by: Nir Soffer <[email protected]>
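A rough sketch of the ordering this commit describes; the interface and helper names here are hypothetical stand-ins, while the real change works on VRG manifestworks through the DRPC instance. Plain errors are used here, matching this first commit, which is what produces the noisy ERROR logs discussed above:

```go
package controllers

import "errors"

// drpcFinalizer is a hypothetical stand-in for the DRPC instance used
// during finalization.
type drpcFinalizer interface {
	secondaryVRGDeleted() (bool, error)
	deleteSecondaryVRG() error
	primaryVRGDeleted() (bool, error)
	deletePrimaryVRG() error
}

// ensureVRGsDeleted deletes the secondary VRG first, and deletes the
// primary VRG only after the secondary VRG is gone. While either deletion
// is still in progress, an error is returned so the reconcile is retried.
func ensureVRGsDeleted(d drpcFinalizer) error {
	deleted, err := d.secondaryVRGDeleted()
	if err != nil {
		return err
	}

	if !deleted {
		if err := d.deleteSecondaryVRG(); err != nil {
			return err
		}

		return errors.New("secondary VRG manifestwork deletion in progress")
	}

	deleted, err = d.primaryVRGDeleted()
	if err != nil {
		return err
	}

	if !deleted {
		if err := d.deletePrimaryVRG(); err != nil {
			return err
		}

		return errors.New("primary VRG manifestwork deletion in progress")
	}

	return nil
}
```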
Force-pushed 9487e4e to 4ffd7ee
Force-pushed 4ffd7ee to 299ec4f
This error is used to signal that an operation is in progress, and the caller should log the error at normal log level and try again later. This is a common concept for reconciling, and we can use the same error in other packages.

Changes:
- Rename the error to OperationInProgress
- Simplify to a string instead of embedding a string
- Update the tests to reflect the common usage with errors.Is()

Signed-off-by: Nir Soffer <[email protected]>
We want to use it more in ramen, to remove bogus errors from logs. To encourage use of this pattern and make it easier to use correctly, add a util.IsOperationInProgress() helper.

Improve the test to use the new IsOperationInProgress() and also test a wrapped OperationInProgress error.

Signed-off-by: Nir Soffer <[email protected]>
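A self-contained sketch of how the helper behaves with plain and wrapped errors; the OperationInProgress definition below is inferred from the diff and commit messages, so treat it as an approximation rather than the exact ramen code:

```go
package main

import (
	"errors"
	"fmt"
)

// OperationInProgress signals that an operation is in progress and the
// request should be retried later.
type OperationInProgress string

func (e OperationInProgress) Error() string { return string(e) }

// Is makes errors.Is() match any OperationInProgress error, whatever its
// message.
func (OperationInProgress) Is(target error) bool {
	_, ok := target.(OperationInProgress)
	return ok
}

// IsOperationInProgress returns true if err or an error wrapped by it is an
// OperationInProgress error.
func IsOperationInProgress(err error) bool {
	return errors.Is(err, OperationInProgress(""))
}

func main() {
	err := OperationInProgress("secondary VRG deletion in progress")
	wrapped := fmt.Errorf("failed to finalize DRPC: %w", err)

	fmt.Println(IsOperationInProgress(err))                       // true
	fmt.Println(IsOperationInProgress(wrapped))                   // true
	fmt.Println(IsOperationInProgress(errors.New("other error"))) // false
}
```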
When deleting the DRPC we may need to adopt the VRG, delete the secondary VRG, wait until the secondary VRG is deleted, delete the primary VRG, and wait until the primary VRG is deleted. This takes 60-90 seconds and many reconciles (18 seen in the e2e test), and creates a huge amount of noise in the log.

Suppress the noise using the util.OperationInProgress error. When the reconcile is successful but deletion is still in progress, we return a util.OperationInProgress error describing the current progression. The top level error handler logs an INFO message and requeues the request.

With this change we will see multiple logs for the secondary VRG:

INFO Deleting DRPC in progress {"reason", "secondary VRG deletion in progress"}
...

And finally more logs for the primary VRG:

INFO Deleting DRPC in progress {"reason", "primary VRG deletion in progress"}
...

Notes:
- We logged errors during finalizeDRPC twice; once as an INFO log, and once as an ERROR with a stacktrace when we return an error from the reconcile. Remove the duplicate INFO log.
- The linter is not happy about the new nested if. We can avoid this by extracting a helper to handle finalize errors, but I want to keep the change minimal for easy backport. We can improve this later upstream.

Signed-off-by: Nir Soffer <[email protected]>
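A minimal sketch of the top-level handling this commit describes (the function name and the predicate parameter are assumptions, not copied from the ramen source): an in-progress condition is treated as normal, logged at INFO level, and requeued, while anything else is returned so controller-runtime logs it as an error with a stacktrace.

```go
package controllers

import (
	"github.com/go-logr/logr"
	ctrl "sigs.k8s.io/controller-runtime"
)

// finalizeResult converts the error returned from the finalize code path
// into a reconcile result. inProgress is a predicate like
// util.IsOperationInProgress: expected "in progress" conditions get an
// INFO log and a requeue, unexpected errors are returned as real errors.
func finalizeResult(err error, log logr.Logger, inProgress func(error) bool) (ctrl.Result, error) {
	if err == nil {
		return ctrl.Result{}, nil
	}

	if inProgress(err) {
		log.Info("Deleting DRPC in progress", "reason", err.Error())

		return ctrl.Result{Requeue: true}, nil
	}

	// Unexpected error: controller-runtime will log it with a stacktrace.
	return ctrl.Result{}, err
}
```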
Force-pushed 299ec4f to e66f588
I like the change.
// IsOperationInProgress returns true if err or error wrapped by it is an OperationInProgress error.
func IsOperationInProgress(err error) bool {
	return errors.Is(err, OperationInProgress(""))
}
One question though: is this call going to propagate back to line 15? I doubt it; errors.Is(...) is a free function.
Yes, see the errors.Is() implementation:
https://cs.opensource.google/go/go/+/refs/tags/go1.23.3:src/errors/wrap.go;l=58
I think it does here:
if x, ok := err.(interface{ Is(error) bool }); ok && x.Is(target) {
	return true
}
All good
Since we added a VRG on the secondary cluster, we have a random failure when deleting the DRPC after relocate. When this happens, we find the PVC in terminating state on the secondary cluster, and the VR and VRG are never deleted.
This change attempts to avoid this issue by deleting the secondary VRG first, and deleting the primary VRG only after the secondary VRG was deleted.
During DRPC delete we expect to see these logs one or more times:
Fixes #1659