Add more debugging scenarios for not scaling as expected
References kedacore/keda#5263, adding more documentation to guide debugging and highlighting known issues.

Signed-off-by: OfficiallySomeGuy <[email protected]>
Commit f033d28 (1 parent: bd1589b)
Showing 1 changed file with 26 additions and 4 deletions.
+++
title = "Why is my `ScaledObject` not scaling as expected?"
weight = 1
+++

There are a number of reasons why your KEDA scaling may not be working as expected.

## Common issues

### KEDA scaler source is invalid

When KEDA encounters upstream errors while fetching scaler source information, it will keep the current instance count of the workload unless the `fallback` section is defined. This behavior might feel like autoscaling is not happening, but in reality it is caused by problems with the scaler source.
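If you would rather have the workload scale to a known replica count while the scaler source keeps failing, one option is to define the `fallback` section on the `ScaledObject`. A minimal sketch, assuming an existing `ScaledObject` named `my-scaled-object` (a placeholder for your own resource):

```bash
# Hypothetical example: add a fallback so that after 3 consecutive failed
# scaler checks KEDA scales the workload to 5 replicas instead of leaving
# it frozen at its current count (the ScaledObject name is a placeholder).
kubectl patch scaledobject my-scaled-object --type merge \
  -p '{"spec": {"fallback": {"failureThreshold": 3, "replicas": 5}}}'
```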

### DesiredReplicas is within 10% of the target value

If the replica mismatch is within 10% of the target value, you may be experiencing a known issue whereby Kubernetes `HorizontalPodAutoscalers` have a default tolerance of 10%. If your desired value is within 10% of the scaling metric, the HPA will not scale. More detail on this scenario can be found in [this issue against the KEDA project](https://github.com/kedacore/keda/issues/5263) and [this issue against the Kubernetes project](https://github.com/kubernetes/kubernetes/issues/116984).
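As a hedged way to check whether you are inside this tolerance band, you can compare the HPA's current and target metric values directly. This sketch assumes the trigger is exposed as an external metric (how most KEDA scalers surface values) and uses `MY-HPA` as a placeholder for the HPA KEDA created for your `ScaledObject`:

```bash
# Compare the current and target values of the first metric on the HPA.
# For example, with a target of 100 and a current average of 108 the
# ratio is 1.08, inside the default 0.9-1.1 tolerance, so no scale-up.
kubectl get hpa MY-HPA -o jsonpath='{.status.currentMetrics[0].external.current.averageValue}{"\n"}'
kubectl get hpa MY-HPA -o jsonpath='{.spec.metrics[0].external.target.averageValue}{"\n"}'
```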

## Debugging process

### Check KEDA pods for errors

The most effective place to start debugging is the logs from the KEDA pods, where you may see errors from both our Operator and Metrics server. You can [enable debug logging](https://github.com/kedacore/keda/blob/main/BUILD.md#setting-log-levels) to get more verbose output, which may be helpful in some scenarios.
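A minimal sketch for pulling those logs, assuming a default installation where KEDA runs in the `keda` namespace with the standard deployment names:

```bash
# Tail recent logs from the KEDA operator and the metrics API server
# (namespace and deployment names assume a default installation).
kubectl logs -n keda deployment/keda-operator --tail=100
kubectl logs -n keda deployment/keda-operator-metrics-apiserver --tail=100
```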

### Check the Kubernetes resources for errors

You can check the status of the `ScaledObject` (the `READY` and `ACTIVE` conditions) by running the following command:

```bash
$ kubectl get scaledobject MY-SCALED-OBJECT
```
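If the `READY` or `ACTIVE` value is not what you expect, describing the object (shown here with the same placeholder name) surfaces its status conditions and recent events, which usually include the reason a trigger is failing:

```bash
# Show the ScaledObject's status conditions and events in detail.
kubectl describe scaledobject MY-SCALED-OBJECT
```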

It is also possible to check the status conditions on the related HPA itself to determine whether the HPA is also healthy:

```bash
$ kubectl get horizontalpodautoscaler MY-HPA
```
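Describing the HPA (again with a placeholder name) additionally shows its `AbleToScale`, `ScalingActive`, and `ScalingLimited` conditions along with the current versus target metric values:

```bash
# Inspect the HPA's conditions and current vs. target metrics.
kubectl describe horizontalpodautoscaler MY-HPA
```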