The stateful pods keep requiring more disk space. #8662
Unanswered
sidkram
asked this question in
Help and support
Replies: 1 comment 1 reply
Converting to discussion, as there's no indication of a bug.
Describe the bug
I have Loki running on EKS with an S3 backend. The performance is splendid and the system is very resilient, but I frequently run into the stateful components using up all of their disk space. I have had to increase the disk size of each of these components at least once over a period of 3 weeks. Below is my distribution of statefulsets across different regions with their filesystem sizes. My questions:
1. Based on what factors can I estimate the optimal disk size to provision for these components?
2. At what interval does each of these components clean its existing data and pull newer chunks onto the filesystem?
The second question might look off-point, but I want to understand the behavior in enough depth to set the alerting up correctly.
(table of statefulsets per region and their filesystem sizes not preserved in this export)
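On the sizing question, one rough way to reason about it (this is not an official Loki formula; the ingest rate, buffer window, and safety factor below are assumptions you would measure in your own cluster) is that an ingester's local disk must hold whatever accumulates between chunk flushes, plus WAL and headroom:

```python
# Rough ingester disk-sizing sketch. NOT an official Loki formula;
# measure the per-ingester ingest rate from your own metrics and
# adjust the safety factor for WAL replay and traffic spikes.

def estimate_ingester_disk_gb(ingest_mb_per_s: float,
                              buffer_window_s: float,
                              safety_factor: float = 2.0) -> float:
    """Disk (GB) needed to buffer `buffer_window_s` seconds of data
    at `ingest_mb_per_s`, with `safety_factor` headroom for the WAL,
    compaction scratch space, and spikes."""
    return ingest_mb_per_s * buffer_window_s * safety_factor / 1024

# Example (assumed numbers): 5 MB/s per ingester, chunks held up to
# 2 hours before flushing, 2x headroom.
print(round(estimate_ingester_disk_gb(5, 2 * 3600), 1))
```

The point is that local disk scales with ingest rate times how long data sits on the node, which is why a fixed size keeps getting outgrown as traffic rises.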
I'm facing the same issue on Loki as well. Is there any configuration parameter that is supposed to be used to decide the optimal disk size? I started at 15 GB but have already scaled up to 75 GB over 7-8 iterations. I need a permanent solution for this.
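For a more permanent fix than repeatedly growing volumes, compactor-driven retention is the usual lever. A sketch (key names from the Loki 2.x configuration; the paths, intervals, and retention period are example values to adapt, not recommendations):

```yaml
# Sketch of compactor-driven retention for Loki 2.x. With retention
# enabled, the compactor deletes chunks older than retention_period
# instead of letting stored data grow without bound.
compactor:
  working_directory: /loki/compactor
  shared_store: s3
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h

limits_config:
  retention_period: 744h   # 31 days; choose what your use case needs
```

Note that retention mainly bounds object-store growth; local PVC usage on ingesters is driven mostly by the WAL and unflushed chunks, so retention alone may not cap the stateful pods' disks.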
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The stateful pods should compact frequently enough to avoid "No space left on device" errors and run seamlessly.
Environment
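Until the volumes are sized correctly, an alert on PersistentVolume fill level catches this before "No space left on device" hits. A sketch using the standard kubelet volume-stats metrics (the namespace label, threshold, and rule names are assumptions to adapt):

```yaml
# Prometheus alerting-rule sketch: fire when a Loki PVC is over 80% full.
# kubelet_volume_stats_* metrics are exposed by the kubelet and scraped
# via Prometheus Kubernetes service discovery.
groups:
  - name: loki-disk
    rules:
      - alert: LokiPVCAlmostFull
        expr: |
          (kubelet_volume_stats_used_bytes{namespace="loki"}
            / kubelet_volume_stats_capacity_bytes{namespace="loki"}) > 0.80
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "PVC {{ $labels.persistentvolumeclaim }} is over 80% full"
```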
Additional Context
mimir-distributed.txt