[controller] Update sds-node-configuration api version (#83)
Signed-off-by: Viktor Kramarenko <[email protected]>
ViktorKram authored Sep 25, 2024
1 parent 33d927d commit dc89220
Showing 26 changed files with 255 additions and 205 deletions.
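Only part of the diff is reproduced below. The full change set can be inspected locally; a sketch, assuming a clone of the repository that contains this commit:

```shell
git show dc89220 --stat    # summary of all 26 changed files
git show dc89220 -- docs/  # full diff of the documentation changes
```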
18 changes: 9 additions & 9 deletions docs/FAQ.md
@@ -120,22 +120,22 @@ You can verify this using the command:
kubectl get node %node-name% --show-labels
```

-If the labels from `nodeSelector` are not present on the node, ensure that this node does not own any `LvmVolumeGroup` resources used by `LocalStorageClass` resources. More details about this check can be found [here](#how-to-check-if-there-are-dependent-resources-lvmvolumegroup-on-the-node).
+If the labels from `nodeSelector` are not present on the node, ensure that this node does not own any `LVMVolumeGroup` resources used by `LocalStorageClass` resources. More details about this check can be found [here](#how-to-check-if-there-are-dependent-resources-lvmvolumegroup-on-the-node).


-> Please note that on the `LvmVolumeGroup` and `LocalStorageClass` resources, which prevent the node from being taken out of the module's control, the label `storage.deckhouse.io/sds-local-volume-candidate-for-eviction` will be displayed.
+> Please note that on the `LVMVolumeGroup` and `LocalStorageClass` resources, which prevent the node from being taken out of the module's control, the label `storage.deckhouse.io/sds-local-volume-candidate-for-eviction` will be displayed.
On the node itself, the label `storage.deckhouse.io/sds-local-volume-need-manual-eviction` will be present.
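For a quick lookup of the blocking resources, those labels can be filtered directly. A minimal sketch, assuming the `lvg` and `lsc` short names used elsewhere in this FAQ and the `%node-name%` placeholder from above:

```shell
# Resources that prevent the node from being taken out of the module's control
kubectl get lvg,lsc --show-labels | grep storage.deckhouse.io/sds-local-volume-candidate-for-eviction

# Check whether the node is flagged as needing manual eviction
kubectl get node %node-name% --show-labels | grep storage.deckhouse.io/sds-local-volume-need-manual-eviction
```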


-## How to check if there are dependent resources `LvmVolumeGroup` on the node?
+## How to check if there are dependent resources `LVMVolumeGroup` on the node?
To check for such resources, follow these steps:
1. Display the existing `LocalStorageClass` resources

```shell
kubectl get lsc
```

-2. Check each of them for the list of used `LvmVolumeGroup` resources.
+2. Check each of them for the list of used `LVMVolumeGroup` resources.

> If you want to list all `LocalStorageClass` resources at once, run the command:
>
@@ -170,15 +170,15 @@ items:
kind: List
```

-> Please pay attention to the `spec.lvm.lvmVolumeGroups` field - it specifies the used `LvmVolumeGroup` resources.
+> Please pay attention to the `spec.lvm.lvmVolumeGroups` field - it specifies the used `LVMVolumeGroup` resources.

-3. Display the list of existing `LvmVolumeGroup` resources.
+3. Display the list of existing `LVMVolumeGroup` resources.

```shell
kubectl get lvg
```

-An approximate representation of `LvmVolumeGroup` could be:
+An approximate representation of `LVMVolumeGroup` could be:

```text
NAME HEALTH NODE SIZE ALLOCATED SIZE VG AGE
@@ -190,12 +190,12 @@ lvg-on-worker-4 Operational node-worker-4 307196Mi 0 test
lvg-on-worker-5 Operational node-worker-5 204796Mi 0 test-vg 15d
```

-4. Ensure that the node you intend to remove from the module's control does not have any `LvmVolumeGroup` resources used in `LocalStorageClass` resources.
+4. Ensure that the node you intend to remove from the module's control does not have any `LVMVolumeGroup` resources used in `LocalStorageClass` resources.

> To avoid unintentionally losing control over volumes already created using the module, the user needs to manually delete dependent resources by performing necessary operations on the volume.
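To make this check easier, the two lists can be cross-referenced with a short script. This is only a sketch: it assumes `bash` and `jq` are available, that each entry under `spec.lvm.lvmVolumeGroups` carries a `name` field, and that the node owning an `LVMVolumeGroup` is exposed under `spec.local.nodeName` (as in the examples for the new API version); adjust the field paths to whatever your cluster actually reports.

```shell
#!/usr/bin/env bash
# Sketch: print LVMVolumeGroup resources that reside on a given node AND are still
# referenced by a LocalStorageClass. Any output means dependent resources remain.
NODE="%node-name%"  # substitute the node you want to remove from the module's control

# LVMVolumeGroup names referenced by LocalStorageClass resources (field path assumed)
used=$(kubectl get lsc -o json | jq -r '.items[].spec.lvm.lvmVolumeGroups[].name' | sort -u)

# LVMVolumeGroup resources located on the target node (field path assumed)
on_node=$(kubectl get lvg -o json | jq -r --arg n "$NODE" \
  '.items[] | select(.spec.local.nodeName == $n) | .metadata.name' | sort -u)

# The intersection lists the dependent resources that must be handled first
comm -12 <(printf '%s\n' "$used") <(printf '%s\n' "$on_node")
```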

## I removed the labels from the node, but the `sds-local-volume-csi-node` pod is still there. Why did this happen?
-Most likely, there are `LvmVolumeGroup` resources present on the node, which are used in one of the `LocalStorageClass` resources.
+Most likely, there are `LVMVolumeGroup` resources present on the node, which are used in one of the `LocalStorageClass` resources.

To avoid unintentionally losing control over volumes already created using the module, the user needs to manually delete dependent resources by performing necessary operations on the volume.
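Such operations on the volume usually start with locating the volumes provisioned through the module. A rough sketch, assuming `jq` is available and that `local-sc` stands in for a StorageClass created from one of your `LocalStorageClass` resources:

```shell
# PersistentVolumes provisioned with the assumed StorageClass name
kubectl get pv -o json | jq -r '.items[] | select(.spec.storageClassName == "local-sc") | .metadata.name'

# Bound PVCs across all namespaces; migrate or delete them before retrying the eviction
kubectl get pvc --all-namespaces | grep local-sc
```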

18 changes: 9 additions & 9 deletions docs/FAQ.ru.md
@@ -117,22 +117,22 @@ kubectl -n d8-sds-local-volume get po -owide
kubectl get node %node-name% --show-labels
```

-If the labels from `nodeSelector` are not present on the node, make sure that this node does not own any `LvmVolumeGroup` resources used by `LocalStorageClass` resources. More details about this check can be found [here](#как-проверить-имеются-ли-зависимые-ресурсы-lvmvolumegroup-на-узле).
+If the labels from `nodeSelector` are not present on the node, make sure that this node does not own any `LVMVolumeGroup` resources used by `LocalStorageClass` resources. More details about this check can be found [here](#как-проверить-имеются-ли-зависимые-ресурсы-lvmvolumegroup-на-узле).


-> Please note that the `LvmVolumeGroup` and `LocalStorageClass` resources that prevent the node from being taken out of the module's control will carry the `storage.deckhouse.io/sds-local-volume-candidate-for-eviction` label.
+> Please note that the `LVMVolumeGroup` and `LocalStorageClass` resources that prevent the node from being taken out of the module's control will carry the `storage.deckhouse.io/sds-local-volume-candidate-for-eviction` label.
>
> The node itself will carry the `storage.deckhouse.io/sds-local-volume-need-manual-eviction` label.

-## How to check whether there are dependent `LvmVolumeGroup` resources on the node?
+## How to check whether there are dependent `LVMVolumeGroup` resources on the node?
To check for such resources, follow these steps:
1. Display the existing `LocalStorageClass` resources

```shell
kubectl get lsc
```

-2. Check each of them for the list of used `LvmVolumeGroup` resources
+2. Check each of them for the list of used `LVMVolumeGroup` resources

> You can display the contents of all `LocalStorageClass` resources at once by running the command:
>
@@ -167,15 +167,15 @@ items:
kind: List
```

-> Pay attention to the `spec.lvm.lvmVolumeGroups` field - it is where the used `LvmVolumeGroup` resources are specified.
+> Pay attention to the `spec.lvm.lvmVolumeGroups` field - it is where the used `LVMVolumeGroup` resources are specified.

-3. Display the list of existing `LvmVolumeGroup` resources
+3. Display the list of existing `LVMVolumeGroup` resources

```shell
kubectl get lvg
```

-An approximate output of the `LvmVolumeGroup` resources:
+An approximate output of the `LVMVolumeGroup` resources:

```text
NAME HEALTH NODE SIZE ALLOCATED SIZE VG AGE
@@ -187,12 +187,12 @@ lvg-on-worker-4 Operational node-worker-4 307196Mi 0 test
lvg-on-worker-5 Operational node-worker-5 204796Mi 0 test-vg 15d
```

-4. Make sure that the node you intend to remove from the module's control does not have any `LvmVolumeGroup` resource used in `LocalStorageClass` resources.
+4. Make sure that the node you intend to remove from the module's control does not have any `LVMVolumeGroup` resource used in `LocalStorageClass` resources.

> To avoid unexpectedly losing control over volumes already created with the module, the user needs to manually delete the dependent resources by performing the necessary operations on the volume.

## I removed the labels from the node, but the `sds-local-volume-csi-node` pod is still there. Why did this happen?
-Most likely, there are `LvmVolumeGroup` resources on the node that are used in one of the `LocalStorageClass` resources.
+Most likely, there are `LVMVolumeGroup` resources on the node that are used in one of the `LocalStorageClass` resources.

To avoid unexpectedly losing control over volumes already created with the module, the user needs to manually delete the dependent resources by performing the necessary operations on the volume.

62 changes: 40 additions & 22 deletions docs/README.md
@@ -5,7 +5,7 @@ moduleStatus: preview
---

This module manages local block storage based on `LVM`. The module allows you to create a `StorageClass` in `Kubernetes` by creating [Kubernetes custom resources](./cr.html) `LocalStorageClass` (see example below).
-To create a `Storage Class`, you will need the `LvmVolumeGroup` configured on the cluster nodes. The `LVM` configuration is done by the [sds-node-configurator](../../sds-node-configurator/stable/) module.
+To create a `Storage Class`, you will need the `LVMVolumeGroup` configured on the cluster nodes. The `LVM` configuration is done by the [sds-node-configurator](../../sds-node-configurator/stable/) module.
> **Caution!** Before enabling the `sds-local-volume` module, you must enable the `sds-node-configurator` module.
>
After you enable the `sds-local-volume` module in the Deckhouse Kubernetes Platform configuration, you have to create StorageClasses.
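Enabling the modules is typically done through `ModuleConfig` resources; the following is only a sketch under that assumption (see the module documentation for the authoritative procedure). Per the caution above, `sds-node-configurator` is enabled first:

```shell
kubectl apply -f - <<EOF
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: sds-node-configurator
spec:
  enabled: true
---
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: sds-local-volume
spec:
  enabled: true
EOF
```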
@@ -82,9 +82,9 @@ By default, the pods will be scheduled on all nodes in the cluster. You can veri
### Configuring storage on nodes

-You need to create `LVM` volume groups on the nodes using `LvmVolumeGroup` custom resources. As part of this quickstart guide, we will create a regular `Thick` storage.
+You need to create `LVM` volume groups on the nodes using `LVMVolumeGroup` custom resources. As part of this quickstart guide, we will create a regular `Thick` storage.

-> Please ensure that the `sds-local-volume-csi-node` pod is running on the node before creating the `LvmVolumeGroup`. You can do this using the command:
+> Please ensure that the `sds-local-volume-csi-node` pod is running on the node before creating the `LVMVolumeGroup`. You can do this using the command:
> ```shell
> kubectl -n d8-sds-local-volume get pod -owide
@@ -106,74 +106,92 @@ dev-53d904f18b912187ac82de29af06a34d9ae23199 worker-2 false 976762584
dev-6c5abbd549100834c6b1668c8f89fb97872ee2b1 worker-2 false 894006140416 /dev/nvme0n1p6
```
-- Create an [LvmVolumeGroup](../../sds-node-configurator/stable/cr.html#lvmvolumegroup) resource for `worker-0`:
+- Create an [LVMVolumeGroup](../../sds-node-configurator/stable/cr.html#lvmvolumegroup) resource for `worker-0`:

```yaml
kubectl apply -f - <<EOF
apiVersion: storage.deckhouse.io/v1alpha1
-kind: LvmVolumeGroup
+kind: LVMVolumeGroup
metadata:
-  name: "vg-1-on-worker-0" # The name can be any fully qualified resource name in Kubernetes. This LvmVolumeGroup resource name will be used to create LocalStorageClass in the future
+  name: "vg-1-on-worker-0" # The name can be any fully qualified resource name in Kubernetes. This LVMVolumeGroup resource name will be used to create LocalStorageClass in the future
spec:
  type: Local
-  blockDeviceNames: # specify the names of the BlockDevice resources that are located on the target node and whose CONSUMABLE is set to true. Note that the node name is not specified anywhere since it is derived from BlockDevice resources.
-    - dev-ef4fb06b63d2c05fb6ee83008b55e486aa1161aa
-    - dev-0cfc0d07f353598e329d34f3821bed992c1ffbcd
+  local:
+    nodeName: "worker-0"
+  blockDeviceSelector:
+    matchExpressions:
+      - key: kubernetes.io/metadata.name
+        operator: In
+        values:
+          - dev-ef4fb06b63d2c05fb6ee83008b55e486aa1161aa
+          - dev-0cfc0d07f353598e329d34f3821bed992c1ffbcd
  actualVGNameOnTheNode: "vg-1" # the name of the LVM VG to be created from the above block devices on the node
EOF
```

-- Wait for the created `LvmVolumeGroup` resource to become `Operational`:
+- Wait for the created `LVMVolumeGroup` resource to become `Operational`:

```shell
kubectl get lvg vg-1-on-worker-0 -w
```

- The resource becoming `Operational` means that an LVM VG named `vg-1` made up of the `/dev/nvme1n1` and `/dev/nvme0n1p6` block devices has been created on the `worker-0` node.

-- Next, create an [LvmVolumeGroup](../../sds-node-configurator/stable/cr.html#lvmvolumegroup) resource for `worker-1`:
+- Next, create an [LVMVolumeGroup](../../sds-node-configurator/stable/cr.html#lvmvolumegroup) resource for `worker-1`:

```yaml
kubectl apply -f - <<EOF
apiVersion: storage.deckhouse.io/v1alpha1
-kind: LvmVolumeGroup
+kind: LVMVolumeGroup
metadata:
  name: "vg-1-on-worker-1"
spec:
  type: Local
-  blockDeviceNames:
-    - dev-7e4df1ddf2a1b05a79f9481cdf56d29891a9f9d0
-    - dev-b103062f879a2349a9c5f054e0366594568de68d
+  local:
+    nodeName: "worker-1"
+  blockDeviceSelector:
+    matchExpressions:
+      - key: kubernetes.io/metadata.name
+        operator: In
+        values:
+          - dev-7e4df1ddf2a1b05a79f9481cdf56d29891a9f9d0
+          - dev-b103062f879a2349a9c5f054e0366594568de68d
  actualVGNameOnTheNode: "vg-1"
EOF
```

-- Wait for the created `LvmVolumeGroup` resource to become `Operational`:
+- Wait for the created `LVMVolumeGroup` resource to become `Operational`:

```shell
kubectl get lvg vg-1-on-worker-1 -w
```

- The resource becoming `Operational` means that an LVM VG named `vg-1` made up of the `/dev/nvme1n1` and `/dev/nvme0n1p6` block devices has been created on the `worker-1` node.

-- Create an [LvmVolumeGroup](../../sds-node-configurator/stable/cr.html#lvmvolumegroup) resource for `worker-2`:
+- Create an [LVMVolumeGroup](../../sds-node-configurator/stable/cr.html#lvmvolumegroup) resource for `worker-2`:

```yaml
kubectl apply -f - <<EOF
apiVersion: storage.deckhouse.io/v1alpha1
-kind: LvmVolumeGroup
+kind: LVMVolumeGroup
metadata:
  name: "vg-1-on-worker-2"
spec:
  type: Local
-  blockDeviceNames:
-    - dev-53d904f18b912187ac82de29af06a34d9ae23199
-    - dev-6c5abbd549100834c6b1668c8f89fb97872ee2b1
+  local:
+    nodeName: "worker-2"
+  blockDeviceSelector:
+    matchExpressions:
+      - key: kubernetes.io/metadata.name
+        operator: In
+        values:
+          - dev-53d904f18b912187ac82de29af06a34d9ae23199
+          - dev-6c5abbd549100834c6b1668c8f89fb97872ee2b1
  actualVGNameOnTheNode: "vg-1"
EOF
```

-- Wait for the created `LvmVolumeGroup` resource to become `Operational`:
+- Wait for the created `LVMVolumeGroup` resource to become `Operational`:

```shell
kubectl get lvg vg-1-on-worker-2 -w