In short, it is a file in which sequences of zero bytes are replaced by metadata describing those sequences. This saves disk space and avoids the time cost of actually writing the zeros to disk.
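A quick way to observe this in practice (a minimal sketch; the file name is arbitrary):
truncate -s 1G sparse_file   # creates a 1 GiB file without allocating data blocks
ls -lh sparse_file           # apparent size: 1.0G
du -h sparse_file            # actual disk usage: 0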
2) Can files that are hard links to the same object have different permissions and owner? Why?
In my view, no: a hard link is just another name for the same inode, and both the permissions and the owner are stored in the inode rather than in the directory entry, so they are necessarily identical across all links. Test below.
vagrant@vagrant:~$ touch TESTFILE
vagrant@vagrant:~$ ln TESTFILE link_to_TESTFILE
vagrant@vagrant:~$ ls -ilh
131081 -rw-rw-r-- 2 vagrant vagrant 0 Dec 11 19:07 link_to_TESTFILE
131081 -rw-rw-r-- 2 vagrant vagrant 0 Dec 11 19:07 TESTFILE
vagrant@vagrant:~$ chmod 0755 TESTFILE
vagrant@vagrant:~$ ls -ilh
131081 -rwxr-xr-x 2 vagrant vagrant 0 Dec 11 19:07 link_to_TESTFILE
131081 -rwxr-xr-x 2 vagrant vagrant 0 Dec 11 19:07 TESTFILE
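Ownership behaves the same way, since the owner is also an inode attribute; a sketch of the analogous check (not part of the original test):
sudo chown root:root TESTFILE
ls -ilh TESTFILE link_to_TESTFILE   # both names now show root:root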
The Vagrantfile below brings up the machine and attaches the two extra disks:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-20.04"
  config.vm.provider :virtualbox do |vb|
    lvm_experiments_disk0_path = "/tmp/lvm_experiments_disk0.vmdk"
    lvm_experiments_disk1_path = "/tmp/lvm_experiments_disk1.vmdk"
    vb.customize ['createmedium', '--filename', lvm_experiments_disk0_path, '--size', 2560]
    vb.customize ['createmedium', '--filename', lvm_experiments_disk1_path, '--size', 2560]
    vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', lvm_experiments_disk0_path]
    vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', lvm_experiments_disk1_path]
  end
end
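Note that createmedium takes '--size' in megabytes (2560 MB here), and the '--storagectl' name must match the SATA controller name of the chosen box (assumed to be 'SATA Controller' for bento/ubuntu-20.04). The machine is then brought up as usual:
vagrant up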
This configuration creates a new virtual machine with two additional unpartitioned 2.5 GB disks. The machine is up; everything looks fine.
vagrant@vagrant:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 63.5G 0 part
├─vgvagrant-root 253:0 0 62.6G 0 lvm /
└─vgvagrant-swap_1 253:1 0 980M 0 lvm [SWAP]
sdb 8:16 0 2.5G 0 disk
sdc 8:32 0 2.5G 0 disk
Disk sdb has been partitioned:
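The partitioning commands themselves did not make it into the transcript; a minimal sfdisk sketch that produces the layout below (an assumption — interactive fdisk would work just as well):
printf ',2G\n,\n' | sudo sfdisk /dev/sdb   # partition 1: 2 GiB, partition 2: the rest (~511 MiB)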
vagrant@vagrant:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 63.5G 0 part
├─vgvagrant-root 253:0 0 62.6G 0 lvm /
└─vgvagrant-swap_1 253:1 0 980M 0 lvm [SWAP]
sdb 8:16 0 2.5G 0 disk
├─sdb1 8:17 0 2G 0 part
└─sdb2 8:18 0 511M 0 part
sdc 8:32 0 2.5G 0 disk
The partition table was then copied from sdb to sdc with sfdisk; output below:
vagrant@vagrant:~$ sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sdc
Checking that no-one is using this disk right now ... OK
Disk /dev/sdc: 2.5 GiB, 2684354560 bytes, 5242880 sectors
Disk model: VBOX HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x5a35369e.
/dev/sdc1: Created a new partition 1 of type 'Linux' and of size 2 GiB.
/dev/sdc2: Created a new partition 2 of type 'Linux' and of size 511 MiB.
/dev/sdc3: Done.
New situation:
Disklabel type: dos
Disk identifier: 0x5a35369e
Device Boot Start End Sectors Size Id Type
/dev/sdc1 2048 4196351 4194304 2G 83 Linux
/dev/sdc2 4196352 5242879 1046528 511M 83 Linux
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
vagrant@vagrant:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 63.5G 0 part
├─vgvagrant-root 253:0 0 62.6G 0 lvm /
└─vgvagrant-swap_1 253:1 0 980M 0 lvm [SWAP]
sdb 8:16 0 2.5G 0 disk
├─sdb1 8:17 0 2G 0 part
└─sdb2 8:18 0 511M 0 part
sdc 8:32 0 2.5G 0 disk
├─sdc1 8:33 0 2G 0 part
└─sdc2 8:34 0 511M 0 part
vagrant@vagrant:~$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 2094080K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
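At this point the state of the new array can be verified (commands only; output omitted):
cat /proc/mdstat
sudo mdadm --detail /dev/md0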
vagrant@vagrant:~$ sudo fdisk -l
Disk /dev/md0: 2 GiB, 2144337920 bytes, 4188160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
vagrant@vagrant:~$ sudo mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
vagrant@vagrant:~$ lsblk
sdb 8:16 0 2.5G 0 disk
├─sdb1 8:17 0 2G 0 part
│ └─md0 9:0 0 2G 0 raid1
└─sdb2 8:18 0 511M 0 part
└─md1 9:1 0 1018M 0 raid0
sdc 8:32 0 2.5G 0 disk
├─sdc1 8:33 0 2G 0 part
│ └─md0 9:0 0 2G 0 raid1
└─sdc2 8:34 0 511M 0 part
└─md1 9:1 0 1018M 0 raid0
vagrant@vagrant:~$ sudo fdisk -l
Disk /dev/md1: 1018 MiB, 1067450368 bytes, 2084864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Adding the arrays to autostart (following the lecture notes; tee is used because the shell is not running as root):
vagrant@vagrant:~$ echo 'DEVICE partitions containers' | sudo tee /etc/mdadm/mdadm.conf
vagrant@vagrant:~$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
vagrant@vagrant:~$ cat /etc/mdadm/mdadm.conf
DEVICE partitions containers
ARRAY /dev/md0 metadata=1.2 name=vagrant:0 UUID=95e4fdd1:e0ffe5a6:ef41fc16:d6d512f9
ARRAY /dev/md1 metadata=1.2 name=vagrant:1 UUID=95e4753d:4201562f:6b3ddece:0d75f267
vagrant@vagrant:~$ sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-5.4.0-80-generic
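Optionally, one can double-check that the config landed in the regenerated initramfs (a check that is not part of the original transcript):
lsinitramfs /boot/initrd.img-5.4.0-80-generic | grep mdadm.conf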
vagrant@vagrant:~$ sudo pvcreate /dev/md0
Physical volume "/dev/md0" successfully created.
vagrant@vagrant:~$ sudo pvcreate /dev/md1
Physical volume "/dev/md1" successfully created.
vagrant@vagrant:~$ sudo pvdisplay
"/dev/md0" is a new physical volume of "<2.00 GiB"
--- NEW Physical volume ---
PV Name /dev/md0
VG Name
PV Size <2.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID LIpL0Z-aBO3-5N2y-zbFe-wYVe-AkHY-qpb2uT
"/dev/md1" is a new physical volume of "1018.00 MiB"
--- NEW Physical volume ---
PV Name /dev/md1
VG Name
PV Size 1018.00 MiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID N82st3-NybI-5L2f-0iVw-OK06-pmMf-sQu2WN
vagrant@vagrant:~$ sudo vgcreate vg01 /dev/md0 /dev/md1
Volume group "vg01" successfully created
vagrant@vagrant:~$ sudo vgdisplay vg01
--- Volume group ---
VG Name vg01
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size <2.99 GiB
PE Size 4.00 MiB
Total PE 765
Alloc PE / Size 0 / 0
Free PE / Size 765 / <2.99 GiB
VG UUID VG7774-fjfR-QqcG-O9gd-2VuK-VuJn-CRrGvd
vagrant@vagrant:/mnt$ sudo lvcreate -n lv1_100 -L 100M /dev/vg01 /dev/md1
Logical volume "lv1_100" created.
vagrant@vagrant:/mnt$ sudo lvdisplay /dev/vg01/lv1_100
--- Logical volume ---
LV Path /dev/vg01/lv1_100
LV Name lv1_100
VG Name vg01
LV UUID DynQSZ-Rc0w-O0LJ-9bud-d0G1-q2KP-Czql08
LV Write Access read/write
LV Creation host, time vagrant, 2021-12-12 10:06:29 +0000
LV Status available
# open 0
LV Size 100.00 MiB
Current LE 25
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:2
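The trailing /dev/md1 argument to lvcreate pinned the LV to that PV; this can be confirmed with (command only; output omitted):
sudo lvs -o +devices vg01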
vagrant@vagrant:/mnt$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 63.5G 0 part
├─vgvagrant-root 253:0 0 62.6G 0 lvm /
└─vgvagrant-swap_1 253:1 0 980M 0 lvm [SWAP]
sdb 8:16 0 2.5G 0 disk
├─sdb1 8:17 0 2G 0 part
│ └─md0 9:0 0 2G 0 raid1
└─sdb2 8:18 0 511M 0 part
└─md1 9:1 0 1018M 0 raid0
└─vg01-lv1_100 253:2 0 100M 0 lvm
sdc 8:32 0 2.5G 0 disk
├─sdc1 8:33 0 2G 0 part
│ └─md0 9:0 0 2G 0 raid1
└─sdc2 8:34 0 511M 0 part
└─md1 9:1 0 1018M 0 raid0
└─vg01-lv1_100 253:2 0 100M 0 lvm
vagrant@vagrant:~$ sudo mkfs.ext4 /dev/vg01/lv1_100
mke2fs 1.45.5 (07-Jan-2020)
Creating filesystem with 25600 4k blocks and 25600 inodes
Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
vagrant@vagrant:~$ sudo mkdir /mnt/lv01_100
vagrant@vagrant:~$ sudo mount /dev/vg01/lv1_100 /mnt/lv01_100/
vagrant@vagrant:~$ mount | grep lv01
/dev/mapper/vg01-lv1_100 on /mnt/lv01_100 type ext4 (rw,relatime)
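To make the mount survive a reboot, an fstab entry could be added (a sketch; this step was not executed here):
echo '/dev/vg01/lv1_100 /mnt/lv01_100 ext4 defaults 0 2' | sudo tee -a /etc/fstab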
13) Place a test file there, for example: wget https://mirror.yandex.ru/ubuntu/ls-lR.gz -O /tmp/new/test.gz.
vagrant@vagrant:/mnt/lv01_100$ sudo wget https://mirror.yandex.ru/ubuntu/ls-lR.gz
--2021-12-12 09:52:25-- https://mirror.yandex.ru/ubuntu/ls-lR.gz
Resolving mirror.yandex.ru (mirror.yandex.ru)... 213.180.204.183, 2a02:6b8::183
Connecting to mirror.yandex.ru (mirror.yandex.ru)|213.180.204.183|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 22718197 (22M) [application/octet-stream]
Saving to: ‘ls-lR.gz’
ls-lR.gz 100%[======================================================================>] 21.67M 1.21MB/s in 29s
2021-12-12 09:52:54 (774 KB/s) - ‘ls-lR.gz’ saved [22718197/22718197]
vagrant@vagrant:/mnt$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 63.5G 0 part
├─vgvagrant-root 253:0 0 62.6G 0 lvm /
└─vgvagrant-swap_1 253:1 0 980M 0 lvm [SWAP]
sdb 8:16 0 2.5G 0 disk
├─sdb1 8:17 0 2G 0 part
│ └─md0 9:0 0 2G 0 raid1
└─sdb2 8:18 0 511M 0 part
└─md1 9:1 0 1018M 0 raid0
└─vg01-lv1_100 253:2 0 100M 0 lvm /mnt/lv01_100
sdc 8:32 0 2.5G 0 disk
├─sdc1 8:33 0 2G 0 part
│ └─md0 9:0 0 2G 0 raid1
└─sdc2 8:34 0 511M 0 part
└─md1 9:1 0 1018M 0 raid0
└─vg01-lv1_100 253:2 0 100M 0 lvm /mnt/lv01_100
vagrant@vagrant:/mnt/lv01_100$ gzip -t -v ls-lR.gz
ls-lR.gz: OK
vagrant@vagrant:/mnt/lv01_100$ echo $?
0
vagrant@vagrant:/mnt/lv01_100$ sudo pvmove /dev/md1 /dev/md0
/dev/md1: Moved: 32.00%
/dev/md1: Moved: 100.00%
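That all extents have left /dev/md1 can also be confirmed through PV usage (command only; output omitted):
sudo pvs -o +pv_used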
vagrant@vagrant:/mnt/lv01_100$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 63.5G 0 part
├─vgvagrant-root 253:0 0 62.6G 0 lvm /
└─vgvagrant-swap_1 253:1 0 980M 0 lvm [SWAP]
sdb 8:16 0 2.5G 0 disk
├─sdb1 8:17 0 2G 0 part
│ └─md0 9:0 0 2G 0 raid1
│ └─vg01-lv1_100 253:2 0 100M 0 lvm /mnt/lv01_100
└─sdb2 8:18 0 511M 0 part
└─md1 9:1 0 1018M 0 raid0
sdc 8:32 0 2.5G 0 disk
├─sdc1 8:33 0 2G 0 part
│ └─md0 9:0 0 2G 0 raid1
│ └─vg01-lv1_100 253:2 0 100M 0 lvm /mnt/lv01_100
└─sdc2 8:34 0 511M 0 part
└─md1 9:1 0 1018M 0 raid0
vagrant@vagrant:/mnt/lv01_100$ sudo mdadm --fail /dev/md0 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
vagrant@vagrant:/mnt/lv01_100$ dmesg | grep md0
[ 2.639280] md/raid1:md0: active with 2 out of 2 mirrors
[ 2.639314] md0: detected capacity change from 0 to 2144337920
[ 3129.628370] md/raid1:md0: Disk failure on sdb1, disabling device.
md/raid1:md0: Operation continuing on 1 devices.
vagrant@vagrant:/mnt/lv01_100$ gzip -t -v ls-lR.gz
ls-lR.gz: OK
vagrant@vagrant:/mnt/lv01_100$ echo $?
0
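In a real scenario the degraded mirror would be repaired before decommissioning the machine; a sketch of the usual recovery steps (not executed in this transcript):
sudo mdadm /dev/md0 --remove /dev/sdb1   # drop the failed member
sudo mdadm /dev/md0 --add /dev/sdb1      # re-add it; a resync starts automatically
cat /proc/mdstat                         # watch the rebuild progress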
vagrant@vagrant:/mnt/lv01_100$ shutdown -P now