Restoring a backup
Here we describe the steps needed to recover a machine from a previous backup. We assume that the KVM settings are compatible with the guest domain definition. Shut down the virtual domain, if it still exists in the KVM environment (all of the following commands need to be run as a privileged user):
[root@cloud1 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 10    DockerNode1                    running
 17    DockerNode2                    running
[root@cloud1 ~]# virsh shutdown DockerNode2
Domain DockerNode2 is being shutdown
[root@cloud1 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 10    DockerNode1                    running
 -     DockerNode2                    shut off
Next, undefine the domain: this will erase the guest domain definition! The disk image file, however, will remain on disk.
[root@cloud1 ~]# virsh undefine DockerNode2
Domain DockerNode2 has been undefined
[root@cloud1 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 10    DockerNode1                    running
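As a precaution, before undefining you can dump the current definition to a file, so the old configuration can be compared with the one restored from the backup (the output path here is only an example):

```shell
# Save the current domain XML before removing the definition
virsh dumpxml DockerNode2 > /root/DockerNode2-predelete.xml
```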
Get the domain backup. If you have used kvmBackup, it will be placed in the directory specified by the backupdir key (under the same hostname in the kvmBackup config file), inside a subdirectory named after the domain. Copy the backup version you need into your home directory, for example:
[root@cloud1 ~]# cd /mnt/cloud/kvm_backup/cloud1/DockerNode2
[root@cloud1 DockerNode2]# ll
total 2265544
-rw-r--r-- 1 root root 579996216 Dec 13 02:03 DockerNode2.tar.gz
-rw-r--r-- 1 root root 579996024 Dec  6 02:03 DockerNode2.tar.gz.1
-rw-r--r-- 1 root root 579961432 Nov 29 02:02 DockerNode2.tar.gz.2
-rw-r--r-- 1 root root 579954641 Nov 22 02:02 DockerNode2.tar.gz.3
[root@cloud1 DockerNode2]# cp -a DockerNode2.tar.gz ~/
[root@cloud1 DockerNode2]# cd ~/
Then extract the archive. Data will be stored under a directory named after the date on which the backup was run:
[root@cloud1 ~]# tar -xvzf DockerNode2.tar.gz
2015-12-13/DockerNode2.xml
2015-12-13/DockerNode2-inactive.xml
2015-12-13/DockerNode2-migratable.xml
2015-12-13/829cd357-8ae7-4d0d-9a3b-308fbcbc8e7b-0.img
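If you want to inspect an archive before unpacking it, tar can list its contents without extracting anything:

```shell
# List the archive contents without extracting them
tar -tzf DockerNode2.tar.gz
```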
The three .xml files were generated by using different parameters with virsh dumpxml.
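Should you need to regenerate them (assuming the domain is defined on the host), the corresponding commands are:

```shell
# Dump the live configuration
virsh dumpxml DockerNode2 > DockerNode2.xml
# Dump the persistent configuration used on the next boot
virsh dumpxml --inactive DockerNode2 > DockerNode2-inactive.xml
# Dump a host-independent configuration suitable for migration
virsh dumpxml --migratable DockerNode2 > DockerNode2-migratable.xml
```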
The first file is the dump with no parameters. The inactive.xml is obtained with the --inactive option, which dumps the configuration that will be used on the next boot of the domain; the migratable.xml is obtained with the --migratable option, which produces XML suitable for migrations (host-specific elements such as networks and bridges are not defined). Inspect the XML file for file locations: the files need to be placed in the same positions:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none' discard='unmap'/>
<source file='/var/lib/libvirt/images/829cd357-8ae7-4d0d-9a3b-308fbcbc8e7b-0.img'/>
<target dev='hda' bus='ide'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
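Before copying the image back, it can be worth checking that the restored file is valid and that its format matches the driver type declared in the XML (qcow2 here); qemu-img can do that:

```shell
# Inspect the restored disk image; "file format" should match the
# type attribute of the <driver> element in the domain XML
qemu-img info 2015-12-13/829cd357-8ae7-4d0d-9a3b-308fbcbc8e7b-0.img
```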
Copy the img file to the appropriate location:
[root@cloud1 ~]# cp 2015-12-13/829cd357-8ae7-4d0d-9a3b-308fbcbc8e7b-0.img /var/lib/libvirt/images/829cd357-8ae7-4d0d-9a3b-308fbcbc8e7b-0.img
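On SELinux-enabled hosts (the default on RHEL/CentOS), you may also need to restore the file context after the copy, otherwise qemu may be denied access to the image:

```shell
# Reset the SELinux context of the restored image
restorecon -v /var/lib/libvirt/images/829cd357-8ae7-4d0d-9a3b-308fbcbc8e7b-0.img
```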
Then define the domain:
[root@cloud1 ~]# virsh define 2015-12-13/DockerNode2-inactive.xml
Domain DockerNode2 defined from 2015-12-13/DockerNode2-inactive.xml
[root@cloud1 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 10    DockerNode1                    running
 -     DockerNode2                    shut off
If you move a virtual machine from one server to another, you may need to delete the qemu_guest_agent channel inside the devices section, since its socket path is host-specific. Edit the domain and delete a channel block like this:
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channel/target/DockerNode2.org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
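The edit itself can be done in place with virsh edit, which opens the domain XML in your editor and validates the changes on save:

```shell
# Open the persistent domain definition for editing;
# the XML is validated when you save and exit
virsh edit DockerNode2
```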
Then add the qemu_guest_agent channel back, as you did when the machine was first set up:
<devices>
...
<channel type="unix">
<source mode="bind"/>
<target type="virtio" name="org.qemu.guest_agent.0"/>
</channel>
...
</devices>
Now you can start the domain:
[root@cloud1 ~]# virsh start DockerNode2
Domain DockerNode2 started
[root@cloud1 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 10    DockerNode1                    running
 18    DockerNode2                    running
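Once the domain is running, you can verify that the guest agent channel works (this requires the qemu-guest-agent service to be running inside the guest):

```shell
# Ping the guest agent through the virtio channel; a reply of
# {"return":{}} means the channel is working
virsh qemu-agent-command DockerNode2 '{"execute":"guest-ping"}'
```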
If you see an error like this:
error: Failed to start domain DockerNode2
error: internal error: process exited while connecting to monitor: 2016-03-09T13:10:28.750285Z qemu-kvm: -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-DockerNode2/org.qemu.guest_agent.0,server,nowait: Failed to bind socket: No such file or directory
2016-03-09T13:10:28.750380Z qemu-kvm: -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-DockerNode2/org.qemu.guest_agent.0,server,nowait: chardev: opening backend "socket" failed
you need to remove the qemu_guest_agent channel as described above.