Xen

Introduction

Xen /ˈzɛn/ is a hypervisor providing services that allow multiple computer operating systems to execute on the same computer hardware concurrently.

Running Xen dom0

Install the xen package:

# xbps-install -S xen
The xen package contains the hypervisor and its tools.

Enable required services:

# ln -s /etc/sv/xen /var/service/
# ln -s /etc/sv/xenstored /var/service/
# ln -s /etc/sv/xenconsoled /var/service/

Reboot and choose one of the Xen entries in the GRUB menu. Check that the Xen management tool xl(1) works correctly:

$ xl dmesg
 __  __            _  _    _____  ___  
 \ \/ /___ _ __   | || |  |___ / / _ \ 
  \  // _ \ '_ \  | || |_   |_ \| | | |
  /  \  __/ | | | |__   _| ___) | |_| |
 /_/\_\___|_| |_|    |_|(_)____(_)___/ 
                                   
(XEN) Xen version 4.3.0 (@[unknown]) (gcc (GCC) 4.7.3) debug=n Tue Jul  9 19:32:37 CEST 2013
(XEN) Latest ChangeSet: 
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
(XEN) Disc information:
(XEN)  Found 0 MBR signatures
(XEN)  Found 0 EDD information structures
...

Congratulations! You are now running Xen in domain 0 (dom0), a privileged domain that has access to all system resources and can be used much like a natively running Linux system. The dom0 is responsible for administering the unprivileged domains, also known as domUs.
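
As an additional sanity check, the xl info command prints details about the host and the hypervisor, including the Xen version and the amount of free memory available for guests:

$ xl info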

Preparing unprivileged domains (domUs)

To create a domU, a configuration file must be created as /etc/xen/conf/<name>.cfg.

A domU can be created in PV or HVM mode. PV (paravirtualized) guests use Xen-specific drivers and offer better performance; supported operating systems include Linux, NetBSD, FreeBSD and Solaris. HVM domains are fully virtualized machines with an emulated BIOS, disk and network peripherals, etc., and can therefore also run unmodified operating systems such as Windows.

If your CPU and motherboard support an IOMMU (e.g. Intel VT-d), HVM domUs can achieve close to native performance.
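
Whether an IOMMU was detected and enabled is reported in the hypervisor boot log; the exact wording varies between Xen versions, so the pattern below is only a rough guide:

$ xl dmesg | grep -iE 'iommu|i/o virt'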

When running HVM domUs, installing PV drivers in the guest can improve performance. Signed GPL PV drivers for Windows are available here: http://www.meadowcourt.org/downloads/

Network Bridge setup for domUs

The bridge can be created in several ways; the easiest is to create a simple runit service that sets it up and then runs dhcpcd to assign an IP address:

# mkdir -p /etc/sv/dhcpcd-xenbr0
# vi /etc/sv/dhcpcd-xenbr0/run
#!/bin/sh
set -e
modprobe -q bridge
ip link set enp4s0 up
ip link add name xenbr0 type bridge
ip link set dev enp4s0 master xenbr0
exec dhcpcd -B xenbr0
# chmod +x /etc/sv/dhcpcd-xenbr0/run
# ln -s /run/runit/supervise.dhcpcd-xenbr0 /etc/sv/dhcpcd-xenbr0/supervise

Then enable and start this service (WARNING: this step will cause enp4s0 to lose its IP address and existing network connections will be dropped):

# ln -s /etc/sv/dhcpcd-xenbr0 /var/service

If everything is set up correctly, the xenbr0 interface should be ready:

$ ip addr
...
3: xenbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
      link/ether bc:5f:f4:38:ce:d3 brd ff:ff:ff:ff:ff:ff
      inet 192.168.1.101/24 brd 192.168.1.255 scope global xenbr0
         valid_lft forever preferred_lft forever
      inet6 fe80::be5f:f4ff:fe38:ced3/64 scope link 
         valid_lft forever preferred_lft forever
...
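
You can also confirm that the physical interface has been enslaved to the bridge:

$ ip link show master xenbr0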

Creating a Void Linux HVM guest

Let's create a new HVM domain named void32 that will run Void x86 (32-bit); we'll use the live image for installation and a new image file, created with Xen's qemu-img utility, as the system disk. For networking we'll use the bridge set up above.

Create the disk image:

$ /usr/lib/xen/bin/qemu-img create void32.img 2G

Create the HVM domU configuration file:

# /etc/xen/conf/void32.cfg
name='void32'
builder='hvm' # set it to generic for PV guest
memory=256
disk=[ 'file:/path/to/void-live-i686.iso,hdb,r,devtype=cdrom', 'file:/path/to/void32.img,hda,w' ]
vif=[ 'bridge=xenbr0,mac=00:00:bb:aa:11:11' ]
vnc=0 # enabled by default
sdl=1 # disabled by default
keymap='es' # default is 'en'
boot='dc' # default is 'cd' (hdd and cdrom)

By default a domU exports its video output over VNC, but in this example we've enabled SDL output instead. You can now create this domU manually and verify that it works:
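
If you prefer the default VNC output instead (for example on a headless dom0), the relevant xl.cfg(5) options would look roughly like the following; the listen address and password are placeholders to adjust for your environment:

vnc=1
vnclisten='127.0.0.1' # address the VNC server binds to
vncpasswd='changeme'  # connection password; an empty string disables it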

$ xl create -c /etc/xen/conf/void32.cfg

A new window will be created with the guest video output. You can list running domains with xl(1):

$ xl list
Name                                        ID   Mem VCPUs  State   Time(s)
Domain-0                                     0 11256     4     r-----    1247.6
void32                                       6   256     1     -b----       0.6

Congratulations! You've created your first HVM domU and it's running successfully. You can now continue with the Void x86 (32-bit) installation.
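
A few other xl(1) subcommands are handy while working with the guest; for instance, it can be asked to shut down cleanly or stopped forcefully:

$ xl shutdown void32   # request a clean shutdown (needs support in the guest)
$ xl destroy void32    # stop the guest immediately, like pulling the plug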

Creating a Void Linux PV guest

The paravirtualized guest will use the same disk and network settings, but needs a kernel and ramdisk to start. We'll install a minimal base system and then use its kernel image and initramfs to boot it.

Create the disk image for rootfs:

$ /usr/lib/xen/bin/qemu-img create void32-pv.img 2G

Create the disk image for swap:

$ /usr/lib/xen/bin/qemu-img create void32-swap-pv.img 512M

Create the filesystems for both disk images:

$ mkfs.ext4 void32-pv.img
$ mkswap void32-swap-pv.img
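
Note that mke2fs will warn that the target is a regular file rather than a block device and ask for confirmation; passing -F skips that prompt:

$ mkfs.ext4 -F void32-pv.img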

Mount the rootfs disk image and then install the base system:

$ mkdir void32-rootfs
# mount -o loop void32-pv.img void32-rootfs/
# xbps-install -r void32-rootfs/ -Sy base-system

Prepare target fstab with the UUIDs of the new filesystems:

# uuid=$(blkid -o value -s UUID void32-swap-pv.img); echo "UUID=$uuid none swap sw 0 0" >> void32-rootfs/etc/fstab
# uuid=$(blkid -o value -s UUID void32-pv.img); echo "UUID=$uuid / ext4 defaults 0 1" >> void32-rootfs/etc/fstab

Set the root password:

# passwd -R $(pwd)/void32-rootfs

Copy the kernel and the initramfs image to a directory on the host for booting the PV guest:

# cp void32-rootfs/boot/{vmlinuz*,initramfs*} .

Unmount the rootfs:

# umount void32-rootfs

We now have a proper Void Linux base system installed in the void32-pv.img image. Let's create a new Xen domU configuration file:

# /etc/xen/conf/void32-pv.cfg
name='void32-pv'
builder='generic'
memory=64
disk=[ 'file:/path/to/void32-swap-pv.img,xvda,w', 'file:/path/to/void32-pv.img,xvdb,w' ]
vif=[ 'bridge=xenbr0,mac=00:00:ff:aa:11:11' ]
kernel='/path/to/vmlinuz-3.10.5_2'
ramdisk='/path/to/initramfs-3.10.5_2.img'
root='/dev/xvdb'
extra='rw loglevel=4'

The difference here is builder='generic', which specifies that this is a PV guest. A PV guest also needs the kernel and ramdisk options, plus root and extra.
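
As an alternative to keeping a copy of the kernel and initramfs on the dom0, a PV guest can be booted with the pygrub bootloader, which loads the kernel from the guest's own filesystem; this is only a sketch and assumes a bootable GRUB configuration exists inside the guest:

# instead of the kernel=, ramdisk=, root= and extra= options:
bootloader='pygrub'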

We are ready to boot this PV domU:

$ xl create -c /etc/xen/conf/void32-pv.cfg

Congratulations! You are now running a Void Linux PV domU.
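
Because the kernel and initramfs live on the dom0, they must be refreshed by hand whenever the guest updates its kernel: with the guest stopped, mount the image again, copy the new files over the old ones, and adjust the kernel= and ramdisk= paths in the configuration file if the version changed. For example:

# mount -o loop void32-pv.img void32-rootfs/
# cp void32-rootfs/boot/{vmlinuz*,initramfs*} .
# umount void32-rootfs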

PCI Passthrough

The current way to pass PCI devices through to guests is to use the xl management tool and then assign the PCI devices in the domU configuration file. For example, my system has the following PCI devices:

 $ lspci
 ...
 01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RV740 PRO [Radeon HD 4770]
 01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] RV710/730 HDMI Audio [Radeon HD 4000 series]
 ...

We want to export the AMD Radeon HD 4770 graphics card to the void32 HVM guest; this card contains two PCI devices: 01:00.0 and 01:00.1. First the PCI devices must be made assignable, and then they must be assigned to the domU:

 $ xl pci-assignable-add 0000:01:00.0
 $ xl pci-assignable-add 0000:01:00.1
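
You can verify that both devices are now assignable:

 $ xl pci-assignable-list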

And then modify the domU configuration file to assign the PCI devices:

 # /etc/xen/conf/void32.cfg
 ...
 pci=[ '01:00.0', '01:00.1' ]
 ...

Those PCI devices will now be passed through to this domU and the guest OS will use them as real hardware. Alternatively, PCI devices can also be attached while the guest is already running:

 $ xl pci-attach void32 0000:01:00.0

Make sure the PCI devices are assignable before attaching them to a domU. A PCI device can only be used by one guest at a time; do not attach it to multiple guests!
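
To see which PCI devices are currently assigned to a running guest, use:

 $ xl pci-list void32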

PCI Passthrough - Rebinding devices to dom0

As shown above, those PCI devices are passed through to the target guest, but after shutting down the domU they remain unused because we made them assignable. To rebind those PCI devices to the kernel drivers that were in use previously, we can use the xl utility one more time:

 $ xl pci-assignable-remove -r 0000:01:00.0
 $ xl pci-assignable-remove -r 0000:01:00.1

The AMD Radeon HD 4770 PCI devices will now be reassigned to dom0 and will work as they did before being assigned to the domU.
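
To confirm that dom0 has picked the devices up again, lspci can show which kernel driver is now bound to them:

 $ lspci -k -s 01:00.0
 $ lspci -k -s 01:00.1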