XCP PV templates start

From Xen

Latest revision as of 07:07, 29 January 2015

To Do:

Has this been implemented? If so, it should be moved to Designs in the XAPI Devel Index


When a virtual machine is created with the xe command from one of the Linux system templates (Ubuntu, RedHat, Debian, openSUSE, and so forth), for example xe vm-install template=... new-name-label=YOUR_DESIRED_VM_NAME, the following steps take place:


  1. XCP clones the virtual machine template (a template is simply a VM whose metadata sets is-a-template=true or default_template=true) and instantiates the clone as its own, unique virtual machine.
  2. XCP processes the 'disks' provisioning record inherited from the template's other-config, such as disks: <provision><disk device="0" size="8589934592" sr="" bootable="true" type="system"/></provision>, and creates the new virtual machine's virtual disks from that definition (the size attribute is in bytes; 8589934592 bytes is 8 GiB).
  3. After the user configures the new VM (network interfaces, RAM, and so forth) and starts it, the VM boots with the template's original value of PV-bootloader, which is "eliloader".
  4. And here the template-specific behaviour starts...
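The flow above can be sketched with the xe CLI; the uuid=... placeholders and the template name are elided and must be filled in from your own pool:

```shell
# List available templates (VMs flagged is-a-template=true):
xe template-list params=name-label
# Clone a template into a new VM; this prints the new VM's UUID:
xe vm-install template=... new-name-label=YOUR_DESIRED_VM_NAME
# Inspect the provisioning record the clone inherited:
xe vm-param-get uuid=... param-name=other-config param-key=disks
# The fresh VM still carries the template's bootloader:
xe vm-param-get uuid=... param-name=PV-bootloader
```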

eliloader

Eliloader is a Python script located at /usr/bin/eliloader. It lets XCP download network installation images (a kernel and a large initrd) for the specified operating system (the actual network image layout differs between SUSE, CentOS, and Debian systems). For some legacy systems (like RHEL 4) it also does some patching of the initrd. The exact download path is constructed from the type of the template (install-distro in other-config) and the URL provided by the user in other-config:install-repository. After the kernel and initrd are downloaded, they are used as the kernel and initrd for the virtual machine, and the installation starts.
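Assuming the other-config keys named above, they can be read and set with xe before the first start; the mirror URL here is only an example, use one appropriate for your distribution:

```shell
# install-distro comes from the template and tells eliloader which
# repository layout (Debian-, SUSE- or RHEL-style) to expect:
xe vm-param-get uuid=... param-name=other-config param-key=install-distro
# install-repository is supplied by the user:
xe vm-param-set uuid=... other-config:install-repository=http://ftp.debian.org/debian
```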

Right after a successful VM start, PV-bootloader is changed from eliloader to pygrub (so that the new VM boots its native kernel). Of course, if the user aborts the installation, at the next startup there will be no kernel and the VM will not start again.

installation restart

To restart the installation, PV-bootloader can be set back to eliloader (xe vm-param-set uuid=... PV-bootloader=eliloader); after that the installation can be run again.
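As a concrete sequence (uuid elided; per the behaviour described earlier, eliloader switches the bootloader back to pygrub by itself after a successful start):

```shell
xe vm-param-set uuid=... PV-bootloader=eliloader
xe vm-start uuid=...
```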

Eliloader replacement

The main problem with eliloader is the large initrd transferred over the internet (or another network). This can be annoying and can be avoided by using a pre-downloaded kernel and initrd. This will not work for older systems (like RHEL 4) which require patching of the initrd, but it works fine for modern versions of Ubuntu, openSUSE (SUSE too, hopefully), and Debian. The files must be placed in the /boot/guest directory on every host in the pool (don't forget to check that enough free space is available!).

Here is a sample for Debian 6 (squeeze), 64-bit:

cd /boot/guest
mkdir squeezy64-install
cd squeezy64-install
wget http://mirror.yandex.ru/debian/dists/Debian6.0.3/main/installer-amd64/current/images/netboot/xen/vmlinuz
wget http://mirror.yandex.ru/debian/dists/Debian6.0.3/main/installer-amd64/current/images/netboot/xen/initrd.gz
(vm-install and VIF creation skipped)
xe vm-param-set uuid=... PV-bootloader=
xe vm-param-set uuid=... PV-kernel=/boot/guest/squeezy64-install/vmlinuz
xe vm-param-set uuid=... PV-ramdisk=/boot/guest/squeezy64-install/initrd.gz
xe vm-start uuid=...
xe vm-param-set uuid=... PV-bootloader=pygrub

Note the last line: PV-bootloader is set back to pygrub to avoid an endless installation loop.



PV/HVM Graphic Replacement

By default, most virtual machine templates use the Cirrus VGA driver, as it is the most generic and the most broadly compatible. An option can be changed, at either the template level or the guest VM level, to use standard VGA instead:

xe vm-param-set uuid="UUID of your VM" platform:vga=std platform:videoram=8

The maximum value for platform:videoram is 16 (MiB). Setting platform:vga=std tells the hypervisor to emulate a standard VGA adapter instead of the more basic Cirrus one.
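To inspect or undo the change (cirrus is the default emulated adapter; the get commands assume the platform keys have been set explicitly):

```shell
xe vm-param-get uuid=... param-name=platform param-key=vga
xe vm-param-get uuid=... param-name=platform param-key=videoram
# Revert to the default Cirrus emulation:
xe vm-param-set uuid=... platform:vga=cirrus
```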