Access a LVM-based DomU disk outside of the domU

From Xen
<!-- Original date: Tue Nov 16 02:19:41 2010 (1289873981000000) -->
__NOTOC__

If you've installed a domU using virt-install (e.g. Fedora/CentOS/RHEL), which uses LVM by default, or otherwise used LVM inside the domU, accessing the disk from your dom0 is difficult. If you need to access the disk (to fix file system corruption, for example), this is really annoying. Fortunately, there are a few methods.

In this example, my domU is installed on a logical volume called 'helium'.
= kpartx =

This method has the benefit of working even if your Xen install is not working - for example, if the hardware died and you pulled the hard drives to extract the data.

Using kpartx looks simple: once it reveals the partition structure of the disk image, you would expect to be able to mount the partitions normally. But trying just gives you a mount error, because mount doesn't understand how LVM works. Instead, you need to use the LVM tools to get access.

== Step 1 ==
<pre><nowiki>
kpartx -a /path/to/logical/volume</nowiki></pre>

For me, this was <code>kpartx -a /dev/domU/helium</code>. This created two block devices in /dev/mapper - domU-helium1 and domU-helium2. helium1 was my /boot; helium2 was the partition used by LVM to store the root and swap space.
== Step 2 ==

<pre><nowiki>
vgscan</nowiki></pre>
This will give you a list of the volume groups on your system, hopefully including the volume group from your domU.
== Step 3 ==

<pre><nowiki>
vgchange -a y volume_group_from_VM</nowiki></pre>
This will activate the logical volumes in the VG. Block devices should appear in /dev/mapper. For me, what appeared was vg_helium-lv_root and vg_helium-lv_swap.
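As an aside, the names device-mapper creates under /dev/mapper follow a simple rule: the VG and LV names are joined with a hyphen, and any hyphen already inside either name is doubled. A minimal sketch of that rule (the <code>dm_name</code> helper is made up for illustration, not an LVM command):

```shell
# Sketch of device-mapper's /dev/mapper naming rule: "<vg>-<lv>", with
# every hyphen inside the VG or LV name escaped as "--".
# dm_name is a hypothetical helper for illustration only.
dm_name() {
    vg=$(printf '%s' "$1" | sed 's/-/--/g')
    lv=$(printf '%s' "$2" | sed 's/-/--/g')
    printf '%s-%s\n' "$vg" "$lv"
}

dm_name vg_helium lv_root   # -> vg_helium-lv_root
dm_name my-vg root          # -> my--vg-root (embedded hyphen is doubled)
```

This is why a VG named <code>my-vg</code> shows up as <code>my--vg-…</code> in /dev/mapper while <code>vg_helium</code> keeps its name unchanged.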
   
== Step 4 ==

Do whatever you want with the disk image. For the purposes of this walkthrough, run a disk check with
<pre><nowiki>
fsck /dev/mapper/vg_helium-lv_root</nowiki></pre>

Replace vg_helium-lv_root with your volume group and logical volume name. Wait for the check to finish before starting to clean up after yourself.
== Step 5 ==

<pre><nowiki>
vgchange -a n volume_group_from_VM</nowiki></pre>
This will deactivate the volume group so LVM won't complain when you destroy the block devices that represent the domU LVM.
== Step 6 ==

<pre><nowiki>
kpartx -d /path/to/logical/volume</nowiki></pre>
   
Finally, to clean up, you have to destroy the block devices that kpartx created so Xen doesn't complain that the logical volume is already being accessed.
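Put together, the six steps can be sketched as one shell function. The paths below are the ones from this walkthrough and are assumptions - substitute your own logical volume and volume group. With DRY_RUN=1 the commands are printed instead of executed, so you can review them before running anything as root:

```shell
# Sketch of the whole kpartx workflow above. Arguments: the dom0 logical
# volume holding the domU's disk, and the domU's own volume group name.
# DRY_RUN=1 prints each command instead of running it (these commands
# need root and real devices, so preview first).
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

kpartx_fsck() {
    lv=$1   # e.g. /dev/domU/helium (an assumption from this walkthrough)
    vg=$2   # e.g. vg_helium
    run kpartx -a "$lv"                    # Step 1: expose the partitions
    run vgscan                             # Step 2: let LVM find the domU's VG
    run vgchange -a y "$vg"                # Step 3: activate its logical volumes
    run fsck "/dev/mapper/${vg}-lv_root"   # Step 4: the example task - fsck
    run vgchange -a n "$vg"                # Step 5: deactivate the VG
    run kpartx -d "$lv"                    # Step 6: remove the partition mappings
}
```

For example, <code>DRY_RUN=1 kpartx_fsck /dev/domU/helium vg_helium</code> prints the six commands without touching anything.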
= block-attach =

The alternate method depends on Xen: you'll use Xen's ability to attach block devices to domUs, except that instead of attaching a disk to a domU, you'll attach it to your dom0.

== Step 1 ==

<pre>xl block-attach 0 phy:/dev/domU/helium xvda w</pre>
A short explanation of the arguments:

<pre>xl [-v] block-attach <Domain> <BackDev> <FrontDev> [<Mode>] [BackDomain]</pre>

* Domain should be the domain ID; get it from <code>xl list</code>
* BackDev needs a 'phy:' prefix if you use LVM/raw partitions, or 'file:' if you use disk images
* FrontDev doesn't need the /dev prefix, just like in a domU config file
* Mode should be 'w' (read-write) or 'r' (read-only)
So in this case, you can see that I attached the disk /dev/domU/helium to my dom0, showing up as /dev/xvda, with writes allowed.

This created two entries in /dev - <code>/dev/xvda1</code> and <code>/dev/xvda2</code>.
== Step 2 ==

The volume group of the domU, <code>vg_helium</code>, was automatically detected, and the LVs appeared in <code>/dev/vg_helium/</code> as <code>/dev/vg_helium/lv-root</code> and <code>/dev/vg_helium/lv-swap</code>.

It's a simple matter to mount the LV with <code>mount /dev/vg_helium/lv-root /mnt/helium</code>.
== Step 3 ==

Once you're done with whatever you're doing, unmount the LV.

Next, we have to run <code>xl block-detach</code> to release the block device. Looking at the syntax - <code>xl [-v] block-detach <Domain> <DevId></code> - you would expect something like <code>xl block-detach 0 xvda</code> to work.

Except that's wrong. You get <code>Error: Device xvda not connected</code> when you try it.

It turns out that DevId doesn't mean the front end <em>or</em> back end device name. What it needs is something from <code>xl block-list</code>. Running <code>xl block-list 0</code> (since we want the block devices attached to dom0) gave me this output:

<pre>Vdev BE handle state evt-ch ring-ref BE-path
51712 0 0 1 -1 -1 /local/domain/0/backend/vbd/0/51712</pre>

Look at the Vdev number. That's what you want.

Substitute it into the command to get <code>xl block-detach 0 51712</code>, run it, and your block device is detached.
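If you want to script the detach, the Vdev can be pulled out of the <code>xl block-list</code> output with awk: every line after the header starts with a Vdev. A sketch against the sample output above:

```shell
# Extract the Vdev column from `xl block-list` output. The sample here is
# the output shown above; in a live script you would instead run:
#   vdev=$(xl block-list 0 | awk 'NR > 1 { print $1 }')
sample='Vdev BE handle state evt-ch ring-ref BE-path
51712 0 0 1 -1 -1 /local/domain/0/backend/vbd/0/51712'

vdev=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $1 }')
echo "$vdev"   # -> 51712; then: xl block-detach 0 "$vdev"
```

Note that if more than one block device is attached, this prints one Vdev per line, so pick the right one before detaching.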
Note that it doesn't print any message on success, so don't be worried if it says nothing. You can double-check that the device was removed by running <code>xl block-list 0</code> again.
   
 
[[Category:Xen]]

<!-- Revision as of 07:34, 10 November 2011 -->