Disk mounting

From Noah.org

Can't unmount (umount): error "device is busy" or "target is busy"

# umount /mnt/disk_image_loop
umount: /mnt/disk_image_loop: target is busy.

See also Kill -9 does not work.

Causes: Nested mounts, NFS exports, processes with file descriptors or working directory under the mount point, hardware failure.

Things to check:

findmnt --submounts /mnt/disk_image_loop
mount | grep /mnt/disk_image_loop
fuser -v -u -m /mnt/disk_image_loop
losetup --associated /tmp/disk.img
# Use loop device found in "losetup --associated" to find submounts:
findmnt --submounts /dev/loop0
# The +f option will also show kernel file descriptors that refer to the mounted file system.
lsof +f -- /mnt/disk_image_loop
# Unfortunately that often returns the same as not using the option:
lsof /mnt/disk_image_loop
lsof | grep disk_image_loop

The losetup --associated variation doesn't tell you anything you don't already know, but after you find the loop device name you can use findmnt --submounts to find nested mounts.

Note: do not pass fuser the path to the disk image file itself. The image file lives on another mounted filesystem, so this asks for every process using that filesystem (probably /).

# Do not use the path to the disk image file.
#fuser -v -u -m /tmp/disk.img
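The submount checks above boil down to prefix-matching mount targets. As a fallback when findmnt is not installed, a small helper can read /proc/self/mounts directly; mounts_under is a hypothetical name, not a standard tool:

```shell
# List every mount at or below a given mount point by reading
# /proc/self/mounts (the same data findmnt --submounts consults).
# Field 2 of each line is the mount target.
mounts_under() {
    awk -v top="$1" '$2 == top || index($2, top "/") == 1 { print $2 }' \
        /proc/self/mounts
}
```

For example, mounts_under /mnt/disk_image_loop prints the mount point itself plus any nested mounts under it.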

test setup

I use this for testing when playing with looped filesystem commands. After doing this you will not be able to unmount /mnt/disk_image_loop and fuser and losetup will not give you useful clues.

mkdir /mnt/disk_image_loop
truncate -s 100m /tmp/disk.img
mkfs.ext3 /tmp/disk.img
mount /tmp/disk.img /mnt/disk_image_loop -o loop
mkdir /mnt/disk_image_loop/nested_mount_point
mount /dev/device_partition /mnt/disk_image_loop/nested_mount_point
fuser -v -u -m /mnt/disk_image_loop
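When tearing the test setup down, the nested mount has to go first; unmounting the parent while the nested mount exists is exactly what reproduces "target is busy". A matching cleanup, sketched as a function using the same paths as the setup above:

```shell
# Teardown for the test setup above. Order matters: the nested mount
# must be unmounted before its parent, or the parent umount fails
# with "target is busy".
cleanup_test_mounts() {
    umount /mnt/disk_image_loop/nested_mount_point
    umount /mnt/disk_image_loop   # mount set up the loop device, so umount detaches it
    rm -f /tmp/disk.img
    rmdir /mnt/disk_image_loop
}
```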

Cause: hardware failure

Unfortunately Linux doesn't handle hardware failures gracefully. System calls dealing with a failed device may simply block forever. If the device is hot-swappable (such as a USB drive) then you can sometimes simply unplug it, which also unblocks the calls waiting on that device. The zombie (defunct) processes disappear and you can unmount the filesystem.

Cause: nested mounts

Nested mounts can cause the error "umount: foo: device is busy." when unmounting the parent mount point. When unmounting such a filesystem you may see something like this:

# umount /mnt/disk_image_loop
umount: /mnt/disk_image_loop: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))

You confirm that no terminal windows have a shell with its working directory under that mount. You then find that fuser -v -u -m /mnt/disk_image_loop (list processes using files on the mounted filesystem) and lsof -n -N | grep disk_image_loop give no useful information. Checking losetup you see that a loopback device is associated with the mount point.

# losetup --associated /var/disk-images/sid.img
/dev/loop0: [fc00]:25953690 (/var/disk-images/sid.img)

This can happen with nested mounts. When checking mount it is easy to overlook additional mounts inside your mounted filesystem. For example, if you are building a root filesystem you might have /var/disk-images/sid.img mounted on /mnt/disk_image_loop, then not notice that you also have proc and devpts filesystems mounted on directories under /mnt/disk_image_loop. You must unmount these filesystems before you can unmount /mnt/disk_image_loop.

/dev/sda1 on / type ext4 (rw,noatime,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/var/disk-images/sid.img on /mnt/disk_image_loop type ext3 (rw)
/proc on /mnt/disk_image_loop/proc type none (rw,bind)
devpts on /mnt/disk_image_loop/dev/pts type devpts (rw)
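To clear a tree like this, unmount the deepest targets first. Newer util-linux can do it in one step with umount -R /mnt/disk_image_loop; a manual equivalent sorts the submount list longest-path-first (deepest_first and unmount_tree are hypothetical helper names, not standard tools):

```shell
# Sort mount targets so the deepest paths come first.
deepest_first() {
    awk '{ print length($0), $0 }' | sort -rn | cut -d' ' -f2-
}

# Unmount every filesystem under a mount point, deepest first,
# finishing with the mount point itself. Assumes findmnt (util-linux).
unmount_tree() {
    findmnt --submounts -n -o TARGET "$1" | deepest_first \
        | while read -r target; do
            umount "$target"
        done
}
```

findmnt --submounts lists the mount point itself along with everything nested under it, so after sorting, the mount point is unmounted last.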

Find open but deleted files (unlinked files with an inode but no directory entry)

This will show files that have been unlinked but are still open. The file's blocks are not actually freed until the last process holding it open closes it (which happens when the process exits, before the exit code is collected by wait). This can also be handy for finding what might be keeping a mount from being unmountable.

The +L1 option specifically asks for open files that have been unlinked.

lsof +L1

Example of hard to find process keeping a mount point stuck open

I ended up with two disk image files mounted through loop devices which I could not unmount.

/root/rootfs/quark-1.img (deleted) on /tmp/tmp.PNERzZiF2D type ext4 (rw)
/root/rootfs/quark-1.img on /tmp/tmp.NXMuSARwu9 type ext4 (rw)

losetup wouldn't release the two loop devices that were used to create mounts for filesystem images.

# losetup --all
/dev/loop2: [ca01]:525141 (/root/rootfs/quark-1.img)
/dev/loop4: [ca01]:525277 (/root/rootfs/quark-1.img)

Running losetup -d /dev/loop2 (and similarly for /dev/loop4) returned no error, but the loop device remained.

The following is suspicious, but I can't say for sure that it's causing problems.

# lsof +L1
init      1 root   18w   REG  202,1      804     0 271554 /var/log/upstart/docker.io.log.1 (deleted)

I check with ps and find that a docker.io service is running. I still can't pinpoint my stuck mount to this process, but I decide to shut it down to see if this gets me anywhere.

service docker.io stop

Now it seems that lsof +L1 returns nothing, so it seems that the docker.io service was at least the cause of the open, deleted file. After all this is done I find that I still cannot umount the stuck filesystem.... oh well, this is a terrible example, so far.

Next I try strace:

strace umount /tmp/tmp.NXMuSARwu9

That gives me a lot of output. It seems that umount decided that the filesystem was busy soon after reading /etc/fstab. I noticed something odd: /proc and /sys were listed twice. Perhaps my root filesystem builder escaped its chroot and corrupted the host's /etc/fstab file.

proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0

I fixed the fstab file with no effect. I'm not sure how that would cause umount to fail, but it was another odd clue.

Another concern I had was my use of fallocate to reserve a large block of filesystem storage for a virtual disk image file. fallocate returns quickly; the reservation is noted as a credit, and the actual filesystem blocks are reserved later. It may be that some other process killed the process doing the fallocation mid-reservation, leaving the kernel in a bad state with hung kernel threads.

After more digging around I noticed these kernel processes just hanging out where they don't look like they belong:

32213     2 -20  39  0.0  0.0  2-03:39:46 root     loop_thread          S<   [loop2]
32215     2   0  19  0.0  0.0  2-03:39:46 root     kjournald2           S    [jbd2/loop2-8]
32216     2 -20  39  0.0  0.0  2-03:39:46 root     rescuer_thread       S<   [ext4-rsv-conver]

Digging more I noticed a second group of kernel threads associated with loop4:

12114     2 -20  39  0.0  0.0  2-04:15:54 root     loop_thread          S<   [loop4]
12116     2   0  19  0.0  0.0  2-04:15:54 root     kjournald2           S    [jbd2/loop4-8]
12117     2 -20  39  0.0  0.0  2-04:15:54 root     rescuer_thread       S<   [ext4-rsv-conver]

I had been building ext4 filesystems inside loop-mounted disk image files. Perhaps one of the mke2fs commands got into trouble and never recovered, leaving behind kernel threads that subsequently hung trying to recover from the mess.

fuser still has one more trick

This might be a red herring. The -m option dumps all PIDs accessing the mounted filesystem that the image file itself is stored on (here, the root filesystem), not the loop-mounted filesystem.

fuser -m /root/rootfs/quark-1.img

From this I got an absurdly long list of PIDs:

# fuser -m /root/rootfs/quark-1.img
/root/rootfs/quark-1.img:     1rce     2rc     3rc     5rc     7rc     8rc
9rc    10rc    11rc    12rc    13rc    14rc    15rc    16rc    17rc    18rc
19rc    20rc    21rc    23rc    24rc    25rc    26rc    28rc    29rc    30rc
31rc    33rc    34rc    35rc    36rc    37rc    38rc    39rc    40rc    41rc
42rc    43rc    45rc    46rc    47rc    48rc    49rc    50rc    51rc    53rc
 54rc    55rc    56rc    57rc    58rc    70rc    72rc    91rc    92rc   197rc
198rc   730rce   943rce  1035rce  1091rce  1093rce  1119rce  1129rce
1132rce  1136rce  1137rce  1139rce  1169rce  1170rce  1177rce  1190rce
1209rce  1261rc  1262rc  1263rc  1268rc  1274rc  1292rc  1293rc  1294rc
 1295rc  1327rc  1423rce  1442rce  1536rce  1585rc  1985rc  4617rce
 4977rc  7627rce  7712rce  8372rce  8375rc  8460rce  8461rce  8498rce
8499rce  8500re  8682rce 15573rce 15577rce 15578rce 16818rc 16819rc
16820rc 16843rc 19397rc 21045rce 21047rce 22367rc 22636rc 23046rce
23057re 23060re 23066rce 23068rc 23147rce 23148rce 23185rce 23186rce
 23187rce 25133rc 25135rc 25136rc 25426rc 28464rce 28466rce

I decided to look at the last pid, 28466, and saw this:

28464     1   0  19  0.0  0.0     4-11:34:07 root      wait                     Ss   /bin/sh -e /proc/self/fd/9

This looked odd, and having nothing better to do, I killed it with kill -9. All of a sudden all my shell sessions died. For a while I could not ssh back to the server, but then it seemed to recover and I was able to log in again. The mysterious stuck mount was gone and losetup -a showed no attached loop devices. "I fixed it!" No, wait... uptime showed that the machine had rebooted itself. I found nothing in /var/log to give any more clues as to what happened. Apparently the crash didn't give processes any time to sync their log files.

chrooted processes cause "device is busy" error during umount

This error can happen if a process was chrooted to the mounted filesystem and left running. The process could be a daemon or just an open shell somewhere. These can be difficult to find using the usual lsof or fuser commands because these won't report anything with the name of the mount point or the mounted device name (because it was chrooted). You can demonstrate this by mounting a root filesystem and chrooting into it and then running a trivial little bash daemon.

mount -o loop /var/disk-images/sid.img /mnt/disk_image_loop
chroot /mnt/disk_image_loop /bin/bash
# Inside the chroot shell, start a trivial daemon, then exit the shell:
( ( while true; do date >> /log.log; sleep 1; done ) & ) &
exit
# Back on the host, confirm the daemon is still running:
tail -n 1 /mnt/disk_image_loop/log.log
sleep 2
tail -n 1 /mnt/disk_image_loop/log.log
# The chrooted process shows up by its /proc/<pid>/root link:
ls -l /proc/*/root | grep /mnt/disk_image_loop

This technique of searching for chrooted processes can be handy. Here is an alias for listing chrooted processes.

alias lschroot='ls -l /proc/*/root | grep -v "\-[>] /$"'

This is also a useful way to view all chrooted processes:

for procpid in /proc/*/root; do
    linktarget=$(readlink ${procpid})
    if [ "${linktarget}" != "/" ]; then
        echo "${procpid} chrooted to ${linktarget}"
    fi
done

Or this is equivalent with a different style of globbing.

for procpid in /proc/[0-9]*; do
    linktarget=$(readlink ${procpid}/root)
    if [ "${linktarget}" != "/" ]; then
        echo "${procpid} chrooted to ${linktarget}"
    fi
done
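The same /proc/<pid>/root scan can be narrowed to one mount point, to find exactly which PIDs are pinning it before an umount. chrooted_pids is a hypothetical helper, not a standard tool:

```shell
# Print the PIDs of processes whose root is at or below the given
# mount point. Run as root; readlink on other users' /proc entries
# fails otherwise (errors are suppressed).
chrooted_pids() {
    local mnt="$1" root target pid
    for root in /proc/[0-9]*/root; do
        target=$(readlink "$root" 2>/dev/null) || continue
        case "$target" in
            "$mnt"|"$mnt"/*)
                pid=${root#/proc/}
                echo "${pid%/root}"
                ;;
        esac
    done
}
```

For example, kill $(chrooted_pids /mnt/disk_image_loop), then retry the umount.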


This will list all the disks that Linux sees. This will not show loop devices. See `losetup` example for more information:

fdisk -l


Convert a VMWare flat split image disk set to a raw disk image

# Concatenate the split flat extents into a single raw disk image:
cat linux-server-f001.vmdk linux-server-f002.vmdk linux-server-f003.vmdk > linux-server.img
# Find the start of the partitions:
fdisk -l -u linux-server.img
# The first partition traditionally starts at sector 63, and each sector
# is usually 512 bytes, so the byte offset is:
echo $((63*512))
# parted shows the start of each partition down to the exact byte
# (easier than fdisk):
parted linux-server.img unit b print
# List the next available loopback device:
losetup -f
# Attach a loopback device at the partition's offset inside the disk image:
losetup -o $((63*512)) /dev/loop0 linux-server.img
# Create a mount point and mount the partition:
mkdir -p /media/adhoc
mount /dev/loop0 /media/adhoc
# Unmount the partition before cleaning up the loop device:
umount /media/adhoc
losetup -d /dev/loop0

losetup -- mount individual partitions in a whole disk image

If you have a whole-disk image and you want to mount partitions inside that image, use `losetup` to create a loopback device for the image. For example, say you copied an entire disk using `dd` like this:

dd if=/dev/sda of=disk.img

You can later create a loop device for it and see its partitions with `fdisk` and mount those partitions individually with `mount`. Note that `fdisk -l` does not normally show loop devices. You must add an explicit path to the loop device that you want to list.

losetup /dev/loop0 disk.img
fdisk -l /dev/loop0

The previous example assumed that /dev/loop0 was free. You can use the '-f' option to automatically find a free loop device. In this example we first use the '-f' option to associate the image file with the next available loop device; then we use the '-j' option to see which loop device was associated with the file:

losetup -f disk.img
losetup -j disk.img
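Recent util-linux versions can also scan the image's partition table at attach time with losetup --partscan (-P), which creates per-partition devices such as /dev/loop0p1 and avoids computing offsets by hand. A sketch, wrapped as a function (mount_image_partition is a hypothetical name; requires root):

```shell
# Attach a whole-disk image with partition scanning, then mount one
# of its partitions. --show prints the loop device that was chosen.
mount_image_partition() {
    local img="$1" part="$2" mnt="$3" loopdev
    loopdev=$(losetup --find --show --partscan "$img") || return 1
    mount "${loopdev}p${part}" "$mnt"
}
```

For example, mount_image_partition disk.img 1 /media/adhoc; clean up with umount /media/adhoc followed by losetup -d on the printed loop device.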

mounting partitions inside a disk image without loop device

It is also possible to mount partitions inside a disk image file directly with `mount` using the 'offset' option (the byte offset of the partition, e.g. 63*512 = 32256), but I have not had luck with this.

mount -o loop,ro,offset=32256 disk.img /media/adhoc

Disk recovery

Use `dd_rhelp`. This is a wrapper around `dd_rescue` that makes it easier to use.