Logical Volume Management on Debian Linux

7. At this point the Logical Volumes are configured but not quite ready to be mounted and used to store data. They still need to be formatted with a file system. This is easily accomplished with mkfs or gparted; since this server is CLI-only, the mkfs utility will be used to accomplish this task.

# mkfs.ext4 -L Music /dev/mapper/storage-Music
# mkfs.ext4 -L Documents /dev/mapper/storage-Documents

The two commands above write an ext4 file system to each of the Logical Volumes just created and assign a file system label (-L) to each of them. The -L option isn't strictly necessary, but it can help keep the Logical Volumes straight based on their file system labels.
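While not required, the new file systems can be quickly sanity-checked with lsblk's -f flag, which reports the file system type and label that mkfs just wrote:

# lsblk -f /dev/mapper/storage-Music /dev/mapper/storage-Documents

Each device should show an FSTYPE of ext4 along with the LABEL assigned above.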

8. Now the Logical Volumes are ready to have data written to them. In order for Linux to access a volume, though, it must be mounted. This is accomplished with the mount utility, but first a location on the file system must be created to which the new LV can be "mounted".

For the purposes of this tutorial, a new directory for each LV was created in the /mnt directory using the following commands:

# mkdir /mnt/Music
# mkdir /mnt/Documents

At this point, using the mount utility, the Logical Volumes can be "mounted" to the newly created directories. Use the following commands to accomplish this task:

# mount /dev/mapper/storage-Music /mnt/Music
# mount /dev/mapper/storage-Documents /mnt/Documents

Be sure to note that Linux is case sensitive and as such ‘Music‘ and ‘music‘ are not the same thing!

Barring any error messages, the Logical Volumes are now ready for use under the directories '/mnt/Music' and '/mnt/Documents'. There are several different ways to confirm whether an LV is actually ready for use; the easiest is the lsblk command.

Confirm Logical Volume Status
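The output will look something along these lines (the device names and sizes here are illustrative only and will differ from system to system):

# lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                         8:0    0   120G  0 disk
└─sda1                      8:1    0   120G  0 part  /
sdb                         8:16   0 465.8G  0 disk
└─md0                       9:0    0 465.6G  0 raid1
  ├─storage-Music         253:0    0   100G  0 lvm   /mnt/Music
  └─storage-Documents     253:1    0   100G  0 lvm   /mnt/Documents
sdc                         8:32   0 465.8G  0 disk
└─md0                       9:0    0 465.6G  0 raid1
  ├─storage-Music         253:0    0   100G  0 lvm   /mnt/Music
  └─storage-Documents     253:1    0   100G  0 lvm   /mnt/Documents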

Do not worry if your output doesn't match exactly. This LVM setup was done on top of a set of disks in a RAID array (sdb and sdc). The point to take away is that, on the far right, the Logical Volumes are mounted to the newly created directories in /mnt. That's it for a basic LVM setup!

While this isn’t a necessary step, it does help make the administration of these Logical Volumes easier. As of this point, if the system were to reboot, on startup the Logical Volumes would not be automatically mounted to the configured mount points.

For some people this is okay and they have no problem typing the mount commands from step 8 every time; others prefer to have the storage areas ready on system startup.

The next couple of steps explain how to configure the system to automatically mount the Logical Volumes every time it boots (known as persistent mounting).

9. To accomplish this task, a disk identifier known as a Universally Unique Identifier (UUID) should be obtained for each of the Logical Volumes. This is accomplished with the blkid command.

The blkid command returns the UUID value as well as the file system type, both of which are needed to set up persistent mounting.

# blkid /dev/mapper/*
Check UUID of File System
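The output will be similar to the lines below (the UUID values shown are placeholders; every file system gets its own unique value):

/dev/mapper/storage-Music: LABEL="Music" UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"
/dev/mapper/storage-Documents: LABEL="Documents" UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"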

This output provides the UUID values for each of the Logical Volumes created. These values are long and easy to mistype, and in a command line interface they are potentially non-selectable for copy and paste. Using the shell's built-in redirection, the output can be appended directly to the file it will ultimately need to reside in for the system to mount the Logical Volumes automatically at startup. That file is /etc/fstab.

# blkid /dev/mapper/* >> /etc/fstab

WARNING!! – A note of caution here: be sure that this command uses DOUBLE greater-than symbols ( >> ). If a single greater-than symbol ( > ) is used, the existing contents of this file will be OVERWRITTEN!
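Should something go wrong anyway, having a backup copy of the file makes recovery trivial; it is worth taking one before making any changes:

# cp /etc/fstab /etc/fstab.bak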

Now open the /etc/fstab file with a text editor. This tutorial will use the nano text editor.

# nano /etc/fstab
Automatic File System Mounting at Startup

Now the entries need to be formatted to meet the requirements of the /etc/fstab file. The first things that need to be removed are the device-name prefixes "/dev/mapper/storage-Documents:" and "/dev/mapper/storage-Music:".

Once these are removed, enter the mount points set up in step 8. Be sure that the absolute path to the mount point is placed in the field after the UUID.

The third field needs the file system type. In this case, the file system created on each Logical Volume was 'ext4'.

The fourth field can be set to 'defaults', and the last two fields (<dump> and <pass>) can be set to zeros.
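Once trimmed down and filled in, the finished entries should look something like the following (the UUID values here are placeholders; use the actual values reported by blkid, dropping the quotation marks from its output):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/Music      ext4  defaults  0  0
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/Documents  ext4  defaults  0  0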

Automatic Mounting Filesystem Entries

At this point the entries are ready to be persistently mounted across system reboots. Save the changes to the file in nano by pressing Ctrl + 'o' (that's the letter 'o', not zero). The system will then prompt for confirmation of the file name to save the document under.

Check that nano is asking to save the file as '/etc/fstab' and hit the Enter key. Then press Ctrl + 'x' to exit the nano text editor.

Once the file is saved, the mount points can be confirmed by using the mount command again with an argument that tells the mount utility to mount everything listed in /etc/fstab. That command is:

# mount -a

Again, all the commands used in this document assume that the user logged in is the root user.

After issuing mount -a, the system will likely not provide any feedback that anything happened (unless something is wrong with the fstab file or the mount points themselves). To determine whether everything worked, the lsblk utility can be used.

# lsblk
Check LVM Filesystem Mount Status

At this point the Logical Volumes are accessible via /mnt/Music and /mnt/Documents for the system or users to write files. From here, any number of tasks can be done with the LVM volumes, such as resizing, migrating the data, or adding more Logical Volumes, but that is for a different how-to.

For now, enjoy the new data storage locations attached to the Debian system and stay tuned for more Debian How-to’s.


Rob Turner
Rob Turner is an avid Debian user as well as a user of many Debian derivatives such as Devuan, Mint, Ubuntu, and Kali. Rob holds a Master's in Information and Communication Sciences as well as several industry certifications from Cisco, EC-Council, ISC2, Linux Foundation, and LPI.


Comments
  1. A couple of things missing from this tutorial are 1) how to remove a disk or a partition from the array, and 2) how to reduce the amount of space used by the array without removing disks or partitions.

  2. Hi,

    I have reduced the RAID-1 array (/dev/md0) to 48G in size. It is configured under LVM, and I am trying to reduce the underlying partition (sda4) from 58G to 48G as well.

    But I am not able to accomplish this. Could you please help me with it?

    root@sekhar~# lsblk
    NAME                    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    sda                       8:0    0 232.9G  0 disk
    ├─sda1                    8:1    0  16.5G  0 part
    └─sda4                    8:4    0    58G  0 part
      └─md0                   9:0    0    48G  0 raid1
        ├─vg0-root (dm-0)   252:0    0  26.6G  0 lvm   /
        ├─vg0-backup (dm-1) 252:1    0  19.6G  0 lvm
        └─vg0-swap (dm-2)   252:2    0   1.9G  0 lvm   [SWAP]
    sr0                      11:0    1  1024M  0 rom
    
    root@sekhar~:~# pvdisplay
      --- Physical volume ---
      PV Name               /dev/md0
      VG Name               vg0
      PV Size               48.00 GiB / not usable 3.00 MiB
      Allocatable           yes (but full)
      PE Size               4.00 MiB
      Total PE              12287
      Free PE               0
      Allocated PE          12287
      PV UUID               uxH3FS-sUOF-LsIP-kAjq-7Bwq-suhK-CLJXI1
    
    root@sekhar~:~# vgdisplay
      --- Volume group ---
      VG Name               vg0
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  112
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                3
      Open LV               2
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               48.00 GiB
      PE Size               4.00 MiB
      Total PE              12287
      Alloc PE / Size       12287 / 48.00 GiB
      Free  PE / Size       0 / 0
      VG UUID               LjCUyX-25MQ-WCFT-j2eF-2UWX-LYCp-TtLVJ5
    
    root@sekhar~:~# lvdisplay
      --- Logical volume ---
      LV Path                /dev/vg0/root
      LV Name                root
      VG Name                vg0
      LV UUID                SBf1mc-iqaB-noBx-1neo-IEPi-HhsH-SM14er
      LV Write Access        read/write
      LV Creation host, time S000001, 2015-09-23 03:01:19 +0000
      LV Status              available
      # open                 1
      LV Size                26.59 GiB
      Current LE             6808
      Segments               2
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:0
    
      --- Logical volume ---
      LV Path                /dev/vg0/backup
      LV Name                backup
      VG Name                vg0
      LV UUID                E0zuBR-3iIT-ig42-1y2j-YvJY-PMea-P9d8D4
      LV Write Access        read/write
      LV Creation host, time S000001, 2017-02-11 05:30:02 +0000
      LV Status              available
      # open                 0
      LV Size                19.54 GiB
      Current LE             5003
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:1
    
      --- Logical volume ---
      LV Path                /dev/vg0/swap
      LV Name                swap
      VG Name                vg0
      LV UUID                LqeFep-zKvG-vRJI-Id9N-LXmZ-FZlI-fvM040
      LV Write Access        read/write
      LV Creation host, time Microknoppix, 2017-03-02 16:09:29 +0000
      LV Status              available
      # open                 2
      LV Size                1.86 GiB
      Current LE             476
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:2
    
    root@sekhar~:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sda4[3]
          50331648 blocks super 1.2 [2/1] [U_]
    
    unused devices: <none>
    
  3. Thank you so much for this how-to guide. I'm going to be taking my LFCS soon and was cracking my head trying to understand LVM, and it all clicked while reading this HOW TO. Thank you for the easy-to-understand write-up. You, sir, I salute.

    • Dibs,

      You are quite welcome, and best of luck on the LFCS. It's a great test to challenge yourself with common Linux tasks!

  4. I love LVM2 and software RAID in Linux. One tip: if you want to try this out before building something for real, use USB memory sticks as disks. They work great as such, and they are swell for testing disk crashes and rebuilding RAID-5 or RAID-6.

    • Anders, I prefer LVM on hardware RAID, but this little NAS box didn't support HW RAID. The USB drive option is a fantastic idea for testing a potential install! No reason to risk the real drives when USB media is so cheap.

      • Actually, hardware RAID has hardware dependencies. Linux software-RAID disks can be moved between machines, for instance when replacing a faulty motherboard or disk controller. That can't be done safely with hardware RAID unless it is replaced with the same brand and version of hardware.

        By the way, it is dead easy to move a volume group and its logical volumes to new disks and remove old ones that are about to crash. Just add a physical volume to the volume group, move all data off the bad one, and lastly remove the bad one. No need to do any manual copying. I used this to move out a RAID that was degraded and move in a new, larger one. The rough sequence is sketched below.
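        A minimal sketch of that workflow (the device names and the volume group name "storage" here are only examples):

        # pvcreate /dev/sdX1            (prepare the new disk or partition as a PV)
        # vgextend storage /dev/sdX1    (add it to the volume group)
        # pvmove /dev/sdY1              (migrate all extents off the failing PV)
        # vgreduce storage /dev/sdY1    (remove the old PV from the volume group)
        # pvremove /dev/sdY1            (wipe the LVM label from the old disk)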

        Lastly, I would have recommended mounting under /srv and not /mnt, as /mnt is meant for temporary mounts and /srv is for server storage. That makes it easier to back up, like /home for user data. ;-)

        • Anders, to each their own. /mnt was only used for illustrative purposes. The box that these drives are actually in does mount the LVs in a different location.

    • Satish, you're very welcome. LVM2 is very similar across most distributions. I can't speak for every distro, but I would say that most of the LVM material will be the same across them. The only real differences will likely be distro-specific details and perhaps the naming conventions of the LVM package.

