Following a recent email error response reporting insufficient disk space on a send, I deleted some larger files to clear up some space. This got me back in order. However, I thought I had plenty of space up my sleeve... time for some investigation. Unfortunately, LVM is not in my knowledge set! I would appreciate it if someone could do a quick scan of the output below and give me a pointer on where to focus my efforts (I am a bit woolly today... an intermittent network card fault on the VM host server took until the wee hours of the morning to diagnose, until it finally died).

I have an iSCSI block of 200 GB allocated to my ISPConfig VM on my XCP-ng host server. Besides some basic website hosting and several domains (email), most storage is consumed by a Nextcloud site with a significant number of bigger files. Please find below various VM system dumps of the storage landscape:

Code:
Filesystem                     Size  Used Avail Use% Mounted on
udev                           1.9G     0  1.9G   0% /dev
tmpfs                          392M  5.5M  387M   2% /run
/dev/mapper/ultimate--vg-root   94G   82G  7.9G  92% /
tmpfs                          2.0G     0  2.0G   0% /dev/shm
tmpfs                          5.0M  4.0K  5.0M   1% /run/lock
tmpfs                          2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/xvda1                     236M   49M  176M  22% /boot
tmpfs                          392M     0  392M   0% /run/user/1000

Code:
major minor  #blocks  name
 202     0  209715200 xvda
 202     1     248832 xvda1
 202     2          1 xvda2
 202     5  104605696 xvda5
  11     0    1048575 sr0
 254     0   99561472 dm-0
 254     1    4182016 dm-1

Code:
Disk /dev/xvda: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4ed9c7f4

Device     Boot  Start       End   Sectors  Size Id Type
/dev/xvda1        2048    499711    497664  243M 83 Linux
/dev/xvda2      501758 209715199 209213442 99.8G  5 Extended
/dev/xvda5      501760 209713151 209211392 99.8G 8e Linux LVM

Disk /dev/mapper/ultimate--vg-root: 95 GiB, 101950947328 bytes, 199122944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ultimate--vg-swap_1: 4 GiB, 4282384384 bytes, 8364032 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
That is hard to read. It would be easier if posted in CODE tags.

Code:
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/ultimate--vg-root   94G   82G  7.9G  92% /

That shows the root partition is pretty full. Was it at 100% before you cleaned up? Besides, a disk may be "full" if i-nodes are exhausted. Check also with

Code:
df -hTi

To find where disk space goes, start with

Code:
ls -lhd /*/.

That shows what directories are under root. Ignore directories that are mount points. Then, as root, run du -sh on the remaining directories. Then you can examine further: cd into a directory and run du again.
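The drill-down described above can also be done in one pass per level; here is a minimal sketch (the target path is an assumption, point it at / as root on the real system):

```shell
#!/bin/sh
# Summarise one directory level, heaviest entries first.
# TARGET defaults to / ; pass another path as the first argument.
TARGET="${1:-/}"
# -x stays on one filesystem, so mount points are skipped automatically;
# --max-depth=1 gives a per-directory total, like running du -sh on each.
du -xh --max-depth=1 "$TARGET" 2>/dev/null | sort -rh | head -n 15
```

Repeat on the biggest entry shown until you find where the space actually goes.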
Your disk /dev/xvda is 200 GiB in size, but vg-root is only ~100 GB. Check the volume group with "vgdisplay" and resize the logical volume with:

Code:
lvextend -l +100%FREE /dev/XXXX/XXXX
resize2fs /dev/mapper/XXXXXX
Thank you pyte and Taleman. I am guessing you (Taleman) are Finnish... I've been there many times, a lovely place with wonderful people. I was part of the team that developed Kavitsa.
Thanks for the heads-up. It was at 100% before; I have since deleted as much as I can. Latest status:

Code:
root@mando:~# df -h
Filesystem                     Size  Used Avail Use% Mounted on
udev                           1.9G     0  1.9G   0% /dev
tmpfs                          392M   11M  382M   3% /run
/dev/mapper/ultimate--vg-root   94G   69G   21G  77% /
tmpfs                          2.0G     0  2.0G   0% /dev/shm
tmpfs                          5.0M  4.0K  5.0M   1% /run/lock
tmpfs                          2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/xvda1                     236M   49M  176M  22% /boot
tmpfs                          392M     0  392M   0% /run/user/1000

Herewith the results of df -hTi; this seems in good shape?

Code:
root@mando:~# df -hTi
Filesystem                    Type     Inodes IUsed IFree IUse% Mounted on
udev                          devtmpfs   486K   371  486K    1% /dev
tmpfs                         tmpfs      490K   615  489K    1% /run
/dev/mapper/ultimate--vg-root ext4       6.0M  188K  5.8M    4% /
tmpfs                         tmpfs      490K     1  490K    1% /dev/shm
tmpfs                         tmpfs      490K     7  490K    1% /run/lock
tmpfs                         tmpfs      490K    17  490K    1% /sys/fs/cgroup
/dev/xvda1                    ext2        61K   339   61K    1% /boot
tmpfs                         tmpfs      490K    10  490K    1% /run/user/1000

Well noted, I will delve deeper!
Thank you pyte, result below:

Code:
root@mando:~# vgdisplay
  --- Volume group ---
  VG Name               ultimate-vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <99.76 GiB
  PE Size               4.00 MiB
  Total PE              25538
  Alloc PE / Size       25328 / <98.94 GiB
  Free PE / Size        210 / 840.00 MiB
  VG UUID               iZYXf4-BVR9-NfLO-Xzmp-YXXt-WwMJ-cQU6NI

Am I correct in saying that I have left ~100 GB on the table by never allocating the unused space to xvda1? Or should it be to a different xvda[?]
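For what it's worth, the vgdisplay numbers add up: extent counts times the PE size reproduce the reported VG and free sizes, which shows the VG itself has only 840 MiB left, so the missing ~100 GB must sit outside the volume group entirely. A quick arithmetic check:

```shell
#!/bin/sh
# Numbers taken from the vgdisplay output above.
pe_size_mib=4        # PE Size: 4.00 MiB
total_pe=25538       # Total PE
alloc_pe=25328       # Alloc PE
free_pe=$((total_pe - alloc_pe))
# ~99 GiB total, 840 MiB free: matches "<99.76 GiB" and "840.00 MiB"
echo "VG size : $((total_pe * pe_size_mib)) MiB (~$((total_pe * pe_size_mib / 1024)) GiB)"
echo "Free    : $((free_pe * pe_size_mib)) MiB"
```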
Check with the lsblk and fdisk -l commands to see where that space is lying around. Maybe it's just unallocated.
Thanks pyte. It does seem unallocated, with only xvda5 (99.8G) used?

Code:
root@mando:~# lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda                    202:0    0  200G  0 disk
├─xvda1                 202:1    0  243M  0 part /boot
├─xvda2                 202:2    0    1K  0 part
└─xvda5                 202:5    0 99.8G  0 part
  ├─ultimate--vg-root   254:0    0   95G  0 lvm  /
  └─ultimate--vg-swap_1 254:1    0    4G  0 lvm  [SWAP]
root@mando:~#

Code:
root@mando:~# fdisk -l
Disk /dev/xvda: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4ed9c7f4

Device     Boot  Start       End   Sectors  Size Id Type
/dev/xvda1        2048    499711    497664  243M 83 Linux
/dev/xvda2      501758 209715199 209213442 99.8G  5 Extended
/dev/xvda5      501760 209713151 209211392 99.8G 8e Linux LVM

Disk /dev/mapper/ultimate--vg-root: 95 GiB, 101950947328 bytes, 199122944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ultimate--vg-swap_1: 4 GiB, 4282384384 bytes, 8364032 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Code:
growpart -v /dev/xvda 5

Be aware that the whitespace between xvda and 5 is intended. This should grow the partition to use the 210.219.008 sectors that are not in use. After that you need to resize the LV and filesystem as described in #3.
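Piecing the thread together, the full sequence would look something like the sketch below. One step worth adding: after growing the partition, pvresize is normally needed so LVM sees the larger physical volume before lvextend can hand the space to the LV. The device and VG/LV names are taken from the outputs earlier in the thread (assuming the root LV is named "root", as the mapper name ultimate--vg-root suggests); treat this as a sketch and snapshot the VM first.

```shell
#!/bin/sh
# Grow-the-disk sequence (run as root on the VM; these commands are
# destructive, so they are shown here as comments only):
#   growpart -v /dev/xvda 5                       # grow partition 5 into the free space
#   pvresize /dev/xvda5                           # let LVM see the bigger PV
#   lvextend -l +100%FREE /dev/ultimate-vg/root   # grow the root LV
#   resize2fs /dev/mapper/ultimate--vg-root       # grow the ext4 filesystem online
#
# Sanity check with the fdisk -l numbers: how much of the disk is
# unallocated past the end of the extended partition?
disk_sectors=419430400    # /dev/xvda total sectors
part_end=209715199        # End sector of /dev/xvda2 (extended)
free_sectors=$((disk_sectors - part_end - 1))
echo "Unallocated: $free_sectors sectors ($((free_sectors * 512 / 1073741824)) GiB)"
```

That works out to roughly 100 GiB of unallocated disk, consistent with a 200 GiB disk carrying a ~100 GiB LVM partition.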