Hello, I am using ISPConfig 3.1.15p3 under Debian 8. Five months ago I needed to move to a new dedicated server. My old configuration was running as a VM under Windows Server 2000; now I use Proxmox and I have moved all VMs there. Recently I have a problem with space, specifically with inodes, and I can't find the problem.

Code:
dvga@srv:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        48G   33G   12G  74% /
udev             10M     0   10M   0% /dev
tmpfs           3.1G  316M  2.8G  11% /run
tmpfs           7.7G   80K  7.7G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.7G     0  7.7G   0% /sys/fs/cgroup
tmpfs           1.6G   16K  1.6G   1% /run/user/118
tmpfs           1.6G     0  1.6G   0% /run/user/1000
dvga@srv:~$ df -i
Filesystem      Inodes   IUsed   IFree IUse% Mounted on
/dev/sda1      3145728 3145497     231  100% /
udev           2008063     320 2007743    1% /dev
tmpfs          2010203     603 2009600    1% /run
tmpfs          2010203       5 2010198    1% /dev/shm
tmpfs          2010203      14 2010189    1% /run/lock
tmpfs          2010203      13 2010190    1% /sys/fs/cgroup
tmpfs          2010203      13 2010190    1% /run/user/118
tmpfs          2010203       4 2010199    1% /run/user/1000

Thanks
Those command outputs would be more readable if posted in CODE tags. Anyway, this line

Code:
Filesystem      Inodes   IUsed IFree IUse% Mounted on
/dev/sda1      3145728 3145497   231  100% /

shows the root partition is out of free inodes: https://en.wikipedia.org/wiki/Inode
That means new files cannot be created on that partition. You do not reveal what file system is on root; use

Code:
df -hT

to show that. If the file system used does not support increasing the number of inodes, things are a bit bad. It may be there are a large number of small files that you could remove; that would free inodes. The number of inodes can be tuned when creating the file system, but that would mean installing a new system. Some file systems like XFS do not use a fixed number of inodes, so they do not run out of them but just create more.
Thank you for your reply. This is the result of the df -hT command. What do you suggest I do? I don't have much knowledge on the matter and any help will be more than appreciated. Thank you in advance.

Code:
root@srv:~# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4       48G   34G   12G  74% /
udev           devtmpfs   10M     0   10M   0% /dev
tmpfs          tmpfs     3.1G  308M  2.8G  10% /run
tmpfs          tmpfs     7.7G   80K  7.7G   1% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     7.7G     0  7.7G   0% /sys/fs/cgroup
tmpfs          tmpfs     1.6G   16K  1.6G   1% /run/user/118
tmpfs          tmpfs     1.6G     0  1.6G   0% /run/user/1000
root@srv:~#
Ext4 is one of the file systems where the number of inodes is fixed at file system creation. Like I suggested in #2, freeing inodes by removing files would get your system back working enough so you can do some proper permanent fixing. To find out if the host has an unexpectedly large number of files in some directory tree, try for example

Code:
cd /var/www
du -hs --inodes */.

to find the number of inodes used in subdirectory trees. Repeat for other probable starting directories. You can start with /, but the / directory has several system directories where special files reside.

Then you can think about how to solve this permanently. Now the root partition is 48 GBytes and has about 3M inodes. That is about 16000 bytes/inode (see man mkfs.ext4 for info). Next time you create a similar system, use 4000 bytes/inode.

If you are willing to risk breaking your system (the root partition needs to stay bootable, so mucking with it is complicated), you could back up all the files on the root partition, delete the partition and create a new ext4 file system with more inodes, then copy the files back. Do not attempt this if you do not know how to do it. Take an image backup of the whole system first.

You could also install a new system, making more inodes when creating the file system, or using XFS as the file system type. Then use the Migration Tool to move all your data from the old ISPConfig system to the new one.

If you can increase the size of the root partition, that may help, since making it larger may increase the number of inodes in proportion. This depends; try verifying first whether this would work in your case.
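If you do end up recreating the file system, the mkfs invocation would look roughly like this (a sketch only; /dev/sdX1 is a placeholder device name, adjust it to your setup):

Code:
# WARNING: this destroys all data on the target partition.
# -i sets the bytes-per-inode ratio: 4096 gives one inode per 4 KiB of space,
# roughly four times as many inodes as the ~16 KiB ratio on the current system.
mkfs.ext4 -i 4096 /dev/sdX1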
I looked at my ISPConfig web server; there, disk space is 41% full and inode use is 12%. Bytes per inode is 16000, like on your system. So it does seem your system for some reason has a very large number of small files. "Small" here means files smaller than the bytes-per-inode setting.
Code:
root@srv:~# tune2fs -l /dev/sda1
tune2fs 1.42.12 (29-Aug-2014)
Filesystem volume name:   <none>
Last mounted on:          /
Filesystem UUID:          b05b2fdf-98e7-458a-9280-6a1c842ef316
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              3145728
Block count:              12582912
Reserved block count:     629145
Free blocks:              3708515
Free inodes:              0
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1021
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Wed Jul 13 10:16:41 2016
Last mount time:          Sat Jun 26 13:04:01 2021
Last write time:          Tue Jun 29 02:48:16 2021
Mount count:              133
Maximum mount count:      -1
Last checked:             Wed Jan 18 14:36:12 2017
Check interval:           0 (<none>)
Lifetime writes:          18 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       1706119
Default directory hash:   half_md4
Directory Hash Seed:      00ead41a-56c5-4ace-b43c-28827f678cf2
Journal backup:           inode blocks
Seems the file system was created 5 years ago, so since this out-of-inodes condition is a recent phenomenon, I suspect something happened on that host that created a lot of small files, using up the inodes. Try to find out what that something was.
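If you want one sorted view instead of walking the tree directory by directory, something like this should work (a sketch; -x keeps du on the root file system, so /proc, tmpfs and the like are skipped):

Code:
# show the 20 directory trees using the most inodes on the root file system
du --inodes -x / 2>/dev/null | sort -rn | head -20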
Code:
root@srv:/# du -hs --inodes */.
154     bin/.
326     boot/.
327     dev/.
4.5K    etc/.
1.5K    home/.
13K     lib/.
2       lib64/.
1       lost+found/.
3       media/.
1       mnt/.
3.2K    opt/.
du: cannot access ‘proc/./19398/task/19398/fd/4’: No such file or directory
du: cannot access ‘proc/./19398/task/19398/fdinfo/4’: No such file or directory
du: cannot access ‘proc/./19398/fd/4’: No such file or directory
du: cannot access ‘proc/./19398/fdinfo/4’: No such file or directory
87K     proc/.
442     root/.
631     run/.
168     sbin/.
1       srv/.
26K     sys/.
36      tmp/.
224K    usr/.
2.8M    var/.
root@srv:/#

Code:
root@srv:/# cd /var/www
root@srv:/var/www# du -hs --inodes */.
1       apps/.
2.8M    clients/.
1       conf/.
2       html/.
7.8K    ispconfig/.
9       php-fcgi-scripts/.
9       webalizer/.
There are 2.7M inodes used in the tmp folder. Is that reasonable? (Total: 2761868 files and 0 directories.) The files look like 'sess_5q539rv6jc5j8vrajg4nd4pmr2'.

Code:
root@srv:/var/www/clients/client0/xxxxxxxx.tld# du -hs --inodes */.
1       backup/.
1       cgi-bin/.
147     home/.
53      log/.
1       private/.
22      ssl/.
2.7M    tmp/.
1       vmfiles/.
83K     web/.
1       webdav/.
Whether it's reasonable depends on the website usage, but probably the session file cleanup is not working on your system. There was an issue with this in some older ISPConfig releases, which was fixed some time ago. But as you are using a really old ISPConfig version, you might still be affected by it. You should delete all session files that are older than e.g. 30 days in that folder.
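For example, a cautious way to do that (list first, delete only once the listing looks right; the 30-day cutoff is just a sensible starting point):

Code:
# list session files untouched for 30+ days
find . -maxdepth 1 -name 'sess_*' -mtime +30 -print
# once the listing looks right, delete them:
# find . -maxdepth 1 -name 'sess_*' -mtime +30 -delete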
This is normally cleaned up automatically by ISPConfig, as I mentioned, but your old ISPConfig release might have an issue with this code. So the best long-term approach would be to update your outdated OS to a recent version and update ISPConfig to the current version as well. As a short-term solution, run this command inside the website tmp dir:

Code:
find . -mtime +1 -name 'sess_*' | grep -v -w .no_delete | xargs rm
Thanks. The reason I have not upgraded is that the upgrade requires Debian 9 or 10 and I have Debian 8, so I would have to build it from scratch.

Code:
root@srv:/var/www/clients/client0/xxxxxx.tld/tmp# find . -mtime +1 -name 'sess_*' | grep -v -w .no_delete | xargs rm
find: `./sess_ufs2d4cf89guk4topl7bfshk06': No such file or directory
find: `./sess_4307337ujvcem4rl11huj69cn4': No such

Code:
root@srv:/var/www/clients/client0/xxxxx.tld# du -hs --inodes */.
1       backup/.
1       cgi-bin/.
147     home/.
53      log/.
1       private/.
22      ssl/.
8.4K    tmp/.
1       vmfiles/.
83K     web/.
1       webdav/.
Why do you want to build a system from scratch instead of just upgrading your Debian OS? Upgrading between Debian major versions is a normal thing, and unlike on Red Hat-based systems, upgrading Debian and Ubuntu works quite well.
You wrote you have virtual machines running in Proxmox. Then it is easy to make a backup dump of the virtual machine and upgrade Debian 8 to Debian 9, following the upgrade instructions in chapter 4 of https://www.debian.org/releases/stretch/releasenotes. If the upgrade fails, restore the backup and you are back where you started, and can keep running the unupgraded system or figure out how to do the upgrade and try again. For ISPConfig, remember also to follow the ISPConfig Perfect Server guide for Debian 9, install the packages and do the configurations. Then run ispconfig_update.sh --force to finish the upgrade.
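Condensed, the Debian side of that looks roughly like this (a sketch only; the release notes cover third-party repos, pinning and other details this skips):

Code:
apt-get update && apt-get upgrade                      # bring jessie fully up to date first
cp /etc/apt/sources.list /etc/apt/sources.list.jessie  # keep a copy of the old sources
sed -i 's/jessie/stretch/g' /etc/apt/sources.list
apt-get update
apt-get upgrade                                        # minimal upgrade first, as the release notes suggest
apt-get dist-upgrade                                   # then the full upgrade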
You would do well to follow the Debian 9 upgrade with another upgrade to 10; it's not much more work and gets you to the current stable release.
No. I've never tried it, but offhand the supported upgrade path is one version at a time, and IIRC the release notes explicitly mention not to jump versions. You can skip updating ISPConfig after each Debian update, though. So update Debian 8 -> Debian 9, then update Debian 9 -> Debian 10, then run through the relevant Perfect Server guide for Debian 10 and install all needed packages, then update ISPConfig and let it reconfigure services. Once you're done you'll need to update the default PHP settings for each server in Server Config (and maybe run through all settings to consider/update new settings which have been added), and generally test and fix any issues you might find (e.g. old Apache config).
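The Debian 9 to 10 step follows the same pattern as 8 to 9 (again just a sketch; check the buster release notes for specifics):

Code:
sed -i 's/stretch/buster/g' /etc/apt/sources.list   # the security suite keeps the "buster/updates" naming
apt-get update
apt-get upgrade && apt-get dist-upgrade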