cgroups or other way to limit php-fpm website cpu

Discussion in 'General' started by topogigio, Apr 13, 2022.

  1. topogigio

    topogigio Member

    Hi,
    I need a way to limit CPU usage per-website (or per-client). Is there anything integrated into ISPConfig to do this?
    Or any way to use cgroups or other Linux mechanisms to limit them? Basically, I need to make sure that a single website cannot eat too much CPU..
    thanks
     
  2. michelangelo

    michelangelo Active Member

    ISPConfig currently doesn't support cgroups.
    Having cgroup support might be a nice feature, but whether I would really make use of it would also depend on how well it works in production.
     
  3. ahrasis

    ahrasis Well-Known Member HowtoForge Supporter

    I don't think a website alone will use that much CPU, but who knows what else it's actually running in the background.

    The only way to do it is to separate the site onto its own web server with its own prescribed CPUs; that way it cannot take more than what it has.
     
  4. topogigio

    topogigio Member

    I tried to configure cgroups. It doesn't seem to work. I see php-fpm processes executed by the web-xx user, but the cgroup rules see them as executed by the php-fpm user and do not apply the right slice.
    Any ideas from someone who knows Linux better than me? :(

    It's an important point, because ISPConfig is used to run shared hosting servers. If I cannot limit the PHP CPU of a single website, I cannot protect my service from one website's code.. :(
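    For anyone debugging the same thing, the cgroup each php-fpm process actually landed in can be read straight from /proc (a diagnostic sketch, assuming a Linux host; process names and users are setup-specific):

    ```shell
    # Print the owner and cgroup of every php-fpm process, to verify whether
    # the web-xx workers really ended up in the slice you configured.
    # (No php-fpm running simply means no output.)
    for pid in $(pgrep php-fpm); do
        echo "PID $pid ($(ps -o user= -p "$pid")): $(cat "/proc/$pid/cgroup")"
    done

    # The same file exists for any process, e.g. the current shell:
    own_cgroup=$(cat /proc/self/cgroup)
    echo "this shell: $own_cgroup"
    ```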
     
  5. michelangelo

    michelangelo Active Member

    I recently had a case where a customer's WordPress website was compromised; the intruder installed a Python miner in the website's private folder, and the miner noticeably affected the CPU and, with it, the overall server performance.
    So yes, having tools to limit the impact of something like that would actually be nice, but the question is also how much effort it would take to make this generally available via ISPConfig.

    Also, the only control panel I know of that offers cgroup support and IS open source is apnscp.
    It might be worth having a look at their code to see how they did it.
     
  6. ahrasis

    ahrasis Well-Known Member HowtoForge Supporter

    Well, ISPConfig is multi-server capable, so it is possible to implement a separate web server instance as mentioned earlier.

    However, I do not think it is too difficult or impossible to implement either, but somebody needs to give their time and expertise, come forward, and actually code it for ISPConfig.
     
  7. topogigio

    topogigio Member

    a separate web server is a different scenario. I cannot provide dedicated resources for every website; otherwise I would switch to a different solution.
    apnscp has impressive resource controls, I see...

    I tried to understand what the problem with cgroups on my ISPConfig server is. The problem seems to be that php-fpm forks its worker processes and runs them as the website user, but the fork does not apply the cgroup slice, so it's totally useless in our environment. It seems that cgroup rules only catch "real" (directly started) processes, not forked children.
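    That matches how the kernel handles it: a forked child simply inherits its parent's cgroup, and classification rules keyed on the process user are not re-applied after the fork. A tiny demonstration of the inheritance, assuming any Linux host (a command substitution forks a child process):

    ```shell
    # A forked child starts in the SAME cgroup as its parent -- which is why
    # rules keyed on the process user never catch php-fpm's forked workers.
    parent_cgroup=$(cat /proc/self/cgroup)           # read by one forked child
    child_cgroup=$(sh -c 'cat /proc/self/cgroup')    # read by a grandchild
    echo "parent: $parent_cgroup"
    echo "child:  $child_cgroup"
    # The two lines are identical: forking changed nothing about cgroup placement.
    ```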
     
    ahrasis likes this.
  8. Jesse Norell

    Jesse Norell ISPConfig Developer Staff Member ISPConfig Developer

    You could try fastcgi mode; it is not as efficient as fpm, and doesn't chroot, but it might allow cgroups to limit resources.
     
    ahrasis likes this.
  9. michelangelo

    michelangelo Active Member

    Right, exactly that is the problem with enforcing cgroup limits on php-fpm pools.
    As far as I could find out, it would be necessary to change the php-fpm handling in ISPConfig.

    Instead of having just one master php-fpm service, it would be necessary to create a master php-fpm service for every website.
    That's how apnscp/apiscp did it: https://docs.apiscp.com/admin/PHP-FPM/#service-management

    You can clearly see how they created a dedicated master php-fpm service file for every website.
    In fact, having this would actually be pretty neat, because with the current pool handling every website becomes unavailable whenever someone (or ISPConfig) restarts the master php-fpm service.

    With this solution, only the master process of that one website would be restarted...
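    To illustrate the idea (file names and values here are purely hypothetical, not apiscp's or ISPConfig's actual layout): one systemd unit per website would put each site's master and all of its forked workers into their own cgroup, where systemd's resource directives apply:

    ```ini
    # /etc/systemd/system/php-fpm-web5.service -- hypothetical per-site unit
    [Unit]
    Description=PHP-FPM master for website web5

    [Service]
    Type=notify
    ExecStart=/usr/sbin/php-fpm8.1 --nodaemonize --fpm-config /etc/php/8.1/fpm/web5-master.conf
    # systemd places each service in its own cgroup, so these limits cover
    # this site's master process and every worker it forks:
    CPUQuota=100%
    MemoryMax=512M

    [Install]
    WantedBy=multi-user.target
    ```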
     
    ahrasis likes this.
  10. topogigio

    topogigio Member

    I think this would be a really important upgrade for ISPConfig. It allows creating shared environments, and in that scenario isolating websites (and their performance) is a main focus.

    Based on what I've understood, changing the php-fpm handling as you describe would then make it trivial to enforce CPU/RAM/bandwidth limits for every single website, and that would be a HUGE improvement..

    BTW, a separate PHP master process is also what IIS does, for example, and in fact it allows limiting CPU and RAM resources per-website
     
  11. Jesse Norell

    Jesse Norell ISPConfig Developer Staff Member ISPConfig Developer

    ahrasis likes this.
  12. ahrasis

    ahrasis Well-Known Member HowtoForge Supporter

    It has been like 5 years? I am sure that with some help he can get it properly completed.
     
  13. michelangelo

    michelangelo Active Member

    ...If he is still around and interested.
    The last time he was online on the forums was sometime in 2021.

    I had a quick look at his code and started porting it to the current dev version of ISPConfig, but as far as I could see, only the systemd service file handling was implemented.
     
    topogigio likes this.
  14. topogigio

    topogigio Member

    wow.. that's really the point, and it was on the table 5 years ago!
    I cannot help with code, but if I can help with other tasks, I'm here. I think this feature should be first on every roadmap for a shared environment like an ISPConfig server.
     
  15. topogigio

    topogigio Member

    no response on the old thread; it seems dead here too.
    So is there any chance of getting such an important feature? :(
     
  16. michelangelo

    michelangelo Active Member

    I'm just like you, an ISPConfig user who occasionally contributes to this project in his free time, not a paid developer who can work full-time on it.
    However, I looked further into adding cgroups and resource limiting, and it's more work than I first thought: the existing fpm handling code is outdated and needs rework to match current ISPConfig fpm handling, and the still unanswered question is how well cgroups + separate php-fpm masters + a non-separated web server will actually work together. Apache httpd 2.4 seems to have certain resource limiting features, but I couldn't find the exact same features in Nginx. Another point is that this feature needs to run on all three major distributions: Debian, Ubuntu and RHEL (and its derivatives like Alma, Rocky etc.).

    Of course, if no one tries it, no one will know how well all of this works, but it requires some good quality time to work on it, which I cannot afford atm.

    So, sadly, yes: I believe resource limiting is still an interesting feature, also for me, but we'll probably have to wait longer for it to be implemented, or until an alternative emerges, like an open-source, ready-to-use CageFS that works with Nginx, httpd and php-fpm.
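    In the meantime, one coarse manual stopgap that needs no ISPConfig changes is capping the shared php-fpm service as a whole with a systemd drop-in (a sketch, assuming a systemd-based host; the unit name is distro-specific, and this caps all sites together, not per site):

    ```ini
    # /etc/systemd/system/php8.1-fpm.service.d/limits.conf
    # (the unit is named php-fpm.service on RHEL derivatives; apply with
    #  "systemctl daemon-reload && systemctl restart php8.1-fpm")
    [Service]
    CPUQuota=200%
    MemoryMax=2G
    ```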
     
    Last edited: May 10, 2022
    till likes this.
  17. topogigio

    topogigio Member

    Yes, I understand.

    But I really can't believe I'm alone in having problems with customers eating CPU on a platform whose stated goal is "to provide ISP web services". How does everyone with far more customers and systems than mine, who has been providing services for many years, cope? It's impossible that this limitation hasn't stopped others from working with ISPConfig... :( At least a valid manual workaround or something similar to rein in "bad websites", I don't know...
    Is there someone on the ISPConfig core team who can clarify this?

    thanks
     
  18. ahrasis

    ahrasis Well-Known Member HowtoForge Supporter

    I think if you have good TOS you can take action against such abuse. Coding is not the only way to solve this kind of problem.
     
  19. topogigio

    topogigio Member

    If it is something like malware, yes. But if it's a website receiving a lot of requests, or with bad code consuming too much CPU, it is not so easy to measure and act on. It would be much simpler to apply a CPU or resource usage limit and move on. If an environment is shared, I think the first requirement for every customer is to be "protected" from the others..
     
  20. ahrasis

    ahrasis Well-Known Member HowtoForge Supporter

    If someone is up to it, maybe yes to that option, but as you can already see, 5 years have gone by and nothing has been done, so you can easily see another 5 years passing without anything serious being done either.

    I'd still consider such behaviour abuse, and if the TOS covers it, action can still be taken against such a web account, which is worth doing rather than waiting for an uncertain period of time.

    But it's your server/service, so it is up to you to resolve, act and/or decide accordingly/expediently.
     