supporting DANE

Discussion in 'Developers' Forum', started Oct 7, 2017.

  1. ISPConfig Developer

    Currently I'm thinking about how to implement DANE + ISPConfig + Let's Encrypt, and would like to know if there's any interest in that (which would be great).

    Things to consider:
    - the LE cert is renewed roughly every 3 months
    - we need to generate a hash of the new cert and publish it to DNS (which could be done via the ISPConfig DNS module or an API-supporting provider)
    - DNSSEC support
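The hash-generation step above can be sketched with the Python standard library alone. This builds a DANE-EE TLSA record ("3 0 1": usage 3, selector 0 = full certificate, matching type 1 = SHA-256); the hostname and port are placeholders, and this is an illustration of the DNS data that would need republishing, not ISPConfig code:

```python
import base64
import hashlib
import re

def tlsa_3_0_1(pem: str, host: str = "mail.example.com", port: int = 25) -> str:
    """Build a DANE-EE TLSA record ('3 0 1') from a PEM certificate:
    usage 3 (DANE-EE), selector 0 (full cert), matching type 1 (SHA-256)."""
    m = re.search(
        r"-----BEGIN CERTIFICATE-----(.+?)-----END CERTIFICATE-----",
        pem, re.S)
    if m is None:
        raise ValueError("no certificate found in PEM input")
    der = base64.b64decode(m.group(1))        # raw DER bytes of the certificate
    digest = hashlib.sha256(der).hexdigest()  # matching type 1: SHA-256 hex
    return f"_{port}._tcp.{host}. IN TLSA 3 0 1 {digest}"
```

Because a "3 0 1" record hashes the full certificate, it must be recomputed and republished (and the old record kept until TTLs expire) on every LE renewal, which is exactly the automation gap discussed here.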

    interesting reading:

    I can do that for myself the hackish way, but I don't know if it's worth the effort to make it work "properly" together with ISPConfig - or is it already done and I was too blind to see? Happy to hear from you.

    Ah, I've seen that on the DNS level there seems to be an implementation already - does it configure Postfix as well? I've never used the ISPConfig DNS module, I have to admit ;)
    Last edited: Oct 7, 2017
  2. Jesse Norell

    Jesse Norell ISPConfig Developer Staff Member

  3. ISPConfig Developer

    Thanks for the link. I forgot to mention that I'd use the Let's Encrypt cert and only enable checking in Postfix, leaving it up to the user where to put their own data. But I need to rethink some things anyway. I'll check the issue you linked.
  4. till

    till Super Moderator Staff Member ISPConfig Developer

    The problem with the current DNSSEC implementation is that it creates the certs/keys on the slave. On a mirror, the result is that both nodes use different certs. That's why it needs to be reimplemented so that the certs are either created on the master and then pushed to both slaves, or only the first slave (mirror master) creates them and stores them in the ISPConfig master db, which then pushes them to all mirrored slaves. To be able to do that securely, I guess we should implement two-way encryption (an internal API function and a form field type that auto en- and decrypts) in ISPConfig, with a key that is shared between all nodes, so we can encrypt the key and cert data in the ISPConfig database.
  5. Jesse Norell

    Jesse Norell ISPConfig Developer Staff Member

    What is the threat model being addressed - someone who has access to the mysql data (maybe dump files, or sql injection in the ispconfig interface), but not to the key (which is potentially stored on each ispconfig server in the installation)? Separate keys for separate functions may be good (one shared among web servers, one among dns servers, etc.). If the threat being protected against is actually someone viewing inter-server traffic (i.e. the mysql connection), it'd be easy to enable ssl on the mysql connection.
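For reference, turning on TLS for the MySQL link is roughly a server-side config change plus a grant. This is only a sketch - file paths, the user name, and the config location are placeholders, and the exact option names should be checked against the MySQL/MariaDB version in use:

```ini
# /etc/mysql/my.cnf on the master (paths are placeholders)
[mysqld]
ssl-ca   = /etc/mysql/ca.pem
ssl-cert = /etc/mysql/server-cert.pem
ssl-key  = /etc/mysql/server-key.pem
```

Then the inter-server account can be forced to use it, e.g. `ALTER USER 'ispcsrv1'@'%' REQUIRE SSL;` (user name illustrative).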

    The "how to store secret data encrypted, in a way we can securely share among slave servers" problem is probably worth solving in its own right. I don't know enough cryptography to build/suggest more than a home-brew solution which I myself couldn't see how to break - I suspect this problem has been solved in many domains, and the right person familiar with it could point to a good solution (e.g. libsodium).
  6. ISPConfig Developer

    I like the idea of securing the sql connections. Just a quick thought I'd like to throw in:

    can't the key be encrypted on disk and shared using scp with authorized_keys?
  7. till

    till Super Moderator Staff Member ISPConfig Developer

    @Jesse Norell: Yes, the idea behind the encryption is to hide the key from an attacker that got access to the sql db somehow (like access to the live db or to a backup sql dump).

    Adding 'out of the box' support for secured sql connections is another part that we should add in ISPConfig 3.2.

    I would like to avoid using a separate PHP extension for cryptography besides openssl, as that other extension might not be available for all supported OSes. But I'm not a crypto guy either, so we might need some help to avoid mistakes in choosing the right methods and settings in OpenSSL.

    Generally yes. But this would require a second server connection channel besides the MySQL connection, and I would like to avoid that as it complicates the setup and adds another possible point of failure. And when your servers are separated by firewalls, you would have to allow ssh between them.

    My idea for the long term is to replace the MySQL inter-server connections completely with a REST api so that the slaves can poll data from a REST endpoint hosted on the master. This has several benefits in my opinion:

    1) The connection is automatically secured by SSL through the SSL cert of the ISPConfig interface.
    2) No MySQL ports need to be open anymore.
    3) We can use a more fine-grained security model. In MySQL, we can currently only limit access to a specific table and column. But it would be better if we could limit access not just by column and table but also by record owner, so we get a limitation like 'the ispcsrv1 user is able to access field A in table B only where server_id is its own server_id'.

    The steps to achieve that long-term goal are, in my opinion:

    We should encapsulate all slave-to-master connections in a separate class file (currently we use the normal mysql class with $app->dbmaster->... connections). That way, we would have functions like getDataLog(...) and sendMonitorData(...). In a first step, we implement them to use MySQL, so everything keeps working as it does now. In a second step, we can implement an alternative REST endpoint in the ISPConfig interface part, plus the slave side in the class, so that the connect method can be chosen with a config setting.
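The two-step refactoring described above could look roughly like this. Python is used here for brevity (ISPConfig itself is PHP), and every class, method, table, and endpoint name is illustrative, not the actual ISPConfig schema or API:

```python
from abc import ABC, abstractmethod

class MasterTransport(ABC):
    """Encapsulates every slave-to-master call behind one interface."""
    @abstractmethod
    def get_data_log(self, server_id: int, last_seen: int) -> list: ...
    @abstractmethod
    def send_monitor_data(self, server_id: int, data: dict) -> None: ...

class MySQLTransport(MasterTransport):
    """Step 1: same behaviour as today, just behind the interface."""
    def __init__(self, db):
        self.db = db  # existing mysql wrapper, e.g. $app->dbmaster
    def get_data_log(self, server_id, last_seen):
        return self.db.query(
            "SELECT * FROM sys_datalog WHERE server_id = %s AND datalog_id > %s",
            (server_id, last_seen))
    def send_monitor_data(self, server_id, data):
        self.db.insert("monitor_data", {"server_id": server_id, **data})

class RestTransport(MasterTransport):
    """Step 2: identical interface, but polling a REST endpoint on the master
    (session is a requests-like HTTP client; endpoint paths are made up)."""
    def __init__(self, base_url, session):
        self.base_url, self.session = base_url, session
    def get_data_log(self, server_id, last_seen):
        r = self.session.get(f"{self.base_url}/datalog",
                             params={"server_id": server_id, "after": last_seen})
        return r.json()
    def send_monitor_data(self, server_id, data):
        self.session.post(f"{self.base_url}/monitor",
                          json={"server_id": server_id, **data})
```

The calling code only ever sees `MasterTransport`, so a config setting can pick which concrete class to instantiate without touching the callers.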

    I moved this post to dev forum now.
    Chris_UK likes this.
  8. ISPConfig Developer

    Heh, yeah - from DANE to "let's rewrite everything", good move moving :)
    Yeah, I hadn't thought about your valid points regarding the scp transfer, it was late anyway ;)

    I like the idea of having a REST API, and I'd secure the verification of the user using some sort of HMAC algorithm.
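One way the HMAC idea could work, sketched with the Python standard library: each slave holds a shared secret and signs every request; the master recomputes the MAC and compares in constant time. The header names, secret, and timestamp window are all assumptions for illustration:

```python
import hashlib
import hmac
import time

# Shared secret distributed to the slave at install time (placeholder value).
SHARED_KEY = b"per-slave-secret"

def sign_request(server_id: int, path: str, body: bytes,
                 key: bytes = SHARED_KEY) -> dict:
    """Slave side: produce headers the master can recompute to verify the caller."""
    ts = str(int(time.time()))
    msg = b"|".join([str(server_id).encode(), path.encode(), ts.encode(), body])
    mac = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return {"X-Server-Id": str(server_id), "X-Timestamp": ts, "X-Signature": mac}

def verify_request(headers: dict, path: str, body: bytes,
                   key: bytes = SHARED_KEY, max_skew: int = 300) -> bool:
    """Master side: recompute the MAC, compare in constant time,
    and reject stale timestamps to limit replay."""
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False
    msg = b"|".join([headers["X-Server-Id"].encode(), path.encode(),
                     headers["X-Timestamp"].encode(), body])
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])
```

Signing the path and body (not just the identity) means a captured signature can't be replayed against a different endpoint or with modified data.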
  9. Chris_UK

    Chris_UK Active Member HowtoForge Supporter

    @till thanks for pointing me to this thread.

    I too like the REST API idea. I don't see how it would be too expensive in terms of server load, either. I know it will add some, because PHP would be doing the work, but I don't think it would be anything extreme for the majority of people. Besides, the cron job could just be run at a longer interval if it becomes too much of an issue.

    I have to wonder how many people using the software are actually running even 10 slaves, let alone hundreds or thousands, for it to cause any noticeable problems. I have no doubt there will be some, but I can't really see it being many. Besides, if those power users needed to, couldn't they run the master in a cloud and mitigate the load? I dare say that would be an option, and a cost-effective one at that.

    In any case, I do look forward to seeing the progress on this, as I said in my other post asking about it. :D
  10. Chris_UK

    Chris_UK Active Member HowtoForge Supporter

    I've been thinking about this key situation and how the keys should be passed:
    public/private key pairs.

    A slave wanting to enrol generates a key pair, and the first thing it does is contact the master to exchange public keys. At this point the slave can send all the required information to the master, but the master only completes the enrolment when some action is taken (the administrator has to permit the enrolment)?

    Once the criteria to enrol are fulfilled, the master can send back its public key and also encrypt any signature-related information with the slave's public key. I might add that when I say the master's public key, it's only public so far as the slave servers know it. Both the master's pub key and the signature data can be encrypted using the slave's pub key to ensure they are received securely.

    There is no need for ssh, scp or any of those methods, because the slave creates the first pair and sends it to the master.
    Last edited: Oct 30, 2017
