mysql - find databases in use

Discussion in 'Server Operation' started by pyte, Jul 4, 2024.

  1. pyte

    pyte Well-Known Member HowtoForge Supporter

    Hi!
    I'm trying to find a reliable way to determine whether a database is "in use" (records change, the database gets accessed) on an old host with loads of databases. Here is what I have already tried:
    1. Checking the files in /var/lib/mysql/{DBNAME} - not reliable, as the timestamps may not change.
    2. SHOW PROCESSLIST; - this shows active connections, but only at the moment the command is run.
    3. update_time is not an option, as this is not an InnoDB backend.
    Does anyone have a good idea how to get the information?
     
  2. till

    till Super Moderator Staff Member ISPConfig Developer

    If this is a kind of low traffic server, then something like this might work (from ChatGPT):

    To get the last login timestamp of a MySQL database user, you need to enable and check the general query log or audit log, as MySQL does not maintain a direct last login timestamp for users by default. Here's how you can do it:

    ### Method 1: Using the General Query Log

    1. **Enable the General Query Log**:
    You can enable the general query log to record all SQL queries, including login attempts.

    ```sql
    SET GLOBAL general_log = 'ON';
    SET GLOBAL log_output = 'TABLE';
    ```

    This will log all queries to the `mysql.general_log` table.

    2. **Query the General Log**:
    You can query the `mysql.general_log` table to find the last login timestamp.

    ```sql
    SELECT event_time, user_host
    FROM mysql.general_log
    WHERE command_type = 'Connect'
    AND user_host LIKE 'your_username%'
    ORDER BY event_time DESC
    LIMIT 1;
    ```

    Replace `your_username` with the actual username you are checking for.

    ### Method 2: Using Audit Log Plugin

    1. **Install the Audit Log Plugin**:
    If you are using MySQL Enterprise Edition, you can use the audit log plugin.

    ```sql
    INSTALL PLUGIN audit_log SONAME 'audit_log.so';
    SET GLOBAL audit_log_policy = 'LOGINS';
    ```

    2. **Configure the Audit Log Plugin**:
    `LOGINS` is enough for connection tracking; if you also want every statement, widen the policy. (Note that on newer MySQL versions `audit_log_policy` can only be set at server startup, not at runtime.)

    ```sql
    SET GLOBAL audit_log_policy = 'ALL';
    ```

    3. **Query the Audit Log**:
    The audit log plugin writes to a file by default. You will need to parse this file to find login attempts.

    The location of the audit log file is specified in the MySQL configuration (usually `my.cnf` or `my.ini`).

    ### Method 3: Using the Performance Schema

    1. **Enable Performance Schema**:
    Ensure that the performance schema is enabled in your MySQL configuration; it is on by default since MySQL 5.6.6.

    ```sql
    SHOW VARIABLES LIKE 'performance_schema';
    ```

    2. **Query Performance Schema for Connection Statistics**:
    The `performance_schema.accounts` table tracks current and total connections per account since the server was started:

    ```sql
    SELECT user, host, current_connections, total_connections
    FROM performance_schema.accounts
    ORDER BY total_connections DESC;
    ```

    ### Method 4: Using a Custom Solution

    If you prefer, you can implement a custom solution by logging connections manually to a custom table, for example via the `init_connect` server variable or your application logic (MySQL has no login trigger).

    1. **Create a Custom Log Table**:
    Create a table to store login timestamps.

    ```sql
    CREATE TABLE user_login_log (
    user VARCHAR(50),
    login_time DATETIME
    );
    ```

    2. **Insert Login Attempt**:
    Set `init_connect` so that every new connection inserts a record into the `user_login_log` table. (Note: `init_connect` is not executed for users with the SUPER privilege, and the connecting user needs INSERT rights on the table.)

    ```sql
    SET GLOBAL init_connect = 'INSERT INTO user_login_log (user, login_time) VALUES (CURRENT_USER(), NOW())';
    ```

    3. **Query the Custom Log Table**:
    Query the `user_login_log` table to get the last login timestamp.

    ```sql
    SELECT login_time
    FROM user_login_log
    WHERE user = 'your_username'
    ORDER BY login_time DESC
    LIMIT 1;
    ```

    Choose the method that best fits your environment and requirements. Method 1 (General Query Log) and Method 2 (Audit Log Plugin) are the most straightforward if you want to avoid custom development.
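
If the goal is a list of databases rather than users, the `Connect` rows of the general log can be boiled down to distinct database names. A small helper along these lines might work (a sketch; it assumes the usual `user@host on dbname using TCP/IP` argument format, which is worth verifying on your MySQL version):

```shell
#!/bin/bash
# Read "Connect" argument lines of the general log on stdin and print the
# distinct database names that clients connected to.
# Assumes the "user@host on dbname using TCP/IP" format of Connect rows.
extract_connect_dbs() {
  awk -F' on ' '/ on /{sub(/ using.*/, "", $2); print $2}' | sort -u
}
```

Feed it with something like `mysql -N -e "SELECT argument FROM mysql.general_log WHERE command_type='Connect';" | extract_connect_dbs`.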
     
  3. pyte

    pyte Well-Known Member HowtoForge Supporter

    Yeah, I already thought about these options, but both result in a huge amount of logging, and we are talking about ~200 databases on that host :-/

    Maybe I'll just run "SHOW PROCESSLIST;" frequently and log the result with unique entries for the "db" column?
     
  4. till

    till Super Moderator Staff Member ISPConfig Developer

    Might be something to try. At least it will put less of a burden on the system, load-wise :)
     
  5. pyte

    pyte Well-Known Member HowtoForge Supporter

    For anyone who might find this thread: this is what I did in the end:

    Code:
    #!/bin/bash
    
    MYSQL_USER="root"
    MYSQL_PASSWORD="XXXXXX"
    MYSQL_HOST="localhost"
    MYSQL_DATABASE="information_schema"
    
    LOG_FILE="unique_databases.log"
    
    INTERVAL=10
    
    # Create the log file if it doesn't exist
    touch "$LOG_FILE"
    
    while true; do
      # Column 4 of SHOW PROCESSLIST is "db"; skip the header row and NULL entries
      UNIQUE_DBS=$(mysql -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" -h"$MYSQL_HOST" -D"$MYSQL_DATABASE" -e "SHOW PROCESSLIST;" | tail -n +2 | awk '{print $4}' | sort -u | grep -vx "NULL")
    
      # Append each database name to the log file if it is not in there yet
      for DB in $UNIQUE_DBS; do
        if ! grep -qx "$DB" "$LOG_FILE"; then
          echo "$DB" >> "$LOG_FILE"
        fi
      done
    
      sleep "$INTERVAL"
    done
    
    Keeping it running for a few days with nohup:

    Code:
    nohup ./mysql-report.sh &
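
After the log has collected entries for a while, the interesting list is usually the inverse: databases that never showed up in the processlist. A sketch of that comparison (assuming the `unique_databases.log` produced by the script above):

```shell
#!/bin/bash
# Print lines present in the first file but absent from the second.
# Intended use: unseen_dbs <(mysql -N -e "SHOW DATABASES;") unique_databases.log
unseen_dbs() {
  comm -23 <(sort "$1") <(sort "$2")
}
```

The databases it prints are the cleanup candidates, keeping in mind that a quiet database may simply not have been sampled yet.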
     
  6. ahrasis

    ahrasis Well-Known Member HowtoForge Supporter

  7. pyte

    pyte Well-Known Member HowtoForge Supporter

  8. lukafred

    lukafred New Member

    I have been tasked with a mysql cleanup on a server, that is, find out which databases are in use and which ones aren't.

    My background is Oracle and MS SQL Server (on mostly Windows servers) and the only exposure I've had to MySQL is playing with CMS websites with XAMPP on my workstation.

    I've managed to find out how many databases are on the server, as well as their sizes & free space.

    Is there a query I can use to see the last time a database has been accessed or touched? Or can this be set up on the OS (Solaris) side?

    Since MySQL databases seem to be similar to the MSSQL concept of databases (schemas in Oracle), is there a way to shut down individual databases on a server? (That way I could shut down innocuous-looking ones and see if anyone complains.)
     
  9. pyte

    pyte Well-Known Member HowtoForge Supporter

    You can check with this:
    Code:
    SELECT TABLE_SCHEMA, MAX(UPDATE_TIME) AS last_update
    FROM information_schema.tables
    WHERE TABLE_SCHEMA = 'database_name'
    GROUP BY TABLE_SCHEMA;
    But that will only show whether the data was changed, not whether the database was accessed at all. (Also note that UPDATE_TIME can be NULL or reset after a server restart, depending on the storage engine and MySQL version.)

    To see whether it was accessed at all, you could use the general log or the audit log; both have to be enabled first, and after some time you can look through them to see which databases were accessed.

    You cannot "disable" a database in MySQL. What you can do is revoke all permissions on it, so that it can't be accessed. Or you can dump it into a file and then drop it; if you still need it later, you can restore it from the file.
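
The dump-then-drop route can be scripted; here is a minimal sketch (root credentials and the `MYSQL_PASSWORD` variable are placeholders, and `quote_ident` backtick-quotes the name so the DROP statement survives odd database names):

```shell
#!/bin/bash
set -o pipefail

# Backtick-quote an identifier for a SQL statement (doubles embedded backticks).
quote_ident() {
  printf '`%s`' "$(printf '%s' "$1" | sed 's/`/``/g')"
}

# Dump a database to a compressed file, then drop it only if the whole dump
# pipeline succeeded. Credentials are placeholders.
dump_and_drop() {
  local db="$1"
  mysqldump -uroot -p"$MYSQL_PASSWORD" "$db" | gzip > "${db}.sql.gz" \
    && mysql -uroot -p"$MYSQL_PASSWORD" -e "DROP DATABASE $(quote_ident "$db");"
}
```

`set -o pipefail` makes the drop depend on mysqldump actually succeeding, not just on gzip.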

    I used a script that worked OK-ish:
    Code:
    #!/bin/bash
    
    MYSQL_USER="root"
    MYSQL_PASSWORD="XXXXXXXXXX"
    MYSQL_HOST="localhost"
    MYSQL_DATABASE="information_schema"
    LOG_FILE="unique_databases.log"
    INTERVAL=10
    
    # Create the log file if it doesn't exist
    touch "$LOG_FILE"
    
    while true; do
      # Column 4 of SHOW PROCESSLIST is "db"; skip the header row and NULL entries
      UNIQUE_DBS=$(mysql -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" -h"$MYSQL_HOST" -D"$MYSQL_DATABASE" -e "SHOW PROCESSLIST;" | tail -n +2 | awk '{print $4}' | sort -u | grep -vx "NULL")
    
      # Append each database name to the log file if it is not in there yet
      for DB in $UNIQUE_DBS; do
        if ! grep -qx "$DB" "$LOG_FILE"; then
          echo "$DB" >> "$LOG_FILE"
        fi
      done
    
      # Randomize the sleep interval so the samples don't always hit the same moment
      INTERVAL=$(shuf -i 1-20 -n 1)
      sleep "$INTERVAL"
    done
    
    And run it for a few days in the background:
    Code:
    nohup ./mysql-check.sh &
     
