I've been trying to do a migration from a Rocky 9 server to a new Debian 12 build (set up with the autoinstaller). The dry run completes fine, no problem. The live migration run happily gets going, and well into the proceedings it gets to where it is dumping the databases. All has gone well to this point. Now, one of the databases IS quite large, and at some point shortly after it reaches that one, the SSH window closes the connection, killing the migration tool. When I log back in, the migration log does not show any errors. I thought maybe some keep-alive needed to be set, but I set it both in sshd_config and in the Termius client-side settings, yet it always fails the same way (no error, the session simply closes). Any ideas??
Then this is not an issue with the migration tool; it's an issue with your SSH session. When your SSH session gets closed, ssh terminates the command running in that session, so SSH stops the migration tool, and that is why the migration does not finish. To keep applications running even when the SSH session gets closed, you can use e.g. the Linux program 'screen': https://www.howtoforge.com/how-to-use-screen-command-in-linux-system/ First start screen, and then, within that screen session, start the migration tool. Even if your SSH connection gets closed again, the migration tool will continue to run within the screen session. You can then either wait until it's finished, or later reconnect to the running screen session to see the output of the tool.
I will look at screen. I also thought maybe we could run the migration tool from cron? Then it won't care about SSH sessions. Is there a migrate invocation along the lines of 'run the migration live, not dry, but use all the defaults' that were established in the prior session? Then it's trivial.
Cron is not interactive, and it's also not designed to run a program just once, so it is not suited for this. The right tool for your problem is screen.
Using screen, I get this in migrate.log:

Code:
2026-03-06 01:24:46 - [INFO] File /tmp/migrate-import-tmp.sql.gz successfully transferred.
2026-03-06 03:25:14 - [INFO] Command gunzip -c /tmp/migrate-import-tmp.sql.gz | mysql --max_allowed_packet=1G -h 'localhost' -D c01stdb -u 'root' failed with code 255.
2026-03-06 03:25:14 - [ERROR] Execution of job failed: array (
  'type' => 'exec',
  'command' => 'gunzip -c /tmp/migrate-import-tmp.sql.gz | mysql --max_allowed_packet=1G -h \'localhost\' -D c01stdb -u \'root\'',
  'info_message' => 'Propagating database c01stdb',
  'stop_on_error' => true,
)
2026-03-06 03:25:14 - [WARN] JSON API ERROR: session expired. Trying re-login!
[root@ns1 migration]#

and it aborts. Any idea?
Have you checked that /tmp is also big enough? It might be that you have free space on other partitions while /tmp is too small. Then, please check whether the /tmp/migrate-import-tmp.sql.gz file is still there on the new server, so you can run the failed command again:

Code:
gunzip -c /tmp/migrate-import-tmp.sql.gz | mysql --max_allowed_packet=1G -h 'localhost' -D c01stdb -u 'root'

Another possibility is that this old database uses a feature that the new MySQL server rejects during import.
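To check both things at once, something like this works (assuming the dump lands in /tmp, as the log shows):

```shell
# How much space is free on the filesystem that holds /tmp?
df -h /tmp

# Is the transferred dump still there, and how big is it?
ls -lh /tmp/migrate-import-tmp.sql.gz 2>/dev/null || echo "dump file not present"
```

If the free space on /tmp is smaller than the uncompressed size of the dump, that alone can make the import fail partway through.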
I've run the two commands (adding -p to the mysql command) and they seem to have completed without error. What's the next step?
Then this might have been just a temporary issue on your server. Run the migration tool again to see if the migration finishes now.
Alas, I ran it again as ./migrate and got:

Code:
2026-03-16 18:43:06 - [INFO] File /tmp/migrate-import-tmp.sql.gz successfully transferred.
2026-03-16 20:45:30 - [INFO] Command gunzip -c /tmp/migrate-import-tmp.sql.gz | mysql --max_allowed_packet=1G -h 'localhost' -D c01stdb -u 'root' failed with code 255.
2026-03-16 20:45:30 - [ERROR] Execution of job failed: array (
  'type' => 'exec',
  'command' => 'gunzip -c /tmp/migrate-import-tmp.sql.gz | mysql --max_allowed_packet=1G -h \'localhost\' -D c01stdb -u \'root\'',
  'info_message' => 'Propagating database c01stdb',
  'stop_on_error' => true,
)
2026-03-16 20:45:31 - [WARN] JSON API ERROR: session expired. Trying re-login!

When I was running it manually, I had to add the root password with the -p option; that is not in the migration script. Could that be the problem? And since I can run these commands on the new server without error (-p included), is there a way to skip this database and continue past this point? Maybe a --skip-nasty-database option? Sad!
More than unlikely after several thousand successfully migrated servers. I've asked Marius to take a look at your thread in the forum, as he is doing the Migration Tool support. Btw, you could have reached out to the Migration Tool support directly here: https://www.ispconfig.org/get-support/?type=migration But as mentioned, I've forwarded your request to them now.

Skipping the database does not make much sense. There is an issue with your system that needs to be solved, or you did not set a DB password or something similar. If you do not solve the underlying problem now, you will likely get other failures in ISPConfig later, when it does not fully work and is not fully migrated.
The only light I can shed: it's a BIG database; it took about 10 minutes to run those commands manually. Is there some timing error, something exhausting a loop before the import comes back completed? Or is the SSH session timing out on the new server? Any ideas what to check for?
Until Marius is able to answer your question here, you could try this: create a file /root/.my.cnf with this content:

Code:
[client]
user=root
password=YOUR_ROOT_PASSWORD
host=localhost

Replace YOUR_ROOT_PASSWORD with your root MariaDB password. Then secure the file:

Code:
chmod 600 /root/.my.cnf
chown root:root /root/.my.cnf

Finally, check whether you can now log in to MariaDB as the root user, without a password, while logged in as the root shell user on your system.
I think Till's post could be the right solution. As you said you had to add -p to the command, this could mean that your root user was not able to log in passwordless to the database, which it normally can on fully set-up ISPConfig systems (either using the socket or the debian.cnf file).
Hmm, Till, I've not heard anything from Marius. The other databases seem to have moved fine; it's just the huge one! And it unpacks on the new server manually, just not as part of the script? Some questions: does the migration migrate ALL databases, or only those tied to a website? And if I dump the huge database to SQL manually (that works, and it is already on the new server), can I rename the database (or delete it) on the source server? If it's skipped, shouldn't the migration complete properly? I can always recreate the database on the target server manually, after all!
That depends on what you told the tool to do. Renaming or deleting it on the source should work as well, or you can use the exclude options of the tool, which should work for databases too.