Nextcloud requires a database backend. It is capable of using SQLite, PostgreSQL, or MySQL / MariaDB. The installation on this server has been configured to use MariaDB as this is the best supported option.
This script helps secure the installation by giving the opportunity to supply a strong root password for the mysql process, removing anonymous test users, disallowing remote root login and removing the unsecured ‘test’ database.
After editing /etc/mysql/my.cnf, the database can be created as follows:
root@<server># systemctl restart mysql && mysql -u root
This will open up the MariaDB console, which accepts SQL statements, to enable the creation of a database and an administrative user other than the MariaDB root user. The choice of name and password for this user is entirely arbitrary, so choosing a name other than admin is probably a good idea.
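One way to produce such a long random password string (an illustrative sketch, not part of the original setup; the 32-character length and alphanumeric character set are arbitrary choices) is to read from the kernel's random source:

```shell
#!/bin/sh
# Draw random bytes from the kernel CSPRNG, keep only alphanumerics,
# and truncate to 32 characters. Length and character set are assumptions.
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32; echo
```

The generated string can then be pasted into the CREATE USER statement below in place of the placeholder.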
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 1597
Server version: 10.1.34-MariaDB-0ubuntu0.18.04.1 Ubuntu 18.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
MariaDB [(none)]> CREATE USER <username>@localhost IDENTIFIED BY '<long_random_password_string>';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nextcloud.* TO <username>@localhost;
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> quit;
The other important administrative task for the database is to create a backup user with minimal privileges for making database dumps. This can be done from the MariaDB shell:
root@<server># mysql -u root
MariaDB [(none)]> CREATE USER backup IDENTIFIED BY '<long_random_password_string>';
MariaDB [(none)]> GRANT SELECT, LOCK TABLES ON *.* TO backup;
MariaDB [(none)]> quit;
With the backup user in place, it is time to create a script to dump the database over the network to the development machine. Of course, these parameters can be adjusted or duplicated such that this can happen between any number of boxes provided they have the correct authentication methods configured.
#!/bin/sh
source "$HOME/.keychain/$HOSTNAME-sh"
eval `keychain --noask --eval id_ed25519`
ssh root@<server> -i $HOME/.ssh/id_ed25519 \
    "mysqldump --user=backup \
    --password='<long_random_password_string>' \
    nextcloud | xz -9 -T 0 -c" \
    > $HOME/_vps/backups/MariaDB/fog.vincible.space-$(date +%Y-%m-%d_%H-%M-%S).sql.xz
This script again relies on a pipe: the mysqldump output is compressed with xz on the server, written to standard output with the -c option, and then redirected to fog.vincible.space-$(date +%Y-%m-%d_%H-%M-%S).sql.xz on my laptop. It can be scheduled to run at arbitrary intervals by the regular user on the development machine (laptop): run crontab -e and add the line:
0 0 * * * /bin/sh /path/to/sqldump/script
This will run the script daily at midnight, but it is obviously possible to adjust the time and frequency that the script runs. For a production server, it might make sense to run the backup much more frequently and have some sort of rotation strategy to make sure that disk space isn’t being wasted. Also, using some form of data deduplication is a good idea. One useful piece of software designed explicitly for this purpose is Colin Percival’s Tarsnap which encrypts files locally using a syntax similar to tar and then stores deduplicated, encrypted chunks on a remote server backed by Amazon Web Services (AWS).
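One rotation strategy alluded to above can be sketched with find: prune any dump older than a fixed window. The directory name, the 14-day window, and the sandbox setup below are assumptions for illustration; in practice BACKUP_DIR would point at the real dump directory and the find command would run from cron alongside the dump job.

```shell
#!/bin/sh
# Minimal rotation sketch: delete compressed dumps older than 14 days.
# A temporary sandbox directory is used here so the logic can be exercised
# safely; substitute the real directory (e.g. $HOME/_vps/backups/MariaDB).
BACKUP_DIR=$(mktemp -d)

# Simulate one stale dump (15 days old) and one fresh dump.
touch -d '15 days ago' "$BACKUP_DIR/old.sql.xz"
touch "$BACKUP_DIR/new.sql.xz"

# Prune anything matching the dump pattern older than the 14-day window.
find "$BACKUP_DIR" -name '*.sql.xz' -type f -mtime +14 -delete

ls "$BACKUP_DIR"   # only new.sql.xz remains
```

A more elaborate scheme could keep weekly dumps for longer by matching on the date embedded in the filename, but a plain age-based cutoff like this already prevents the backup directory from growing without bound.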