Backup

To back up docker-scripts apps, it is usually enough to make a backup of the directories /opt/docker-scripts/ and /var/ds/, since the apps usually store their data on the host system (in subdirectories of /var/ds/). I also include /root/ in the backup, since it may contain useful things (for example maintenance scripts).

1. The backup script

The main backup script is /root/backup/backup.sh and it looks like this:

#!/bin/bash -x

cd $(dirname $0)

MIRROR_DIR=${1:-/var/bak}

# mirror everything to a local dir
./mirror.sh $MIRROR_DIR

# backup the mirror directory to the storagebox
./borg.sh $MIRROR_DIR

# backup the incus setup
./incus-backup.sh

It has these main steps:

  1. Mirror everything (/root/, /var/ds/ and /opt/docker-scripts/) to the directory /var/bak/. This is mainly done by rsync.

  2. Back up the mirror directory (/var/bak/) to the StorageBox, using BorgBackup.

  3. Independently of the first two steps, also back up the setup and configuration of Incus, which is located at /var/lib/incus/.
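The line MIRROR_DIR=${1:-/var/bak} in the script uses bash's default-value parameter expansion: the first argument is used if one is given, otherwise the default /var/bak. A minimal illustration (the path /tmp/bak is just an example):

```shell
#!/bin/bash

# No argument given: the default is used
set --                      # clear the positional parameters
MIRROR_DIR=${1:-/var/bak}
echo "$MIRROR_DIR"          # prints: /var/bak

# Argument given: it overrides the default
set -- /tmp/bak
MIRROR_DIR=${1:-/var/bak}
echo "$MIRROR_DIR"          # prints: /tmp/bak
```

So running the script as /root/backup/backup.sh /tmp/bak would mirror and back up /tmp/bak instead of /var/bak.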

2. Run the script periodically

To run the backup script periodically, create a cron job like this:

cat <<EOF > /etc/cron.d/backup
30 3 * * * root bash -l -c "/root/backup/backup.sh &> /dev/null"
EOF

It runs the script every night at 3:30. The borg script takes care to keep only the latest 7 daily backups, the latest 4 weekly backups, and the latest 6 monthly backups. This makes sure that the size of the backups (on the StorageBox) does not grow without limit. Borg also uses deduplication and compression for storing the backups, which further reduces the size of the backup data.

We use bash -l -c to run the command in a login shell. This is because the default PATH variable in the cron environment is limited, so some commands inside the script would fail to execute because they cannot be found. Running it with bash -l -c makes sure that it gets the same environment variables as when we execute it from the prompt.

We also use &> /dev/null in order to discard all the stdout and stderr output. If a cron job produces some output, cron tries to notify us by email, which we usually want to avoid.
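The difference between the two environments can be seen with a quick check (env -i starts a shell with an almost empty environment, roughly like cron does):

```shell
# A cron-like (almost empty) environment: PATH is minimal or unset
env -i /bin/sh -c 'echo "cron-like PATH: $PATH"'

# A login shell: the full PATH built from the profile files
bash -l -c 'echo "login PATH: $PATH"'
```

If a command used in the backup script appears only in the second PATH (for example something installed under /usr/local/bin), it would fail under a plain cron shell.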

3. The mirror script

The script that mirrors /root/, /var/ds/ and /opt/docker-scripts/ to the directory /var/bak/ is located at /root/backup/mirror.sh, and it looks like this:

Script: backup/mirror.sh
#!/bin/bash -x

MIRROR_DIR=${1:-/var/bak}
rsync='rsync -avrAX --delete --links --one-file-system'

main() {
    mirror_host
    mirror_incus_containers
    mirror_bbb
}

mirror_host() {
    local mirror=${MIRROR_DIR}/host

    # mirror /root/
    mkdir -p $mirror/root/
    $rsync /root/ $mirror/root/

    # mirror /opt/docker-scripts/
    mkdir -p $mirror/opt/docker-scripts/
    $rsync \
        /opt/docker-scripts/ \
        $mirror/opt/docker-scripts/

    # backup the content of containers
    /var/ds/_scripts/backup.sh

    # mirror /var/ds/
    stop_docker
    mkdir -p $mirror/var/ds/
    $rsync /var/ds/ $mirror/var/ds/
    start_docker
}

stop_docker() {
    local cmd="$* systemctl"
    $cmd stop docker
    $cmd disable docker
    $cmd mask docker
}

start_docker() {
    local cmd="$* systemctl"
    $cmd unmask docker
    $cmd enable docker
    $cmd start docker
}

mirror_incus_containers() {
    local mirror
    local container_list="vclab"
    for container in $container_list ; do
        mirror=$MIRROR_DIR/$container

        # mount container
        mount_root_of_container $container

        # mirror /root/
        mkdir -p $mirror/root/
        $rsync mnt/root/ $mirror/root/

        # mirror /opt/docker-scripts/
        mkdir -p $mirror/opt/docker-scripts/
        $rsync \
            mnt/opt/docker-scripts/ \
            $mirror/opt/docker-scripts/

        # backup the content of the docker containers
        incus exec $container -- /var/ds/_scripts/backup.sh

        # mirror /var/ds/
        stop_docker "incus exec $container --"
        mkdir -p $mirror/var/ds/
        $rsync mnt/var/ds/ $mirror/var/ds/
        start_docker "incus exec $container --"

        # unmount container
        unmount_root_of_container
    done
}

mount_root_of_container() {
    local container=$1
    mkdir -p mnt
    incus file mount $container/. mnt/ &
    MOUNT_PID=$!
    sleep 2
}

unmount_root_of_container() {
    kill -9 $MOUNT_PID
    sleep 2
    rmdir mnt
}

mirror_bbb() {
    local container=bbb
    local mirror=$MIRROR_DIR/$container

    # mount container
    mount_root_of_container $container

    # mirror /root/
    stop_docker "incus exec $container --"
    mkdir -p $mirror/root/
    $rsync mnt/root/ $mirror/root/
    start_docker "incus exec $container --"

    # unmount container
    unmount_root_of_container
}

### call main
main "$@"

This script is fairly self-explanatory and easy to read. Nevertheless, let’s discuss a few things about it.

  1. The script mirrors not only the directories of the host, but also the directories /root/, /var/ds/ and /opt/docker-scripts/ inside Incus containers. We assume that in some Incus containers we have installed Docker and some docker-scripts apps. In this example script there is only one such Incus container, named vclab:

    local container_list="vclab"

    But if there are more, their names can be easily added to the list (separated by spaces).

  2. Before mirroring /var/ds/, we run the script /var/ds/_scripts/backup.sh, which may run the command ds backup on some of the applications. This is needed only for those apps where the content of the app directory is not sufficient for a successful restore of the application. Usually these are the apps where we would use ds remake to update, instead of ds make (see also: Update). So, we make the necessary docker-scripts backups before making a mirror of the directory and a backup to the StorageBox. This backup script may look like this:

    Script: /var/ds/_scripts/backup.sh
    #!/bin/bash -x
    
    rm /var/ds/*/logs/*.out
    
    cd /var/ds/talk.example.org/
    ds backup
    find backup -type f -name "*.tar.gz" -mtime +10 -delete
    
    cd /var/ds/guacamole.example.org/
    ds backup
    find . -type f -name "backup*.tgz" -mtime +10 -delete
    
    cd /var/ds/linuxmint/
    ds users backup
    find backup/ -type f -name "*.tgz" -mtime +10 -delete
    
    cd /var/ds/moodle.example.org/
    ds backup
    find . -type f -name "backup*.tgz" -mtime +10 -delete
    
    # list all the backups
    find /var/ds/ -name '*.tgz' | xargs ls -lh --color=yes
  3. We use the option --one-file-system with the command rsync. This means that data on any external file systems will be skipped, not mirrored. This is useful for cases like NextCloud or BBB, which may have large amounts of data that we keep on external storage. Those data should be backed up separately.

  4. Before mirroring /var/ds/, which contains the applications and their data, we make sure to stop docker, which in turn stops all the applications. If the data on the disk are constantly changing while we make the mirror, we may get a "mirror" with inconsistent data.

  5. For the Incus containers, we mount the filesystem of the container to a directory on the host, before mirroring, and unmount it afterwards.

  6. For the bbb container, we mirror only the /root/ directory, which also contains the docker applications (greenlight etc.). The data directory /var/bigbluebutton/ is not included in this backup.
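The --one-file-system behavior mentioned above relies on device IDs: every mounted filesystem has its own device ID, and rsync skips any directory whose device ID differs from that of the starting point. This can be seen with stat (here /proc is used as an example of a separate filesystem):

```shell
# Each filesystem has its own device ID; rsync's --one-file-system
# skips any directory whose device ID differs from the start directory.
stat -c 'device ID of /:     %d' /
stat -c 'device ID of /proc: %d' /proc   # different filesystem => different ID
```

An external data volume mounted under /var/ds/ would show a different device ID in the same way, and so would be excluded from the mirror.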

4. The borg script

This script makes a backup of the mirror directory (/var/bak/ by default) to the StorageBox. It is located at /root/backup/borg.sh, and looks like this:

Script: backup/borg.sh
#!/bin/bash

export BORG_PASSPHRASE='XXXXXXXXXXXXXXXX'
export BORG_REPO='storagebox:backup/repo1'

### to initialize the repo run this command (only once)
#borg init -e repokey
#borg key export

dir_to_backup=${1:-/var/bak}

# some helpers and error handling:
info() { printf "\n%s %s\n\n" "$( date )" "$*" >&2; }
trap 'echo $( date ) Backup interrupted >&2; exit 2' INT TERM

info "Starting backup"
borg create \
    --stats \
    --show-rc \
    ::'{hostname}-{now}' \
    $dir_to_backup
backup_exit=$?

info "Pruning repository"
borg prune \
    --list \
    --glob-archives '{hostname}-*' \
    --show-rc \
    --keep-daily    7 \
    --keep-weekly   4 \
    --keep-monthly  6
prune_exit=$?

info "Compacting repository"
borg compact
compact_exit=$?

# use highest exit code as global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))
global_exit=$(( compact_exit > global_exit ? compact_exit : global_exit ))

if [ ${global_exit} -eq 0 ]; then
    info "Backup, Prune, and Compact finished successfully"
elif [ ${global_exit} -eq 1 ]; then
    info "Backup, Prune, and/or Compact finished with warnings"
else
    info "Backup, Prune, and/or Compact finished with errors"
fi

exit ${global_exit}

The main commands of this script are borg create, borg prune and borg compact. The last part of the script just shows the status of these commands.

The script makes sure to keep a backup only for the last 7 days, the last 4 weeks, and the last 6 months (17 backups in total). This prevents the size of the backups from growing without limit. Borg also uses deduplication, compression and compaction to keep the size of the backup storage as small as possible.

Borg commands usually need to know on which borg repository they should work, and the passphrase that is used to encrypt that repository. These may be specified on the command line, or with environment variables. We are using environment variables because this is more convenient for a script:

export BORG_PASSPHRASE='XXXXXXXXXXXXXXXX'
export BORG_REPO='storagebox:backup/repo1'

The colon in BORG_REPO tells borg to use SSH for accessing the repo. The SSH host alias (storagebox) is defined in /root/.ssh/config like this:

Host storagebox
    HostName uXXXXXX.your-storagebox.de
    User uXXXXXX
    Port 23
    IdentityFile /root/storagebox/key1

For some more details see: How to access the StorageBox with SSH keys

5. Initialize a borg repo

Before using the borg script, it is necessary to initialize a borg repository for the backup. It can be done like this:

pwgen 20
export BORG_PASSPHRASE='XXXXXXXXXXXXXXXXXXXX'

ssh storagebox mkdir -p backup/repo1
export BORG_REPO='storagebox:backup/repo1'

borg init -e repokey
borg key export > borg.repo1.key
The repository data is totally inaccessible without the key and the passphrase, so make sure to save them in a safe place, outside the server and outside the StorageBox.

It is important to study at least the Quick Start tutorial of BorgBackup, and to make some tests in order to get familiar with borg.

Quick Borg testing
mkdir tst
cd tst/

export BORG_PASSPHRASE='12345678'

ssh storagebox mkdir -p borgtest/repo1
ssh storagebox tree borgtest
export BORG_REPO='storagebox:borgtest/repo1'

# initialize repo
borg init -e repokey
borg key export > borg.test.repo1.key
cat borg.test.repo1.key

# create archives
borg create ::archive1 /etc
borg create --stats ::archive2 /etc
ssh storagebox tree borgtest

# list archives
borg list
borg list ::archive1

# extract
borg extract ::archive1
ls
ls etc
rm -rf etc

# delete archives
borg delete ::archive1
borg compact
borg list

# delete borg repository
borg delete
borg list
ssh storagebox ls -al borgtest
ssh storagebox rmdir borgtest

6. Restore a docker-scripts app

Let’s assume that we want to restore the application /var/ds/app1/.

  1. First, we extract it from the Borg archive to a local directory:

    export BORG_PASSPHRASE='XXXXXXXXXXXXXXXXXXXX'
    export BORG_REPO='storagebox:backup/repo1'
    
    borg list
    borg list ::<name-of-archive>
    borg list ::<name-of-archive> var/bak/host/var/ds/app1/
    
    cd ~
    borg extract ::<name-of-archive> var/bak/host/var/ds/app1/
    ls var/bak/host/var/ds/app1/
  2. Remove the current app (if installed):

    cd /var/ds/app1/
    ds remove
    cd ..
    rm -rf app1/
  3. Copy/move the directory that was extracted from the backup archive:

    cd /var/ds/
    mv ~/var/bak/host/var/ds/app1/ .
    rm -rf ~/var/
  4. Rebuild the app:

    cd app1/
    ds make

    Usually this is sufficient for most of the apps, because the necessary settings, data, configurations etc. are already in the directory of the application.

    In some cases, if there is a .tgz backup file, we may also need to do a ds restore:

    ds restore backup*.tgz

7. Backup Incus

This is done by the script /root/backup/incus-backup.sh, which is called by /root/backup/backup.sh. It looks like this:

Script: backup/incus-backup.sh
#!/bin/bash -x

main() {
    backup_incus_config
    snapshot_containers
    export_containers
}

backup_incus_config() {
    local incus_dir="/var/lib/incus"
    local destination="storagebox:backup/incus"

    incus admin init --dump > $incus_dir/incus.config
    incus --version > $incus_dir/incus.version
    incus list > $incus_dir/incus.instances.list

    rsync \
        -arAX --delete --links --one-file-system --stats -h \
        --exclude disks \
        $incus_dir/ \
        $destination

}

snapshot_containers() {
    # get the day of the week, like 'Monday'
    local day=$(date +%A)

    # get a list of all the container names
    local containers=$(incus list -cn -f csv)

    # make a snapshot for each container
    for container in $containers; do
        incus snapshot create $container $day --reuse
    done
}

export_containers(){
    # mount the storagebox to the directory mnt/
    ssh storagebox mkdir -p backup/incus-containers
    mkdir -p mnt
    sshfs storagebox:backup/incus-containers mnt

    # clean up old export files
    find mnt/ -type f -name "*.tar.xz" -mtime +5 -delete

    # get list of containers to be exported
    # local container_list=$(incus list -cn -f csv)
    local container_list="name1 name2 name3"

    # export containers
    for container in $container_list; do
        incus export $container \
            mnt/$container-$(date +'%Y-%m-%d').tar.xz \
            --optimized-storage
    done

    # unmount
    umount mnt/
    rmdir mnt/
}

# call main
main "$@"

The Incus directory is located at /var/lib/incus/. When we mirror this directory to the StorageBox (with rsync), we exclude the subdirectory disks/, which contains the storage devices where the Incus containers etc. are stored. Instead, we make backups of the Incus containers separately.

We also make a daily snapshot of all the containers, in such a way that only the last 7 daily snapshots are preserved.
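The 7-snapshot limit follows from the naming: the snapshot name comes from date +%A (the weekday name), and the --reuse option replaces an existing snapshot with the same name, so the names cycle weekly. For example:

```shell
# date +%A prints the weekday name, e.g. "Monday"; with --reuse,
# next Monday's snapshot replaces the previous Monday's one, so at
# most 7 snapshots per container can ever accumulate.
day=$(date +%A)
echo "today's snapshot name: $day"
```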

To back up the Incus containers, we use the command incus export (with the option --optimized-storage) to create a compressed archive of the filesystem of each container. These archives are stored in a directory on the StorageBox, which is mounted with SSHFS. Archives older than 5 days are removed. Not all the containers need a full backup like this, because for some of them we actually back up only the important configurations and data.
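The cleanup step relies on find -mtime +5, which matches files whose age is more than 5 full days. A quick local demonstration, in a temporary directory rather than on the StorageBox (the file names are just examples):

```shell
dir=$(mktemp -d)

# an "old" export and a fresh one
touch -d '10 days ago' "$dir/old.tar.xz"
touch "$dir/new.tar.xz"

# delete exports older than 5 days, as the script does on the mounted dir
find "$dir" -type f -name '*.tar.xz' -mtime +5 -delete

ls "$dir"      # only new.tar.xz remains
rm -rf "$dir"
```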

Restore

A restore can be done manually, with commands like these:

  1. Mount the backup directory:

    apt install sshfs
    mkdir -p mnt/
    sshfs storagebox:backup/ mnt/
    ls mnt/incus/incus.*
  2. Install the same version of Incus as that of the backup:

    cat mnt/incus/incus.version
    apt install --yes incus
  3. Initialize incus with the same config settings as the backup:

    vim mnt/incus/incus.config
    cat mnt/incus/incus.config | incus admin init --preseed
    incus list
  4. Import any instances from backup:

    ls mnt/incus-containers/*.tar.xz
    incus import mnt/incus-containers/name1-2024-02-05.tar.xz
    incus list
    incus start name1