FreeBSD virtual environment management and repository

2020-10 update: we reached the first fundraising goal and rented a server at Hetzner for development! Thank you for donating!

Attention! These pages describe CBSD version 13.0.x. If you are using an older version, please update first.

Attention! This text was translated automatically, and I apologize for any mistakes. You can improve it by sending me a corrected version of the text or by fixing the HTML pages via the GitHub repository.

Synchronization of jail environments via csync2 and CBSD csync2 module

Today, there are many approaches and tools for replicating or distributing data on a file system between different servers. Depending on the task at hand, you can use:

tools such as Puppet, Chef, Salt or Ansible to deliver configuration files; Ceph, GlusterFS and the like to replicate binary data or entire file systems; and HASTD or ZFS send for block-level and discrete replication. But there are still cases where, instead of complex and monstrous solutions, good old lightweight data synchronization utilities such as csync2 or lsyncd are ideal.

Of course, they are not suitable for syncing several thousand files or highly write-heavy environments. But in situations where you need to synchronize a container with a small number of files and infrequent writes, such utilities cope perfectly well. For example, you have a jail container that serves static files for a web server, and you want to replicate such environments in large numbers via synchronization. The csync2 module for CBSD is a wrapper script that makes it more convenient to manage the csync2 configuration file when you need to synchronize jail-based containers.

How it works: in the CBSD system directory of each jail you can maintain a csync2 configuration fragment describing the lists of the container's files and directories to synchronize (or to exclude). The 'cbsd csync2' command runs every minute from cron, glues these fragments into the single configuration file /usr/local/etc/csync2.cfg and, depending on the required synchronization frequency (which can be configured for each container individually), runs the csync2 utility for each container.
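The gluing step itself is little more than concatenation. Below is a minimal sketch of the idea in plain sh; it is not the module's actual code, and the temporary stand-in directories only mimic the layout used later in this article:

```shell
#!/bin/sh
# Sketch of the glue step performed by 'cbsd csync2' (NOT the module's
# real code): concatenate per-jail csync2 fragments into one global config.

workdir=$(mktemp -d)                 # stand-in for the CBSD working directory
global_cfg="${workdir}/csync2.cfg"   # stand-in for /usr/local/etc/csync2.cfg

# Fake two jails, each with its own csync2.cfg fragment:
for j in repl1 repl2; do
    mkdir -p "${workdir}/jails-system/${j}"
    echo "include /usr/jails/jails-data/${j}-data;" \
        > "${workdir}/jails-system/${j}/csync2.cfg"
done

# Glue every per-jail fragment into the single global config:
: > "${global_cfg}"
for f in "${workdir}"/jails-system/*/csync2.cfg; do
    cat "${f}" >> "${global_cfg}"
done

result=$(cat "${global_cfg}")
echo "${result}"
rm -rf "${workdir}"
```

The real module does more bookkeeping (timestamps, per-jail intervals, locking), but the end product is the same: one csync2.cfg assembled from per-jail fragments.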

As an example, let's create two synchronized containers named 'repl1' and 'repl2' between two CBSD nodes.


1) Install the csync2 package and the 'cbsd csync2' module on both nodes:

pkg install -y csync2
cbsd module mode=install csync2
echo 'csync2.d' >> ~cbsd/etc/modules.conf
cbsd initenv


2) Copy the sample module configuration file to the CBSD working directory:

cp /usr/local/cbsd/modules/csync2.d/etc/csync2.conf ~cbsd/etc/

The configuration file is small and lets you adjust the following parameters:

## Logs directory:
CSYNC2_CBSD_LOG_DIR="/var/log/cbsd-csync2"

## Global/main csync2 key:
CSYNC2_CBSD_KEY="/usr/local/etc/cbsd_csync2.key"

## Full path to csync2 executable file in the system
CSYNC2_CMD="/usr/local/sbin/csync2"

## The global csync2 config file, which in our case will be auto-generated by the CBSD csync2 module
CSYNC2_CFG_FILE="/usr/local/etc/csync2.cfg"

# Frequency of running the synchronization operation 
# (not to be confused with the frequency of running 
# the 'cbsd csync2' module from crontab - see below)
# By default, we perform one synchronization every ten minutes,
# but if you want a different frequency for some containers,
# use the file ~cbsd/jails-system/%jail%/etc to set an individual value for %jail%.
CSYNC2_CBSD_RUN_INTERVAL="10"
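The CSYNC2_CBSD_RUN_INTERVAL logic can be pictured roughly as follows: the module remembers when it last synchronized a jail and skips that jail until the interval has elapsed. This is a simplified sketch of the idea, not the module's real implementation, and the stamp-file location is invented for the example:

```shell
#!/bin/sh
# Simplified sketch of the run-interval check (NOT the module's real code).
CSYNC2_CBSD_RUN_INTERVAL="10"            # minutes, as in csync2.conf

stampdir=$(mktemp -d)                    # invented stand-in for per-jail state

should_run() {
    # $1 - jail name; returns 0 when the interval has elapsed since last run
    stamp="${stampdir}/$1.stamp"
    now=$(date +%s)
    last=0
    [ -f "${stamp}" ] && last=$(cat "${stamp}")
    elapsed=$(( now - last ))
    if [ "${elapsed}" -ge $(( CSYNC2_CBSD_RUN_INTERVAL * 60 )) ]; then
        echo "${now}" > "${stamp}"       # record this run
        return 0
    fi
    return 1
}

first_run=$(should_run repl1 && echo yes || echo no)   # interval elapsed: run
second_run=$(should_run repl1 && echo yes || echo no)  # immediately after: skip
echo "first=${first_run} second=${second_run}"
rm -rf "${stampdir}"
```

This is why the cron job below can fire every minute while each container still synchronizes only once per its configured interval.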


3) Let's create containers 'repl1' and 'repl2' on both nodes. The names are the same, but we will use non-overlapping IP addresses, since in our case the servers are located in the same network segment.

node1# cbsd jcreate ip4_addr=10.0.100.144 jname=repl1 runasap=1
node1# cbsd jcreate ip4_addr=10.0.100.145 jname=repl2 runasap=1

node2# cbsd jcreate ip4_addr=10.0.100.146 jname=repl1 runasap=1
node2# cbsd jcreate ip4_addr=10.0.100.147 jname=repl2 runasap=1


4) On each server, for each container, create a configuration file /usr/jails/jails-system/%jail%/csync2.cfg describing the container directories to synchronize. We will replicate the entire container except for the /var/run/ directory, where process PID files are stored; we also will not synchronize the logs, whose contents are unique to each container, since we plan to have all containers running at the same time. On node1, for container 'repl1' (assuming cbsd_workdir is set to /usr/jails):


cat > /usr/jails/jails-system/repl1/csync2.cfg <<EOF
        host node1.my.domain;
        host node2.my.domain;

        include /usr/jails/jails-data/repl1-data;
        include /usr/jails/jails-system/repl1/csync2.cfg;
        exclude /usr/jails/jails-data/repl1-data/var/spool/clientmqueue;
        exclude /usr/jails/jails-data/repl1-data/var/log/*;
        exclude /usr/jails/jails-data/repl1-data/var/log/*/*.log;
        exclude /usr/jails/jails-data/repl1-data/var/run/*;
        exclude /usr/jails/jails-data/repl1-data/tmp/*;

        action
        {
                pattern /usr/jails/jails-data/repl1-data/usr/local/etc/nginx/*;
                exec "/usr/local/bin/cbsd service jname=repl1 mode=action nginx reload";
                logfile "/usr/jails/jails-system/repl1/csync2.actions.log";
                do-local;
        }

        auto younger;
EOF

The same config for container 'repl2', with the corresponding difference in directory paths:


cat > /usr/jails/jails-system/repl2/csync2.cfg <<EOF
        host node1.my.domain;
        host node2.my.domain;

        include /usr/jails/jails-data/repl2-data;
        include /usr/jails/jails-system/repl2/csync2.cfg;
        exclude /usr/jails/jails-data/repl2-data/var/spool/clientmqueue;
        exclude /usr/jails/jails-data/repl2-data/var/log/*;
        exclude /usr/jails/jails-data/repl2-data/var/log/*/*.log;
        exclude /usr/jails/jails-data/repl2-data/var/run/*;
        exclude /usr/jails/jails-data/repl2-data/tmp/*;

        action
        {
                pattern /usr/jails/jails-data/repl2-data/usr/local/etc/nginx/*;
                exec "/usr/local/bin/cbsd service jname=repl2 mode=action nginx reload";
                logfile "/usr/jails/jails-system/repl2/csync2.actions.log";
                do-local;
        }

        auto younger;
EOF

Pay attention to 'node1.my.domain' and 'node2.my.domain': these should be IP addresses or valid DNS names of your CBSD hosts. In addition, as an example we use the special 'action' directive, which will reload the nginx service in the corresponding container if the configuration files of the 'nginx' HTTP server (which in our case runs in each container) were modified during synchronization.


5) Repeat exactly the same configuration from step 4 on the second node.


6) Let's generate a global csync2 key on any of our hosts by running:

cbsd csync2

If the file /usr/local/etc/cbsd_csync2.key (the CSYNC2_CBSD_KEY parameter) is missing, it will be generated. Attention! You need to propagate this key to all nodes participating in the synchronization. It is convenient to do this with tools such as Puppet, Chef, Salt or Ansible.
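For a small two-node setup without configuration management, plain scp works too. A sketch, using the example hostname from the configs above (substitute your own):

```shell
# Copy the generated key to the other node; the key must be byte-identical
# on every node participating in the synchronization.
scp -p /usr/local/etc/cbsd_csync2.key \
    root@node2.my.domain:/usr/local/etc/cbsd_csync2.key

# The key is a shared secret: keep it readable by root only, on all nodes.
chmod 600 /usr/local/etc/cbsd_csync2.key
```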


7) Mark the csync2 service as enabled in /etc/rc.conf on all nodes:

sysrc csync2_enable="YES"
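After enabling it, you will also want to start the service right away rather than wait for the next boot. Assuming the csync2 package installed the usual rc.d script, that is:

```shell
# Start csync2 now on every node. By default csync2 listens on
# TCP port 30865, so make sure the nodes can reach each other on it.
service csync2 start
service csync2 status
```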


8) Let's add an entry to the crontab of user 'root' to run 'cbsd csync2' every minute. This does not mean that synchronization will run every minute: as we remember, the synchronization frequency is regulated by the CSYNC2_CBSD_RUN_INTERVAL parameter. Run 'crontab -e' on all nodes and add the line:

* * * * * /usr/bin/lockf -s -t0 /tmp/cbsd_csync2.lock /usr/bin/env NOCOLOR=1 /usr/local/bin/cbsd csync2 >> /var/log/cbsd-csync2/csync2.log 2>&1
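If you prefer not to edit the crontab interactively, the same line can be appended non-interactively with a common shell idiom (shown here as a sketch):

```shell
# Append the cbsd csync2 job to root's crontab without opening an editor.
( crontab -l 2>/dev/null; \
  echo '* * * * * /usr/bin/lockf -s -t0 /tmp/cbsd_csync2.lock /usr/bin/env NOCOLOR=1 /usr/local/bin/cbsd csync2 >> /var/log/cbsd-csync2/csync2.log 2>&1' \
) | crontab -
```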

This completes the setup. You can run the script several times manually to make sure everything works correctly:

cbsd csync2 verbose=1 force=1

Or watch the logs in the /var/log/cbsd-csync2 directory a few minutes after installing the cron job.
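You can also ask csync2 itself whether the nodes agree. A hedged example (consult csync2's own documentation for the exact flag semantics on your version):

```shell
# -T compares the local state against the peers and prints files that
# differ, without transferring anything; -xv runs an actual sync verbosely.
csync2 -T
csync2 -xv
```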