2020-10 update: we reached the first fundraising goal and rented a server at Hetzner for development! Thank you for donating!
Attention! This page describes CBSD version 13.0.x. If you are using an older version, please update first.
Attention! This text was translated automatically. You can improve it by sending a corrected version of the text or by fixing the HTML pages via the GitHub repository.
CBSD k8s module: deploying a kubernetes cluster
The module is installed with the regular module script from the project's GitHub repository:
cbsd module mode=install k8s
The module uses a pre-built image, which must first be fetched:
cbsd fetch_iso name=cloud-kubernetes-20 dstdir=default cloud=1 conv2zvol=1
Activate the module through the config file and re-initialize CBSD:
echo 'k8s.d' >> ~cbsd/etc/modules.conf
cbsd initenv
If the cbsd k8s command exists, then the module is ready to work.
To initialize a new K8S cluster, use the cbsd k8s mode=init command:
cbsd k8s mode=init k8s_name=k1
where k1 is the name of the cluster profile.
The number of master and worker nodes is determined by the number of IP addresses you assign via the init_masters_ips and init_nodes_ips parameters.
In addition, one IP address is assigned as the API endpoint via the vip= (virtual IP) parameter.
You can assign fixed addresses for masters and workers, or obtain them automatically from the CBSD pool by specifying DHCP instead of an address, for example:
cbsd k8s mode=init k8s_name=k1 init_masters_ips="DHCP DHCP DHCP" init_nodes_ips="DHCP DHCP DHCP" vip=DHCP cluster=k8s-bhyve.io
As a result of this command, you will get a cluster named k8s-bhyve.io, consisting of 3 masters and 3 workers with automatically assigned IP addresses.
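The relationship between the IP lists and the resulting node counts can be illustrated with a small sketch in plain /bin/sh (no CBSD required); it simply counts the whitespace-separated entries, which is how the cluster size follows from the parameters above:

```shell
#!/bin/sh
# Each whitespace-separated entry in the list corresponds to one VM
# of that role; "DHCP DHCP DHCP" therefore means three nodes.
init_masters_ips="DHCP DHCP DHCP"
init_nodes_ips="DHCP DHCP DHCP"

set -- $init_masters_ips    # split the list into positional parameters
echo "masters: $#"          # prints: masters: 3

set -- $init_nodes_ips
echo "workers: $#"          # prints: workers: 3
```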
Other possible arguments and their description:
|profile name, a short unique ID, for example: k1
|CBSD VPC in which to deploy the cluster
|name of the Kubernetes cluster, by default: k8s-bhyve.io
|hostname for the master nodes
|which version of K8S to use
|which version of etcd to use
|which version of flannel to use
|list of IP addresses for master nodes; the number of IPs determines the number of masters
|list of IP addresses for worker nodes; the number of IPs determines the number of workers
|VRRP IP used as the cluster API endpoint
|IP address used by the VMs as the default gateway, by default: 10.0.0.1
|IP address of the internal DNS server in the Kubernetes network
|install the CoreDNS service?
|hostname of the Ingress service
|controls whether master nodes can also perform worker functions and run containers, by default: 1 (yes)
|use persistent volumes (PV)? By default: 0
|alternative uplink interface for the master node VMs, by default: auto
|alternative uplink interface for the worker node VMs, by default: auto
|master node configuration, amount of RAM, by default: 2g
|master node configuration, number of vCPUs, by default: 1
|master node configuration, disk space, by default: 20g
|worker node configuration, amount of RAM, by default: 2g
|worker node configuration, number of vCPUs, by default: 1
|worker node configuration, disk space, by default: 20g
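As an illustrative example, combining only the parameters already shown above (the IP addresses and cluster name here are placeholders), a mixed invocation with one fixed-address master, two DHCP workers, and persistent volumes enabled might look like:

```
cbsd k8s mode=init k8s_name=k2 cluster=k8s2.example.com \
    init_masters_ips="10.0.0.10" \
    init_nodes_ips="DHCP DHCP" \
    vip=10.0.0.2 pv_enable=1
```

This would create a cluster with one master and two workers, since the node counts follow from the number of entries in each IP list.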
When the K8S cluster is initialized, you must generate a CBSDfile to start and stop the cluster. To do this, use the cbsd k8s mode=init_upfile command:
cbsd k8s mode=init_upfile k8s_name=k1
Two files will be generated in the current working directory: CBSDfile and bootstrap.config. This is all you need to start the cluster.
From the directory where CBSDfile and bootstrap.config were generated, run the cbsd up command:
Upon completion of initialization, the system will import the cluster's client configuration.
You can copy it to another host or manage the cluster via the kubectl and helm commands from your host system.
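Assuming the imported configuration ends up in the standard kubectl location (~/.kube/config is an assumption here; adjust the path to wherever the file is actually placed), typical management commands from the host would be:

```
kubectl get nodes -o wide
helm list --all-namespaces
```

Both commands only read cluster state, so they are a safe first check that the endpoint and credentials work.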
To stop and destroy the cluster, run the cbsd destroy command from the directory where CBSDfile and bootstrap.config were generated:
Persistent volumes (PV) are configured with the pv_enable=1 option and the corresponding pv_spec-* parameters.
Attention: the current version will automatically configure the NFS server, which entails complete regeneration of /etc/exports and modification of /etc/rc.conf, followed by the launch of the appropriate services.
- Make sure pv_spec_nfs_path points to an existing directory that is exported via NFS. For example, to use the default path (/nfs), you must run:
zfs create zroot/nfs
zfs set mountpoint=/nfs zroot/nfs
zfs set sharenfs=on zroot/nfs
zfs mount zroot/nfs
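To confirm that the directory is actually exported (assuming the NFS services are already running on the host; "localhost" here is a placeholder for your server's address), you can query the export list:

```
showmount -e localhost
```

The output should include /nfs; if it does not, check /etc/exports and restart the NFS services before proceeding.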
- Make sure that your services are configured to bind/listen only on the required IP address, for example via flags in /etc/rc.conf:
nfs_server_flags="-u -t -h 10.0.0.1"
mountd_flags="-r -S -h 10.0.0.1"
rpcbind_flags="-h 10.0.0.1"
Creation of two independent K8S clusters of different configurations with PV-NFS on a single host