Attention! Current pages describe CBSD version 12.0.x. If you are using an older version, please update first.
FreeBSD: bhyve (virtualization of xBSD and Linux)
The bhyve hypervisor was kindly open-sourced by NetApp (whose products are based on the FreeBSD OS) under the BSD license and is currently included in the base system of FreeBSD, unlike some similar projects that are not part of the OS (or require kernel patches) and must first be downloaded and installed. In other words, any FreeBSD since 10.0 is able to run OpenBSD, NetBSD and Linux-based virtual systems right out of the box.
While experimenting with the huge number of bhyve arguments, I rather quickly felt an overwhelming desire to have a PetiteCloud-style CLI in CBSD, but for bhyve instead of jail, since bhyve is a very highly configurable hypervisor and one can swim in its settings for quite a while before launching the first environment.
The second thought was that the CBSD code already contains many action primitives that are equally applicable to virtual environments. In addition, when working with FreeBSD guests, some potentially useful functions overlap with what we would like to see in cbsd4bhyve. For this reason, the idea arose in CBSD 10.0.4: if not to add bhyve support directly into the code, then to implement it as a CBSD module.
What advantages (besides the fact that FreeBSD makes a fast host system) can be gained with FreeBSD as the host and Linux/OpenBSD/NetBSD as guests? There are many options, and the question is unlikely to ever be closed. For example:
- the ability to use HASTD and iSCSI for NAS storage;
- the ability to limit cputime or pcpu of a virtual machine;
- the ability to use link aggregation, combining several physical network cards into one trunk and feeding it to the virtual systems;
- the ability to insert NetGraph hooks into the traffic of systems that do not have NetGraph themselves;
- the ability to use GELI-encrypted partitions, FIB, nice;
- the ability to use ZFS snapshots and cloning to snapshot file systems inside virtual systems that are not even aware of snapshots. This allows, for example, a TimeMachine-like facility for Linux/ext3/ext4/xfs/btrfs, where rollback depends neither on the file system nor on the state of the guest OS: it is enough to provide an API for managing snapshots outside the virtual system, which can be very useful when hosting Linux/BSD systems;
- the ability to perform bulk actions: for example, taking one image of a virtual system, making 100 clones of it by means of ZFS, mounting the images in parallel and modifying each configuration in its own way.
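As a sketch of the bulk-cloning idea, the loop below prints the ZFS commands that would turn one "golden" image snapshot into many clones. The dataset names (zroot/vmimages/golden) are hypothetical examples, not CBSD defaults; remove the echo to actually execute the commands on a ZFS host.

```shell
#!/bin/sh
# Dry-run: print the ZFS commands that clone one golden VM image N times.
# Dataset names below are illustrative only.
SRC=zroot/vmimages/golden
N=3
echo "zfs snapshot ${SRC}@base"
i=1
while [ "$i" -le "$N" ]; do
    echo "zfs clone ${SRC}@base zroot/vmimages/clone${i}"
    i=$((i + 1))
done
```

Each clone is initially a nearly zero-cost copy-on-write copy of the snapshot, which is what makes mass provisioning of identical guests cheap.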
Currently, the bhyve wrapper is available in the CBSD version on GitHub and is implemented as commands analogous to the jail ones, but beginning with the letter b:
- jstart (jail start) -> bstart (bhyve start)
- jstop (jail stop) -> bstop (bhyve stop)
- jls (jail list) -> bls (bhyve list)
- jconstruct-tui (jail constructor) -> bconstruct-tui (bhyve constructor)
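Put together, a typical session with the commands above might look like the following (the VM name debian1 is just an example, matching the one used later on this page):

```shell
# Create a VM interactively, then manage it with the b* commands.
cbsd bconstruct-tui     # dialog-based bhyve VM constructor
cbsd bstart debian1     # start the VM named debian1
cbsd bls                # list bhyve VMs and their status
cbsd bstop debian1      # stop the VM
```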
- bhyve demo: FreeBSD as host and Linux CentOS x86 as guest
- bhyve demo: FreeBSD as host and FreeBSD as guest, created by cbsd jail2iso script
Currently, bhyve support in CBSD is experimental and not a development priority. Using it in a production environment is not recommended.
If you have any problems launching a bhyve VM and want to debug them, you can use the script and configuration file that CBSD generates to start the bhyve virtual machine. The bhyve startup script is $workdir/share/bhyverun.sh, which expects the path to a config file as its argument. For example, for a virtual machine named debian1 with workdir /usr/jails, this file is /usr/jails/jails-system/debian1/bhyve.conf. This script lets you try to run the virtual machine independently of any problems in CBSD itself (which only deals with creating images and NICs and generating bhyve.conf). If studying the script without CBSD does not bring clarity, it makes sense to write to the FreeBSD virtualization mailing list.
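For the example above (VM named debian1, workdir /usr/jails), such a standalone debugging run would look like this:

```shell
# Run the CBSD-generated bhyve startup script directly, bypassing CBSD itself.
/usr/jails/share/bhyverun.sh /usr/jails/jails-system/debian1/bhyve.conf
```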
Some Linux guests are booted through grub-bhyve (the bhyve authors contend this inconvenience is only a matter of time) by pointing it at the paths of vmlinuz and initramfs. For example: /vmlinuz-3.10.0-123.el7.x86_64.
Since the corresponding setting is recorded in the virtual machine's profile, upgrading the Linux kernel inside the guest leaves the profile pointing at an old or no longer existing file. Unfortunately, with such distributions (RHEL, CentOS, Oracle) you need to track this on every update and edit the profile (for example, by creating a copy of the profile from etc/defaults and putting it under a new name into /etc). In some flavors of Linux the kernel file names are constant (they can be symbolic links pointing at the desired kernel). In such distributions a kernel update does not change the file name, so they are not affected by this problem. Examples of such "right" Linux distributions: Debian, Ubuntu. Additionally, it should be said that booting the virtual machine through EFI (and the grub2-efi port) can also solve this problem.
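Following the suggestion above about copying a profile out of etc/defaults, a hedged sketch (the profile file name below is purely illustrative; the /usr/jails workdir matches the earlier example, and the copied file must then be edited to point at the updated vmlinuz/initramfs names):

```shell
# Copy a default VM profile into etc/ under a new name, so local edits
# survive updates. The profile file name here is a hypothetical example.
cp /usr/jails/etc/defaults/vm-linux-centos7.conf \
   /usr/jails/etc/vm-linux-centos7-custom.conf
```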
To work with bhyve via CBSD, the following packages must also be installed in the base system:
- grub-bhyve ( pkg install grub2-bhyve or make -C /usr/ports/sysutils/grub2-bhyve install )
- tmux ( pkg install tmux or make -C /usr/ports/sysutils/tmux install )
In addition, the kernel of your node must include the if_bridge, if_tap and vmm modules. If they are not part of your kernel configuration, you can load them at boot via /boot/loader.conf:
if_bridge_load="YES"
if_tap_load="YES"
vmm_load="YES"
or via kld_list in /etc/rc.conf:
kld_list="if_bridge if_tap vmm"
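To load the modules immediately on a running node (without rebooting), a small sketch; kldstat -q -m returns success when the named module is already present, so only missing modules are loaded:

```shell
#!/bin/sh
# Load any of the required kernel modules that are not already present.
for m in if_bridge if_tap vmm; do
    kldstat -q -m "${m}" || kldload "${m}"
done
```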