How to Use LXD and KVM on the Same Hypervisor Node

Solution Verified in:

  • OpenNebula: 5.10

Issue

You cannot use two hypervisors on the same node if your VMs require full virtualization. However, since containers do not need full virtualization support from the CPU, a virtualization node could in principle run two hypervisors at once. OpenNebula's current architecture, though, is based on the assumption of one hypervisor per virtualization node.

In this article we will work around that limitation by using the same node twice.

Requirements

You need a node that supports KVM and runs a distribution compatible with OpenNebula's LXD driver, as not all virtualization nodes are compatible with the LXD driver. A single-node setup is enough; for a quick scenario to try this you can use the minione LXD launcher.

Solution

OpenNebula identifies hosts by hostname when importing them. The trick is to add the same virtualization node twice: first under one name for the first hypervisor, then under a second name for the other.

Step 1: Aliases

You should have an LXD host, either provided by minione or set up manually.

  oneadmin@LXDnKVM:~$ onehost list
  ID NAME            CLUSTER   TVM      ALLOCATED_CPU      ALLOCATED_MEM STAT  
   0 localhost       default     0       0 / 200 (0%)     0K / 1.9G (0%) on    

  oneadmin@LXDnKVM:~$ onehost show 0 | grep MAD
  IM_MAD                : lxd                 
  VM_MAD                : lxd                 
  IM_MAD="lxd"
  VM_MAD="lxd"
  

Add the kvm alias to the /etc/hosts file:

  127.0.0.1 localhost kvm

Step 2: Adding the host

The oneadmin user can already access the host via SSH, but you need to make sure the kvm host is a known host.

  oneadmin@LXDnKVM:~$ ssh kvm
  The authenticity of host 'kvm (127.0.0.1)' can't be established.
  ECDSA key fingerprint is SHA256:dwPyCUgSN38eh9kL2cn/l2PQ67aUVOjt37JVceLCbZ0.
  Are you sure you want to continue connecting (yes/no)? yes
  Warning: Permanently added 'kvm' (ECDSA) to the list of known hosts.
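If you prefer to avoid the interactive prompt (for example when scripting the setup), you can pre-populate known_hosts with ssh-keyscan instead. This is a sketch assuming the kvm alias from Step 1 is already in /etc/hosts:

```shell
# Non-interactive alternative: fetch the host key for the new alias
# and append it (hashed, -H) to oneadmin's known_hosts file.
ssh-keyscan -H kvm >> ~/.ssh/known_hosts
```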

Now add the “new” host.

  oneadmin@LXDnKVM:~$ onehost create kvm -v kvm -i kvm
  oneadmin@LXDnKVM:~$ onehost list
  ID NAME            CLUSTER   TVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   1 kvm             default     0       0 / 200 (0%)     0K / 1.9G (0%) on
   0 localhost       default     0       0 / 200 (0%)     0K / 1.9G (0%) on

Step 3: Creating containers and VMs

Let’s create a VM and a container.

  oneadmin@LXDnKVM:~$ onetemplate instantiate 0
  VM ID: 0
  oneadmin@LXDnKVM:~$ onevm list
    ID USER     GROUP    NAME            STAT UCPU UMEM HOST      TIME
     0 oneadmin oneadmin CentOS 7 - KVM- runn    0   0K kvm       0d 00h00
  oneadmin@LXDnKVM:~$ onetemplate instantiate 0
  VM ID: 1
  oneadmin@LXDnKVM:~$ onevm list
    ID USER     GROUP    NAME            STAT UCPU UMEM HOST      TIME
     1 oneadmin oneadmin CentOS 7 - KVM- runn    0   0K localhost 0d 00h00
     0 oneadmin oneadmin CentOS 7 - KVM- runn    0   0K kvm       0d 00h00
  oneadmin@LXDnKVM:~$ virsh list
   Id Name  State
  ---------------
   1  one-0 running
  
  oneadmin@LXDnKVM:~$ lxc list
  +-------+---------+---------------------+------+------------+-----------+
  | NAME  | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
  +-------+---------+---------------------+------+------------+-----------+
  | one-1 | RUNNING | 172.16.100.3 (eth0) |      | PERSISTENT | 0         |
  +-------+---------+---------------------+------+------------+-----------+

Note that we instantiated the same template twice: the scheduler deployed the first instance as a VM on the kvm host, and the second as a container on localhost, because after the first deployment the kvm host already had resources allocated while the localhost entry was still empty.

Step 4: Tuning the hosts

Since the node has been added twice, the total Memory and CPU reported to OpenNebula are double the real values. This can lead to OOM scenarios, because the scheduler works against a fake limit. We therefore need to make the sum of the CPU and Memory across both host entries match the real capacity of the single node. This way we allocate a share of the resources to each hypervisor; however, it may leave resources unused if, for example, the LXD side is full while the KVM side still has free capacity.

On both host entries update the RESERVED_MEM and RESERVED_CPU attributes, setting them to the amount you want to subtract from the total capacity.

  oneadmin@LXDnKVM:~$ onehost update 0
  ...
  RESERVED_CPU="100"
  RESERVED_MEM="1000000"
  ...
  oneadmin@LXDnKVM:~$ onehost update 1
  ...
  RESERVED_CPU="100"
  RESERVED_MEM="1000000"
  ...
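The capacity math behind these values can be sketched as follows; the totals are assumptions taken from the `onehost list` output above (200 CPU, i.e. 2 cores, and roughly 2,000,000 KB of memory):

```shell
# Sketch of the RESERVED_* arithmetic. Assumption: the node reports
# 200 CPU and ~2,000,000 KB of memory, as in the listings above.
TOTAL_CPU=200
TOTAL_MEM_KB=2000000

# Reserved on EACH of the two host entries:
RESERVED_CPU=100
RESERVED_MEM_KB=1000000

# Capacity each entry advertises to the scheduler:
echo "per-entry CPU: $((TOTAL_CPU - RESERVED_CPU))"           # per-entry CPU: 100
echo "per-entry MEM: $((TOTAL_MEM_KB - RESERVED_MEM_KB)) KB"  # per-entry MEM: 1000000 KB
```

With each entry reserving half, the two host entries together advertise exactly the physical capacity of the node.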

Step 5: Tuning the VM templates

If you want certain templates to be deployed only as containers or only as VMs, you need to add scheduling requirements to the template.

Let's import an app from the marketplace:

  oneadmin@LXDnKVM:~$ onemarketapp list | grep -i alpine
  48 alpine_edge - LXD          1.0         1024M rdy img 04/29/20 Linux Cont 0
  47 alpine_3.11 - LXD          1.0         1024M rdy img 04/29/20 Linux Cont 0
  46 alpine_3.10 - LXD          1.0         1024M rdy img 04/29/20 Linux Cont 0
  45 alpine_3.9 - LXD           1.0         1024M rdy img 04/29/20 Linux Cont 0
  44 alpine_3.8 - LXD           1.0         1024M rdy img 04/29/20 Linux Cont 0
  39 Vrouter Alpine - vCenter   5.0.2-0.20   256M rdy img 11/21/18 OpenNebula 0
  35 Alpine Linux 3.11          5.10.0-1.2   256M rdy img 01/09/20 OpenNebula 0
  30 Alpine Linux 3.8           5.10.0-2.2   256M rdy img 11/27/19 OpenNebula 0
  29 Alpine Linux 3.10          5.10.0-2.2   256M rdy img 11/27/19 OpenNebula 0
  23 Alpine Linux 3.9           5.10.0-2.2   256M rdy img 11/27/19 OpenNebula 0
   5 Vrouter Alpine - KVM
  
  oneadmin@LXDnKVM:~$ onemarketapp export 35 -d 1 market_alpine
  IMAGE
  ID: 1
  VMTEMPLATE
  ID: 2

Then clone it twice, once per hypervisor:

  oneadmin@LXDnKVM:~$ onetemplate clone market_alpine alpine_KVM_only
  ID: 3
  oneadmin@LXDnKVM:~$ onetemplate clone market_alpine alpine_LXD_only
  ID: 4

The exported template is only incompatible with vCenter, so the scheduler will place VMs created from it on any node with available resources.

  oneadmin@LXDnKVM:~$ onetemplate show 2 | grep -i sched
  SCHED_REQUIREMENTS="HYPERVISOR!=\"vcenter\""

Make template 3 (alpine_KVM_only) KVM-only by updating its SCHED_REQUIREMENTS section to this

SCHED_REQUIREMENTS="HYPERVISOR=\"kvm\""

Make template 4 (alpine_LXD_only) LXD-only by updating its SCHED_REQUIREMENTS section to this

SCHED_REQUIREMENTS="HYPERVISOR=\"lxd\""
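Scheduler expressions can also combine several attributes with boolean operators. The fragment below is an illustrative assumption, useful if you later add more physical nodes: it pins a template both to the KVM hypervisor and to this specific host entry by name.

  SCHED_REQUIREMENTS="HYPERVISOR=\"kvm\" & NAME=\"kvm\""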

Now you have container-only and VM-only templates.
