How to Use miniONE to Deploy a Firecracker Edge Cloud for Containers

Solution Verified in:

  • OpenNebula: 5.12

Issue

miniONE is OpenNebula’s simple evaluation tool for installing an all-in-one, single-node instance based on KVM, LXD system containers, or Firecracker microVMs. The latest version of miniONE brings two new options that let users deploy Docker images as Firecracker microVMs, thanks to OpenNebula’s seamless integration with Docker Hub. By combining these new miniONE features with OpenNebula’s Edge Computing capabilities, you can easily deploy containerized applications on cloud resources at the edge from bare-metal infrastructure providers such as Packet.

Requirements

You will need a Packet account and, in particular, an API token and a project ID that will be used to provision resources on Packet. Please follow the Packet getting started guide to create and retrieve them from the Packet dashboard.
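If you plan to run the provisioning commands from a shell, it can be convenient to keep both values in environment variables. The names TOKEN and PROJECT below are arbitrary placeholders that simply match the variables used later in Step 3:

export TOKEN='your-packet-api-token'
export PROJECT='your-packet-project-id'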

Solution

In order to show how to deploy a containerized application at the edge, we are going to consider an IoT use case: we will deploy the ThingsBoard IoT framework in a central location, and then ThingsBoard IoT Gateways with MQTT brokers at the edge.

Example

This screencast follows the procedure described in this article to deploy a Firecracker cloud at the edge.

Step 1: OpenNebula Frontend + Firecracker Hypervisor Installation

You will need one host for the OpenNebula frontend and the Firecracker hypervisor. It can be a physical host in your rack or a bare-metal server in a public cloud. Choose either CentOS/RHEL or Ubuntu, and make sure it is a relatively fresh, up-to-date system.

For this tutorial, we decided to run the frontend and the Firecracker hypervisor on a bare-metal server on Packet. You can create a server from the Packet dashboard, choosing among the available plans, facilities, and operating systems. We chose the c1.small.x86 plan, the ams1 facility in Amsterdam, and Ubuntu 18.04 as the operating system.

It will take a few minutes for the Packet host to be ready. Once it is, you can connect to the frontend host via SSH and download the miniONE tool:

root@host:~$ wget -O /tmp/minione 'https://github.com/OpenNebula/minione/releases/latest/download/minione'

and then execute miniONE to deploy the OpenNebula frontend and the Firecracker hypervisor on the host:

root@host:~$ bash /tmp/minione --firecracker --yes

While miniONE is doing its job, you should see output like this:

### Checks & detection
Checking AppArmor  SKIP will try to modify

### Main deployment steps:
Install OpenNebula frontend version 5.12
Configure bridge minionebr with IP 172.16.100.1/24
Enable NAT over bond0
Modify AppArmor
Install OpenNebula Firecracker node
Export appliance and update VM template

Do you agree? [yes/no]:

### Installation
Updating APT cache  OK
Creating bridge interface minionebr  OK
Bring bridge interfaces up  OK
Configuring NAT using iptables  OK
Saving iptables changes  OK
Installing DNSMasq  OK
Starting DNSMasq  OK
Configuring repositories  OK
Create docker packages repository  OK
Updating APT cache  OK
Installing OpenNebula packages  OK
Installing OpenNebula firecracker node packages  OK
Install docker  OK
Start docker service  OK
Enable docker service  OK

### Configuration
Add oneadmin to docker group  OK
Update network hooks  OK
Switching OneGate endpoint in oned.conf  OK
Switching OneGate endpoint in onegate-server.conf  OK
Switching keep_empty_bridge on in OpenNebulaNetwork.conf  OK
Switching scheduler interval in oned.conf  OK
Switching to QEMU emulation  OK
Setting initial password for current user and oneadmin  OK
Changing WebUI to listen on port 80  OK
Starting OpenNebula services  OK
Enabling OpenNebula services  OK
Add ssh key to oneadmin user  OK
Update ssh configs to allow VM addresses reusig  OK
Ensure own hostname is resolvable  OK
Checking OpenNebula is working  OK
Disabling ssh from virtual network  OK
Adding localhost ssh key to known_hosts  OK
Testing ssh connection to localhost  OK
Updating datastores template  OK
Creating Firecracker host  OK
Creating virtual network  OK
Exporting [alpine] from dockerhub to local datastore  OK
Exporting [Kernel 5.4 x86_64 - Firecracker] to local datastore  OK
Waiting until the image is ready  OK
Updating VM template  OK

and finally end up with a success report similar to this one:

### Report
OpenNebula 5.12 was installed
Sunstone [the webui] is running on:
  http://147.75.100.229/
Use following to login:
  user: oneadmin
  password: ah6Z7TvUNG
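As a quick sanity check (not part of the miniONE output), you can verify the deployment from the oneadmin account on the frontend; onehost and oneimage are standard OpenNebula CLI commands:

su - oneadmin
onehost list     # the local Firecracker host should be in "on" state
oneimage list    # the alpine image and the Firecracker kernel should be READY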

Step 2: ThingsBoard IoT Framework Deployment

Note

Before proceeding with the installation, the /var/lib/one/remotes/datastore/docker_downloader.sh file must be updated according to the following fix.

In order to install the ThingsBoard IoT Framework on the host, first we need to register the ThingsBoard Docker image in the datastore by using the following command:

oneimage create -d default --path 'docker://thingsboard/tb-postgres?tag=3.0.1&size=2048&filesystem=ext4&format=raw' --name tb-postgres@3.0.1

Then, with the following command, we will create a persistent datablock that will be used as a volume for the PostgreSQL database:

oneimage create -d default --type DATABLOCK --size 2048 --persistent --name tb-pgdata

and then you can format the datablock image with:

img=$(oneimage show tb-pgdata | grep SOURCE | awk -F ' +' '{print $3}')
sudo virt-format --partition=none --filesystem=ext4 --format=raw -a $img
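Optionally, you can confirm that the datablock was formatted as expected with virt-filesystems, another tool from the libguestfs suite:

sudo virt-filesystems --long -a $img    # should report a single ext4 filesystem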

Note

virt-format requires root permissions to work and can be installed on Ubuntu with apt-get install libguestfs-tools.

Now you can create a template "tb.tpl" for the microVM that will use the Docker image and the persistent datablock registered in the datastore:

NAME="tb"
CPU="2"
MEMORY="4096"
DISK=[
  IMAGE="tb-postgres@3.0.1",
  TARGET="vda" ]
DISK=[
  IMAGE="tb-pgdata",
  TARGET="vdb" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="VNC" ]
NIC=[
  IP="172.16.100.10",
  NETWORK="vnet"
]
OS=[
  KERNEL_CMD="console=ttyS0 reboot=k panic=1 pci=off",
  KERNEL_DS="$FILE[IMAGE=\"kernel\"]" ]
CONTEXT=[
  NETWORK="YES",
  SET_HOSTNAME="tb",
  PASSWORD="root",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]",
  FILES = "/usr/share/demo/tb/init.sh"
]

The template can be created by using

onetemplate create tb.tpl

In the template you can specify the CPU, the memory, the registered Docker image, and the persistent volume. It is mandatory to specify the kernel image and the kernel boot parameters in the OS section; as the kernel image you can use the one imported by miniONE during the installation. In order to start the ThingsBoard IoT Framework, an init script (/usr/share/demo/tb/init.sh) has been defined:

#!/bin/bash
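# Mount the persistent datablock (the second disk, /dev/vdb) that holds the PostgreSQL data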
mount /dev/vdb /data
cd /tmp
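# Environment expected by start-tb.sh from the tb-postgres image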
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export LANG=C.UTF-8
export JAVA_HOME=/docker-java-home
export JAVA_VERSION=8u242
export JAVA_DEBIAN_VERSION=8u242-b08-1~deb9u1
export DATA_FOLDER=/data
export HTTP_BIND_PORT=9090
export DATABASE_TS_TYPE=sql
export PGDATA=/data/db
export SPRING_JPA_DATABASE_PLATFORM=org.hibernate.dialect.PostgreSQLDialect
export SPRING_DRIVER_CLASS_NAME=org.postgresql.Driver
export SPRING_DATASOURCE_URL=jdbc:postgresql://localhost:5432/thingsboard
export SPRING_DATASOURCE_USERNAME=postgres
export SPRING_DATASOURCE_PASSWORD=postgres
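# Start ThingsBoard in the background, logging to /tmp/tb-postgres.log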
nohup start-tb.sh >> /tmp/tb-postgres.log 2>&1 &

Note

The init script should be placed in a frontend folder that is accessible to oneadmin (/usr/share in the example). It must be named "init.sh", since that is the script the OpenNebula contextualization package executes by default.

Once the template has been created, the microVM can be instantiated with the command

onetemplate instantiate tb --name tb
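You can follow the boot with the standard CLI commands; once the microVM reaches the RUNNING state, the init script has been executed (an optional check, the actual IDs will differ):

onevm list     # wait until the tb microVM is RUNNING
onevm show tb  # confirm it was assigned the IP 172.16.100.10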


Note

In order to access the VM from outside the host, you can set the following iptables rules. For the ThingsBoard web application:

iptables -A PREROUTING -t nat -i bond0 -p tcp --dport 9090 -j DNAT --to 172.16.100.10:9090
iptables -A FORWARD -p tcp -d 172.16.100.10 --dport 9090 -j ACCEPT

For the ThingsBoard MQTT connectivity:

iptables -A PREROUTING -t nat -i bond0 -p tcp --dport 1883 -j DNAT --to 172.16.100.10:1883
iptables -A FORWARD -p tcp -d 172.16.100.10 --dport 1883 -j ACCEPT

Change bond0 to your host's network device.
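Keep in mind that rules added this way do not persist across reboots. On Ubuntu, one way to save them (an optional step, not part of the original setup) is the iptables-persistent package:

apt-get install -y iptables-persistent   # offers to save the current rules during installation
netfilter-persistent save                # re-save after any later changes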

Once the microVM is booted and running, you can connect with a browser to the ThingsBoard web application, using the host's public IP and port 9090. You can create a gateway device as in the following image:

ThingsBoardFramework.png

Step 3: OpenNebula Firecracker Edge Installation

Now you can proceed to provision the Packet edge nodes that will be used for the deployment of the ThingsBoard IoT Gateways and MQTT brokers. miniONE gives you the possibility to extend an OpenNebula environment by adding hypervisor nodes at the edge, using the --node option of the deployment command.

From the frontend host, we will provision a resource on Packet and import the eclipse-mosquitto appliance from the Docker Hub marketplace. In order to run the command, some basic information has to be provided: the API token, the project ID, the edge facility (sjc1 in Sunnyvale, CA, in our example), and the name of the Docker Hub image (eclipse-mosquitto).

./minione --node --firecracker --edge packet --edge-packet-token $TOKEN --edge-packet-project $PROJECT --edge-packet-facility sjc1 --fc-marketapp-name eclipse-mosquitto --yes

The provisioning will take 5-10 minutes to complete.
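Once it completes, you can check from the frontend that the new edge host and its virtual networks were registered (an illustrative check; the names follow the packet-<cluster ID> pattern used in the templates below):

onehost list    # a new Firecracker host on Packet should appear
onevnet list    # along with the packet-100-host-only and packet-100-public networks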

Step 4: MQTT Broker Deployment

We need to update the eclipse-mosquitto template:

onetemplate update eclipse-mosquitto mqtt.tpl

mqtt.tpl

CPU="1"
MEMORY="1024"
DISK=[
  IMAGE="eclipse-mosquitto"
]
GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="VNC" ]
NIC=[
  IP="192.168.150.20",
  NETWORK="packet-100-host-only"
]
NIC_ALIAS = [
  NETWORK = "packet-100-public",
  PARENT = "NIC0"
]
OS=[
  KERNEL_CMD="console=ttyS0 reboot=k panic=1 pci=off",
  KERNEL_DS="$FILE[IMAGE=\"kernel100\"]" ]
CONTEXT=[
  NETWORK="YES",
  SET_HOSTNAME="$NAME",
  PASSWORD="root",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]",
  FILES = "/usr/share/demo/mqtt/init.sh"
]

In order to start the Eclipse Mosquitto MQTT broker, the following init script (/usr/share/demo/mqtt/init.sh) has been defined:

#!/bin/sh
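# Environment variables mirroring the ones set in the eclipse-mosquitto image's Dockerfile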
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export VERSION=1.6.10
export DOWNLOAD_SHA256=92d1807717f0f6d57d1ac1207ffdb952e8377e916c7b0bb4718f745239774232
export GPG_KEYS=A0D6EEA1DCAE49A635A3B2F0779B22DFB3E717B7
export LWS_VERSION=2.4.2
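# Launch mosquitto through the image's entrypoint, detached from the controlling terminal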
setsid /docker-entrypoint.sh /usr/sbin/mosquitto -c /mosquitto/config/mosquitto.conf >> /tmp/mqtt.log 2>&1 &

Once the template has been updated, the microVM can be instantiated with the command

onetemplate instantiate eclipse-mosquitto --name mqtt
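Once the broker microVM is RUNNING, you can check that it answers on port 1883 through its public alias (a simple reachability test using netcat; replace the placeholder IP):

onevm list                            # wait until mqtt reaches RUNNING
nc -zv [mqtt_broker_public_ip] 1883   # the port should be reported open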

Step 5: ThingsBoard IoT Gateway Deployment

In order to install the ThingsBoard IoT Gateway on the edge node, first we need to register the ThingsBoard IoT Gateway Docker image in the image datastore by using the following command:

oneimage create -d packet-100-default --path 'docker://thingsboard/tb-gateway?tag=2.4.0&size=1024&format=raw&filesystem=ext4' --name tb-gateway@2.4.0

Now you can create a template "tb-gw.tpl" for the microVM that will use the Docker image registered in the datastore:

NAME="tb"
NAME="tb-gw"
CPU="1"
MEMORY="1024"
DISK=[
  IMAGE="tb-gateway@2.4.0"
]
GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="VNC" ]
NIC=[
  NETWORK="packet-100-host-only" ]
NIC_ALIAS=[
  NETWORK="packet-100-public",
  PARENT="NIC0" ]
OS=[
  KERNEL_CMD="console=ttyS0 reboot=k panic=1 pci=off",
  KERNEL_DS="$FILE[IMAGE=\"kernel100\"]" ]
CONTEXT=[
  NETWORK="YES",
  SET_HOSTNAME="$NAME",
  ACCESS_TOKEN="$ACCESS_TOKEN",
  TB_IP="$TB_IP",
  MQTT_IP="$MQTT_IP",
  PASSWORD="root",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]",
  FILES = "/usr/share/demo/tb-gw/init.sh /usr/share/demo/tb-gw/tb_gateway.yaml /usr/share/demo/tb-gw/mqtt.json"
]
USER_INPUTS = [
  ACCESS_TOKEN="M|text|Gateway access token",
  TB_IP="M|text|IP of Thingsboard",
  MQTT_IP="M|text|IP of MQTT broker"
]

The template can be created by using

onetemplate create tb-gw.tpl

For starting the microVM you need to create a couple of configuration files and the init script, as in the following.

/usr/share/demo/tb-gw/tb_gateway.yaml

thingsboard:
  host: TB_IP
  port: 1883
  remoteConfiguration: false
  security:
    accessToken: ACCESS_TOKEN
storage:
  type: memory
  read_records_count: 100
  max_records_count: 100000
connectors:
  -
    name: MQTT Broker Connector
    type: mqtt
    configuration: mqtt.json

/usr/share/demo/tb-gw/mqtt.json

{
  "broker": {
    "name":"Default Local Broker",
    "host":"MQTT_IP",
    "port":1883,
    "security": {
      "type": "anonymous"
    }
  },
  "mapping": [
    {
      "topicFilter": "/sensor/data",
      "converter": {
        "type": "json",
        "deviceNameJsonExpression": "${serialNumber}",
        "deviceTypeJsonExpression": "${sensorType}",
        "timeout": 60000,
        "attributes": [
          {
            "type": "string",
            "key": "model",
            "value": "${sensorModel}"
          },
          {
            "type": "string",
            "key": "${sensorModel}",
            "value": "on"
          }
        ],
        "timeseries": [
          {
            "type": "double",
            "key": "temperature",
            "value": "${temp}"
          },
          {
            "type": "double",
            "key": "humidity",
            "value": "${hum}"
          }
        ]
      }
    }
  ]
}

/usr/share/demo/tb-gw/init.sh

#!/bin/bash
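# Mount the contextualization disk and load the variables provided at instantiation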
mkdir -p /context
mount -L CONTEXT /context
source /context/context.sh
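# Install the gateway configuration files and substitute the context values into them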
cp /context/tb_gateway.yaml /default-config/config/tb_gateway.yaml
cp /context/mqtt.json /default-config/config/mqtt.json
sed -i "s/TB_IP/$TB_IP/g" /default-config/config/tb_gateway.yaml
sed -i "s/ACCESS_TOKEN/$ACCESS_TOKEN/g" /default-config/config/tb_gateway.yaml
sed -i "s/MQTT_IP/$MQTT_IP/g" /default-config/config/mqtt.json
cd /tmp
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export DEBIAN_FRONTEND=noninteractive
export configs=/etc/thingsboard-gateway/config
export extensions=/var/lib/thingsboard_gateway/extensions
export logs=/var/log/thingsboard-gateway
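# Start the ThingsBoard IoT Gateway in the background, logging to /tmp/tb-gateway.log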
nohup /bin/sh -c start-gateway.sh >> /tmp/tb-gateway.log 2>&1 &

Once the template has been created, you can instantiate it to create the microVM, passing the user inputs as required:

onetemplate instantiate tb-gw --name tb-gw
There are some parameters that require user input. Use the string <> to launch an editor (e.g. for multi-line inputs)
  * (ACCESS_TOKEN) Gateway access token
    OcC6kxH7SPeS0Z8tjZuh
  * (MQTT_IP) IP of MQTT broker
    192.168.150.20
  * (TB_IP) IP of Thingsboard
    147.75.100.229

Once the ThingsBoard IoT Gateway has been deployed, you should see a green light in the gateway dashboard.

GatewayActive.png

Step 6: Testing the IoT Application

You can test the application by using an MQTT client (e.g. mosquitto_pub) to send a message to the MQTT broker, as in the following:

mosquitto_pub -h [mqtt_broker_public_ip] -t /sensor/data -m '{"serialNumber": "SN-001", "sensorType": "Thermometer", "sensorModel": "T1000", "temp": 15, "hum": 92}'
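If the device does not show up in the next step, you can verify that the message actually reaches the broker by subscribing to the same topic from a second terminal before publishing (a simple debugging aid):

mosquitto_sub -h [mqtt_broker_public_ip] -t /sensor/data -v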

Now let's check the ThingsBoard dashboard:

TBDevice.png

and there it is, our IoT deployment at the edge based on Firecracker microVMs is working just fine! 🤓
