How to Use miniONE to Deploy a Firecracker Cloud Integrated with Docker Hub on AWS

Solution Verified in:

  • OpenNebula: 5.12

Issue

This step-by-step tutorial will show how to easily deploy a single-node Firecracker cloud with miniONE and use the integrated Docker Hub Marketplace to run and manage containerized applications as Firecracker microVMs.

Requirements

To follow this tutorial you will need a bare-metal server on which to deploy the OpenNebula front-end with the Firecracker hypervisor. We are going to use an AWS bare-metal instance, but this solution also works on other bare-metal cloud providers.

Solution

Example

This screencast follows the procedure described in this article to deploy a Firecracker cloud integrated with Docker Hub on an AWS bare-metal instance.

Section 1: OpenNebula Installation

To set up a Firecracker cloud integrated with Docker Hub we will use miniONE, an easy-to-use tool for deploying an evaluation OpenNebula cloud based on Firecracker microVMs, KVM virtual machines, or LXD system containers.

To set up the environment, you need a physical host (x86-64 Intel or AMD processor) with virtualization capabilities, running one of the supported operating systems.

In our case we instantiated an i3.metal instance type (a bare-metal server) on AWS with an Ubuntu 18.04 AMI. On AWS you have to create a security group for the instance that allows inbound traffic on the following ports (a scripted alternative is shown after the list):

  • 8080 for Sunstone
  • 80 for nginx application
  • 9000 for Minio application
  • 29876 for accessing the Firecracker microVMs via VNC
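
If you prefer to script this step, the sketch below creates an equivalent security group with the AWS CLI (the group name minione-fc is just an example, port 22 is added so you can SSH into the host, and in production you should restrict the source CIDR):

sg_id=$(aws ec2 create-security-group --group-name minione-fc \
    --description "miniONE Firecracker demo" --output text --query GroupId)
for port in 22 80 8080 9000 29876; do
  aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
      --protocol tcp --port "$port" --cidr 0.0.0.0/0
done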

On the host, run the following commands to install OpenNebula with miniONE:

wget 'https://github.com/OpenNebula/minione/releases/latest/download/minione'
sudo bash minione --firecracker --sunstone-port 8080 --yes

During the installation you will get a report like this:

### Checks & detection
Checking AppArmor  SKIP will try to modify

### Main deployment steps:
Install OpenNebula frontend version 5.12
Configure bridge minionebr with IP 172.16.100.1/24
Enable NAT over bond0
Modify AppArmor
Install OpenNebula Firecracker node
Export appliance and update VM template

Do you agree? [yes/no]:

### Installation
Updating APT cache  OK
Creating bridge interface minionebr  OK
Bring bridge interfaces up  OK
Configuring NAT using iptables  OK
Saving iptables changes  OK
Installing DNSMasq  OK
Starting DNSMasq  OK
Configuring repositories  OK
Create docker packages repository  OK
Updating APT cache  OK
Installing OpenNebula packages  OK
Installing OpenNebula firecracker node packages  OK
Install docker  OK
Start docker service  OK
Enable docker service  OK

### Configuration
Add oneadmin to docker group  OK
Update network hooks  OK
Switching OneGate endpoint in oned.conf  OK
Switching OneGate endpoint in onegate-server.conf  OK
Switching keep_empty_bridge on in OpenNebulaNetwork.conf  OK
Switching scheduler interval in oned.conf  OK
Switching to QEMU emulation  OK
Setting initial password for current user and oneadmin  OK
Changing WebUI to listen on port 80  OK
Starting OpenNebula services  OK
Enabling OpenNebula services  OK
Add ssh key to oneadmin user  OK
Update ssh configs to allow VM addresses reusing  OK
Ensure own hostname is resolvable  OK
Checking OpenNebula is working  OK
Disabling ssh from virtual network  OK
Adding localhost ssh key to known_hosts  OK
Testing ssh connection to localhost  OK
Updating datastores template  OK
Creating Firecracker host  OK
Creating virtual network  OK
Exporting [alpine] from dockerhub to local datastore  OK
Exporting [Kernel 5.4 x86_64 - Firecracker] to local datastore  OK
Waiting until the image is ready  OK
Updating VM template  OK

### Report
OpenNebula 5.12 was installed
Sunstone [the webui] is running on:
  http://172.31.36.136:8080/
Use following to login:
  user: oneadmin
  password: 4GOP8tglrI

Once OpenNebula's front-end and the Firecracker hypervisor are installed, we can proceed to deploy a simple application.
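
Before moving on, you can sanity-check the installation from the host shell with the standard OpenNebula CLI (miniONE configures the credentials for both oneadmin and the current user):

onehost list      # the local Firecracker host should be in the "on" state
onevnet list      # the vnet virtual network created by miniONE
oneimage list     # the alpine and kernel images exported during installation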

Section 2: Importing Official Docker Hub Images

We are going to deploy an Nginx application from the Docker Hub Marketplace, which is already configured by miniONE.

OpenNebulaMarketplaces.png

You can select Nginx from the Apps tab.

OpenNebulaDockerHubApps.png

OpenNebulaNginxApp.png

Now download it into the default datastore.

ImportNginxApp.png
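
If you prefer the command line, the same import can be done with onemarketapp; the app name below assumes the Docker Hub entry is listed as nginx, so check it with onemarketapp list first:

onemarketapp list | grep -i nginx
onemarketapp export nginx nginx --datastore default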

When Nginx is imported from Docker Hub, a VM template is also created. You'll have to update the template.

UpdateNginxVMTemplate.png

You can do that by adding the vnet virtual network (in the advanced options, set the IP to 172.16.100.30).

AddNetworkNginxVMTemplate.png

Don't forget to also add the kernel image (already imported by miniONE in the default datastore).

AddKernelNginxVMTemplate.png

And the start script.

AddStartScriptNginxVMTemplate.png

Finally, update the Custom Vars by setting the root password.

AddCustomVarsNginxTemplate.png
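
The same changes can also be applied from the CLI; the following is only a sketch of the attributes being appended (the kernel image is named "kernel" here, matching the MinIO template in Section 3, and the start script simply launches the nginx binary, which daemonizes by default; verify both against your setup):

cat > nginx-extra.tpl <<'EOF'
NIC=[ IP="172.16.100.30", NETWORK="vnet" ]
OS=[ KERNEL_DS="$FILE[IMAGE=\"kernel\"]",
     KERNEL_CMD="console=ttyS0 reboot=k panic=1 pci=off" ]
CONTEXT=[ NETWORK="YES", PASSWORD="root", START_SCRIPT="nginx" ]
EOF
onetemplate update nginx --append nginx-extra.tpl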

Once the VM template is updated you can instantiate it to create a microVM.
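
From the CLI the equivalent is (assuming the marketplace import named the template nginx):

onetemplate instantiate nginx --name nginx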

NginxVM.png

You can access the microVM via VNC, using the password you set earlier in the template's Custom Vars section:

VNCNginxVM.png

Note

To access the VM from outside the host, you can set the following iptables rules:

iptables -A PREROUTING -t nat -i enp4s0 -p tcp --dport 80 -j DNAT --to 172.16.100.30:80
iptables -A FORWARD -p tcp -d 172.16.100.30 --dport 80 -j ACCEPT

Change enp4s0 to your host's network device.

You can use your browser to access the Nginx application by using the public IP of your host:

NginxWebPage.png
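
You can also run a quick check from any machine with curl (replace <host_public_ip> with the public IP of your instance):

curl -I http://<host_public_ip>/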

Section 3: Importing Non-official Docker Hub Images

To show how to import non-official Docker Hub images, we are going to deploy MinIO, a high-performance object storage server with an S3-compatible API. We will also show how to prepare a persistent datablock image to be used as a persistent volume for the application.

First we need to register the MinIO Docker image in the image datastore by using the following command:

oneimage create -d default --path 'docker://minio/minio?size=256&format=raw&filesystem=ext4' --name minio
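
The export from Docker Hub runs in the background; wait until the image reaches the READY state before using it:

oneimage show minio | grep STATE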

Then, with the following command, create a persistent datablock that will be used as a persistent volume for the application:

oneimage create -d default --type DATABLOCK --size 2048 --persistent --name minio-data

and then you can format the datablock image with:

img=$(oneimage show minio-data | grep SOURCE | awk -F ' +' '{print $3}')
sudo virt-format --partition=none --filesystem=ext4 --format=raw -a $img

Note

virt-format requires root permissions to work and can be installed on Ubuntu with apt-get install libguestfs-tools.
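
You can verify that the filesystem was created with virt-filesystems, which ships in the same libguestfs-tools package:

sudo virt-filesystems -a "$img" --long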

Now you can create a template, "minio.tpl", for the microVM that will use the Docker image registered in the datastore:

NAME="minio"
CPU="1"
MEMORY="1024"
DISK=[
  IMAGE="minio",
  TARGET="vda" ]
DISK=[
  IMAGE="minio-data",
  TARGET="vdb" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="VNC" ]
NIC=[
  IP="172.16.100.20",
  NETWORK="vnet"
]
OS=[
  KERNEL_CMD="console=ttyS0 reboot=k panic=1 pci=off",
  KERNEL_DS="$FILE[IMAGE=\"kernel\"]" ]
CONTEXT=[
  NETWORK="YES",
  SET_HOSTNAME="minio",
  PASSWORD="root",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]",
  FILES = "/usr/share/minio/init.sh"
]

The template can be created by using:

onetemplate create minio.tpl

In the template you specify the CPU, the memory, the registered Docker image, and the persistent volume. It is mandatory to specify the kernel image and the kernel boot parameters in the OS section; as the kernel image you can use the one imported by miniONE during the installation. To start the MinIO application, an init script (/usr/share/minio/init.sh) has been defined:

#!/bin/bash
# Mount the persistent datablock on the MinIO data directory
mount /dev/vdb /data
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Credentials that clients (e.g. mc) will use to authenticate
export MINIO_ACCESS_KEY=ACCESSKEY
export MINIO_SECRET_KEY=SECRETKEY
export MINIO_UPDATE=off
# Start the MinIO server in the background, logging to /tmp/minio.log
nohup /usr/bin/docker-entrypoint.sh server /data > /tmp/minio.log 2>&1 &

Note

The init script must be placed in a front-end folder that is readable by oneadmin (/usr/share/minio in the example). It should be named "init.sh", since that is the script name the OpenNebula contextualization package executes by default.
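
A minimal sketch of putting the script in place on the front-end (the file name and path match the FILES attribute in the template above):

sudo mkdir -p /usr/share/minio
sudo cp init.sh /usr/share/minio/init.sh
sudo chmod 644 /usr/share/minio/init.sh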

Once the template has been created, the microVM can be instantiated with the command:

onetemplate instantiate minio --name minio
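
You can follow the deployment from the CLI and wait until the microVM reaches the RUNNING state:

onevm list
onevm show minio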

To test the MinIO application that has been deployed, we can use the "mc" MinIO client:

Note

You can install the MinIO mc client by downloading it from the following link: https://dl.min.io/client/mc/release/linux-amd64/mc

You can use the "mc" client to add the MinIO server just deployed.

mc config host add minio http://172.16.100.20:9000 ACCESSKEY SECRETKEY --api S3v4

Then you can create a bucket:

mc mb minio/datasets

and add one file to the bucket:

mc cp <file> minio/datasets
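
You can then list the bucket to verify that the file was uploaded:

mc ls minio/datasets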

You can check that everything worked by accessing the minio microVM via ssh:

ssh root@172.16.100.20
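
Inside the microVM you can, for instance, confirm that the persistent volume is mounted and inspect the server log written by the init script (assuming the usual core utilities are present in the image):

mount | grep /data
tail /tmp/minio.log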

You can manage the lifecycle of your application with the onevm command, for example to power off, resume, or terminate the microVM.
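
The corresponding commands, using the VM name from the instantiation step above, are:

onevm poweroff minio
onevm resume minio
onevm terminate minio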

Note

To access the MinIO GUI from outside the host, you can set the following iptables rules:

iptables -A PREROUTING -t nat -i enp4s0 -p tcp --dport 9000 -j DNAT --to 172.16.100.20:9000
iptables -A FORWARD -p tcp -d 172.16.100.20 --dport 9000 -j ACCEPT

Change enp4s0 to your host's network device.

Then you can access the MinIO GUI at http://host_public_ip:9000/minio and log in with the access key and secret key used in the init script. Voilà!

MinioGUI.png
