How to Use miniONE to Deploy Kubernetes Clusters on the Edge

Solution Verified in:

  • OpenNebula: 5.10

Issue

This step-by-step tutorial shows how to set up a new OpenNebula environment, or extend an existing one, with Kubernetes clusters on the edge, using the miniONE tool and the Kubernetes appliance from the OpenNebula public marketplace.

Requirements

You will need a Packet account and, in particular, an API token and a project ID that will be used to provision resources on Packet. Please follow the Packet getting started guide to create and retrieve them using the Packet dashboard.
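Before proceeding, you can quickly verify that the token and project ID work. The sketch below assumes the Packet REST API endpoint and its X-Auth-Token header; replace the placeholder values with your own:

export PACKET_AUTH_TOKEN='abcdefghijklmnopqrstuvwxyz123456790'
export PACKET_PROJECT_ID='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'

# Should return the project details as JSON if both values are valid
curl -s -H "X-Auth-Token: $PACKET_AUTH_TOKEN" "https://api.packet.net/projects/$PACKET_PROJECT_ID"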

Solution

Step 1: OpenNebula Frontend Installation 

First we need to set up an OpenNebula frontend. To deploy the frontend, we need a host, which can be an on-premises server, a public cloud bare-metal server, or a VM.

For this tutorial, we decided to run the frontend on a bare-metal server on Packet. You can create a server from the Packet dashboard by choosing among different plans, facilities, and operating systems. We chose the t1.small.x86 plan (the cheapest one), the ams1 facility in Amsterdam, Europe, and Ubuntu 18.04 as the operating system.
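If you prefer scripting over the dashboard, the same server can be created through the Packet API. The endpoint and field names below are an assumption on our side, mirroring the Terraform resource shown later in this step:

curl -s -X POST \
  -H "X-Auth-Token: $PACKET_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"hostname": "minione-fe", "plan": "t1.small.x86", "facility": "ams1", "operating_system": "ubuntu_18_04", "billing_cycle": "hourly"}' \
  "https://api.packet.net/projects/$PACKET_PROJECT_ID/devices"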

It will take a few minutes before the Packet host is ready. Once it is, you can connect via ssh to the frontend host and download the miniONE tool:

root@minione-fe:~$ wget -O /tmp/minione 'https://raw.githubusercontent.com/OpenNebula/minione/master/minione'

Note

Use the minione master release instead of the latest release available (v5.10.0), since it fixes the following error when provisioning edge resources:

ERROR: Failed to create some resources
[one.vn.allocate] VN_MAD named "alias_sdnat" is not defined in oned.conf

Once the download is complete, you can install the OpenNebula frontend on the server by running:

root@minione-fe:~$ bash /tmp/minione --frontend --yes

After a few minutes the frontend will be installed on the host, and you will get the IP address, username, and password to connect to Sunstone.

### Checks & detection
Checking augeas is installed SKIP will try to install
Checking for present ssh key SKIP
### Main deployment steps:
Install OpenNebula frontend version 5.10
Install augeas-tools
Do you agree? [yes/no]:

### Installation
Updating APT cache OK
Install augeas-tools OK
Download augeas lens oned.aug OK
Configuring repositories OK
Updating APT cache OK
Installing OpenNebula packages OK

### Configuration
Switching OneGate endpoint in oned.conf OK
Switching scheduler interval in oned.conf OK
Setting initial password for current user and oneadmin OK
Changing WebUI to listen on port 80 OK
Starting OpenNebula services OK
Enabling OpenNebula services OK
Add ssh key to oneadmin user OK
Update ssh configs to allow VM addresses reusig OK
Ensure own hostname is resolvable OK
Checking OpenNebula is working OK

### Report
OpenNebula 5.10 was installed
Sunstone [the webui] is runninng on: 
 http://147.75.33.161/
Use following to login:
 user: oneadmin
 password: cux63jEcp6
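Before moving on, you can optionally sanity-check the installation from the shell (this check is our own addition, not part of the miniONE output). Listing hosts and VMs as the oneadmin user should succeed and return empty lists at this point:

root@minione-fe:~$ su - oneadmin -c 'onehost list'
root@minione-fe:~$ su - oneadmin -c 'onevm list'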

Below we provide a Terraform file that can be executed to perform the previous steps, i.e. the creation of the host and the installation of the OpenNebula frontend with miniONE on Packet.

Note

Replace auth token and project id default values with your own.

variable "auth_token" {
 default = "abcdefghijklmnopqrstuvwxyz123456790"
}

variable "project_id" {
 default = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
}

provider "packet" {
 auth_token = var.auth_token
}

resource "packet_device" "minione" {
 hostname = "minione-fe"
 plan = "t1.small.x86"
 facilities = ["ams1"]
 operating_system = "ubuntu_18_04"
 billing_cycle = "hourly"
 project_id = var.project_id

 provisioner "remote-exec" {
 inline = [
 "apt-get update",
 "wget -O /tmp/minione 'https://raw.githubusercontent.com/OpenNebula/minione/master/minione'",
 "bash /tmp/minione --frontend --yes",
 ]

 connection {
 type = "ssh"
 agent = false
 private_key = file("~/.ssh/id_rsa")
 user = "root"
 host = self.network.0.address
 }
 }
}

output "ip_address" {
 value = ["${packet_device.minione.network.0.address}"]
}
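To use it, save the file (for example as main.tf) and run the standard Terraform workflow. terraform init downloads the Packet provider plugin, and terraform apply creates the server, runs miniONE on it, and prints the frontend IP as the ip_address output:

terraform init
terraform apply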

Step 2: Provisioning Edge Physical Nodes

Once the OpenNebula frontend has been deployed, we can proceed to provision Packet edge nodes that will be used for the deployment of Kubernetes clusters.

miniONE gives you the option to extend an OpenNebula environment by adding hypervisor nodes at the edge, by passing the --node option to the deployment command.

From the frontend host, we will provision a resource on Packet and import the Kubernetes appliance from the OpenNebula public marketplace.

To run the command, a few pieces of information must be provided: the API token, the project ID, the edge facility (sjc1 in Sunnyvale, CA, in our example), and the name of the Kubernetes appliance available from the OpenNebula public marketplace ('Service Kubernetes - KVM'):

bash /tmp/minione --node --edge packet --edge-packet-token [api_token] --edge-packet-project [project_id] --edge-packet-facility sjc1 --edge-marketapp-name 'Service Kubernetes - KVM' --yes

The provisioning will take 5-10 minutes to complete.
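Once it finishes, you can verify the result from the frontend with the standard OpenNebula CLI (run as oneadmin): the new edge hypervisor should appear in the host list, and the appliance should have been imported as a VM template:

root@minione-fe:~$ su - oneadmin -c 'onehost list'
root@minione-fe:~$ su - oneadmin -c 'onetemplate list'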

Step 3: Deploying Kubernetes Cluster on the Edge

Once the provisioning of the hypervisor node is completed, you can connect to the Sunstone portal with the oneadmin username and password. 

Select the Instances tab in the Sunstone left menu and create a Kubernetes cluster by instantiating the template that miniONE has imported.

[Image: CreateK8sVM.png]

Tip

When you create the VM, you can resize the disk so that it is large enough to store the container images that will be deployed to the Kubernetes cluster.
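The same can also be sketched from the command line. The example below assumes the imported template received ID 0, the new VM receives ID 0, and disk 0 is grown to 20 GB; depending on the storage driver, the resize may require the VM to be powered off first:

root@minione-fe:~$ su - oneadmin -c 'onetemplate instantiate 0 --name k8s-edge'
root@minione-fe:~$ su - oneadmin -c 'onevm disk-resize 0 0 20480'   # size in MB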

After a few minutes, you can connect from the frontend host to the Kubernetes cluster using its public IP.

To check the public IP:

root@minione-fe:~$ onevm show 0 | grep ETH0_ALIAS0_IP=

Alternatively, from the Sunstone UI you can check the Alias field for the public IP (147.75.80.106 in the following image).

[Image: K8s_VM_IP_v2.png]

 

When you connect to the Kubernetes VM from the frontend host

root@minione-fe:~$ ssh 147.75.80.106

the Kubernetes service may still be in its configuration phase. If it is still bootstrapping, you will see something like this:

 ___    _ __     ___ 
/ _ \  |  _ \   / _ \ OpenNebula Service Appliance
|(_)|  | | | | |  __/
\___/  |_| |_|  \___|

2/3 Configuration step is in progress...

 * * * * * * * *
 * PLEASE WAIT *
 * * * * * * * *

To check when the configuration is finished, you can use the following command:

[root@onekube-ip-192-168-150-2 ~]$ tail -f /var/log/one-appliance/ONE_configure.log 

Once "CONFIGURATION FINISHED" is written to the log, it means that the configuration is completed.

INFO: Starting kubernetes service
INFO: Credentials and config values are saved in: /etc/one-appliance/config
INFO: Gathering info about this kubernetes cluster (tokens, hash)...
INFO: CONFIGURATION FINISHED
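If you prefer not to watch the log by hand, a small shell loop run inside the Kubernetes VM (a minimal sketch using the same log file) will block until that marker appears:

# Poll the appliance log until the configuration is done
until grep -q 'CONFIGURATION FINISHED' /var/log/one-appliance/ONE_configure.log; do
  sleep 5
done
echo "Configuration finished"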

When you log in to the VM, the Kubernetes service bootstrap has finished if you see the following message:

 ___    _ __     ___ 
/ _ \  |  _ \   / _ \ OpenNebula Service Appliance
|(_)|  | | | | |  __/
\___/  |_| |_|  \___|

All set and ready to serve 8)

To check that everything works, run the following command in the Kubernetes VM and verify that the node status is Ready:

[root@onekube-ip-192-168-150-2 ~]$ kubectl get nodes
NAME                                 STATUS ROLES  AGE   VERSION
onekube-ip-192-168-150-2.localdomain Ready  master 4m30s v1.15.6

For different deployment options regarding the Kubernetes cluster, please check the OpenNebula Kubernetes appliance guide.

Step 4: Provisioning Additional Edge Kubernetes Clusters

To deploy Kubernetes clusters to other edge locations, you can repeat Steps 2 and 3, using the miniONE --edge-packet-facility option with different Packet facilities (e.g. ams1, nrt1).

To display the edge resources that have been provisioned, you can use the command-line interface of the oneprovision tool (which miniONE uses under the hood):

[root@minione-fe:~]$ oneprovision list
                                  ID  NAME                 CLUSTERS  HOSTS  VNETS  DATASTORES  STAT
d6f23ccf-42f9-4209-8db7-004467b052b9  PacketProvision-100         1      1      2           2  configured
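Each provision can also be inspected individually. For example, using the ID from the listing above, oneprovision show displays the clusters, hosts, virtual networks, and datastores that were created:

[root@minione-fe:~]$ oneprovision show d6f23ccf-42f9-4209-8db7-004467b052b9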

Step 5: Deploy an Application to the Kubernetes Cluster

Once the Kubernetes cluster is ready, you can deploy an application. 

First, you can check that there are no pods yet by running the following command:

[root@onekube-ip-192-168-150-2 ~]$ kubectl get pods

We will use an example from the Kubernetes GitHub repository to deploy an nginx application. You can run:

[root@onekube-ip-192-168-150-2 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml

You can check whether the deployment succeeded by running kubectl get pods and verifying that the containers are running:

[root@onekube-ip-192-168-150-2 ~]$ kubectl get pods
NAME                       READY  STATUS   RESTARTS AGE
my-nginx-7fd6966748-8bjrz  1/1    Running  0        58s
my-nginx-7fd6966748-knc4n  1/1    Running  0        58s
my-nginx-7fd6966748-w575j  1/1    Running  0        58s
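To reach the application from outside the pods, you can expose the deployment as a service. A minimal sketch, assuming the deployment name my-nginx from the output above, publishes it on a NodePort:

[root@onekube-ip-192-168-150-2 ~]$ kubectl expose deployment my-nginx --port=80 --type=NodePort
[root@onekube-ip-192-168-150-2 ~]$ kubectl get service my-nginx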

Step 6: Cleanup

You can clean up the edge resources by running the oneprovision delete command, as in the following:

[root@minione-fe:~]$ oneprovision delete [ID] --cleanup 
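For example, with the provision ID from Step 4 (the --cleanup flag also removes the VMs and other objects running on the provisioned resources):

[root@minione-fe:~]$ oneprovision delete d6f23ccf-42f9-4209-8db7-004467b052b9 --cleanup

The frontend host itself is not managed by oneprovision; if you created it with the Terraform file from Step 1, you can remove it from the machine where you ran Terraform:

terraform destroy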

Example

This screencast follows the above procedure to deploy a Kubernetes cluster on Packet edge resources, running a dedicated game server for Xonotic (the well-known open source multiplayer FPS) on the Agones platform. For more details about this use case, please visit the OpenNebula Blog.
