In our last post we explored DigitalOcean, following Linode.  Another provider that often gets mentioned alongside DigitalOcean and Linode is Vultr.  So how does Vultr compare for k8s hosting? Follow my arduous and at times lengthy journey to get a Kubernetes cluster operational.

Kubespray

First we need to create 4 VMs to set up Kubespray.  In this case we will use CoreOS.

Now, do you think I started with CoreOS? Of course not - I had done Ubuntu successfully before, so I figured why not do it again.  But I soon realized that Ubuntu on Vultr is merely an attached ISO, and before long I felt I had time-warped back to 2009, building out ESXi servers by hand:

I'll save you the gritty details. Having made it past all the setups:

And then, after fighting to get the right Python running (my team at TR can recall my incessant kvetching about 2.7 vs 3), I ultimately hit a wall on memory, and no amount of cajoling could get Kubespray past these errors:

fatal: [node2]: FAILED! => {
    "assertion": "ansible_memtotal_mb >= 1024", 
    "changed": false, 
    "evaluated_to": false, 
    "msg": "Assertion failed"
}
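
For reference, that assertion is about the total RAM Ansible detects on the node (ansible_memtotal_mb), so the real fix is a bigger instance; but since Kubernetes wants swap off anyway, it's worth checking both while you're on the box.  A minimal sketch for each Ubuntu node:

# roughly what ansible_memtotal_mb will report
free -m

# kubernetes expects swap disabled; turn it off now and comment out
# any swap entries in /etc/fstab so it stays off after a reboot
sudo swapoff -a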

Back to CoreOS...

Head to servers and press the "+" to add new hosts.

Next set the OS and size (you’ll need at least 1500MB for the master(s) and 1024MB for the worker nodes).

Set the quantity to 4 and create them:

Now, do you think I started with 4? Heavens no. I thought 3 would do it.  But I found endless issues connecting Kubespray locally and realized keeping a 4th host just to orchestrate things might help (I was, and still am, somewhat convinced my Ansible issues were related to root running connections back to itself to also run Ansible).  Do as you please, but I had better luck with 4 than 3 (just treat the 4th as your on-demand utility host).

You can get the user (root) and password from the server details pages:

Click the view icon in the bottom right or the copy-to-clipboard link to get the root password.

In the upper right we see icons for viewing the console and a button to restart.  The IP Address, Username and Password are on the bottom left.

On the main installer server, install Python 2 and 3.

From this guide on Python 3 on CoreOS:

#!/bin/bash

# create directory for python install
sudo mkdir -p /opt/bin
sudo chown -hR core:core /opt
cd /opt

# check for newer versions at http://downloads.activestate.com/ActivePython/releases/
wget http://downloads.activestate.com/ActivePython/releases/3.6.0.3600/ActivePython-3.6.0.3600-linux-x86_64-glibc-2.3.6-401834.tar.gz
tar -xzvf ActivePython-3.6.0.3600-linux-x86_64-glibc-2.3.6-401834.tar.gz
mv ActivePython-3.6.0.3600-linux-x86_64-glibc-2.3.6-401834 apy && cd apy && ./install.sh -I /opt/python/

# create some useful symlinks so we can call pip, python etc from anywhere (/opt/bin will already be on PATH)
ln -s /opt/python/bin/easy_install-3.6 /opt/bin/easy_install
ln -s /opt/python/bin/pip3.6 /opt/bin/pip
ln -s /opt/python/bin/python3.6 /opt/bin/python
ln -s /opt/python/bin/virtualenv /opt/bin/virtualenv

Py2: https://github.com/judexzhu/Install-Python-on-CoreOs

#!/bin/bash -uxe

VERSION=2.7.13.2715
PACKAGE=ActivePython-${VERSION}-linux-x86_64-glibc-2.12-402695

# make directory
mkdir -p /opt/bin
cd /opt

wget http://downloads.activestate.com/ActivePython/releases/${VERSION}/${PACKAGE}.tar.gz
tar -xzvf ${PACKAGE}.tar.gz

mv ${PACKAGE} apy && cd apy && ./install.sh -I /opt/python/

ln -sf /opt/python/bin/easy_install /opt/bin/easy_install
ln -sf /opt/python/bin/pip /opt/bin/pip
ln -sf /opt/python/bin/python /opt/bin/python
ln -sf /opt/python/bin/python /opt/bin/python2
ln -sf /opt/python/bin/virtualenv /opt/bin/virtualenv
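
With both installed, a quick sanity check is worth it.  Note that both scripts claim /opt/bin/python and /opt/bin/pip (and the Py2 one uses ln -sf, so it overwrites the Py3 links), which is likely part of why the full /opt/ActivePython-3.6/ path gets used explicitly later on:

/opt/bin/python --version
/opt/bin/pip --version
ls -l /opt/bin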

On the installer server, download Kubespray:

$ git clone https://github.com/kubernetes-sigs/kubespray.git

Next edit the hosts.yml to add entries for each CoreOS host:

k8sCoreTest kubespray # cat inventory/mycluster/hosts.yml 
all:
  hosts:
    node1:
      ansible_host: 149.28.124.229
      ip: 149.28.124.229
      access_ip: 149.28.124.229
    node2:
      ansible_host: 144.202.48.221
      ip: 144.202.48.221
      access_ip: 144.202.48.221
    node3:
      ansible_host: 173.199.90.97
      ip: 173.199.90.97
      access_ip: 173.199.90.97
  children:
    kube-master:
      hosts:
        node1:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

Note: As mentioned earlier, the first few times I tried using the master node itself to run the install (as I did with Kubespray on AKS), but I had endless problems with Python and free memory.

Next we declare the node IPs and use Kubespray's inventory builder to generate the inventory entries (the cluster.yml playbook run itself comes later):

declare -a IPS=(149.28.124.229 144.202.48.221 173.199.90.97)

CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
k8sCoreTest kubespray # CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
DEBUG: Adding group all
DEBUG: Adding group kube-master
DEBUG: Adding group kube-node
DEBUG: Adding group etcd
DEBUG: Adding group k8s-cluster
DEBUG: Adding group calico-rr
DEBUG: adding host node1 to group all
DEBUG: adding host node2 to group all
DEBUG: adding host node3 to group all
DEBUG: adding host node1 to group etcd
DEBUG: adding host node2 to group etcd
DEBUG: adding host node3 to group etcd
DEBUG: adding host node1 to group kube-master
DEBUG: adding host node2 to group kube-master
DEBUG: adding host node1 to group kube-node
DEBUG: adding host node2 to group kube-node
DEBUG: adding host node3 to group kube-node

Next we need to use pip3 to install the Kubespray requirements on this host (sudo pip3 install -r requirements.txt):

k8sCoreTest kubespray # sudo /opt/ActivePython-3.6/bin/pip3 install -r requirements.txt
Collecting ansible>=2.7.8 (from -r requirements.txt (line 1))
  Using cached https://files.pythonhosted.org/packages/9a/9d/5e3d67bd998236f32a72f255394eccd1e22b3e2843aa60dc30dd164816d0/ansible-2.7.10.tar.gz
Collecting jinja2>=2.9.6 (from -r requirements.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/1d/e7/fd8b501e7a6dfe492a433deb7b9d833d39ca74916fa8bc63dd1a4947a671/Jinja2-2.10.1-py2.py3-none-any.whl
Collecting netaddr (from -r requirements.txt (line 3))
  Downloading https://files.pythonhosted.org/packages/ba/97/ce14451a9fd7bdb5a397abf99b24a1a6bb7a1a440b019bebd2e9a0dbec74/netaddr-0.7.19-py2.py3-none-any.whl (1.6MB)
    100% |################################| 1.6MB 897kB/s 
Collecting pbr>=1.6 (from -r requirements.txt (line 4))
  Downloading https://files.pythonhosted.org/packages/07/3e/22d1d35a4b51706ca3590c54359aeb5fa7ea60df46180143a3ea13d45f29/pbr-5.2.0-py2.py3-none-any.whl (107kB)
    100% |################################| 112kB 10.0MB/s 
Collecting hvac (from -r requirements.txt (line 5))
  Using cached https://files.pythonhosted.org/packages/97/96/ee2d10b985bb756cbcc8f177bb4eb5cb780a749fb15cff443f9a33751de5/hvac-0.8.2-py2.py3-none-any.whl
Collecting jmespath (from -r requirements.txt (line 6))
  Downloading https://files.pythonhosted.org/packages/83/94/7179c3832a6d45b266ddb2aac329e101367fbdb11f425f13771d27f225bb/jmespath-0.9.4-py2.py3-none-any.whl
Collecting ruamel.yaml (from -r requirements.txt (line 7))
  Downloading https://files.pythonhosted.org/packages/3b/74/4491ff49198da5f9059f6ce0e71449f4798226323318661b152ef9074d79/ruamel.yaml-0.15.94-cp36-cp36m-manylinux1_x86_64.whl (652kB)
    100% |################################| 655kB 2.0MB/s 
Collecting PyYAML (from ansible>=2.7.8->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz (274kB)
    100% |################################| 276kB 4.8MB/s 
Collecting paramiko (from ansible>=2.7.8->-r requirements.txt (line 1))
  Using cached https://files.pythonhosted.org/packages/cf/ae/94e70d49044ccc234bfdba20114fa947d7ba6eb68a2e452d89b920e62227/paramiko-2.4.2-py2.py3-none-any.whl
Collecting cryptography (from ansible>=2.7.8->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/5b/12/b0409a94dad366d98a8eee2a77678c7a73aafd8c0e4b835abea634ea3896/cryptography-2.6.1-cp34-abi3-manylinux1_x86_64.whl (2.3MB)
    100% |################################| 2.3MB 646kB/s 
Requirement already satisfied: setuptools in /opt/ActivePython-3.6/lib/python3.6/site-packages (from ansible>=2.7.8->-r requirements.txt (line 1))
Requirement already satisfied: MarkupSafe>=0.23 in /opt/ActivePython-3.6/lib/python3.6/site-packages (from jinja2>=2.9.6->-r requirements.txt (line 2))
Collecting requests>=2.21.0 (from hvac->-r requirements.txt (line 5))
  Using cached https://files.pythonhosted.org/packages/7d/e3/20f3d364d6c8e5d2353c72a67778eb189176f08e873c9900e10c0287b84b/requests-2.21.0-py2.py3-none-any.whl
Collecting bcrypt>=3.1.3 (from paramiko->ansible>=2.7.8->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/d0/79/79a4d167a31cc206117d9b396926615fa9c1fdbd52017bcced80937ac501/bcrypt-3.1.6-cp34-abi3-manylinux1_x86_64.whl (55kB)
    100% |################################| 61kB 10.5MB/s 
Collecting pynacl>=1.0.1 (from paramiko->ansible>=2.7.8->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/27/15/2cd0a203f318c2240b42cd9dd13c931ddd61067809fee3479f44f086103e/PyNaCl-1.3.0-cp34-abi3-manylinux1_x86_64.whl (759kB)
    100% |################################| 768kB 1.9MB/s 
Collecting pyasn1>=0.1.7 (from paramiko->ansible>=2.7.8->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7b/7c/c9386b82a25115cccf1903441bba3cbadcfae7b678a20167347fa8ded34c/pyasn1-0.4.5-py2.py3-none-any.whl (73kB)
    100% |################################| 81kB 10.8MB/s 
Requirement already satisfied: six>=1.4.1 in /opt/ActivePython-3.6/lib/python3.6/site-packages (from cryptography->ansible>=2.7.8->-r requirements.txt (line 1))
Collecting asn1crypto>=0.21.0 (from cryptography->ansible>=2.7.8->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/ea/cd/35485615f45f30a510576f1a56d1e0a7ad7bd8ab5ed7cdc600ef7cd06222/asn1crypto-0.24.0-py2.py3-none-any.whl (101kB)
    100% |################################| 102kB 9.9MB/s 
Collecting cffi!=1.11.3,>=1.8 (from cryptography->ansible>=2.7.8->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/5f/bf/6aa1925384c23ffeb579e97a5569eb9abce41b6310b329352b8252cee1c3/cffi-1.12.3-cp36-cp36m-manylinux1_x86_64.whl (430kB)
    100% |################################| 440kB 3.2MB/s 
Requirement already satisfied: packaging>=16.8 in /opt/ActivePython-3.6/lib/python3.6/site-packages (from setuptools->ansible>=2.7.8->-r requirements.txt (line 1))
Requirement already satisfied: appdirs>=1.4.0 in /opt/ActivePython-3.6/lib/python3.6/site-packages (from setuptools->ansible>=2.7.8->-r requirements.txt (line 1))
Collecting idna<2.9,>=2.5 (from requests>=2.21.0->hvac->-r requirements.txt (line 5))
  Downloading https://files.pythonhosted.org/packages/14/2c/cd551d81dbe15200be1cf41cd03869a46fe7226e7450af7a6545bfc474c9/idna-2.8-py2.py3-none-any.whl (58kB)
    100% |################################| 61kB 10.3MB/s 
Collecting certifi>=2017.4.17 (from requests>=2.21.0->hvac->-r requirements.txt (line 5))
  Downloading https://files.pythonhosted.org/packages/60/75/f692a584e85b7eaba0e03827b3d51f45f571c2e793dd731e598828d380aa/certifi-2019.3.9-py2.py3-none-any.whl (158kB)
    100% |################################| 163kB 7.3MB/s 
Collecting chardet<3.1.0,>=3.0.2 (from requests>=2.21.0->hvac->-r requirements.txt (line 5))
  Using cached https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl
Collecting urllib3<1.25,>=1.21.1 (from requests>=2.21.0->hvac->-r requirements.txt (line 5))
  Downloading https://files.pythonhosted.org/packages/01/11/525b02e4acc0c747de8b6ccdab376331597c569c42ea66ab0a1dbd36eca2/urllib3-1.24.3-py2.py3-none-any.whl (118kB)
    100% |################################| 122kB 9.2MB/s 
Collecting pycparser (from cffi!=1.11.3,>=1.8->cryptography->ansible>=2.7.8->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/68/9e/49196946aee219aead1290e00d1e7fdeab8567783e83e1b9ab5585e6206a/pycparser-2.19.tar.gz (158kB)
    100% |################################| 163kB 7.4MB/s 
Requirement already satisfied: pyparsing in /opt/ActivePython-3.6/lib/python3.6/site-packages (from packaging>=16.8->setuptools->ansible>=2.7.8->-r requirements.txt (line 1))
Installing collected packages: jinja2, PyYAML, asn1crypto, pycparser, cffi, cryptography, bcrypt, pynacl, pyasn1, paramiko, ansible, netaddr, pbr, idna, certifi, chardet, urllib3, requests, hvac, jmespath, ruamel.yaml
  Running setup.py install for PyYAML ... done
  Running setup.py install for pycparser ... done
  Running setup.py install for ansible ... done
  Found existing installation: requests 2.12.5
    Uninstalling requests-2.12.5:
      Successfully uninstalled requests-2.12.5
Successfully installed PyYAML-5.1 ansible-2.7.10 asn1crypto-0.24.0 bcrypt-3.1.6 certifi-2019.3.9 cffi-1.12.3 chardet-3.0.4 cryptography-2.6.1 hvac-0.8.2 idna-2.8 jinja2-2.10.1 jmespath-0.9.4 netaddr-0.7.19 paramiko-2.4.2 pbr-5.2.0 pyasn1-0.4.5 pycparser-2.19 pynacl-1.3.0 requests-2.21.0 ruamel.yaml-0.15.94 urllib3-1.24.3

Note: to finally get Py2 and Py3 to work at the same time, I just went to /opt/ActivePython-3.6/, where the package had been expanded, and installed it there.  Then I ran "sudo /opt/ActivePython-3.6/bin/pip3 install -r requirements.txt" as you see above.

On my orchestration host, I ran the following to create SSH keys:

ssh-keygen -t rsa

I then copied the public key to each host:

ssh-copy-id -i ~/.ssh/id_rsa.pub root@173.199.90.97
ssh-copy-id -i ~/.ssh/id_rsa.pub root@149.28.124.229
ssh-copy-id -i ~/.ssh/id_rsa.pub root@144.202.48.221

In each of the above cases, I pulled the root password for that host from the Vultr dashboard.
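
With the keys in place, a cheap sanity check before kicking off the long playbook run is an ad-hoc Ansible command using the raw module (which needs nothing but SSH on the targets); run it from the kubespray checkout:

ansible -i inventory/mycluster/hosts.yml all -u root -m raw -a "hostname"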

Installing Kubespray

Now that we are ready to install, use the playbook to push Kubespray out.  The output below is heavily truncated; the whole process took about 7 minutes.

k8sCoreTest kubespray # ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml

PLAY [localhost] **********************************************************************************************************************************

TASK [Check ansible version >=2.7.8] **************************************************************************************************************
Saturday 11 May 2019  21:30:18 +0000 (0:00:00.088)       0:00:00.088 ********** 
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}
 [WARNING]: Could not match supplied host pattern, ignoring: bastion


PLAY [bastion[0]] *********************************************************************************************************************************
skipping: no hosts matched

PLAY [k8s-cluster:etcd:calico-rr] *****************************************************************************************************************

...snip...

TASK [kubernetes/preinstall : run growpart] *******************************************************************************************************
Saturday 11 May 2019  22:32:09 +0000 (0:00:00.125)       0:07:00.598 ********** 

TASK [kubernetes/preinstall : run xfs_growfs] *****************************************************************************************************
Saturday 11 May 2019  22:32:09 +0000 (0:00:00.132)       0:07:00.731 ********** 

PLAY RECAP ****************************************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0   
node1                      : ok=378  changed=48   unreachable=0    failed=0   
node2                      : ok=241  changed=37   unreachable=0    failed=0   
node3                      : ok=238  changed=19   unreachable=0    failed=0   

Saturday 11 May 2019  22:32:09 +0000 (0:00:00.104)       0:07:00.836 ********** 
=============================================================================== 
kubernetes/kubeadm : Join to cluster ------------------------------------------------------------------------------------------------------ 17.33s
etcd : reload etcd ------------------------------------------------------------------------------------------------------------------------ 10.95s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources -------------------------------------------------------------------------------- 8.75s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template ---------------------------------------------------------------------- 7.82s
etcd : Configure | Check if etcd cluster is healthy ---------------------------------------------------------------------------------------- 5.60s
etcd : Configure | Check if etcd cluster is healthy ---------------------------------------------------------------------------------------- 5.57s
kubernetes-apps/network_plugin/calico : Start Calico resources ----------------------------------------------------------------------------- 4.29s
network_plugin/calico : Calico | Create calico manifests ----------------------------------------------------------------------------------- 3.83s
kubernetes/master : slurp kubeadm certs ---------------------------------------------------------------------------------------------------- 3.75s
policy_controller/calico : Create calico-kube-controllers manifests ------------------------------------------------------------------------ 3.53s
download : Download items ------------------------------------------------------------------------------------------------------------------ 3.36s
gather facts from all instances ------------------------------------------------------------------------------------------------------------ 3.24s
policy_controller/calico : Start of Calico kube controllers -------------------------------------------------------------------------------- 3.02s
download : container_download | download images for kubeadm config images ------------------------------------------------------------------ 2.77s
kubernetes/preinstall : Hosts | Extract existing entries for localhost from hosts file ----------------------------------------------------- 2.52s
download : Sync items ---------------------------------------------------------------------------------------------------------------------- 2.45s
download : file_download | Download item (all) --------------------------------------------------------------------------------------------- 2.42s
download : Download items ------------------------------------------------------------------------------------------------------------------ 2.34s
kubernetes/master : Backup old certs and keys ---------------------------------------------------------------------------------------------- 2.27s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down nodelocaldns Template ----------------------------------------------------------------- 2.26s

Testing

Go to your master node and run commands from there. So, again, we've ssh'ed to our installer node, and now we'll ssh to node1, our master from the hosts.yml above:

ssh root@149.28.124.229
Last login: Sun May 12 02:16:56 2019 from 45.76.230.208
Container Linux by CoreOS stable (2079.3.0)
node1 ~ # kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6d77964746-r9jg2   1/1     Running   0          3h54m
kube-system   calico-node-2pd56                          1/1     Running   0          3h54m
kube-system   calico-node-j4s2s                          1/1     Running   0          3h54m
kube-system   calico-node-vxj96                          1/1     Running   0          3h54m
kube-system   coredns-56bc6b976d-2jvlw                   1/1     Running   0          3h54m
kube-system   coredns-56bc6b976d-9c5sm                   1/1     Running   0          3h54m
kube-system   dns-autoscaler-56c969bdb8-8tg46            1/1     Running   0          3h54m
kube-system   kube-apiserver-node1                       1/1     Running   0          4h33m
kube-system   kube-controller-manager-node1              1/1     Running   1          4h34m
kube-system   kube-proxy-9lcbl                           1/1     Running   0          3h54m
kube-system   kube-proxy-fgn2s                           1/1     Running   0          3h54m
kube-system   kube-proxy-j24v5                           1/1     Running   0          3h54m
kube-system   kube-scheduler-node1                       1/1     Running   1          4h34m
kube-system   kubernetes-dashboard-6c7466966c-cqqnr      1/1     Running   0          3h54m
kube-system   nginx-proxy-node2                          1/1     Running   0          3h55m
kube-system   nginx-proxy-node3                          1/1     Running   0          3h55m
kube-system   nodelocaldns-8ts6x                         1/1     Running   0          3h54m
kube-system   nodelocaldns-c4b4h                         1/1     Running   0          3h54m
kube-system   nodelocaldns-sv54s                         1/1     Running   0          3h54m

You can get the kubeconfig from this master node.  Just base64-encode it so you can capture the whole thing to your clipboard:

node1 ~ # cat ~/.kube/config | base64
YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIGNlcnRpZmljYXRlLWF1dGhv
cml0eS1kYXRhOiBMUzB0TFMxQ1JVZEpUaUJEUlZKVVNVWkpRMEZVUlMwdExTMHRDazFKU1VONVJF
TkRRV0pEWjBGM1NVSkJaMGxDUVVSQlRrSm5hM0ZvYTJsSE9YY3dRa0ZSYzBaQlJFRldUVkpOZDBW
UldVUldVVkZFUlhkd2NtUlhTbXdLWTIwMWJHUkhWbnBOUWpSWVJGUkZOVTFFVlhoTlZFbDRUbFJC
ZWsweGIxaEVWRWsxVFVSVmQwOUVTWGhPVkVGNlRURnZkMFpVUlZSTlFrVkhRVEZWUlFwQmVFMUxZ
VE5XYVZwWVNuVmFXRkpzWTNwRFEwRlRTWGRFVVZsS1MyOWFTV2gyWTA1QlVVVkNRbEZCUkdkblJW
…..

Copy that to the clipboard and go back to your laptop/machine.  Paste it into a file and then base64-decode it to verify.  E.g. on my Mac:

$ vi test-config.b64
$ cat test-config.b64 | base64 --decode
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNU1EVXhNVEl4TlRBek0xb1hEVEk1TURVd09ESXhOVEF6TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2
….

To use it, just decode it into a file:

$ cat test-config.b64 | base64 --decode > vultr.kubeconfig

And we can now leverage it locally.
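
If the base64 copy/paste dance feels clumsy, scp'ing the same file works too (assuming root SSH to the master is still open):

$ scp root@149.28.124.229:/root/.kube/config ./vultr.kubeconfig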

$ kubectl --kubeconfig=vultr.kubeconfig get nodes
NAME    STATUS   ROLES    AGE     VERSION
node1   Ready    master   4h37m   v1.14.1
node2   Ready    <none>   3h58m   v1.14.1
node3   Ready    <none>   3h58m   v1.14.1

Using our Cluster

Let's put the kubeconfig in the standard location (for Helm to use), install Tiller, and set up RBAC:

$ mv vultr.kubeconfig ~/.kube/config 
$ kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
node1   Ready    master   4h41m   v1.14.1
node2   Ready    <none>   4h1m    v1.14.1
node3   Ready    <none>   4h1m    v1.14.1
$ helm init
$HELM_HOME has been configured at /Users/isaac.johnson/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
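
Before installing any charts, it's worth confirming the patched Tiller deployment has rolled out and Helm can talk to it:

$ kubectl -n kube-system rollout status deploy/tiller-deploy
$ helm version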

Next let’s install Sonarqube.

I’ve mentioned this in prior blog posts, but the primary reason I like using Sonarqube as an example app is that it has the key components of a good example Kubernetes app: a database backend with a persistent volume claim and a password stored in a secret, plus an app front end that uses that secret and fronts itself with a load balancer and a public IP.  In a way, it exercises all the key pieces of Kubernetes.

$ helm install stable/sonarqube
NAME:   inky-bison
LAST DEPLOYED: Sat May 11 21:34:07 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                  DATA  AGE
inky-bison-sonarqube-config           0     1s
inky-bison-sonarqube-copy-plugins     1     1s
inky-bison-sonarqube-install-plugins  1     1s
inky-bison-sonarqube-tests            1     1s

==> v1/PersistentVolumeClaim
NAME                   STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
inky-bison-postgresql  Pending  1s

==> v1/Pod(related)
NAME                                   READY  STATUS             RESTARTS  AGE
inky-bison-postgresql-6cc68797-glkkh   0/1    Pending            0         1s
inky-bison-sonarqube-699f8c555f-pwqsc  0/1    ContainerCreating  0         1s

==> v1/Secret
NAME                   TYPE    DATA  AGE
inky-bison-postgresql  Opaque  1     1s

==> v1/Service
NAME                   TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)         AGE
inky-bison-postgresql  ClusterIP     10.233.53.228  <none>       5432/TCP        1s
inky-bison-sonarqube   LoadBalancer  10.233.22.133  <pending>    9000:32647/TCP  1s

==> v1beta1/Deployment
NAME                   READY  UP-TO-DATE  AVAILABLE  AGE
inky-bison-postgresql  0/1    1           0          1s
inky-bison-sonarqube   0/1    1           0          1s


NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w inky-bison-sonarqube'
  export SERVICE_IP=$(kubectl get svc --namespace default inky-bison-sonarqube -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:9000

The request for the public IP didn’t work too hot:

$ kubectl get svc --namespace default inky-bison-sonarqube -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Unable to connect to the server: http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""


node1 ~ # kubectl get svc --namespace default inky-bison-sonarqube -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Unable to connect to the server: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""

Though we do see Sonarqube listed as successfully deployed

$ helm list


NAME      	REVISION	UPDATED                 	STATUS  	CHART          	APP VERSION	NAMESPACE
inky-bison	1       	Sat May 11 21:34:07 2019	DEPLOYED	sonarqube-1.0.0	7.7        	default 

So we know the public IP part failed because Vultr doesn't really have a load-balancer service that Kubespray could leverage for that.  What about our volume claims?

$ kubectl get storageclass
NAME                 PROVISIONER            AGE
standard (default)   kubernetes.io/cinder   20m

$ kubectl get pvc/exegetical-porcupine-postgresql
NAME                              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
exegetical-porcupine-postgresql   Pending                                      standard       31s
AHD-MBP13-048:current isaac.johnson$ kubectl get pvc/exegetical-porcupine-postgresql
NAME                              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
exegetical-porcupine-postgresql   Pending                                      standard       35s
AHD-MBP13-048:current isaac.johnson$ kubectl describe pvc/exegetical-porcupine-postgresql
Name:          exegetical-porcupine-postgresql
Namespace:     default
StorageClass:  standard
Status:        Pending
Volume:        
Labels:        app=exegetical-porcupine-postgresql
               chart=postgresql-0.8.3
               heritage=Tiller
               release=exegetical-porcupine
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Events:
  Type       Reason              Age                From                         Message
  ----       ------              ----               ----                         -------
  Warning    ProvisioningFailed  11s (x3 over 41s)  persistentvolume-controller  Failed to provision volume with StorageClass "standard": OpenStack cloud provider was not initialized properly : stat /etc/kubernetes/cloud-config: no such file or directory
Mounted By:  exegetical-porcupine-postgresql-69b8989d7c-6bsg5

It seems that while "standard" should have offered standard storage, it is backed by the OpenStack Cinder provisioner, which isn't going to work on Vultr.  We can do it a bit hacky and force local node storage, provided we make some volumes on each host.

$ cat make-storage-class.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard
provisioner: kubernetes.io/cinder
parameters:
  type: iscsi
  availability: nova
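
Since nothing on Vultr will ever answer that cinder provisioner, a cleaner (though still manual) alternative would be a no-provisioner class for statically created local volumes.  A minimal sketch - not what I ended up using below, where the volumes are plain hostPath PVs with a manual storageClassName:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer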

Make a mount point on each host that we'll use for the hostPath volumes:

mkdir /pvc-mount
chmod 777 /pvc-mount
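
Rather than ssh'ing into each node by hand, a quick loop over the node IPs from the orchestration host does the same thing:

for ip in 149.28.124.229 144.202.48.221 173.199.90.97; do
  ssh root@$ip 'mkdir -p /pvc-mount && chmod 777 /pvc-mount'
done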

Create the PersistentVolume YAML to use it:

vultr ~ # cat minio-volume.yaml 
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/pvc-mount"

Then apply it

kubectl create -f minio-volume.yaml

Once we have a volume, we can create a PVC against it manually:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
      
vultr ~ # kubectl create -f minio-pvc.yaml 
persistentvolumeclaim/minio-pv-claim created
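
A quick check that the claim actually bound to the hand-made volume (both should report Bound):

vultr ~ # kubectl get pv minio-pv-volume
vultr ~ # kubectl get pvc minio-pv-claim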

Now let’s use that in our Sonarqube chart.  We'll need the values.yaml first.

k8sCoreTest sqube # wget https://raw.githubusercontent.com/helm/charts/master/stable/sonarqube/values.yaml
--2019-05-12 15:53:54--  https://raw.githubusercontent.com/helm/charts/master/stable/sonarqube/values.yaml
Resolving raw.githubusercontent.com... 151.101.184.133
Connecting to raw.githubusercontent.com|151.101.184.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6513 (6.4K) [text/plain]
Saving to: 'values.yaml'

values.yaml                    100%[==================================================>]   6.36K  --.-KB/s    in 0s      

2019-05-12 15:53:54 (128 MB/s) - 'values.yaml' saved [6513/6513]

Here you can see the change we made to use that PVC we created:

k8sCoreTest sqube # cp values.yaml values.yaml.old
k8sCoreTest sqube # diff values.yaml values.yaml.old 
104c104
<   enabled: true
---
>   enabled: false
107c107
<   existingClaim: minio-pv-claim
---
>   # existingClaim:
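
In other words, the persistence block in the edited values.yaml ends up roughly like this (trimmed to the two fields we touched; the parent key is persistence: in the chart version I pulled):

persistence:
  enabled: true
  existingClaim: minio-pv-claim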

And we can relaunch with it:

$ helm install stable/sonarqube -f values.yaml 
NAME:   nihilist-wolf
LAST DEPLOYED: Sun May 12 10:57:19 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                     DATA  AGE
nihilist-wolf-sonarqube-config           0     1s
nihilist-wolf-sonarqube-copy-plugins     1     1s
nihilist-wolf-sonarqube-install-plugins  1     1s
nihilist-wolf-sonarqube-tests            1     1s

==> v1/PersistentVolumeClaim
NAME                      STATUS   VOLUME         CAPACITY  ACCESS MODES  STORAGECLASS  AGE
nihilist-wolf-postgresql  Pending  local-storage  1s

==> v1/Pod(related)
NAME                                      READY  STATUS             RESTARTS  AGE
nihilist-wolf-postgresql-7d5bdb4f7-dcrs2  0/1    Pending            0         1s
nihilist-wolf-sonarqube-58d45dc644-rgb98  0/1    ContainerCreating  0         1s

==> v1/Secret
NAME                      TYPE    DATA  AGE
nihilist-wolf-postgresql  Opaque  1     1s

==> v1/Service
NAME                      TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)         AGE
nihilist-wolf-postgresql  ClusterIP     10.233.13.182  <none>       5432/TCP        1s
nihilist-wolf-sonarqube   LoadBalancer  10.233.26.251  <pending>    9000:30308/TCP  1s

==> v1beta1/Deployment
NAME                      READY  UP-TO-DATE  AVAILABLE  AGE
nihilist-wolf-postgresql  0/1    1           0          1s
nihilist-wolf-sonarqube   0/1    1           0          1s


NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w nihilist-wolf-sonarqube'
  export SERVICE_IP=$(kubectl get svc --namespace default nihilist-wolf-sonarqube -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:9000

Once I hand-jammed the two persistent volumes and claims (one for Postgres and one for the app), I managed to get the pods to run.

$ kubectl get pods
NAME                                       READY   STATUS    RESTARTS   AGE
nihilist-wolf-postgresql-7d5bdb4f7-gzlgl   1/1     Running   0          38m
nihilist-wolf-sonarqube-58d45dc644-5m7bs   1/1     Running   0          4m

Launching a port-forward showed the app was working:

$ kubectl port-forward nihilist-wolf-sonarqube-58d45dc644-5m7bs 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
Handling connection for 9000
Handling connection for 9000
Handling connection for 9000

However, there were some database issues:

And trying many times with fresh pods always failed:

I was hoping the logs might clue me in as to why:

$ kubectl logs nihilist-wolf-sonarqube-58d45dc644-5m7bs
2019.05.12 16:53:53 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2019.05.12 16:53:53 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2019.05.12 16:53:53 INFO  app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2019.05.12 16:53:53 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2019.05.12 16:53:53 INFO  app[][o.e.p.PluginsService] no modules loaded
2019.05.12 16:53:53 INFO  app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2019.05.12 16:54:13 INFO  app[][o.s.a.SchedulerImpl] Process[es] is up
2019.05.12 16:54:13 INFO  app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/opt/sonarqube]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/./urandom -cp ./lib/common/*:/opt/sonarqube/lib/jdbc/postgresql/postgresql-42.2.5.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process1859407970706007506properties
2019.05.12 16:54:13 INFO  web[][o.s.p.ProcessEntryPoint] Starting web
2019.05.12 16:54:14 INFO  web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2019.05.12 16:54:16 INFO  web[][o.e.p.PluginsService] no modules loaded
2019.05.12 16:54:16 INFO  web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.join.ParentJoinPlugin]
2019.05.12 16:54:16 INFO  web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
2019.05.12 16:54:16 INFO  web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2019.05.12 16:54:18 INFO  web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2019.05.12 16:54:18 INFO  web[][o.s.s.p.LogServerVersion] SonarQube Server / 7.7.0.23042 / 1dcac8b8de36b377a1810cc8f1c4c31744e12729
2019.05.12 16:54:18 INFO  web[][o.sonar.db.Database] Create JDBC data source for jdbc:postgresql://nihilist-wolf-postgresql:5432/sonarDB
2019.05.12 16:54:20 INFO  web[][o.s.s.p.ServerFileSystemImpl] SonarQube home: /opt/sonarqube
2019.05.12 16:54:20 INFO  web[][o.s.s.u.SystemPasscodeImpl] System authentication by passcode is disabled
2019.05.12 16:54:21 WARN  web[][o.s.s.p.DatabaseServerCompatibility] Database must be upgraded. Please backup database and browse /setup
2019.05.12 16:54:21 WARN  app[][startup] 
################################################################################
      Database must be upgraded. Please backup database and browse /setup
################################################################################
2019.05.12 16:54:21 INFO  web[][o.s.s.p.d.m.c.PostgresCharsetHandler] Verify that database charset supports UTF8
2019.05.12 16:54:21 INFO  web[][o.s.s.p.Platform] Database needs migration
2019.05.12 16:54:21 INFO  web[][o.s.s.p.w.MasterServletFilter] Initializing servlet filter org.sonar.server.ws.WebServiceFilter@44572b9f [pattern=UrlPattern{inclusions=[/api/system/migrate_db.*, ...], exclusions=[/api/properties*, ...]}]
2019.05.12 16:54:21 INFO  web[][o.s.s.a.EmbeddedTomcat] HTTP connector enabled on port 9000
2019.05.12 16:56:43 INFO  web[][o.s.s.p.d.m.DatabaseMigrationImpl] Starting DB Migration and container restart
2019.05.12 16:56:43 INFO  web[][DbMigrations] Executing DB migrations...
2019.05.12 16:56:43 INFO  web[][DbMigrations] #2123 'Rename SUBMITTER_LOGIN TO SUBMITTER_UUID on table CE_ACTIVITY'...
2019.05.12 16:56:43 ERROR web[][DbMigrations] #2123 'Rename SUBMITTER_LOGIN TO SUBMITTER_UUID on table CE_ACTIVITY': failure | time=39ms
2019.05.12 16:56:43 ERROR web[][DbMigrations] Executed DB migrations: failure | time=42ms
2019.05.12 16:56:43 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration failed | time=213ms
2019.05.12 16:56:43 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration ended with an exception
org.sonar.server.platform.db.migration.step.MigrationStepExecutionException: Execution of migration step #2123 'Rename SUBMITTER_LOGIN TO SUBMITTER_UUID on table CE_ACTIVITY' failed
	at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:79)
	at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:67)
	at java.lang.Iterable.forEach(Iterable.java:75)
	at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:52)
	at org.sonar.server.platform.db.migration.engine.MigrationEngineImpl.execute(MigrationEngineImpl.java:68)
	at org.sonar.server.platform.db.migration.DatabaseMigrationImpl.doUpgradeDb(DatabaseMigrationImpl.java:105)
	at org.sonar.server.platform.db.migration.DatabaseMigrationImpl.doDatabaseMigration(DatabaseMigrationImpl.java:80)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Fail to execute ALTER TABLE ce_activity RENAME COLUMN submitter_login TO submitter_uuid
	at org.sonar.server.platform.db.migration.step.DdlChange$Context.execute(DdlChange.java:97)
	at org.sonar.server.platform.db.migration.step.DdlChange$Context.execute(DdlChange.java:77)
	at org.sonar.server.platform.db.migration.step.DdlChange$Context.execute(DdlChange.java:117)
	at org.sonar.server.platform.db.migration.version.v72.RenameSubmitterLoginToSubmitterUuidOnTableCeActivity.execute(RenameSubmitterLoginToSubmitterUuidOnTableCeActivity.java:41)
	at org.sonar.server.platform.db.migration.step.DdlChange.execute(DdlChange.java:45)
	at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:75)
	... 9 common frames omitted
Caused by: org.postgresql.util.PSQLException: ERROR: column "submitter_login" does not exist
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
	at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
	at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:175)
	at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:175)
	at org.sonar.server.platform.db.migration.step.DdlChange$Context.execute(DdlChange.java:82)
	... 14 common frames omitted

And even worse, the cluster performance wasn't great. I could see the connection had stability issues:

It was about that time the k8s cluster became unresponsive altogether:

k8sCoreTest ~ # ssh root@149.28.124.229
Last login: Sun May 12 20:21:06 2019 from 45.76.230.208
Container Linux by CoreOS stable (2079.3.0)
vultr ~ # kubectl get pods
Unable to connect to the server: unexpected EOF
$ kubectl get pods
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)

$ kubectl get pods
Unable to connect to the server: http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""

At this point I decided that while I like Postgres, maybe I should try the other choice (even though MySQL is technically deprecated):

Relaunching with MySQL

And I finally got it working.

I needed to switch to MySQL as the Postgres migration steps kept failing with the latest “stable” build (clearly not as ‘stable’ as they might imply).
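
The switch itself is just another values override.  A rough sketch of the sort of file involved - I believe the chart's toggles are postgresql.enabled and mysql.enabled, but verify the exact keys against the values.yaml you pulled from the chart:

# values-mysql.yaml - hypothetical override; double-check key names against the chart
postgresql:
  enabled: false
mysql:
  enabled: true

$ helm install stable/sonarqube -f values-mysql.yaml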

Just as before, I had to create, by hand, persistent volumes for both the app and the database:

vultr ~ # find . -name \*.yaml -exec cat {} \; -print
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume3
  labels:
    type: local
spec:
  storageClassName: manual2
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/pvc-mount"
./minio-volume.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nomadic-whale-mysql
  labels:
    app: minio-storage-claim5
spec:
  storageClassName: manual5
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
./sqube-pvc.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: minio-pv-volume5
  labels:
    type: local
spec:
  storageClassName: manual5
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/pvc-mount2"
./minio-sqube-vol.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  labels:
    app: minio-storage-claim2
spec:
  storageClassName: manual2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
./minio-pvc.yaml

And this did mean cleaning up the earlier PVC before relaunching:

$ kubectl delete pvc/nihilist-wolf-postgresql
$ kubectl create -f ./minio-sqube-vol.yaml 
$ kubectl create -f sqube-pvc.yaml 

I also found (perhaps errantly) that I needed to create the underlying mounts on all the nodes or I sometimes got an error about not finding a mount point, so I zipped through the 3 nodes and created /pvc-mount and /pvc-mount2:

$ mkdir /pvc-mount2
$ chmod 777 /pvc-mount2

I likely could have used ansible with the inventory to do the same thing.
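
Something like this ad-hoc run against the same inventory would have done it, again using the raw module so nothing depends on where Python lives on the CoreOS nodes:

ansible -i inventory/mycluster/hosts.yml all -u root -m raw \
  -a "mkdir -p /pvc-mount /pvc-mount2 && chmod 777 /pvc-mount /pvc-mount2"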

And in all of this, there is no “public IP” provider, so one should expect the EXTERNAL-IP to remain pending forever:

vultr ~ # kubectl get svc
NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes                ClusterIP      10.233.0.1     <none>        443/TCP          22h
nomadic-whale-mysql       ClusterIP      10.233.9.221   <none>        3306/TCP         38m
nomadic-whale-sonarqube   LoadBalancer   10.233.47.13   <pending>     9000:30785/TCP   38m
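
The consolation is that a LoadBalancer service still allocates a NodePort, so the app is reachable on any node's public IP at that port (30785 above), provided nothing upstream is filtering it:

$ curl -I http://149.28.124.229:30785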

Terraform

I tried on several occasions to get Terraform to work as well.  In the end, while a cluster appeared to be created, no form of SSH key seemed to work.  The nodes themselves did not expose a user/password, and the terminal window was left at a login prompt.  Even when the nodes did come up, they were left in a state of complete inaccessibility.

  1. Download terraform modules
  2. Expand into ~/.terraform.d/plugins
  3. terraform init
  4. Note: put your id_rsa.pub somewhere tf can read, and ssh-add it as well (see the sketch after this list)
  5. You can put whatever domain name in there - it will be registered for you!
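
For step 4, the ssh-agent part is just the usual (assuming the default key path):

eval $(ssh-agent)
ssh-add ~/.ssh/id_rsa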

When you do run it, note that it is a huge plan and will fail.

module.typhoon.null_resource.bootkube_start: Still creating... (19m50s elapsed)
module.typhoon.null_resource.bootkube_start: Still creating... (22m29s elapsed)
module.typhoon.null_resource.bootkube_start: Still creating... (22m39s elapsed)

Error: Error applying plan:

1 error(s) occurred:

* module.typhoon.null_resource.bootkube_start: error executing "/tmp/terraform_1831028436.sh": Process exited with status 1

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

You won't see a login on the dashboard, and the interactive sessions are left at a login prompt:

And the SSH key, whether referenced by file or pasted in as a value, never seemed to work:

Summary

I spent almost a full week trying to get Vultr to work as a basic k8s host.  I tried Terraform and Kubespray multiple times. I had to hand-modify all the hosts and resorted to rigging persistent volumes manually just to get a demo to work.

Even after all that, during my initial changing of the Sonarqube admin password, the cluster vomited and died, and I had to bounce all the hosts again as well as switch underlying database engines.

My wife said I was acting “obsessed” at times, having to pull out the laptop to have another go at the Vultr stack.  We were outside on Mother’s Day when I finally held up the laptop and shouted “Finally! It works!” (only to have the aforementioned cluster crash).

In all that time, however, I never came close to burning through the $50 initial credit:

You are welcome to use this referral link to also get $50 to try them out (and perhaps you can let me know if you find it a better experience than I did).