LKE: A solid k8s offering from Linode

Linode announced LKE this year, and while still in private beta, it’s looking quite good for a release any day now (they are actively updating as I write).  Just this past week I was invited to try out LKE and am excited to share this early review (noting it’s still in development and still in beta).

Creating a cluster from the console:

Kubernetes will appear as a left-hand option and as an option in the main Create dropdown

Clicking Create will take us to the area where we can set location, size, count, and version.

Right now the only region available is Dallas, TX (us-central)

We can add multiple pools of different sizes

The UI appears to allow us to create a cluster with several node pool sizes, which sets it apart from the standard offerings from AWS and Azure:

Lastly, we can select the version (only 1.16 is available right now) and choose Create:

The resulting details page allows us to download the kubeconfig and see our current monthly costs:

The dashboard also lets us horizontally scale any of our node pools:

The kubeconfig worked right out of the box, which, considering how quickly the cluster was created, impressed me (sometimes properly generating the SSL certs and config takes a few minutes):
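
If you want to try the downloaded config without clobbering an existing one, you can point kubectl at the file directly (the filename here is just illustrative; use whatever the console gave you):

$ export KUBECONFIG=~/Downloads/kubeconfig.yaml
$ kubectl get nodes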

It’s a pretty basic cluster.  There was no pre-installed dashboard, Istio, or RBAC of any type.

In a way, I like that.  It allows us to configure the cluster just as we want it.  The only real difference between this and a k3s cluster would be the Linode CNI pods.

Adding a dashboard is easy:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-dc6cb64cb-ckjtq      1/1     Running   0          9m43s
kube-system            calico-node-hwff2                            1/1     Running   0          8m22s
kube-system            calico-node-mqcs7                            1/1     Running   0          8m39s
kube-system            calico-node-t67p6                            1/1     Running   0          8m16s
kube-system            coredns-5644d7b6d9-5zg87                     1/1     Running   0          9m42s
kube-system            coredns-5644d7b6d9-t2sdp                     1/1     Running   0          9m43s
kube-system            csi-linode-controller-0                      3/3     Running   0          9m42s
kube-system            csi-linode-node-9dzps                        2/2     Running   0          7m36s
kube-system            csi-linode-node-p4t2k                        2/2     Running   0          7m19s
kube-system            csi-linode-node-pxjz9                        2/2     Running   0          7m1s
kube-system            kube-proxy-hqw5j                             1/1     Running   0          8m22s
kube-system            kube-proxy-lsv65                             1/1     Running   0          8m39s
kube-system            kube-proxy-n7gbv                             1/1     Running   0          8m16s
kubernetes-dashboard   dashboard-metrics-scraper-76585494d8-gwntc   1/1     Running   0          58s
kubernetes-dashboard   kubernetes-dashboard-b65488c4-trmxs          1/1     Running   0          59s

$ kubectl proxy
Starting to serve on 127.0.0.1:8001
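
With the proxy running, the dashboard should be reachable at the usual path for the recommended deployment:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/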

Note: you’ll need a bearer token to access the dashboard:

$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

I did have trouble with the RBAC user it created, so I ended up needing to create another one:

builder@DESKTOP-2SQ9NQM:/mnt/c/Users/isaac/Downloads$ cat newsa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard2
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard2
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard2
  namespace: kubernetes-dashboard


builder@DESKTOP-2SQ9NQM:/mnt/c/Users/isaac/Downloads$ kubectl apply -f newsa.yaml
serviceaccount/kubernetes-dashboard2 created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard2 created

$ kubectl -n kubernetes-dashboard describe secret kubernetes-dashboard2-token-rd4rg
Name:         kubernetes-dashboard2-token-rd4rg
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard2
              kubernetes.io/service-account.uid: 978fad7e-38b1-427b-b323-8e4a8d3ea072

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlpDQ2M2MHNERE5Val9tNXRUZllkandlY214NjR3X0RXQ0JRcDFzX1JENGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZDItdG9rZW4tcmQ0cmciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3ViZXJuZXRlcy1kYXNoYm9hcmQyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTc4ZmFkN2UtMzhiMS00MjdiLWIzMjMtOGU0YThkM2VhMDcyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmt1YmVybmV0ZXMtZGFzaGJvYXJkMiJ9.UqadLc9faUyGhYiIW4_9z3pw_sqOCcNNtp_E1-VZKqb5f61204ej3i_-lLNaiV7KWpo1z_4ZNqOBIvUwqnKbmqVpF8Qey-ToH90SdkbvJNCkK0Rkw66DzTE3SB_I7wvzxafZlw3WzqGJM5k5iatyqRTAoDIocP2WeTlLWOUDlzFwh-zTMp3tayN0o4-JoJbnS3VbTW5mmL1OLr4ee3kmf-b4jWAJVX_Xk6JVDk5Ga7iJoGNnoIQJhXl8foSdb1Qem_heFOy6O488l8rpigvbbCGjkTz9iS9oDXPBN4TNw-3HYV03vmfRmCYn0eB4mBwo2Ekhfz5JFftr23xOqg4nDg
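
If you’d rather not hunt for the generated secret name by hand, a one-liner like this should pull the token directly (a sketch that relies on the token secret 1.16 still auto-creates for service accounts):

$ kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa kubernetes-dashboard2 -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode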

Relaunched with the right token:

dashboard launched via proxy

We can see we have a default storage class already defined:
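
The same check from the command line is just:

$ kubectl get storageclass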

Testing With SonarQube

First, let’s get Helm:

builder@DESKTOP-2SQ9NQM:/mnt/c/Users/isaac/Downloads$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6617  100  6617    0     0  12253      0 --:--:-- --:--:-- --:--:-- 12276
builder@DESKTOP-2SQ9NQM:/mnt/c/Users/isaac/Downloads$ chmod 755 ./get_helm.sh
builder@DESKTOP-2SQ9NQM:/mnt/c/Users/isaac/Downloads$ ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm

Then add a Helm repository:

builder@DESKTOP-2SQ9NQM:/mnt/c/Users/isaac/Downloads$ helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
"banzaicloud-stable" has been added to your repositories
builder@DESKTOP-2SQ9NQM:/mnt/c/Users/isaac/Downloads$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "banzaicloud-stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

And launch a chart:

builder@DESKTOP-2SQ9NQM:/mnt/c/Users/isaac/Downloads$ helm install banzaicloud-stable/sonarqube --generate-name
Error: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2"]

When this failed, I pulled the GitHub stable charts and tried installing manually:

builder@DESKTOP-2SQ9NQM:/mnt/c/Users/isaac/Downloads/charts/stable$ helm install ./sonarqube --generate-name
Error: found in Chart.yaml, but missing in charts/ directory: postgresql, mysql
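
That particular error is only about missing subcharts; helm dependency update would pull those down, though it wouldn’t fix the API-version problem that follows:

$ helm dependency update ./sonarqube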

The underlying issue is that Kubernetes 1.16 removed several long-deprecated API versions (like extensions/v1beta1 for Deployments and apps/v1beta2 for StatefulSets) that these charts still reference, so a lot of charts will need updating to work.
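
For chart maintainers the fix is usually just a bump to the newer API group plus the now-required selector, roughly like this (a generic sketch, not a patch for any particular chart):

# before (API removed in Kubernetes 1.16)
apiVersion: extensions/v1beta1
kind: Deployment

# after
apiVersion: apps/v1
kind: Deployment
spec:
  selector:            # selector is now required
    matchLabels:
      app: my-app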

Getting Helm 2 (with Tiller) into k8s 1.16

While Helm 2 does have a bug with k8s 1.16 (see https://github.com/helm/helm/issues/6374), you can install Helm 2 with Tiller manually:

$ cat ~/Documents/rbac-config.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

JOHNSI10-M1:linode-cli johnsi10$ kubectl create -f /Users/johnsi10/Documents/rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created


$ cat helm_init.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "200"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}

$ kubectl apply -f helm_init.yaml 
deployment.apps/tiller-deploy created
service/tiller-deploy created

$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
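
For reference, the helm_init.yaml above is essentially what the workaround posted in that GitHub issue generates by piping helm init through sed; something like this should produce a similar manifest (a sketch I haven’t re-verified here):

$ helm init --service-account tiller --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@  replicas: 1@  replicas: 1\n  selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' | kubectl apply -f -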

While Helm / Tiller is now running in 1.16, as we said earlier, a lot of the older charts (like stable/sonarqube) aren’t updated for 1.16:

$ helm install stable/sonarqube
Error: validation failed: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"

So we can use another, more current chart to test things out: Artifactory.

Installing Artifactory via Helm into K8s 1.16

$ helm repo add jfrog https://charts.jfrog.io
"jfrog" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "adwerx" chart repository
...Successfully got an update from the "jfrog" chart repository
...Successfully got an update from the "stable" chart repository
$ helm install --name my-release jfrog/artifactory
NAME:   my-release
LAST DEPLOYED: Tue Nov 26 11:41:24 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                           DATA  AGE
my-release-artifactory-installer-info          1     2s
my-release-artifactory-nginx-artifactory-conf  1     2s
my-release-artifactory-nginx-conf              1     2s
my-release-postgresql-configuration            1     2s

==> v1/Deployment
NAME                          READY  UP-TO-DATE  AVAILABLE  AGE
my-release-artifactory-nginx  0/1    1           0          1s

==> v1/NetworkPolicy
NAME                                              AGE
my-release-artifactory-artifactory-networkpolicy  1s

==> v1/Pod(related)
NAME                                           READY  STATUS    RESTARTS  AGE
my-release-artifactory-0                       0/1    Pending   0         1s
my-release-artifactory-nginx-658f4488c4-qq245  0/1    Init:0/1  0         1s
my-release-postgresql-0                        0/1    Pending   0         1s

==> v1/Role
NAME                    AGE
my-release-artifactory  1s

==> v1/RoleBinding
NAME                    AGE
my-release-artifactory  1s

==> v1/Secret
NAME                                      TYPE               DATA  AGE
my-release-artifactory                    Opaque             0     2s
my-release-artifactory-binarystore        Opaque             1     2s
my-release-artifactory-nginx-certificate  kubernetes.io/tls  2     2s
my-release-postgresql                     Opaque             1     2s

==> v1/Service
NAME                            TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
my-release-artifactory          ClusterIP     10.128.197.237  <none>       8081/TCP                    1s
my-release-artifactory-nginx    LoadBalancer  10.128.45.211   <pending>    80:30855/TCP,443:32292/TCP  1s
my-release-postgresql           ClusterIP     10.128.104.182  <none>       5432/TCP                    1s
my-release-postgresql-headless  ClusterIP     None            <none>       5432/TCP                    1s

==> v1/ServiceAccount
NAME                    SECRETS  AGE
my-release-artifactory  1        2s

==> v1/StatefulSet
NAME                    READY  AGE
my-release-artifactory  0/1    1s
my-release-postgresql   0/1    1s


NOTES:
Congratulations. You have just deployed JFrog Artifactory!

1. Get the Artifactory URL by running these commands:

   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
         You can watch the status of the service by running 'kubectl get svc -w my-release-artifactory-nginx'
   export SERVICE_IP=$(kubectl get svc --namespace default my-release-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
   echo http://$SERVICE_IP/

2. Open Artifactory in your browser
   Default credential for Artifactory:
   user: admin
   password: password

Let's get that LoadBalancer's IP:

$ kubectl get svc --namespace default my-release-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
45.79.61.59

And give it a look:

And of course we can delete it just as easily:

$ helm list
NAME      	REVISION	UPDATED                 	STATUS  	CHART            	APP VERSION	NAMESPACE
my-release	1       	Tue Nov 26 11:41:24 2019	DEPLOYED	artifactory-8.2.4	6.15.0     	default  
$ helm delete my-release
release "my-release" deleted
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS        RESTARTS   AGE
default       my-release-artifactory-0                  1/1     Terminating   0          7m50s
default       my-release-postgresql-0                   1/1     Terminating   0          7m50s

Once done, we can delete the cluster:

$ linode-cli lke cluster-delete 390

And see it removed on https://cloud.linode.com/kubernetes/clusters

One thing that won’t get deleted, and frankly I don’t see this as a problem, is PVCs.  Our charts made some persistent volume claims, and those are still sitting out there in https://cloud.linode.com/volumes

PVCs show up as Linode Volumes

You can use the ellipsis menu on those volumes to get instructions for mounting them in case you care about the data:

However, we will just delete them.  The costs, if we forget, won't be that high, but it's worth noting the 20GB volume would be about $2/month and the 50GB about $5/month (as of this writing).
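
That cleanup can also be scripted with the CLI (the volume id below is just an example; take the real ids from the list output):

$ linode-cli volumes list
$ linode-cli volumes delete 123456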

CLI support

I’ve had several threads with tech support about LKE support in the Linode CLI.  The hooks are there:

$ linode-cli lke
linode-cli lke [ACTION]

Available actions: 
┌───────────────────┬──────────────────────────────┐
│ action            │ summary                      │
├───────────────────┼──────────────────────────────┤
│ clusters-list     │ List Kubernetes Clusters     │
│ cluster-create    │ Create Kubernetes Cluster    │
│ cluster-view      │ View Kubernetes Cluster      │
│ cluster-update    │ Update Kubernetes Cluster    │
│ cluster-delete    │ Delete Kubernetes Cluster    │
│ pools-list        │ List Node Pools              │
│ pool-create       │ Create Node Pool             │
│ pool-view         │ View Node Pool               │
│ pool-update       │ Update Node Pool             │
│ pool-delete       │ Delete Node Pool             │
│ api-endpoint-view │ View Kubernetes API Endpoint │
│ kubeconfig-view   │ View Kubeconfig              │
│ versions-list     │ List Kubernetes Versions     │
│ version-view      │ View Kubernetes Version      │
└───────────────────┴──────────────────────────────┘

But there seems to be no way to actually create a cluster presently:

$ linode-cli lke cluster-create
Request failed: 400
┌errors──────┬────────────────────────┐
│ field      │ reason                 │
├────────────┼────────────────────────┤
│ label      │ label is required      │
│ version    │ version is required    │
│ node_pools │ node_pools is required │
└────────────┴────────────────────────┘
$ linode-cli lke cluster-create node_pools
usage: linode-cli [-h] [--label label] [--region region] [--version version]
                  [--tags tags]
linode-cli: error: unrecognized arguments: node_pools
$ linode-cli lke cluster-create --node_pools
usage: linode-cli [-h] [--label label] [--region region] [--version version]
                  [--tags tags]
linode-cli: error: unrecognized arguments: --node_pools
$ linode-cli lke cluster-create -node_pools
usage: linode-cli [-h] [--label label] [--region region] [--version version]
                  [--tags tags]
linode-cli: error: unrecognized arguments: -node_pools
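
If you need to script cluster creation before the CLI catches up, the underlying API should accept the same required fields the error lists; a rough sketch (the endpoint path, field names and node type here are my assumptions, not something I verified against the beta API):

$ curl -X POST https://api.linode.com/v4/lke/clusters \
    -H "Authorization: Bearer $LINODE_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"label": "MyNewLabel", "region": "us-central", "version": "1.16", "node_pools": [{"type": "g6-standard-2", "count": 3}]}'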

While cluster create isn’t quite ready, we can at least list our clusters and node pools:

$ linode-cli lke clusters-list
┌─────┬─────────┬────────────┐
│ id  │ label   │ region     │
├─────┼─────────┼────────────┤
│ 335 │ MyLabel │ us-central │
└─────┴─────────┴────────────┘
$ linode-cli lke pools-list 335
┌─────┬──────────────────────────────────────────────────────────────────────────────┐
│ id  │ linodes                                                                      │
├─────┼──────────────────────────────────────────────────────────────────────────────┤
│ 433 │ {'id': '18594114', 'status': 'ready'}, {'id': '18594115', 'status': 'ready'} │
└─────┴──────────────────────────────────────────────────────────────────────────────┘

The kubeconfig output sort of works… it produces a messy table with the base64-encoded kubeconfig embedded in it.

Delete also works:

$ linode-cli lke cluster-delete 335

One last note on the Linode CLI - and perhaps this is because I compiled it locally while trying to fix LKE CLI issues - the output from the commands is not UTF-8 and tends to make my shell wonky when I try to use it.

So here is how I presently can get the kubeconfig from the CLI:

$ linode-cli lke clusters-list
┌─────┬────────────┬────────────┐
│ id  │ label      │ region     │
├─────┼────────────┼────────────┤
│ 390 │ MyNewLabel │ us-central │
└─────┴────────────┴────────────┘
$ linode-cli lke kubeconfig-view 390 > t.o
$ cat t.o | tail -n2 | head -n1 | sed 's/.\{8\}$//' | sed 's/^.\{8\}//' | base64 --decode > ~/.kube/config
$ kubectl get nodes
NAME                      STATUS   ROLES    AGE   VERSION
lke390-494-5ddd5b6ac759   Ready    <none>   14m   v1.16.2
lke390-494-5ddd5b6accf4   Ready    <none>   14m   v1.16.2
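
A less fragile variant, assuming the CLI's --json flag behaves and the field is named kubeconfig (I haven't confirmed either on this build), would be:

$ linode-cli --json lke kubeconfig-view 390 | jq -r '.[0].kubeconfig' | base64 --decode > ~/.kube/config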

Summary

I have a special place in my heart for cloud providers outside the big three.  It’s very cool to see how companies like DO, Linode and Ali-Cloud are implementing their clusters, and I would expect to see more clouds provide Kubernetes offerings in the near future.

Linode is unapologetically developer focused.  You can get that from their vendor booths and the big tagline “The Developer’s Cloud Simplified” on their site.  I won’t lie, I’ve been pestering them incessantly since they teased LKE at HashiConf this year to get in on the beta.  And so I want to summarize this knowing it’s in beta (or just coming out of beta presently).

LKE is a solid offering.  Like good cloud providers, they offer the control plane gratis (I’m giving you the stink eye, Amazon!).  Their pricing is all-inclusive, so you know the costs right up front.

E.g. a 3-node Standard 8GB/4CPU cluster (so 24GB/12CPU total) is $120 a month presently.  This is a pretty good deal, considering the closest offering from AKS would be 3 F4s_v2 at $125/mo each, not counting per-public-IP charges and data charges. GCE costs for GKE would be $145. And that’s not even identical (using n1-standard-2’s with only 2 CPUs each, not 4) - and no IPs or data included.

Another really cool fact here, and this is perhaps because they are bleeding edge, is that LKE runs 1.16 - that is the only option presently. For me, right now, 1.14.8 is the latest I can create with GKE, and AKS's newest is 1.15.5 (listed as “preview”) with 1.14.8 as its latest.  K8s 1.16 was announced just this year (Sept 18).

While right now the beta has us locked in at k8s 1.16 and only one region (Dallas), I would expect that to expand after release.  For now I think LKE is a great offering and one really worth considering.