Getting Started with K3S

Published: Aug 27, 2019 by Isaac Johnson

With all the Kubernetes talk lately, it caught my eye when k3s started to make its way into my news feeds.  After seeing plenty of k3s stickers at a conference, I felt it was time to dig into it.

K3s is a fully compliant Kubernetes distribution with a few changes:

  1. Legacy, alpha-level, non-default features are removed
  2. Many in-tree addons for various storage and cloud providers have been removed (but can be added back as plugins)
  3. SQLite3 is the default storage backend
  4. Simple launcher (and containerized launch with k3d)
  5. Minimal OS dependencies (cgroups being the one that comes up for me)

Because of these optimizations, k3s runs quite well on edge devices, Raspberry Pis, and older hardware.

Launching into Docker

We can use k3d to create a k3s cluster inside an existing Docker installation.  This makes it easy to launch on a laptop that already has a working Docker.
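If you don't already have k3d, it can typically be installed with Homebrew on a Mac (an assumption about your setup; the project's releases page has binaries for other platforms):

$ brew install k3d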

$ k3d create
Creating cluster k3s_default
Running command: docker run --name k3s_default -e K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml --publish 6443:6443 --privileged -d rancher/k3s:v0.1.0 server --https-listen-port 6443

Created cluster

$ export KUBECONFIG="$(k3d get-kubeconfig)"
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7748f7f6df-z2nqq 1/1 Running 0 56s
kube-system helm-install-traefik-kr5fb 1/1 Running 0 55s
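Before moving on, it's worth confirming the single node registered. And note that, with the k3d version used here, the whole cluster can be torn down later with a single delete (newer k3d releases use k3d cluster delete instead):

$ kubectl get nodes
$ k3d delete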

Add some basic local storage

k3s does not come with any local storage, so our first step should be to add some form of storage for later PVCs.

$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created
$ kubectl get storageclass
NAME PROVISIONER AGE
local-path rancher.io/local-path 16s
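It's also worth checking that the provisioner pod itself came up in the namespace the manifest created:

$ kubectl -n local-path-storage get pods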

Setting up Helm

$ helm list
Error: could not find tiller
$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/isaac.johnson/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Note that Tiller takes a while to come up; you may get an error if you go too fast:

$ helm install stable/sonarqube
Error: could not find a ready tiller pod
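Rather than retrying blindly, you can wait for the Tiller deployment that helm init creates (tiller-deploy in kube-system) to finish rolling out:

$ kubectl -n kube-system rollout status deployment/tiller-deploy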

Search and install with Helm:

$ helm search sonarqube
NAME CHART VERSION	APP VERSION	DESCRIPTION                                            
stable/sonarqube	2.1.4 7.8 Sonarqube is an open sourced code quality scanning tool
$ helm install stable/sonarqube
NAME: torrid-molly
LAST DEPLOYED: Fri Aug 23 21:22:44 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
torrid-molly-sonarqube-config 0 1s
torrid-molly-sonarqube-copy-plugins 1 1s
torrid-molly-sonarqube-install-plugins 1 1s
torrid-molly-sonarqube-tests 1 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
torrid-molly-postgresql Pending 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
torrid-molly-postgresql-6bfd4687fb-26dhn 0/1 Pending 0 0s
torrid-molly-sonarqube-58c8c5cd8d-cl8rk 0/1 Init:0/1 0 0s

==> v1/Secret
NAME TYPE DATA AGE
torrid-molly-postgresql Opaque 1 1s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
torrid-molly-postgresql ClusterIP 10.43.145.229 <none> 5432/TCP 1s
torrid-molly-sonarqube LoadBalancer 10.43.133.73 <pending> 9000:31914/TCP 1s

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
torrid-molly-postgresql 0/1 1 0 1s
torrid-molly-sonarqube 0/1 1 0 0s


NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w torrid-molly-sonarqube'
  export SERVICE_IP=$(kubectl get svc --namespace default torrid-molly-sonarqube -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:9000

But in a moment, we will notice there is an issue:

$ helm list
NAME REVISION	UPDATED STATUS CHART APP VERSION	NAMESPACE
torrid-molly	1 Fri Aug 23 21:22:44 2019	DEPLOYED	sonarqube-2.1.4	7.8 default  
AHD-MBP13-048:k3s isaac.johnson$ kubectl get pods
NAME READY STATUS RESTARTS AGE
svclb-torrid-molly-sonarqube-5c4c4b6d57-pqr8j 1/1 Running 0 12s
torrid-molly-postgresql-6bfd4687fb-26dhn 0/1 Pending 0 12s
torrid-molly-sonarqube-58c8c5cd8d-cl8rk 0/1 PodInitializing 0 12s
AHD-MBP13-048:k3s isaac.johnson$ kubectl get pods
NAME READY STATUS RESTARTS AGE
svclb-torrid-molly-sonarqube-5c4c4b6d57-pqr8j 1/1 Running 0 27s
torrid-molly-postgresql-6bfd4687fb-26dhn 0/1 Pending 0 27s
torrid-molly-sonarqube-58c8c5cd8d-cl8rk 0/1 PodInitializing 0 27s
AHD-MBP13-048:k3s isaac.johnson$ kubectl describe torrid-molly-postgresql-6bfd4687fb-26dhn
error: the server doesn't have a resource type "torrid-molly-postgresql-6bfd4687fb-26dhn"
AHD-MBP13-048:k3s isaac.johnson$ kubectl describe pod torrid-molly-postgresql-6bfd4687fb-26dhn
Name: torrid-molly-postgresql-6bfd4687fb-26dhn
Namespace: default
Priority: 0
Node: <none>
Labels: app=torrid-molly-postgresql
                pod-template-hash=6bfd4687fb
Annotations: <none>
Status: Pending
IP:             
Controlled By: ReplicaSet/torrid-molly-postgresql-6bfd4687fb
Containers:
  torrid-molly-postgresql:
    Image: postgres:9.6.2
    Port: 5432/TCP
    Host Port: 0/TCP
    Requests:
      cpu: 100m
      memory: 256Mi
    Liveness: exec [sh -c exec pg_isready --host $POD_IP] delay=60s timeout=5s period=10s #success=1 #failure=6
    Readiness: exec [sh -c exec pg_isready --host $POD_IP] delay=5s timeout=3s period=5s #success=1 #failure=3
    Environment:
      POSTGRES_USER: sonarUser
      PGUSER: sonarUser
      POSTGRES_DB: sonarDB
      POSTGRES_INITDB_ARGS:  
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD: <set to the key 'postgres-password' in secret 'torrid-molly-postgresql'> Optional: false
      POD_IP: (v1:status.podIP)
    Mounts:
      /var/lib/postgresql/data/pgdata from data (rw,path="postgresql-db")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9x4tq (ro)
Conditions:
  Type Status
  PodScheduled False 
Volumes:
  data:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: torrid-molly-postgresql
    ReadOnly: false
  default-token-9x4tq:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-9x4tq
    Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Warning FailedScheduling 45s (x4 over 47s) default-scheduler pod has unbound immediate PersistentVolumeClaims
AHD-MBP13-048:k3s isaac.johnson$ kubectl describe pod torrid-molly-postgresql-6bfd4687fb-26dhn
Name: torrid-molly-postgresql-6bfd4687fb-26dhn
Namespace: default
Priority: 0
Node: <none>
Labels: app=torrid-molly-postgresql
                pod-template-hash=6bfd4687fb
Annotations: <none>
Status: Pending
IP:             
Controlled By: ReplicaSet/torrid-molly-postgresql-6bfd4687fb
Containers:
  torrid-molly-postgresql:
    Image: postgres:9.6.2
    Port: 5432/TCP
    Host Port: 0/TCP
    Requests:
      cpu: 100m
      memory: 256Mi
    Liveness: exec [sh -c exec pg_isready --host $POD_IP] delay=60s timeout=5s period=10s #success=1 #failure=6
    Readiness: exec [sh -c exec pg_isready --host $POD_IP] delay=5s timeout=3s period=5s #success=1 #failure=3
    Environment:
      POSTGRES_USER: sonarUser
      PGUSER: sonarUser
      POSTGRES_DB: sonarDB
      POSTGRES_INITDB_ARGS:  
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD: <set to the key 'postgres-password' in secret 'torrid-molly-postgresql'> Optional: false
      POD_IP: (v1:status.podIP)
    Mounts:
      /var/lib/postgresql/data/pgdata from data (rw,path="postgresql-db")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9x4tq (ro)
Conditions:
  Type Status
  PodScheduled False 
Volumes:
  data:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: torrid-molly-postgresql
    ReadOnly: false
  default-token-9x4tq:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-9x4tq
    Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Warning FailedScheduling 55s (x5 over 117s) default-scheduler pod has unbound immediate PersistentVolumeClaims
AHD-MBP13-048:k3s isaac.johnson$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
torrid-molly-postgresql Pending 2m16s
AHD-MBP13-048:k3s isaac.johnson$ kubectl describe pvc torrid-molly-postgresql
Name: torrid-molly-postgresql
Namespace: default
StorageClass:  
Status: Pending
Volume:        
Labels: app=torrid-molly-postgresql
               chart=postgresql-0.8.3
               heritage=Tiller
               release=torrid-molly
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode: Filesystem
Mounted By: torrid-molly-postgresql-6bfd4687fb-26dhn
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal FailedBinding 6s (x12 over 2m24s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
AHD-MBP13-048:k3s isaac.johnson$ helm list
NAME REVISION	UPDATED STATUS CHART APP VERSION	NAMESPACE
torrid-molly	1 Fri Aug 23 21:22:44 2019	DEPLOYED	sonarqube-2.1.4	7.8 default  

The error about "pod has unbound immediate PersistentVolumeClaims" occurs because, while we did add the storage class, it wasn't set as the default.  We could simply mark local-path as the default (sketched just below), but let's just delete our install and, this time, launch SonarQube with the specific storage class we have.
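For reference, marking a storage class as the cluster default is the same patch we'll use later for the NFS provisioner; note that the already-pending PVC would still need to be recreated to pick it up:

$ kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Instead, let's delete and reinstall with an explicit storage class: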

$ helm delete torrid-molly
release "torrid-molly" deleted
AHD-MBP13-048:k3s isaac.johnson$ helm install --name mytest2 stable/sonarqube --set=postgresql.persistence.storageClass=local-path,persistence.storageClass=local-path
NAME: mytest2
LAST DEPLOYED: Fri Aug 23 21:29:10 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
mytest2-sonarqube-config 0 1s
mytest2-sonarqube-copy-plugins 1 1s
mytest2-sonarqube-install-plugins 1 1s
mytest2-sonarqube-tests 1 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mytest2-postgresql Pending local-path 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mytest2-postgresql-5b4b974d9d-88pwp 0/1 Pending 0 1s
mytest2-sonarqube-789546477d-wmnjx 0/1 Pending 0 1s

==> v1/Secret
NAME TYPE DATA AGE
mytest2-postgresql Opaque 1 1s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mytest2-postgresql ClusterIP 10.43.245.184 <none> 5432/TCP 1s
mytest2-sonarqube LoadBalancer 10.43.213.158 <pending> 9000:31004/TCP 1s

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mytest2-postgresql 0/1 1 0 1s
mytest2-sonarqube 0/1 1 0 1s


NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w mytest2-sonarqube'
  export SERVICE_IP=$(kubectl get svc --namespace default mytest2-sonarqube -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:9000

AHD-MBP13-048:k3s isaac.johnson$ helm list
NAME REVISION	UPDATED STATUS CHART APP VERSION	NAMESPACE
mytest2	1 Fri Aug 23 21:29:10 2019	DEPLOYED	sonarqube-2.1.4	7.8 default  

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mytest2-postgresql-5b4b974d9d-88pwp 0/1 Pending 0 22s
mytest2-sonarqube-789546477d-wmnjx 0/1 PodInitializing 0 22s
svclb-mytest2-sonarqube-6f6cd9fc7d-dfb9p 1/1 Running 0 22s

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mytest2-postgresql-5b4b974d9d-88pwp 1/1 Running 0 20m
mytest2-sonarqube-789546477d-wmnjx 1/1 Running 4 20m
svclb-mytest2-sonarqube-6f6cd9fc7d-dfb9p 1/1 Running 1 20m

It did indeed take about 20 minutes to come up, so patience is required.  We can now test our SonarQube with a port-forward:

$ kubectl port-forward mytest2-sonarqube-789546477d-wmnjx 9000:9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000
Handling connection for 9000
Handling connection for 9000
Handling connection for 9000
Handling connection for 9000
Handling connection for 9000
Handling connection for 9000

The key point here is that we do indeed have a two-tier app running, but it took nearly 20 minutes to come up - not exactly speedy.

K3s on ARM

But how about using a Raspberry Pi?

I have a spare Pi 3 for running projects.  I SSHed into it and ran:

curl -sfL https://get.k3s.io | sh -
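The install script sets k3s up as a systemd service with its own bundled kubectl, so we can sanity-check the node right on the Pi:

sudo systemctl status k3s
sudo k3s kubectl get nodes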

Now we can copy the kubeconfig.

I copied it to a "config" file, changed "localhost" to my Raspberry Pi's IP, and then tried using the standard Kubernetes kubectl:
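One way to do that copy from the workstation (a sketch, assuming SSH access as the default pi user at the address below):

$ ssh pi@192.168.1.225 "sudo cat /etc/rancher/k3s/k3s.yaml" > config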

$ sed -i -e 's/localhost:6443/192.168.1.225:6443/g' config

$ export KUBECONFIG=$(pwd)/config
$ kubectl get pods
No resources found.
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-b7464766c-sjpw6 1/1 Running 0 15m
kube-system helm-install-traefik-sgv4p 0/1 Completed 0 15m
kube-system svclb-traefik-9xb4r 2/2 Running 0 13m
kube-system traefik-5c79b789c5-slrcb 1/1 Running 0 13m

Next let's set up some local storage for any PVCs:

$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created

$ kubectl get storageclass
NAME PROVISIONER AGE
local-path rancher.io/local-path 22s

and lastly, as before, set up Helm:

$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
AHD-MBP13-048:k3s isaac.johnson$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/isaac.johnson/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Let's test it out with our favourite Helm chart - SonarQube:

$ helm install --name mypitest stable/sonarqube --set=postgresql.persistence.storageClass=local-path,persistence.storageClass=local-path

That failed… and the reason is two-fold: first, we attempted to set up Tiller from our Mac into the ARM cluster, which gave it entirely the wrong (x86) binary.  Second, the stable/sonarqube chart references x86 images, not ARM ones, so even if Helm had worked, the launched pods would fail because the container images are built for the wrong architecture.
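A quick way to confirm what architecture a node reports before picking charts or images is to pull it straight from the node status:

$ kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'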

Installing Helm the Right Way

We will need direct access to our Pi, or to SSH into it:

pi@raspberrypi:~/helm $ wget https://get.helm.sh/helm-v2.14.3-linux-arm.tar.gz
--2019-08-25 02:12:43-- https://get.helm.sh/helm-v2.14.3-linux-arm.tar.gz
Resolving get.helm.sh (get.helm.sh)... 152.195.19.95, 2606:2800:11f:b40:171d:1a2f:2077:f6b
Connecting to get.helm.sh (get.helm.sh)|152.195.19.95|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 24389505 (23M) [application/x-tar]
Saving to: ‘helm-v2.14.3-linux-arm.tar.gz’

helm-v2.14.3-linux-arm.tar 100%[========================================>] 23.26M 5.61MB/s in 4.5s   

2019-08-25 02:12:48 (5.19 MB/s) - ‘helm-v2.14.3-linux-arm.tar.gz’ saved [24389505/24389505]

pi@raspberrypi:~/helm $ tar -xzf helm-v2.14.3-linux-arm.tar.gz 
pi@raspberrypi:~/helm $ cd linux-arm
pi@raspberrypi:~/helm/linux-arm $ sudo cat /etc/rancher/k3s/k3s.yaml > config
pi@raspberrypi:~/helm/linux-arm $ export KUBECONFIG=$(pwd)/config
pi@raspberrypi:~/helm/linux-arm $ ./helm init --upgrade --service-account tiller --history-max 200 --tiller-image jessestuart/tiller
$HELM_HOME has been configured at /home/pi/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.

pi@raspberrypi:~/helm/linux-arm $ ./helm list
Error: could not find a ready tiller pod
pi@raspberrypi:~/helm/linux-arm $ ./helm list
pi@raspberrypi:~/helm/linux-arm $ 
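To double-check that the running Tiller really is the ARM image, a quick grep and a peek at the deployment's image are enough (nothing fancy here):

pi@raspberrypi:~/helm/linux-arm $ kubectl -n kube-system get pods | grep tiller
pi@raspberrypi:~/helm/linux-arm $ kubectl -n kube-system get deployment tiller-deploy -o jsonpath='{.spec.template.spec.containers[0].image}'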

Next, to solve the PVC issues:

I'll use the NAS that sits right next to the Pi for this (though there are guides on using USB thumb drives as well).

$ ./helm install stable/nfs-client-provisioner --set nfs.server=192.168.1.129 --set nfs.path=/volume1/testing --set image.repository=quay.io/external_storage/nfs-client-provisioner-arm


$ kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/nfs-client patched
pi@raspberrypi:~/helm/linux-arm $ kubectl get storageclass
NAME PROVISIONER AGE
local-path rancher.io/local-path 97m
nfs-client (default) cluster.local/musty-wallaby-nfs-client-provisioner 4m29s
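Before firing up a chart, a quick way to verify the provisioner actually binds volumes is a throwaway PVC (a sketch; the claim name and size are arbitrary, and since nfs-client is now the default there's no need to set a storageClassName):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
EOF
$ kubectl get pvc nfs-test-claim
$ kubectl delete pvc nfs-test-claim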

Lastly, let's fire up a chart we know is built for ARM:

./helm repo add arm-stable https://peterhuene.github.io/arm-charts/stable
./helm install arm-stable/ghost --set ghostUrl=https://testing.blog.com

That took a while, but ultimately it did run:

Every 2.0s: kubectl get pods Sun Aug 25 03:55:42 2019

NAME READY STATUS RESTARTS AGE
dining-cricket-ghost-5c6ccdff6-lj454 1/1 Running 0 7m46s
dining-cricket-mariadb-6d56c74675-k5cbb 1/1 Running 0 7m46s
musty-wallaby-nfs-client-provisioner-7d9f76f98c-pbbl5 1/1 Running 0 22m

While we did get the Ghost blog to run from another user's image, it had redirect issues because it was set up with a production.json, making it not really usable.

Google Cloud

First, let's create a machine to host it - a small n1 instance should do:

gcloud beta compute --project=api-project-79123451234563 instances create my-k3s-test --zone=us-central1-a --machine-type=n1-standard-1 --subnet=default --network-tier=PREMIUM --metadata=ssh-keys=johnsi10:ssh-rsa\ AAAAB3NzaASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFASDFvzNl5pB\ johnsi10@cyberdyne.com --maintenance-policy=MIGRATE --service-account=799042244963-compute@developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append --tags=http-server,https-server --image=debian-9-stretch-v20190813 --image-project=debian-cloud --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=my-k3s-test --reservation-affinity=any
 
gcloud compute --project=api-project-799042244963 firewall-rules create default-allow-http --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server
 
gcloud compute --project=api-project-799042244963 firewall-rules create default-allow-https --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:443 --source-ranges=0.0.0.0/0 --target-tags=https-server
(The same as above, but in the UI.)
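With the instance up, k3s gets installed the same way as on the Pi (SSH in and run the install script):

johnsi10@my-k3s-test:~$ curl -sfL https://get.k3s.io | sh -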

We can get the kubeconfig (just change localhost to your IP like we did in the Pi example):

johnsi10@my-k3s-test:~$ sudo cat /etc/rancher/k3s/k3s.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFUyTmpnME5UWTJNVEFlRncweE9UQTRNall4T0RVME1qRmFGdzB5T1RBNE1qTXhPRFUwTWpGYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFUyTmpnME5UWTJNVEJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQkRIS1lxclpOYUc4R2pMZjIySnVYeFJtMTN2ZUhUa1VBOWRjbDdJd1Bxb1MKdE12eS9yL1lpVEdVMlY4WnZWY3IvajBoUmtsdDlRU2oyUVFrZkozZm5hZWpJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFEODE1RlJjOVhHCm0reS95amZFMld4b0ozTkFqSTJuSlc3NkdOWWJ0RkovdlFJZ2FxMFlwV3FsK1pCbnM4Y2c3SjNkUWhuYkNpbFAKbzkwTStUVTMyQnNYT0xzPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://34.67.10.218:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: d587fb323045ef2f39a5760032ba60a0
    username: admin

One thing you’ll notice if you try to use this externally is that the default HTTP/HTTPS rules will block ingress on 6443:

H:\>vim H:\.kube\config

H:\>kubectl get pods
Unable to connect to the server: dial tcp 34.67.10.218:6443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

So we need to add a firewall rule to the VPC:
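That can be done in the console, or with a gcloud rule along the same lines as the HTTP/HTTPS ones above (the rule name here is just illustrative, and 0.0.0.0/0 is wide open, so consider narrowing the source range):

gcloud compute --project=api-project-799042244963 firewall-rules create default-allow-k3s-api --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:6443 --source-ranges=0.0.0.0/0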

Now that we added an ingress rule for 6443, we can connect from our local workstation:

Let's install a dashboard:

H:\>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml --insecure-skip-tls-verify
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

Create a file named dashboard-adminuser.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

and apply it:

H:\>kubectl apply -f dashboard-adminuser.yaml --insecure-skip-tls-verify
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

On a Mac or Linux, I could just run:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

but as I'm on Windows, I did some of those steps manually:
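The manual version is roughly: list the secrets, spot the admin-user token name, then describe it (findstr standing in for grep):

H:\>kubectl -n kube-system get secret | findstr admin-user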

H:\>kubectl -n kube-system describe secret admin-user-token-n865w
Name: admin-user-token-n865w
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 3e8dac37-c835-11e9-89a6-42010a800009

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 526 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLW44NjV3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzZThkYWMzNy1jODM1LTExZTktODlhNi00MjAxMGE4MDAwMDkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.f-KteQvmW2by7FBPZqHV3TzswWMVwLhmulTcoiSFYfm0mWnBucQm_3nddmkSqwPdUzTEuBGAhu8ku8bHZH5Npv7PyyGfWw-WjsGVRJtHPTgpl09j3XczrE2AUrw6olULv1SXkcRqdrm9l3lHJeiAqYBPRamN-Kp-XRDmrGVlg8k4DPnB4szJeh5nL9rtHY7mhCqX9nZNW0CWukI7d3suI1ESsS69w0FisKF1f5glQMYonYZKbpj95uyzHHYkC2jZPVJ5bEYvUUw1boZjEzdArIxL3aRqwIQ9Z_JNFAfPqFkWEJOXSzkJhA0h-_tq8nyaOLuMxz59Lqq7K2Och9kIuw

Then use the token (don't base64-decode it as you might expect) as the bearer token in the dashboard.

Proxy the dashboard, then go to: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/node?namespace=default
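The proxy here is just the standard kubectl proxy, which serves the API (and the dashboard behind it) on localhost:8001 by default:

H:\>kubectl proxy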

And while there are certainly other factors to consider, the n1-standard-1 is just $24 a month at the time of writing: https://cloud.google.com/compute/all-pricing

But what if this is just a dev environment?  Can we save more?

When creating our instance, we can enable preemptibility, which will bounce the host at least once every 24 hours and will cost us about $7.30 for the month:

You can also shut down your instance when not in use (as you’re paying about a penny an hour).
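Stopping and starting the instance from the command line is simple enough (same zone as the create command above):

gcloud compute instances stop my-k3s-test --zone=us-central1-a
gcloud compute instances start my-k3s-test --zone=us-central1-a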

Summary:

k3s offers a stripped-down, low-resource version of Kubernetes that could be useful on older hardware or smaller devices.  While removing legacy alpha-level support and the cloud storage providers wasn't a big deal, the lack of any form of PVC provisioner or dashboard added some complexity.  That complexity could easily be addressed with Terraform or Ansible playbooks, so it's not anything that can't be overcome.

Next time we’ll show you more k3s fun with WSL 2 and some uses as a build environment.

Guides for more info:

https://medium.com/@marcovillarreal_40011/cheap-and-local-kubernetes-playground-with-k3s-helm-5a0e2a110de9

https://github.com/rancher/k3s/issues/85

https://gist.github.com/ibuildthecloud/1b7d6940552ada6d37f54c71a89f7d00



Isaac Johnson

Cloud Solutions Architect

Isaac is a CSA and DevOps engineer who focuses on cloud migrations and devops processes. He also is a dad to three wonderful daughters (hence the references to Princess King sprinkled throughout the blog).