Helm offers a fantastic way to install charts repeatably, but getting started creating charts can seem daunting.  And once you have charts, there are a variety of ways to serve them, some more complicated than others.

In this guide we will start with a basic YAML for a VNC pod and work through creating the Helm chart.  We’ll then explore hosting options in Azure, AWS and local filesystems.

Getting started

Here is a common VNC deployment yaml I use for testing:

$ cat myvncdep.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: my-vnc-server
  name: my-vnc-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-vnc-server
  template:
    metadata:
      labels:
        app: my-vnc-server
    spec:
      containers:
      - image: consol/ubuntu-xfce-vnc
        imagePullPolicy: IfNotPresent
        name: my-vnc-server
        ports:
        - containerPort: 24000
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30

When launched, one can use

$ kubectl port-forward <pod name> 5901:5901 &

And then VNC to localhost:5901 (with password vncpassword) for a nice simple graphical container. I find this useful for testing non-exposed microservices and URLs.
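
For reference, the end-to-end flow with the raw YAML looks roughly like this (assuming the deployment above is saved as myvncdep.yaml; forwarding by deployment name lets kubectl pick a pod for us):

$ kubectl apply -f myvncdep.yaml
$ kubectl port-forward deployment/my-vnc-deployment 5901:5901 &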

Create a Helm Chart:

We’ll start with Helm 3.x and create a new chart for a VNC container:

JOHNSI10-M1:Documents johnsi10$ helm -version
Error: invalid argument "ersion" for "-v, --v" flag: strconv.ParseInt: parsing "ersion": invalid syntax
JOHNSI10-M1:Documents johnsi10$ helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
JOHNSI10-M1:Documents johnsi10$ helm create vnc
Creating vnc
JOHNSI10-M1:Documents johnsi10$ cd vnc
JOHNSI10-M1:vnc johnsi10$ ls
Chart.yaml	charts		templates	values.yaml

Next, I'm going to init a new repo so we can track changes:

JOHNSI10-M1:vnc johnsi10$ git init
Initialized empty Git repository in /Users/johnsi10/Documents/vnc/.git/
JOHNSI10-M1:vnc johnsi10$ git add -A
JOHNSI10-M1:vnc johnsi10$ git commit -m "init 0"
[master (root-commit) a64c9fe] init 0
 10 files changed, 327 insertions(+)
 create mode 100644 .helmignore
 create mode 100644 Chart.yaml
 create mode 100644 templates/NOTES.txt
 create mode 100644 templates/_helpers.tpl
 create mode 100644 templates/deployment.yaml
 create mode 100644 templates/ingress.yaml
 create mode 100644 templates/service.yaml
 create mode 100644 templates/serviceaccount.yaml
 create mode 100644 templates/tests/test-connection.yaml
 create mode 100644 values.yaml

We basically want to launch an xfce-based VNC container and expose ports 5901 and 6901.

The following diff updates the chart description and image, exposes 5901 and 6901 on the service, and turns on ingress:

@@ -1,6 +1,6 @@
 apiVersion: v2
 name: vnc
-description: A Helm chart for Kubernetes
+description: A Helm chart for Kubernetes VNC pod
 
 # A chart can be either an 'application' or a 'library' chart.
 #
@@ -18,4 +18,4 @@ version: 0.1.0
 
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application.
-appVersion: 1.16.0
+appVersion: latest
diff --git a/templates/service.yaml b/templates/service.yaml
index befc726..17c26b0 100644
--- a/templates/service.yaml
+++ b/templates/service.yaml
@@ -8,8 +8,12 @@ spec:
   type: {{ .Values.service.type }}
   ports:
     - port: {{ .Values.service.port }}
-      targetPort: http
+      targetPort: 5901
       protocol: TCP
-      name: http
+      name: vnc
+    - port: {{ .Values.service.webport }}
+      targetPort: 6901
+      protocol: TCP
+      name: vncweb
   selector:
     {{- include "vnc.selectorLabels" . | nindent 4 }}
diff --git a/values.yaml b/values.yaml
index d2658bf..a3143b0 100644
--- a/values.yaml
+++ b/values.yaml
@@ -5,7 +5,7 @@
 replicaCount: 1
 
 image:
-  repository: nginx
+  repository: consol/ubuntu-xfce-vnc
   pullPolicy: IfNotPresent
 
 imagePullSecrets: []
@@ -32,16 +32,17 @@ securityContext: {}
 
 service:
   type: ClusterIP
-  port: 80
+  port: 5901
+  webport: 6901
 
 ingress:
-  enabled: false
+  enabled: true
   annotations: {}
-    # kubernetes.io/ingress.class: nginx
+    kubernetes.io/ingress.class: nginx
     # kubernetes.io/tls-acme: "true"
   hosts:
-    - host: chart-example.local
-      paths: []
+  #  - host: chart-example.local
+  #    paths: []
   tls: []
   #  - secretName: chart-example-tls
   #    hosts:
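
Before packaging anything, it's worth a quick sanity check that the modified chart still lints and renders; helm lint will also flag any YAML slips in values.yaml (easy to make when uncommenting annotation lines):

$ helm lint .
$ helm template myvnc ./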

You can get the full chart here: https://freshbrewed.science/helm/vnc-0.1.615.tgz

Let’s test in a fresh cluster first.

JOHNSI10-M1:vnc johnsi10$ az group create --name testHelmRg --location centralus
{
  "id": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b12345/resourceGroups/testHelmRg",
  "location": "centralus",
  "managedBy": null,
  "name": "testHelmRg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
JOHNSI10-M1:vnc johnsi10$ az ad sp create-for-RBAC --skip-assignment --name myAKSsp
Changing "myAKSsp" to a valid URI of "http://myAKSsp", which is the required format used for service principal names
{
  "appId": "50fe641d-e9c8-4fca-8d8b-de2bde3e22cb",
  "displayName": "myAKSsp",
  "name": "http://myAKSsp",
  "password": "16962b51-28c0-4806-b73d-1234567890",
  "tenant": "d73a39db-6eda-495d-8000-7579f56d68b7"
}

Refreshing my memory on what's available today:

JOHNSI10-M1:vnc johnsi10$ az aks get-versions --location centralus -o table
KubernetesVersion    Upgrades
-------------------  -------------------------------------------------
1.18.2(preview)      None available
1.18.1(preview)      1.18.2(preview)
1.17.5(preview)      1.18.1(preview), 1.18.2(preview)
1.17.4(preview)      1.17.5(preview), 1.18.1(preview), 1.18.2(preview)
1.16.9               1.17.4(preview), 1.17.5(preview)
1.16.8               1.16.9, 1.17.4(preview), 1.17.5(preview)
1.15.11              1.16.8, 1.16.9
1.15.10              1.15.11, 1.16.8, 1.16.9
1.14.8               1.15.10, 1.15.11
1.14.7               1.14.8, 1.15.10, 1.15.11

Now let’s create a new cluster:

JOHNSI10-M1:vnc johnsi10$ az aks create --resource-group testHelmRg --name testHelmAks --location centralus --kubernetes-version 1.15.11 --enable-rbac --node-count 2 --enable-cluster-autoscaler --min-count 2 --max-count 5 --generate-ssh-keys --network-plugin azure --service-principal 50fe641d-e9c8-4fca-8d8b-de2bde3e22cb --client-secret 16962b51-28c0-4806-b73d-1234567890
Argument 'enable_rbac' has been deprecated and will be removed in a future release. Use '--disable-rbac' instead.
 - Running ..

Next, get the admin kubeconfig:

JOHNSI10-M1:vnc johnsi10$ az aks list -o table
Name         Location    ResourceGroup    KubernetesVersion    ProvisioningState    Fqdn
-----------  ----------  ---------------  -------------------  -------------------  -------------------------------------------------------------
testHelmAks  centralus   testHelmRg       1.15.11              Succeeded            testhelmak-testhelmrg-70b42e-51d84499.hcp.centralus.azmk8s.io
JOHNSI10-M1:vnc johnsi10$ rm -f ~/.kube/config && az aks get-credentials -n testHelmAks -g testHelmRg --admin
Merged "testHelmAks-admin" as current context in /Users/johnsi10/.kube/config

Testing

Let's install from a local path and try to access our VNC pod:

$ ls -ltra
total 32
-rw-r--r--   1 johnsi10  staff   342 Jun  3 07:04 .helmignore
drwxr-xr-x   2 johnsi10  staff    64 Jun  3 07:04 charts
drwxr-xr-x  12 johnsi10  staff   384 Jun  3 07:05 .git
drwxr-xr-x   9 johnsi10  staff   288 Jun  3 07:14 templates
-rw-r--r--   1 johnsi10  staff  1520 Jun  3 07:15 values.yaml
-rw-r--r--   1 johnsi10  staff   909 Jun  3 07:15 Chart.yaml
-rw-r--r--   1 johnsi10  staff   707 Jun  4 11:01 myvncdep.yaml
drwx------@ 81 johnsi10  staff  2592 Jun  4 13:45 ..
drwxr-xr-x   9 johnsi10  staff   288 Jun  5 08:37 .

$ helm install myvnc2 ./
NAME: myvnc2
LAST DEPLOYED: Fri Jun  5 10:53:27 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=vnc,app.kubernetes.io/instance=myvnc2" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80

$ kubectl get pods --namespace default -l "app.kubernetes.io/name=vnc,app.kubernetes.io/instance=myvnc2" -o jsonpath="{.items[0].metadata.name}"
myvnc2-55474d4dd5-9q69k
JOHNSI10-M1:vnc johnsi10$ export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=vnc,app.kubernetes.io/instance=myvnc2" -o jsonpath="{.items[0].metadata.name}")

$ kubectl --namespace default port-forward $POD_NAME 5901:5901
Forwarding from 127.0.0.1:5901 -> 5901
Forwarding from [::1]:5901 -> 5901
Handling connection for 5901
Handling connection for 5901
Handling connection for 5901
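
The consol image also bundles a browser-based noVNC client on port 6901 (at least in the default configuration), so an alternative is to forward that port and skip the VNC client entirely:

$ kubectl --namespace default port-forward $POD_NAME 6901:6901

Then browse to http://localhost:6901 and use the same vncpassword.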

Setting up Harbor

$ helm repo add harbor https://helm.goharbor.io
"harbor" has been added to your repositories
$ helm fetch harbor/harbor --untar
$ cd harbor/
$ helm install myharbor .
NAME: myharbor
LAST DEPLOYED: Fri Jun  5 11:19:28 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://core.harbor.domain
For more details, please visit https://github.com/goharbor/harbor

I'll be honest: I tried to get Harbor going a few different ways. I got stuck on ingress and realized there are far easier ways to host charts.

Building our Own Container image

If we don't mind sharing publicly, we can build and push our own Docker images to the free Docker Hub and reference them from the Helm chart.

We can use a basic azure-pipelines.yaml that uses a service connection to Docker Hub:

- task: Docker@2
  inputs:
    containerRegistry: 'DockerHub'
    repository: 'vnc'
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'

We can then run it.

We do need a few more steps, though. For one, we need to push into our own namespace/org on Docker Hub:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Docker@2
  inputs:
    containerRegistry: 'DockerHub'
    command: 'logout'
  displayName: 'Docker Logout'
- task: Docker@2
  inputs:
    containerRegistry: 'DockerHub'
    command: 'login'
  displayName: 'Docker Login'
- task: Docker@2
  inputs:
    containerRegistry: 'DockerHub'
    repository: 'idjohnson/tpkvnc'
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
  displayName: 'Docker Build and Push'

But once done, we can see the image here: https://hub.docker.com/repository/docker/idjohnson/tpkvnc
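
To actually consume that image from the chart, the repository can be overridden at install time. A minimal sketch (the release name is arbitrary; the tag comes from appVersion in Chart.yaml, which we set to latest earlier):

$ helm install myvnctest ./ --set image.repository=idjohnson/tpkvnc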

Storing Helm Charts in Azure

Leveraging this Medium Post, we can create a Helm Repo in Azure Blob Storage. Let’s use the same resource group we used for AKS:

$ az storage account create -n helmrepoblob -g testHelmRg -l centralus --sku Standard_LRS --kind BlobStorage --access-tier Cool
{
  "accessTier": "Cool",
  "azureFilesIdentityBasedAuthentication": null,
  "blobRestoreStatus": null,
  "creationTime": "2020-06-08T23:32:08.928397+00:00",
  "customDomain": null,
  "enableHttpsTrafficOnly": true,
  "encryption": {
    "keySource": "Microsoft.Storage",
    "keyVaultProperties": null,
    "services": {
      "blob": {
        "enabled": true,
        "keyType": "Account",
        "lastEnabledTime": "2020-06-08T23:32:09.006504+00:00"
      },
      "file": {
        "enabled": true,
        "keyType": "Account",
        "lastEnabledTime": "2020-06-08T23:32:09.006504+00:00"
      },
      "queue": null,
      "table": null
    }
  },
  "failoverInProgress": null,
  "geoReplicationStats": null,
  "id": "/subscriptions/12345678-1234-1234-abcd-ef12345ad/resourceGroups/testHelmRg/providers/Microsoft.Storage/storageAccounts/helmrepoblob",
  "identity": null,
  "isHnsEnabled": null,
  "kind": "BlobStorage",
  "largeFileSharesState": null,
  "lastGeoFailoverTime": null,
  "location": "centralus",
  "name": "helmrepoblob",
  "networkRuleSet": {
    "bypass": "AzureServices",
    "defaultAction": "Allow",
    "ipRules": [],
    "virtualNetworkRules": []
  },
  "primaryEndpoints": {
    "blob": "https://helmrepoblob.blob.core.windows.net/",
    "dfs": "https://helmrepoblob.dfs.core.windows.net/",
    "file": null,
    "internetEndpoints": null,
    "microsoftEndpoints": null,
    "queue": null,
    "table": "https://helmrepoblob.table.core.windows.net/",
    "web": null
  },
  "primaryLocation": "centralus",
  "privateEndpointConnections": [],
  "provisioningState": "Succeeded",
  "resourceGroup": "testHelmRg",
  "routingPreference": null,
  "secondaryEndpoints": null,
  "secondaryLocation": null,
  "sku": {
    "name": "Standard_LRS",
    "tier": "Standard"
  },
  "statusOfPrimary": "available",
  "statusOfSecondary": null,
  "tags": {},
  "type": "Microsoft.Storage/storageAccounts"
}

Next we need to set our storage account name and access key:

builder@DESKTOP-2SQ9NQM:~/Workspaces/vnc-container$ export AZURE_STORAGE_ACCOUNT=helmrepoblob
builder@DESKTOP-2SQ9NQM:~/Workspaces/vnc-container$ export AZURE_STORAGE_KEY=$(az storage account keys list --resource-group testHelmRg --account-name $AZURE_STORAGE_ACCOUNT | grep -m 1 value | awk -F'"' '{print $4}')
builder@DESKTOP-2SQ9NQM:~/Workspaces/vnc-container$ az storage container create --name helmrepo --public-access blob
{
  "created": true
}

Next we can package our Helm chart:

builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm package .
Successfully packaged chart and saved it to: /home/builder/Workspaces/Helm-Demo/vnc-0.1.0.tgz

Create an index.yaml file and upload it, along with the packaged chart, to the blob container:

$ helm repo index --url https://helmrepoblob.blob.core.windows.net/helmrepo/ .
$ az storage blob upload --container-name helmrepo --file index.yaml --name index.yaml
$ az storage blob upload --container-name helmrepo --file *.tgz --name *.tgz
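
A quick way to confirm both files landed (this reuses the AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_KEY variables we exported above):

$ az storage blob list --container-name helmrepo -o table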

Testing

We can now add it and verify the repo

builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm repo add myVncRepo https://helmrepoblob.blob.core.windows.net/helmrepo/
"myVncRepo" has been added to your repositories
builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm repo list
NAME                    URL
banzaicloud-stable      https://kubernetes-charts.banzaicloud.com
stable                  https://kubernetes-charts.storage.googleapis.com/
kedacore                https://kedacore.github.io/charts
nginx-stable            https://helm.nginx.com/stable
azure-samples           https://azure-samples.github.io/helm-charts/
myVncRepo               https://helmrepoblob.blob.core.windows.net/helmrepo/

We can see the VNC we launched locally:

$ helm list
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
myvnc2  default         1               2020-06-05 10:53:27.023141 -0500 CDT    deployed        vnc-0.1.0       latest

Let's first delete it

$ helm delete myvnc2
release "myvnc2" uninstalled

We can see our Chart by searching the repo:

$ helm search repo myVncRepo
NAME            CHART VERSION   APP VERSION     DESCRIPTION
myvncrepo/vnc   0.1.0           1.16.0          A Helm chart for Kubernetes

Now we can install it

$ helm --debug install --generate-name myVncRepo/vnc
install.go:159: [debug] Original chart version: ""
install.go:176: [debug] CHART PATH: /home/builder/.cache/helm/repository/vnc-0.1.0.tgz

client.go:108: [debug] creating 3 resource(s)
NAME: vnc-1591713797
LAST DEPLOYED: Tue Jun  9 09:43:19 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
imagePullSecrets: []
ingress:
  annotations: {}
  enabled: false
  hosts:
  - host: chart-example.local
    paths: []
  tls: []
nameOverride: ""
nodeSelector: {}
podSecurityContext: {}
replicaCount: 1
resources: {}
securityContext: {}
service:
  port: 80
  type: ClusterIP
serviceAccount:
  create: true
  name: null
tolerations: []

HOOKS:
---
# Source: vnc/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "vnc-1591713797-test-connection"
  labels:

    helm.sh/chart: vnc-0.1.0
    app.kubernetes.io/name: vnc
    app.kubernetes.io/instance: vnc-1591713797
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args:  ['vnc-1591713797:80']
  restartPolicy: Never
MANIFEST:
---
# Source: vnc/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vnc-1591713797
  labels:

    helm.sh/chart: vnc-0.1.0
    app.kubernetes.io/name: vnc
    app.kubernetes.io/instance: vnc-1591713797
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: vnc/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: vnc-1591713797
  labels:
    helm.sh/chart: vnc-0.1.0
    app.kubernetes.io/name: vnc
    app.kubernetes.io/instance: vnc-1591713797
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: vnc
    app.kubernetes.io/instance: vnc-1591713797
---
# Source: vnc/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vnc-1591713797
  labels:
    helm.sh/chart: vnc-0.1.0
    app.kubernetes.io/name: vnc
    app.kubernetes.io/instance: vnc-1591713797
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: vnc
      app.kubernetes.io/instance: vnc-1591713797
  template:
    metadata:
      labels:
        app.kubernetes.io/name: vnc
        app.kubernetes.io/instance: vnc-1591713797
    spec:
      serviceAccountName: vnc-1591713797
      securityContext:
        {}
      containers:
        - name: vnc
          securityContext:
            {}
          image: "nginx:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=vnc,app.kubernetes.io/instance=vnc-1591713797" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80

and verify it's running

$ helm list
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS         CHART            APP VERSION
vnc-1591713797  default         1               2020-06-09 09:43:19.5344071 -0500 CDT   deployed       vnc-0.1.0        1.16.0

You can automate most of this.  Here is the Azure Pipelines YAML I used:

# Starter pipeline

# set two variables:
# AZURE_STORAGE_ACCOUNT="helmrepoblob"
# AZURE_STORAGE_KEY="kexxxxxxxxxxxxxcQ=="

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: HelmInstaller@1
  inputs:
    helmVersionToInstall: 'latest'
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      sed -i 's/^version: \([0-9]*\)\.\([0-9]*\)\..*/version: \1.\2.$(Build.BuildId)/' Chart.yaml
      cat Chart.yaml
  displayName: 'sub build id'

- task: HelmDeploy@0
  inputs:
    command: 'package'
    chartPath: './'
    save: false
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      # Write your commands here
      set -x

      export

      cp $(Build.ArtifactStagingDirectory)/*.tgz ./

      cat Chart.yaml
  displayName: 'copy tgz'

- task: HelmDeploy@0
  inputs:
    connectionType: 'None'
    command: 'repo'
    arguments: 'index --url https://helmrepoblob.blob.core.windows.net/helmrepo/ .'
  displayName: 'helm repo index'

- task: AzureCLI@2
  inputs:
    azureSubscription: 'MSDN'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az storage blob upload --container-name helmrepo --file index.yaml --name index.yaml'
  displayName: 'az storage blob upload index.yaml'

- task: AzureCLI@2
  inputs:
    azureSubscription: 'MSDN'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az storage blob upload --container-name helmrepo --file *.tgz --name *.tgz'
  displayName: 'az storage blob upload tgz'

- task: AzureCLI@2
  inputs:
    azureSubscription: 'MSDN'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az storage account list -o table'
  displayName: 'az storage account list'

After a build, we can see it updates the helm repo in Azure blob storage.

builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ cat ~/.cache/helm/repository/myVncRepo-index.yaml
apiVersion: v1
entries:
  vnc:
  - apiVersion: v2
    appVersion: latest
    created: "2020-06-09T19:08:30.880376658Z"
    description: A Helm chart for Kubernetes VNC pod
    digest: e3aede2ee10be54c147a680df3eb586cc03be20d1eb34422588605f493e1c335
    name: vnc
    type: application
    urls:
    - https://helmrepoblob.blob.core.windows.net/helmrepo/vnc-0.1.0.tgz
    version: 0.1.0
generated: "2020-06-09T19:08:30.879738258Z"
builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "myVncRepo" chart repository
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "kedacore" chart repository
...Successfully got an update from the "azure-samples" chart repository
...Successfully got an update from the "banzaicloud-stable" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ cat ~/.cache/helm/repository/myVncRepo-index.yaml
apiVersion: v1
entries:
  vnc:
  - apiVersion: v2
    appVersion: latest
    created: "2020-06-09T19:12:24.69355305Z"
    description: A Helm chart for Kubernetes VNC pod
    digest: 96d2768d9ff55bc8f3a5cf64be952e0bf8d3ead6416c1a55c844f81206e24cb0
    name: vnc
    type: application
    urls:
    - https://helmrepoblob.blob.core.windows.net/helmrepo/vnc-0.1.614.tgz
    version: 0.1.614
generated: "2020-06-09T19:12:24.692413255Z"

AWS

Adding AWS was easy, as we already have this website hosted in S3 and fronted by CloudFront.

I just added some steps to the build:

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      # Write your commands here
      rm index.yaml
  displayName: 'remove last index'
 
- task: HelmDeploy@0
  inputs:
    connectionType: 'None'
    command: 'repo'
    arguments: 'index --url https://freshbrewed.science/helm/ .'
  displayName: 'helm repo index (aws fb)'
 
- task: S3Upload@1
  inputs:
    awsCredentials: 'AWS-FB'
    regionName: 'us-east-1'
    bucketName: 'freshbrewed.science'
    sourceFolder: '$(Build.SourcesDirectory)'
    globExpressions: |
      *.tgz
      index.yaml
    targetFolder: 'helm'
    filesAcl: 'public-read'
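
One caveat to watch for: CloudFront can keep serving a cached index.yaml for a while after an upload. If that bites, an invalidation can be run as a final step; a rough sketch (the distribution ID below is a placeholder, not my real one):

$ aws cloudfront create-invalidation --distribution-id <DISTRIBUTION_ID> --paths "/helm/index.yaml"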

We can now add and test it

builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm repo add freshbrewed https://freshbrewed.science/helm/
"freshbrewed" has been added to your repositories
builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "freshbrewed" chart repository
...Successfully got an update from the "kedacore" chart repository
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "myVncRepo" chart repository
...Successfully got an update from the "azure-samples" chart repository
...Successfully got an update from the "banzaicloud-stable" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm search repo freshbrewed
NAME            CHART VERSION   APP VERSION     DESCRIPTION
freshbrewed/vnc 0.1.615         latest          A Helm chart for Kubernetes VNC pod
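
Installing from it then works like any other repo (standard form, not part of my original run):

$ helm install --generate-name freshbrewed/vnc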

Another way: The S3 plugin

You can host a private chart repo with the commonly used helm-s3 plugin:

$ helm plugin install https://github.com/hypnoglow/helm-s3.git

Next we will create a bucket to hold the charts

$ aws s3 mb s3://idjhelmtest
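
If memory serves, the helm-s3 plugin also wants the bucket path initialized with an empty index before it can be used as a repo:

$ helm s3 init s3://idjhelmtest/helm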

Next let's add it as a Chart repo

$ helm repo add fbs3 s3://idjhelmtest/helm
"fbs3" has been added to your repositories

And we can push a local tgz of our chart into it

$ helm s3 push ./vnc-0.1.333.tgz fbs3

Lastly, we can update and search to prove it's indexed

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "azure-samples" chart repository
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "appdynamics-charts" chart repository
...Successfully got an update from the "harbor" chart repository
...Successfully got an update from the "vnc" chart repository
...Successfully got an update from the "fbs3" chart repository
...Successfully got an update from the "bitnami" chart repository
...Successfully got an update from the "banzaicloud-stable" chart repository

$ helm search repo fbs3
NAME     	CHART VERSION	APP VERSION	DESCRIPTION                        
fbs3/vnc	0.1.333      	latest     	A Helm chart for Kubernetes VNC pod

As a remote file store:

And of course, you can always use Azure Blob or S3 as a filestore and download the helm chart to install locally:

builder@DESKTOP-2SQ9NQM:~$ aws s3 ls s3://idjhelmtest/helm/
2020-06-09 15:20:26        386 helm
2020-06-09 15:24:33        408 index.yaml
2020-06-09 15:24:32       3071 vnc-0.1.333.tgz
builder@DESKTOP-2SQ9NQM:~$ aws s3 cp s3://idjhelmtest/helm/vnc-0.1.333.tgz ./
download: s3://idjhelmtest/helm/vnc-0.1.333.tgz to ./vnc-0.1.333.tgz
builder@DESKTOP-2SQ9NQM:~$ tar -xzvf vnc-0.1.333.tgz
vnc/Chart.yaml
vnc/values.yaml
vnc/templates/NOTES.txt
vnc/templates/_helpers.tpl
vnc/templates/deployment.yaml
vnc/templates/ingress.yaml
vnc/templates/service.yaml
vnc/templates/serviceaccount.yaml
vnc/templates/tests/test-connection.yaml
vnc/.helmignore
vnc/myvncdep.yaml

Lastly, install from the locally expanded path:

builder@DESKTOP-2SQ9NQM:~$ helm install ./vnc --generate-name
NAME: vnc-1591738811
LAST DEPLOYED: Tue Jun  9 16:40:13 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=vnc,app.kubernetes.io/instance=vnc-1591738811" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:6901 to use Web VNC to your pod"
  kubectl --namespace default port-forward $POD_NAME 5901:5901

Local charts

It might be handy to use a common cloud storage system like Box, Dropbox, Google Drive or OneDrive.

For instance, I can copy and index my chart into a new OneDrive folder from WSL:

builder@DESKTOP-2SQ9NQM:~$ mkdir /mnt/c/Users/isaac/OneDrive/helm
builder@DESKTOP-2SQ9NQM:~$ cp vnc-0.1.333.tgz /mnt/c/Users/isaac/OneDrive/helm
builder@DESKTOP-2SQ9NQM:~$ helm repo index /mnt/c/Users/isaac/OneDrive/helm/
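
At that point nothing even requires a server; the packaged chart can be installed straight from that folder, same as the S3 download earlier:

$ helm install --generate-name /mnt/c/Users/isaac/OneDrive/helm/vnc-0.1.333.tgz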

With Helm 2 we had helm serve, but that was removed in favour of a plugin that exposes ChartMuseum.

builder@DESKTOP-2SQ9NQM:~$ helm version
version.BuildInfo{Version:"v3.2.3", GitCommit:"8f832046e258e2cb800894579b1b3b50c2d83492", GitTreeState:"clean", GoVersion:"go1.13.12"}
builder@DESKTOP-2SQ9NQM:~$ helm plugin install https://github.com/jdolitsky/helm-servecm
Installed plugin: servecm

It seems one cannot run it bare:

builder@DESKTOP-2SQ9NQM:~$ helm servecm
ChartMuseum not installed. Install latest stable release? (type "yes"): yes
Attempting to install ChartMuseum server (v0.12.0)...
Detected your os as "linux"
+ curl -LO https://s3.amazonaws.com/chartmuseum/release/v0.12.0/bin/linux/amd64/chartmuseum
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 52.5M  100 52.5M    0     0  2917k      0  0:00:18  0:00:18 --:--:-- 5208k
+ chmod +x ./chartmuseum
+ mv ./chartmuseum /usr/local/bin
mv: cannot move './chartmuseum' to '/usr/local/bin/chartmuseum': Permission denied
+ rm -rf /tmp/tmp.cgFL4VoBwx
Error: plugin "servecm" exited with error
builder@DESKTOP-2SQ9NQM:~$ sudo helm servecm
ChartMuseum not installed. Install latest stable release? (type "yes"): yes
Attempting to install ChartMuseum server (v0.12.0)...
Detected your os as "linux"
+ curl -LO https://s3.amazonaws.com/chartmuseum/release/v0.12.0/bin/linux/amd64/chartmuseum
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 52.5M  100 52.5M    0     0  5535k      0  0:00:09  0:00:09 --:--:-- 6174k
+ chmod +x ./chartmuseum
+ mv ./chartmuseum /usr/local/bin
+ set +x
2020-06-09 16:49:48.014344 I | Missing required flags(s): --storage
Error: plugin "servecm" exited with error

But once we have the right syntax, we can serve with ease:

builder@DESKTOP-2SQ9NQM:~$ helm servecm --port=8879 --context-path=/chart --storage="local" --storage-local-rootdir="/mnt/c/Users/isaac/OneDrive/helm"
2020-06-09T16:53:07.564-0500    INFO    Starting ChartMuseum    {"port": 8879}
builder@DESKTOP-2SQ9NQM:~$ helm repo add local http://127.0.0.1:8879/charts
"local" has been added to your repositories
builder@DESKTOP-2SQ9NQM:~$ helm search repo local
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
local/vnc               0.1.333         latest          A Helm chart for Kubernetes VNC pod
stable/magic-ip-address 0.1.0           0.9.0           A Helm chart to assign static IP addresses for 

Cleanup

builder@DESKTOP-2SQ9NQM:~$ az aks delete -n testHelmAks -g testHelmRg
Are you sure you want to perform this operation? (y/n): y
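
Since the helmrepoblob storage account lives in the same resource group, deleting the whole group would also wipe the blob-hosted chart repo, so only do this when fully done:

$ az group delete -n testHelmRg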

Summary

Writing Helm charts is pretty easy.  We can also use Azure DevOps to easily package and deploy charts as well as build and deploy container images.  We then explored how to host charts in a Helm repository backed by Azure Blob Storage, and then three different ways to do it in AWS.  Lastly, we even showed hosting local charts with Helm 3's ChartMuseum plugin (servecm).

Hopefully this helps provide options to teams looking to adopt Helm and host their charts in a supportable and cost-effective fashion.