Datadog for DevOps : Metrics, Dashboards and Logs

Last time I talked about Datadog it was to focus on how Datadog helps us track deployments into AKS and tie them to Azure DevOps data from builds.  Today I want to dig into how Datadog can help track and expose metrics and logs from builds as a complementary tool to Azure DevOps.  To make it more interesting, I’ll be using just the free tier of Datadog to show how easy it is to get started.

Setting up WebHooks for Events

This is a bit of an overview of what I already covered, but the first piece of information you’ll need after signing up for Datadog is the API key.  This is the unique identifier you can use for all your agents and integrations.  It’s also easy to revoke.  You’ll find it under “Integrations/APIs”.

Datadog for DevOps : Metrics, Dashboards and Logs
mouse over the purple to reveal the key
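
As a quick sanity check (my own sketch, not something the service hooks require), you can verify a key against Datadog’s validate endpoint before wiring it into agents or hooks; a good key returns a small JSON response indicating it is valid:

$ curl -s -H "DD-API-KEY: <your-api-key>" https://api.datadoghq.com/api/v1/validate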

We can now add service hooks for any events we wish to track natively in Azure DevOps.  You’ll find those under Project/General/Service Hooks:

Datadog for DevOps : Metrics, Dashboards and Logs
list of service hooks already created

Adding a new event just requires hitting the “+” to add an event and selecting Datadog:

Datadog for DevOps : Metrics, Dashboards and Logs

Then choose the event type and any filters (perhaps you only wish to monitor one pipeline):

Datadog for DevOps : Metrics, Dashboards and Logs

You can then set the key and click Test to verify:

Datadog for DevOps : Metrics, Dashboards and Logs

Just click Finish to add the hook:

Datadog for DevOps : Metrics, Dashboards and Logs

But what about custom events?

Sometimes as a release engineer I want to more tightly control my notifications.  This can be useful in a multi-tenant organization (where my project team has a Datadog instance that isn’t necessarily for use by others).

In this case I can send data several ways.
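
One way, sketched here before we look at the marketplace plugin, is to POST a custom event directly to Datadog’s Events API from any script step; the title, text and tags below are placeholders:

# minimal sketch: push a custom event straight to the Datadog Events API
$ curl -X POST https://api.datadoghq.com/api/v1/events \
       -H "Content-Type: application/json" \
       -H "DD-API-KEY: <your-api-key>" \
       -d '{"title": "MyEvent", "text": "Deployed build 123", "tags": ["ghost-blog"]}'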

The Datadog Events AzDO plugin

Here we can use the “Datadog Events” plugin from William Matheus Batista.  It’s a functional off-the-shelf utility:

Datadog for DevOps : Metrics, Dashboards and Logs
Add Tasks from AzDO Pipeline

You can then add to your pipeline and expose any message output captured into a variable:

Datadog for DevOps : Metrics, Dashboards and Logs

In YAML syntax:

steps:
- task: WilliamMatheusBatista.vsts-datadogevents.vsts-datadogevents.DatadogEvents@0
  displayName: 'Datadog Event'
  inputs:
    ddApiKey: asdfasdfasdfasdf
    ddEventTitle: MyEvent2
    ddEventText: 'Hello There'
    ddEventTags: 'ghost-blog'

Here we can see the event pushed to Datadog:

Datadog for DevOps : Metrics, Dashboards and Logs
We see the pipeline invocation above and the result in Datadog Events below

This can be useful for simple one-line outputs.

For example, perhaps we want to capture some basics about the agent in use so later I could check event logs for problems associated with an agent or OS type:

#!/bin/bash
set +x
echo "##vso[task.setvariable variable=AGENTDETAILS]"$(Agent.Id) - $(Agent.OS) - $(Agent.OSArchitecture) for $(Build.BuildId) > t.o
set -x

cat t.o

Which in YAML syntax:

steps:
- bash: |
   #!/bin/bash
   set +x
   echo "##vso[task.setvariable variable=AGENTDETAILS]"$(Agent.Id) - $(Agent.OS) - $(Agent.OSArchitecture) for $(Build.BuildId) > t.o
   set -x
   
   cat t.o
  displayName: 'sample output'

- task: WilliamMatheusBatista.vsts-datadogevents.vsts-datadogevents.DatadogEvents@0
  displayName: 'Datadog Event'
  inputs:
    ddApiKey: asdfasdfasdfasdfasdasdfasdf
    ddEventTitle: MyEvent2
    ddEventText: '$(AGENTDETAILS)'
    ddEventTags: 'ghost-blog'

The results of which:

Datadog for DevOps : Metrics, Dashboards and Logs
The "MyEvent2" shows Agent ID, arch and Build ID

But what about Multi-line outputs?

We can actually use the catch-all log intake POST URL for things like this.  That is also useful for integrating Datadog into our containerized services.  As long as the ‘thing’ running can reach the internet and make an HTTP POST, you can get log data into Datadog.

Here, we’ll capture all the running processes on the agent (which serves as some good multiline output) and push it to Datadog:

steps:
- bash: |
   #!/bin/bash -x
   
   ps -ef > ps.log
   
   curl -X POST https://http-intake.logs.datadoghq.com/v1/input \
        -H "Content-Type: text/plain" \
        -H "DD-API-KEY: 08ce42ae942840eaa60ab9a893964229" \
        --data-binary @./ps.log
   
  displayName: 'DD Build Log Upload'
  enabled: false
  condition: succeededOrFailed()

Once run, we can now see the log details under Logs.  With the free tier of Datadog, they won’t persist that long, but that’s okay - for now we are mostly interested in showing the capabilities, and perhaps we would use this for debugging build outputs on remote unmanaged instances.

Datadog for DevOps : Metrics, Dashboards and Logs
multi-line ps output

We can also include things like host, service and tags in the URL (while I didn’t find this in the documentation, this article inspired me):

$ curl -X POST 'https://http-intake.logs.datadoghq.com/v1/input?host=host13&service=AzDO&ddtags=env:dev' -H "Content-Type: text/plain" -H "DD-API-KEY: 08ce42ae942840eaa60ab9a893964229" --data-binary @./LICENSE
$
Datadog for DevOps : Metrics, Dashboards and Logs
see tags and env on right hand pane

Dashboards

We can create Dashboards to share data.  You’ll find them, of course, under Dashboards:

Datadog for DevOps : Metrics, Dashboards and Logs

Here I’ll just add a few metrics.  CPU percentage on agents, build duration and build events:

Datadog for DevOps : Metrics, Dashboards and Logs

Next, under the gear in the upper right, I can get a public URL for this dashboard as well as set the widget size. This can be useful for Confluence/wiki pages that expose build details:

Datadog for DevOps : Metrics, Dashboards and Logs
The dashboard can use different sizes on widgets with "Graph size"

You can decide on whether to let the users change time frames - in this case I’m fine with that:

Datadog for DevOps : Metrics, Dashboards and Logs

And you should be able to view this live dashboard right now:

https://p.datadoghq.com/sb/ah2hpiy5bf53u16p-0593d81c4f0bf2aa8766533b72209153

Another example: say we wished to add a graph of "passing builds".  We could add that metric as another graph:

Datadog for DevOps : Metrics, Dashboards and Logs
Datadog for DevOps : Metrics, Dashboards and Logs

and then see it immediately added to the dashboard:

Datadog for DevOps : Metrics, Dashboards and Logs

One of the really nice features is the “TV Mode” which is great for 2nd screens and public displays:

Datadog for DevOps : Metrics, Dashboards and Logs

There is also a Dark mode.

An example might look like this (the latter on a display my eldest daughter fished out of a nearby small lake):

Datadog for DevOps : Metrics, Dashboards and Logs

and the "pond" monitor:

Datadog for DevOps : Metrics, Dashboards and Logs

Monitoring and Alerts

So once we've got some metrics, a dashboard showing passed/failed builds is good, but can we get alerts directly from Datadog?

For instance, say we have a graph of Succeeding and Failing builds:

Datadog for DevOps : Metrics, Dashboards and Logs

And the only difference in the Metric query is the "status" parameter:

Datadog for DevOps : Metrics, Dashboards and Logs

Now we can create a monitor that would be triggered on any build or just failed builds:

Datadog for DevOps : Metrics, Dashboards and Logs

For instance, say I wish to determine why we had failed builds. I can mouse over the graph to see the time window:

Datadog for DevOps : Metrics, Dashboards and Logs

And checking my inbox, indeed I see an alert from AzDO and one from Datadog:

Datadog for DevOps : Metrics, Dashboards and Logs

And I can look up build 643 to confirm it failed:

Datadog for DevOps : Metrics, Dashboards and Logs

To be fair, the link in the Datadog alert is bad (it must use some deprecated _build pattern and has URL-encoding issues): it listed https://princessking.visualstudio.com/_build?amp%3Bbuilduri=vstfs%3A%2F%2F%2FBuild%2FBuild%2F643&_a=completed instead of https://princessking.visualstudio.com/ghost-blog/_build/results?buildId=643&view=results, likely based on the older VSTS pattern.

Alerting through PagerDuty

To really understand the power of what we have here, let's set up PagerDuty with Slack integration.

Datadog for DevOps : Metrics, Dashboards and Logs

And a triggered alert example

Datadog for DevOps : Metrics, Dashboards and Logs
Pagerduty triggered alert in Slack

In integrations, add PagerDuty

Datadog for DevOps : Metrics, Dashboards and Logs

We now have an event rule we can trigger a monitor on:

Datadog for DevOps : Metrics, Dashboards and Logs

We can now test it:

Datadog for DevOps : Metrics, Dashboards and Logs

This not only called me (PD can page the on-call resource), it also posted to Slack, where I could acknowledge or resolve:

Datadog for DevOps : Metrics, Dashboards and Logs
Slack notification

And texted and emailed me as well:

Datadog for DevOps : Metrics, Dashboards and Logs
Email from PD
Datadog for DevOps : Metrics, Dashboards and Logs
PD Calling me
Datadog for DevOps : Metrics, Dashboards and Logs
TXT from PD

Since the routing goes through PD via its APIs, the alerts from Datadog would be routed to the appropriate on-call technician:

Datadog for DevOps : Metrics, Dashboards and Logs
Pagerduty Oncall scheduling

Summary

Datadog as an APM and logging tool is great. As I blogged about before, it’s one of the few that can give us end-to-end traceability from code through to the performance of running services.  However, many don’t realize that it’s also a solid metrics, alerting and dashboarding tool that complements Azure DevOps.

While I demoed this with AzDO, as it’s my primary pipelining tool, the custom event examples could be tied into any CI/CD tool as well. For smaller shops and individual developers, everything demoed above was done with the free tier, so there is really no reason not to try it out.  While PagerDuty isn't exactly free (it has a free tier without alerting, and a cheap $10 starter plan covering six users that includes phone and SMS), it does illustrate a great low-cost solution for SLA enforcement and response escalation.

Leveraging Datadog as a metrics, logging and alerting hub, we can have SLA-enforced CI/CD that ensures any build issues are tracked and escalated.

Codespaces

While I have signed up for the new Codespaces feature in GitHub and am awaiting enrollment, we can explore Visual Studio Codespaces (formerly Visual Studio Online, a name Microsoft had also used earlier for what became VSTS and then Azure DevOps).


Sign up for Visual Studio Codespaces:

Codespaces

Then set a plan:

Codespaces

But before we go further, we need to create a dotfiles repo and determine sizing:

Codespaces

Creating a dotfiles repo:

For those not as familiar with Linux, dotfiles (like .bash_profile and .vimrc) are how we control various shell and editor settings.

Here I created a fork of https://github.com/mathiasbynens/dotfiles: https://github.com/idjohnson/dotfiles

More on dotfiles: https://dotfiles.github.io/utilities/
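
As an illustration only (not the contents of that fork), a dotfiles repo can be as simple as a .bash_profile that sets a few preferences:

# hypothetical ~/.bash_profile fragment a dotfiles repo might carry
export EDITOR=vim
alias ll='ls -alF'
# pull in machine-specific overrides if present
[ -f ~/.extra ] && source ~/.extra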

When we head back to Codespaces, we can create a new codespace.

Codespaces

You can use any valid HTTPS Git URL for the “Git Repository” parameter.

Codespaces

We clicked Auth and Create:

Codespaces
Codespaces

It will now try and clone the repo:

Codespaces

Something I found is that if your Codespaces is tied to the same account as your Git repo (AzDO), it works fine:

Codespaces

Under source control, we can check out a branch from “Checkout to…”

Codespaces

We can make a small change and go back to the source section to commit this change:

Codespaces
Codespaces

We can then Push:

Codespaces

Pushing will prompt about git fetch

Codespaces
Codespaces

What’s nice is we can do live collaboration with others:

Live Sharing

Choose Live Share:

Codespaces

We can send the link to a colleague to work with us:

Codespaces

Here we can change stuff and see a live update.

Other browsers.

Even “unsupported” browsers like Opera work just great:

Codespaces

That includes access to the hosted agent terminal for debugging:

Codespaces

You can also sign in from the Visual Studio Code desktop client.

If you have trouble, you can use ctrl-shift-p/cmd-shift-p and search for “codespaces: sign-in”:

Codespaces

Once signed in, we can see our workspaces:

Codespaces

Visual Studio Code and Kubernetes

Go to extensions and add the kubernetes extension:

Codespaces

Once installed, you can click the Kubernetes icon on the left to launch the explorer.

For instance, if we want to explore our own namespace, click on namespaces under the cluster:

Codespaces

Here on my supported cluster, we see my namespace. I can right-click and choose “Use Namespace”:

Codespaces
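
The CLI equivalent, if you prefer the integrated terminal (the namespace name is a placeholder), is:

$ kubectl config set-context --current --namespace=my-namespace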

We’ll now see an asterisk next to the selected namespace:

Codespaces

Now when we go under Workloads/Pods, we can see our pods:

Codespaces

For instance, what if I wanted to see a secret defined in my namespace?

We could pull the secret from the Kubernetes explorer and then use the terminal to base64 decode it.

Codespaces
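
A sketch of the terminal half of that, with the secret and key names made up for illustration:

$ kubectl get secret my-secret -n my-namespace -o jsonpath='{.data.password}' | base64 --decode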

Code on VNC:

For Ubuntu, you can use the link:

https://go.microsoft.com/fwlink/?LinkID=760868

Codespaces
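
A sketch of installing the downloaded package (the exact filename will vary by version):

$ sudo apt install ./code_*_amd64.deb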

For RHEL/CentOS, you can use the yum steps:

sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo'
yum check-update
sudo yum install code

And now we can run code (my containerized VNC runs as root, so I have to pass in the user data dir)

Codespaces

Now we can install the Codespaces extension.

Go to extensions:

Codespaces

Search and install codespaces:

Codespaces

We can now connect to codespaces from the bottom left

Codespaces

Sadly, my VNC could not sign in regardless of browser.

Codespaces

Self Hosting Codespaces

On a Windows system, I want to self-host.

First install the Azure extension

Codespaces

And sign in

Codespaces

Then click register self-hosted and pick a local workspace.  Note: I did need to restart VS Code for it to recognize that the Azure plugin was installed and logged in.

Codespaces

Then give a name

Codespaces

Then click register as a process or service.  Here we will use service.

Codespaces

Then I'm prompted to log in:

Codespaces

I log in with my domain password.  The primary reason I am doing this is that we use DevTest Labs, which have formulas and are a great way to leverage lab-style auto-shutdown/startup to minimize cloud expense on development machines.

I realized a standard policy blocked adding a service as a user (even as an Administrator):

Codespaces

Open the group policy and add your local user (see this guide):

Codespaces

The next time through, after login, I was prompted:

Codespaces

Now I see the box listed:

Codespaces

Going back to my Mac, I found it didn’t care for it:

Codespaces

I did later realize most of the core issues were around firewall rules particular to this DTL’s network.  You can follow the Microsoft guide for specifics on which ports/protocols to open.

Note: if you already have a self-hosted instance, you do not see the link to register a new one.  You can, however, find the command to add another in the command palette:

Codespaces

A working Example

Let’s show a real example of the power of self-hosting.

Here I’ll fire up an older copy of this blog:

Codespaces

As it runs on Ghost, WSL is serving it on :2368.  Now I can hop back to my Mac and port-forward to that self-hosted Codespace:

Codespaces

And now on the Mac I can hit localhost:2368, which forwards through Codespaces back to the PC running the Node.js blog on WSL:

Codespaces

When connected to Codespaces on a remote machine, our terminal also forwards through for remote shell commands:

Codespaces

One last point: I wanted to ensure that connectivity worked across the public internet (and wasn’t dependent on the physical network between my Mac and PC).

Here we see I can connect back to my Windows desktop from a remote endpoint (using Opera’s built in VPN):

Codespaces

There are more guides on launching containers using Visual Studio Code here: https://code.visualstudio.com/docs/remote/containers

Cleaning up:

We can go to https://online.visualstudio.com/environments and remove any environments we do not wish to keep.  If you registered Windows services on those hosts, you’ll need to remove them manually.
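
If you registered the self-hosted agent as a Windows service, a sketch of removing it from an elevated prompt (the service name is whatever you chose during registration) would be:

sc.exe stop "<your codespaces service name>"
sc.exe delete "<your codespaces service name>"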

Just pick your Codespace and Unregister to remove:

Codespaces

After unregister:

Codespaces

Summary

Codespaces, formerly Visual Studio Online (a name that earlier referred to what became VSTS and then Azure DevOps), is a great hosted coding offering.  It was already powerful as a terminal option in the Azure Cloud Shell.  Now, as a first-class product, it allows both remote code sharing and debugging in a way that few other tools can.

Is it useful?  I’m writing this right now via Codespaces port-forwarding.  I’ve always needed to keep multiple working Node environments across all my machines to keep the blog up to date.  Now I can keep a live version in one place and use remote editing to update it.

Helm 3 : Creating and Sharing Charts

Helm offers a fantastic way to install charts in a repeatable fashion, but it can often seem daunting to get started creating them.  And once one has charts, there are a variety of ways to serve them, some more complicated than others.

In this guide we will start with a basic YAML for a VNC pod and work through creating the Helm chart.  We’ll then explore hosting options in Azure, AWS and local filesystems.

Getting started

Here is a common VNC deployment yaml I use for testing:

$ cat myvncdep.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: my-vnc-server
  name: my-vnc-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-vnc-server
  template:
    metadata:
      labels:
        app: my-vnc-server
    spec:
      containers:
      - image: consol/ubuntu-xfce-vnc
        imagePullPolicy: IfNotPresent
        name: my-vnc-server
        ports:
        - containerPort: 24000
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30

When launched, one can use

$ kubectl port-forward (container name) 5901:5901 &

And then VNC to localhost:5901 (with password vncpassword) for a nice simple graphical container. I find this useful for testing non-exposed microservices and URLs.
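
Since the deployment above labels the pod with app: my-vnc-server, a concrete sketch of grabbing the pod name for that port-forward looks like:

# look up the pod created by the deployment via its label, then forward 5901
$ POD_NAME=$(kubectl get pods -l app=my-vnc-server -o jsonpath='{.items[0].metadata.name}')
$ kubectl port-forward $POD_NAME 5901:5901 &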

Create a Helm Chart:

We’ll start with Helm 3.x and create a new chart for a VNC container:

JOHNSI10-M1:Documents johnsi10$ helm -version
Error: invalid argument "ersion" for "-v, --v" flag: strconv.ParseInt: parsing "ersion": invalid syntax
JOHNSI10-M1:Documents johnsi10$ helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
JOHNSI10-M1:Documents johnsi10$ helm create vnc
Creating vnc
JOHNSI10-M1:Documents johnsi10$ cd vnc
JOHNSI10-M1:vnc johnsi10$ ls
Chart.yaml	charts		templates	values.yaml

Next, I’m going to init a new repo so we can track changes:

JOHNSI10-M1:vnc johnsi10$ git init
Initialized empty Git repository in /Users/johnsi10/Documents/vnc/.git/
JOHNSI10-M1:vnc johnsi10$ git add -A
JOHNSI10-M1:vnc johnsi10$ git commit -m "init 0"
[master (root-commit) a64c9fe] init 0
 10 files changed, 327 insertions(+)
 create mode 100644 .helmignore
 create mode 100644 Chart.yaml
 create mode 100644 templates/NOTES.txt
 create mode 100644 templates/_helpers.tpl
 create mode 100644 templates/deployment.yaml
 create mode 100644 templates/ingress.yaml
 create mode 100644 templates/service.yaml
 create mode 100644 templates/serviceaccount.yaml
 create mode 100644 templates/tests/test-connection.yaml
 create mode 100644 values.yaml

We basically want to launch an xfce-based VNC container, expose ports 5901 and 6901, and turn on ingress:

@@ -1,6 +1,6 @@
 apiVersion: v2
 name: vnc
-description: A Helm chart for Kubernetes
+description: A Helm chart for Kubernetes VNC pod
 
 # A chart can be either an 'application' or a 'library' chart.
 #
@@ -18,4 +18,4 @@ version: 0.1.0
 
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application.
-appVersion: 1.16.0
+appVersion: latest
diff --git a/templates/service.yaml b/templates/service.yaml
index befc726..17c26b0 100644
--- a/templates/service.yaml
+++ b/templates/service.yaml
@@ -8,8 +8,12 @@ spec:
   type: {{ .Values.service.type }}
   ports:
     - port: {{ .Values.service.port }}
-      targetPort: http
+      targetPort: 5901
       protocol: TCP
-      name: http
+      name: vnc
+    - port: {{ .Values.service.webport }}
+      targetPort: 6901
+      protocol: TCP
+      name: vncweb
   selector:
     {{- include "vnc.selectorLabels" . | nindent 4 }}
diff --git a/values.yaml b/values.yaml
index d2658bf..a3143b0 100644
--- a/values.yaml
+++ b/values.yaml
@@ -5,7 +5,7 @@
 replicaCount: 1
 
 image:
-  repository: nginx
+  repository: consol/ubuntu-xfce-vnc
   pullPolicy: IfNotPresent
 
 imagePullSecrets: []
@@ -32,16 +32,17 @@ securityContext: {}
 
 service:
   type: ClusterIP
-  port: 80
+  port: 5901
+  webport: 6901
 
 ingress:
-  enabled: false
+  enabled: true
   annotations: {}
-    # kubernetes.io/ingress.class: nginx
+    kubernetes.io/ingress.class: nginx
     # kubernetes.io/tls-acme: "true"
   hosts:
-    - host: chart-example.local
-      paths: []
+  #  - host: chart-example.local
+  #    paths: []
   tls: []
   #  - secretName: chart-example-tls
   #    hosts:
Helm 3 : Creating and Sharing Charts

You can get the full chart here: https://freshbrewed.science/helm/vnc-0.1.615.tgz

Let’s test in a fresh cluster first.

JOHNSI10-M1:vnc johnsi10$ az group create --name testHelmRg --location centralus
{
  "id": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b12345/resourceGroups/testHelmRg",
  "location": "centralus",
  "managedBy": null,
  "name": "testHelmRg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
JOHNSI10-M1:vnc johnsi10$ az ad sp create-for-RBAC --skip-assignment --name myAKSsp
Changing "myAKSsp" to a valid URI of "http://myAKSsp", which is the required format used for service principal names
{
  "appId": "50fe641d-e9c8-4fca-8d8b-de2bde3e22cb",
  "displayName": "myAKSsp",
  "name": "http://myAKSsp",
  "password": "16962b51-28c0-4806-b73d-1234567890",
  "tenant": "d73a39db-6eda-495d-8000-7579f56d68b7"
}

Refreshing my memory on what’s available today:

JOHNSI10-M1:vnc johnsi10$ az aks get-versions --location centralus -o table
KubernetesVersion    Upgrades
-------------------  -------------------------------------------------
1.18.2(preview)      None available
1.18.1(preview)      1.18.2(preview)
1.17.5(preview)      1.18.1(preview), 1.18.2(preview)
1.17.4(preview)      1.17.5(preview), 1.18.1(preview), 1.18.2(preview)
1.16.9               1.17.4(preview), 1.17.5(preview)
1.16.8               1.16.9, 1.17.4(preview), 1.17.5(preview)
1.15.11              1.16.8, 1.16.9
1.15.10              1.15.11, 1.16.8, 1.16.9
1.14.8               1.15.10, 1.15.11
1.14.7               1.14.8, 1.15.10, 1.15.11

Now let’s create a new cluster:

JOHNSI10-M1:vnc johnsi10$ az aks create --resource-group testHelmRg --name testHelmAks --location centralus --kubernetes-version 1.15.11 --enable-rbac --node-count 2 --enable-cluster-autoscaler --min-count 2 --max-count 5 --generate-ssh-keys --network-plugin azure --service-principal 50fe641d-e9c8-4fca-8d8b-de2bde3e22cb --client-secret 16962b51-28c0-4806-b73d-1234567890
Argument 'enable_rbac' has been deprecated and will be removed in a future release. Use '--disable-rbac' instead.
 - Running ..

Next get the admin kubeconfig

JOHNSI10-M1:vnc johnsi10$ az aks list -o table
Name         Location    ResourceGroup    KubernetesVersion    ProvisioningState    Fqdn
-----------  ----------  ---------------  -------------------  -------------------  -------------------------------------------------------------
testHelmAks  centralus   testHelmRg       1.15.11              Succeeded            testhelmak-testhelmrg-70b42e-51d84499.hcp.centralus.azmk8s.io
JOHNSI10-M1:vnc johnsi10$ rm -f ~/.kube/config && az aks get-credentials -n testHelmAks -g testHelmRg --admin
Merged "testHelmAks-admin" as current context in /Users/johnsi10/.kube/config

Testing

Let's install from the local path and try to access our VNC pod:

$ ls -ltra
total 32
-rw-r--r--   1 johnsi10  staff   342 Jun  3 07:04 .helmignore
drwxr-xr-x   2 johnsi10  staff    64 Jun  3 07:04 charts
drwxr-xr-x  12 johnsi10  staff   384 Jun  3 07:05 .git
drwxr-xr-x   9 johnsi10  staff   288 Jun  3 07:14 templates
-rw-r--r--   1 johnsi10  staff  1520 Jun  3 07:15 values.yaml
-rw-r--r--   1 johnsi10  staff   909 Jun  3 07:15 Chart.yaml
-rw-r--r--   1 johnsi10  staff   707 Jun  4 11:01 myvncdep.yaml
drwx------@ 81 johnsi10  staff  2592 Jun  4 13:45 ..
drwxr-xr-x   9 johnsi10  staff   288 Jun  5 08:37 .

$ helm install myvnc2 ./
NAME: myvnc2
LAST DEPLOYED: Fri Jun  5 10:53:27 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=vnc,app.kubernetes.io/instance=myvnc2" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80

$ kubectl get pods --namespace default -l "app.kubernetes.io/name=vnc,app.kubernetes.io/instance=myvnc2" -o jsonpath="{.items[0].metadata.name}"
myvnc2-55474d4dd5-9q69k
JOHNSI10-M1:vnc johnsi10$ export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=vnc,app.kubernetes.io/instance=myvnc2" -o jsonpath="{.items[0].metadata.name}")

$ kubectl --namespace default port-forward $POD_NAME 5901:5901
Forwarding from 127.0.0.1:5901 -> 5901
Forwarding from [::1]:5901 -> 5901
Handling connection for 5901
Handling connection for 5901
Handling connection for 5901
Helm 3 : Creating and Sharing Charts

Setting up Harbor

$ helm repo add harbor https://helm.goharbor.io
"harbor" has been added to your repositories
$ helm fetch harbor/harbor --untar
$ cd harbor/
$ helm install myharbor .
NAME: myharbor
LAST DEPLOYED: Fri Jun  5 11:19:28 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://core.harbor.domain
For more details, please visit https://github.com/goharbor/harbor

I’ll be honest: I tried to get Harbor going a few different ways.  I got stuck on ingress and realized there are far easier ways to host charts.

Building our Own Container image

If we don’t mind sharing publicly, we can build and push our own Docker images for use in Helm to the free Docker Hub.

We can use a basic azure-pipelines.yaml that utilizes a service connection to Docker Hub.

- task: Docker@2
  inputs:
    containerRegistry: 'DockerHub'
    repository: 'vnc'
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'

We then can run it

Helm 3 : Creating and Sharing Charts

We do need a few more steps… for one, we need to push into our own namespace/org in docker hub:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Docker@2
  inputs:
    containerRegistry: 'DockerHub'
    command: 'logout'
  displayName: 'Docker Logout'
- task: Docker@2
  inputs:
    containerRegistry: 'DockerHub'
    command: 'login'
  displayName: 'Docker Login'
- task: Docker@2
  inputs:
    containerRegistry: 'DockerHub'
    repository: 'idjohnson/tpkvnc'
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
  displayName: 'Docker Build and Push'

But once done, we can see the image here: https://hub.docker.com/repository/docker/idjohnson/tpkvnc

Storing Helm Charts in Azure

Leveraging this Medium Post, we can create a Helm Repo in Azure Blob Storage. Let’s use the same resource group we used for AKS:

$ az storage account create -n helmrepoblob -g testHelmRg -l centralus --sku Standard_LRS --kind BlobStorage --access-tier Cool
{
  "accessTier": "Cool",
  "azureFilesIdentityBasedAuthentication": null,
  "blobRestoreStatus": null,
  "creationTime": "2020-06-08T23:32:08.928397+00:00",
  "customDomain": null,
  "enableHttpsTrafficOnly": true,
  "encryption": {
    "keySource": "Microsoft.Storage",
    "keyVaultProperties": null,
    "services": {
      "blob": {
        "enabled": true,
        "keyType": "Account",
        "lastEnabledTime": "2020-06-08T23:32:09.006504+00:00"
      },
      "file": {
        "enabled": true,
        "keyType": "Account",
        "lastEnabledTime": "2020-06-08T23:32:09.006504+00:00"
      },
      "queue": null,
      "table": null
    }
  },
  "failoverInProgress": null,
  "geoReplicationStats": null,
  "id": "/subscriptions/12345678-1234-1234-abcd-ef12345ad/resourceGroups/testHelmRg/providers/Microsoft.Storage/storageAccounts/helmrepoblob",
  "identity": null,
  "isHnsEnabled": null,
  "kind": "BlobStorage",
  "largeFileSharesState": null,
  "lastGeoFailoverTime": null,
  "location": "centralus",
  "name": "helmrepoblob",
  "networkRuleSet": {
    "bypass": "AzureServices",
    "defaultAction": "Allow",
    "ipRules": [],
    "virtualNetworkRules": []
  },
  "primaryEndpoints": {
    "blob": "https://helmrepoblob.blob.core.windows.net/",
    "dfs": "https://helmrepoblob.dfs.core.windows.net/",
    "file": null,
    "internetEndpoints": null,
    "microsoftEndpoints": null,
    "queue": null,
    "table": "https://helmrepoblob.table.core.windows.net/",
    "web": null
  },
  "primaryLocation": "centralus",
  "privateEndpointConnections": [],
  "provisioningState": "Succeeded",
  "resourceGroup": "testHelmRg",
  "routingPreference": null,
  "secondaryEndpoints": null,
  "secondaryLocation": null,
  "sku": {
    "name": "Standard_LRS",
    "tier": "Standard"
  },
  "statusOfPrimary": "available",
  "statusOfSecondary": null,
  "tags": {},
  "type": "Microsoft.Storage/storageAccounts"
}

Next we need to set our storage account and key:

builder@DESKTOP-2SQ9NQM:~/Workspaces/vnc-container$ export AZURE_STORAGE_ACCOUNT=helmrepoblob
builder@DESKTOP-2SQ9NQM:~/Workspaces/vnc-container$ export AZURE_STORAGE_KEY=$(az storage account keys list --resource-group testHelmRg --account-name $AZURE_STORAGE_ACCOUNT | grep -m 1 value | awk -F'"' '{print $4}')
builder@DESKTOP-2SQ9NQM:~/Workspaces/vnc-container$ az storage container create --name helmrepo --public-access blob
{
  "created": true
}

Next we can package our Helm chart:

builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm package .
Successfully packaged chart and saved it to: /home/builder/Workspaces/Helm-Demo/vnc-0.1.0.tgz

Create an index.yaml file and upload it, along with the packaged chart, to the blob container:

$ helm repo index --url https://helmrepoblob.blob.core.windows.net/helmrepo/ .
$ az storage blob upload --container-name helmrepo --file index.yaml --name index.yaml
$ az storage blob upload --container-name helmrepo --file *.tgz --name *.tgz

Testing

We can now add it and verify the repo

builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm repo add myVncRepo https://helmrepoblob.blob.core.windows.net/helmrepo/
"myVncRepo" has been added to your repositories
builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm repo list
NAME                    URL
banzaicloud-stable      https://kubernetes-charts.banzaicloud.com
stable                  https://kubernetes-charts.storage.googleapis.com/
kedacore                https://kedacore.github.io/charts
nginx-stable            https://helm.nginx.com/stable
azure-samples           https://azure-samples.github.io/helm-charts/
myVncRepo               https://helmrepoblob.blob.core.windows.net/helmrepo/

We can see the VNC we launched locally:

$ helm list
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
myvnc2  default         1               2020-06-05 10:53:27.023141 -0500 CDT    deployed        vnc-0.1.0       latest

Let's first delete it

$ helm delete myvnc2
release "myvnc2" uninstalled

We can see our Chart by searching the repo:

$ helm search repo myVncRepo
NAME            CHART VERSION   APP VERSION     DESCRIPTION
myvncrepo/vnc   0.1.0           1.16.0          A Helm chart for Kubernetes

Now we can install it

$ helm --debug install --generate-name myVncRepo/vnc
install.go:159: [debug] Original chart version: ""
install.go:176: [debug] CHART PATH: /home/builder/.cache/helm/repository/vnc-0.1.0.tgz

client.go:108: [debug] creating 3 resource(s)
NAME: vnc-1591713797
LAST DEPLOYED: Tue Jun  9 09:43:19 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
imagePullSecrets: []
ingress:
  annotations: {}
  enabled: false
  hosts:
  - host: chart-example.local
    paths: []
  tls: []
nameOverride: ""
nodeSelector: {}
podSecurityContext: {}
replicaCount: 1
resources: {}
securityContext: {}
service:
  port: 80
  type: ClusterIP
serviceAccount:
  create: true
  name: null
tolerations: []

HOOKS:
---
# Source: vnc/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "vnc-1591713797-test-connection"
  labels:

    helm.sh/chart: vnc-0.1.0
    app.kubernetes.io/name: vnc
    app.kubernetes.io/instance: vnc-1591713797
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args:  ['vnc-1591713797:80']
  restartPolicy: Never
MANIFEST:
---
# Source: vnc/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vnc-1591713797
  labels:

    helm.sh/chart: vnc-0.1.0
    app.kubernetes.io/name: vnc
    app.kubernetes.io/instance: vnc-1591713797
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: vnc/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: vnc-1591713797
  labels:
    helm.sh/chart: vnc-0.1.0
    app.kubernetes.io/name: vnc
    app.kubernetes.io/instance: vnc-1591713797
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: vnc
    app.kubernetes.io/instance: vnc-1591713797
---
# Source: vnc/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vnc-1591713797
  labels:
    helm.sh/chart: vnc-0.1.0
    app.kubernetes.io/name: vnc
    app.kubernetes.io/instance: vnc-1591713797
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: vnc
      app.kubernetes.io/instance: vnc-1591713797
  template:
    metadata:
      labels:
        app.kubernetes.io/name: vnc
        app.kubernetes.io/instance: vnc-1591713797
    spec:
      serviceAccountName: vnc-1591713797
      securityContext:
        {}
      containers:
        - name: vnc
          securityContext:
            {}
          image: "nginx:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=vnc,app.kubernetes.io/instance=vnc-1591713797" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80

and verify it's running

$ helm list
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS         CHART            APP VERSION
vnc-1591713797  default         1               2020-06-09 09:43:19.5344071 -0500 CDT   deployed       vnc-0.1.0        1.16.0

You can automate most of this.  Here is the Azure Pipelines YAML I used:

# Starter pipeline

# set two variables:
# AZURE_STORAGE_ACCOUNT="helmrepoblob"
# AZURE_STORAGE_KEY="kexxxxxxxxxxxxxcQ=="

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: HelmInstaller@1
  inputs:
    helmVersionToInstall: 'latest'
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      sed -i 's/^version: \([0-9]*\)\.\([0-9]*\)\..*/version: \1.\2.$(Build.BuildId)/' Chart.yaml
      cat Chart.yaml
  displayName: 'sub build id'

- task: HelmDeploy@0
  inputs:
    command: 'package'
    chartPath: './'
    save: false
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      # Write your commands here
      set -x

      export

      cp $(Build.ArtifactStagingDirectory)/*.tgz ./

      cat Chart.yaml
  displayName: 'copy tgz'

- task: HelmDeploy@0
  inputs:
    connectionType: 'None'
    command: 'repo'
    arguments: 'index --url https://helmrepoblob.blob.core.windows.net/helmrepo/ .'
  displayName: 'helm repo index'

- task: AzureCLI@2
  inputs:
    azureSubscription: 'MSDN'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az storage blob upload --container-name helmrepo --file index.yaml --name index.yaml'
  displayName: 'az storage blob upload index.yaml'

- task: AzureCLI@2
  inputs:
    azureSubscription: 'MSDN'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az storage blob upload --container-name helmrepo --file *.tgz --name *.tgz'
  displayName: 'az storage blob upload tgz'

- task: AzureCLI@2
  inputs:
    azureSubscription: 'MSDN'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az storage account list -o table'
  displayName: 'az storage account list'

After a build, we can see it updates the helm repo in Azure blob storage.

builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ cat ~/.cache/helm/repository/myVncRepo-index.yaml
apiVersion: v1
entries:
  vnc:
  - apiVersion: v2
    appVersion: latest
    created: "2020-06-09T19:08:30.880376658Z"
    description: A Helm chart for Kubernetes VNC pod
    digest: e3aede2ee10be54c147a680df3eb586cc03be20d1eb34422588605f493e1c335
    name: vnc
    type: application
    urls:
    - https://helmrepoblob.blob.core.windows.net/helmrepo/vnc-0.1.0.tgz
    version: 0.1.0
generated: "2020-06-09T19:08:30.879738258Z"
builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "myVncRepo" chart repository
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "kedacore" chart repository
...Successfully got an update from the "azure-samples" chart repository
...Successfully got an update from the "banzaicloud-stable" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ cat ~/.cache/helm/repository/myVncRepo-index.yaml
apiVersion: v1
entries:
  vnc:
  - apiVersion: v2
    appVersion: latest
    created: "2020-06-09T19:12:24.69355305Z"
    description: A Helm chart for Kubernetes VNC pod
    digest: 96d2768d9ff55bc8f3a5cf64be952e0bf8d3ead6416c1a55c844f81206e24cb0
    name: vnc
    type: application
    urls:
    - https://helmrepoblob.blob.core.windows.net/helmrepo/vnc-0.1.614.tgz
    version: 0.1.614
generated: "2020-06-09T19:12:24.692413255Z"

AWS

Adding AWS was easy as we already have this website hosted in S3 and fronted by CloudFront.

I just added some steps to the build:

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      # Write your commands here
      rm index.yaml
  displayName: 'remove last index'
 
- task: HelmDeploy@0
  inputs:
    connectionType: 'None'
    command: 'repo'
    arguments: 'index --url https://freshbrewed.science/helm/ .'
  displayName: 'helm repo index (aws fb)'
 
- task: S3Upload@1
  inputs:
    awsCredentials: 'AWS-FB'
    regionName: 'us-east-1'
    bucketName: 'freshbrewed.science'
    sourceFolder: '$(Build.SourcesDirectory)'
    globExpressions: |
      *.tgz
      index.yaml
    targetFolder: 'helm'
    filesAcl: 'public-read'

We can now add and test it

builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm repo add freshbrewed https://freshbrewed.science/helm/
"freshbrewed" has been added to your repositories
builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "freshbrewed" chart repository
...Successfully got an update from the "kedacore" chart repository
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "myVncRepo" chart repository
...Successfully got an update from the "azure-samples" chart repository
...Successfully got an update from the "banzaicloud-stable" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
builder@DESKTOP-2SQ9NQM:~/Workspaces/Helm-Demo$ helm search repo freshbrewed
NAME            CHART VERSION   APP VERSION     DESCRIPTION
freshbrewed/vnc 0.1.615         latest          A Helm chart for Kubernetes VNC pod

Another way: The S3 plugin

You can host a private chart repo with a commonly used plugin

$ helm plugin install https://github.com/hypnoglow/helm-s3.git

Next we will create a bucket to hold the charts

$ aws s3 mb s3://idjhelmtest

Next let's add it as a Chart repo

$ helm repo add fbs3 s3://idjhelmtest/helm
"fbs3" has been added to your repositories

And we can push a local tgz of our chart into it

$ helm s3 push ./vnc-0.1.333.tgz fbs3

Lastly, we can update and search to prove it's indexed

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "azure-samples" chart repository
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "appdynamics-charts" chart repository
...Successfully got an update from the "harbor" chart repository
...Successfully got an update from the "vnc" chart repository
...Successfully got an update from the "fbs3" chart repository
...Successfully got an update from the "bitnami" chart repository
...Successfully got an update from the "banzaicloud-stable" chart repository

$ helm search repo fbs3
NAME     	CHART VERSION	APP VERSION	DESCRIPTION                        
fbs3/vnc	0.1.333      	latest     	A Helm chart for Kubernetes VNC pod

As a remote repo:

And of course, you can always use Azure Blob or S3 as a filestore and download the helm chart to install locally:

builder@DESKTOP-2SQ9NQM:~$ aws s3 ls s3://idjhelmtest/helm/
2020-06-09 15:20:26        386 helm
2020-06-09 15:24:33        408 index.yaml
2020-06-09 15:24:32       3071 vnc-0.1.333.tgz
builder@DESKTOP-2SQ9NQM:~$ aws s3 cp s3://idjhelmtest/helm/vnc-0.1.333.tgz ./
download: s3://idjhelmtest/helm/vnc-0.1.333.tgz to ./vnc-0.1.333.tgz
builder@DESKTOP-2SQ9NQM:~$ tar -xzvf vnc-0.1.333.tgz
vnc/Chart.yaml
vnc/values.yaml
vnc/templates/NOTES.txt
vnc/templates/_helpers.tpl
vnc/templates/deployment.yaml
vnc/templates/ingress.yaml
vnc/templates/service.yaml
vnc/templates/serviceaccount.yaml
vnc/templates/tests/test-connection.yaml
vnc/.helmignore
vnc/myvncdep.yaml

Lastly, install from the locally expanded path:

builder@DESKTOP-2SQ9NQM:~$ helm install ./vnc --generate-name
NAME: vnc-1591738811
LAST DEPLOYED: Tue Jun  9 16:40:13 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=vnc,app.kubernetes.io/instance=vnc-1591738811" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:6901 to use Web VNC to your pod"
  kubectl --namespace default port-forward $POD_NAME 5901:5901

Local charts

It might be handy to use a common cloud storage system like Box, DropBox, Google Drive or OneDrive.

For instance, I can copy and index my chart into a new onedrive folder in WSL:

builder@DESKTOP-2SQ9NQM:~$ mkdir /mnt/c/Users/isaac/OneDrive/helm
builder@DESKTOP-2SQ9NQM:~$ cp vnc-0.1.333.tgz /mnt/c/Users/isaac/OneDrive/helm
builder@DESKTOP-2SQ9NQM:~$ helm repo index /mnt/c/Users/isaac/OneDrive/helm/

With Helm 2 we had helm serve, but that was removed in favour of a plugin that exposes ChartMuseum.

builder@DESKTOP-2SQ9NQM:~$ helm version
version.BuildInfo{Version:"v3.2.3", GitCommit:"8f832046e258e2cb800894579b1b3b50c2d83492", GitTreeState:"clean", GoVersion:"go1.13.12"}
builder@DESKTOP-2SQ9NQM:~$ helm plugin install https://github.com/jdolitsky/helm-servecm
Installed plugin: servecm

It seems one cannot run it bare:

builder@DESKTOP-2SQ9NQM:~$ helm servecm
ChartMuseum not installed. Install latest stable release? (type "yes"): yes
Attempting to install ChartMuseum server (v0.12.0)...
Detected your os as "linux"
+ curl -LO https://s3.amazonaws.com/chartmuseum/release/v0.12.0/bin/linux/amd64/chartmuseum
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 52.5M  100 52.5M    0     0  2917k      0  0:00:18  0:00:18 --:--:-- 5208k
+ chmod +x ./chartmuseum
+ mv ./chartmuseum /usr/local/bin
mv: cannot move './chartmuseum' to '/usr/local/bin/chartmuseum': Permission denied
+ rm -rf /tmp/tmp.cgFL4VoBwx
Error: plugin "servecm" exited with error
builder@DESKTOP-2SQ9NQM:~$ sudo helm servecm
ChartMuseum not installed. Install latest stable release? (type "yes"): yes
Attempting to install ChartMuseum server (v0.12.0)...
Detected your os as "linux"
+ curl -LO https://s3.amazonaws.com/chartmuseum/release/v0.12.0/bin/linux/amd64/chartmuseum
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 52.5M  100 52.5M    0     0  5535k      0  0:00:09  0:00:09 --:--:-- 6174k
+ chmod +x ./chartmuseum
+ mv ./chartmuseum /usr/local/bin
+ set +x
2020-06-09 16:49:48.014344 I | Missing required flags(s): --storage
Error: plugin "servecm" exited with error

But once we have the right syntax, we can serve with ease:

builder@DESKTOP-2SQ9NQM:~$ helm servecm --port=8879 --context-path=/chart --storage="local" --storage-local-rootdir="/mnt/c/Users/isaac/OneDrive/helm"
2020-06-09T16:53:07.564-0500    INFO    Starting ChartMuseum    {"port": 8879}
builder@DESKTOP-2SQ9NQM:~$ helm repo add local http://127.0.0.1:8879/charts
"local" has been added to your repositories
builder@DESKTOP-2SQ9NQM:~$ helm search repo local
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
local/vnc               0.1.333         latest          A Helm chart for Kubernetes VNC pod
stable/magic-ip-address 0.1.0           0.9.0           A Helm chart to assign static IP addresses for 
Helm 3 : Creating and Sharing Charts

Cleanup

builder@DESKTOP-2SQ9NQM:~$ az aks delete -n testHelmAks -g testHelmRg
Are you sure you want to perform this operation? (y/n): y

Summary

Writing Helm charts is pretty easy.  We can also use Azure DevOps to easily package and deploy charts as well as build and deploy container images.  We then explored how to host charts in a Helm repository with Azure Blob storage and then three different ways with Amazon S3.  Lastly, we even showed hosting local charts with Helm 3’s ChartMuseum plugin (servecm).

Hopefully this will help provide options to teams looking to take on Helm and host their charts in a supportable and cost-effective fashion.

Whitesource Bolt for AzDO and Github

On a recommendation from a colleague, I decided to check out Whitesource Bolt, which was recently made free for developers and demoed at Microsoft Build 2020.  It integrates natively with Azure DevOps and GitHub.

They have a commercial offering that starts at $1260/user for “Whitesource for Developers” and $4200/user for “Whitesource Core”.  However, they do have this free offering.  I have to admit the pricing throws me (it seems like American pharma pricing with free samples). I have reached out for a demo/commercial license, and if that plays out, I’ll follow up this review with those findings.

Adding to Azure DevOps

Go to the Marketplace to add the plugin: https://marketplace.visualstudio.com/items?itemName=whitesource.ws-bolt

Whitesource Bolt for AzDO and Github

And then click “Get it free” to install it into your organization:

Whitesource Bolt for AzDO and Github

We should now see it in Settings/Extensions:

Whitesource Bolt for AzDO and Github

Using with NodeJS

Let’s start by doing this in a feature branch:

C:\Users\isaac\Workspaces\ghost-blog>git checkout -b feature/enable-whitesource-bolt
M       node_modules/.bin/atob
M       node_modules/.bin/bunyan
M       node_modules/.bin/css-beautify
…

Next, let’s add a step to our pipeline to scan:

- task: whitesource.ws-bolt.bolt.wss.WhiteSource Bolt@20
  displayName: 'WhiteSource Bolt'
  inputs:
    cwd: '$(Pipeline.Workspace)'
Whitesource Bolt for AzDO and Github

Next add and push:

C:\Users\isaac\Workspaces\ghost-blog>git add azure-pipelines.yml

C:\Users\isaac\Workspaces\ghost-blog>git commit -m "Add Whitesource Bolt"
[feature/enable-whitesource-bolt d7cee75c] Add Whitesource Bolt
 1 file changed, 5 insertions(+)

C:\Users\isaac\Workspaces\ghost-blog>git push --set-upstream origin feature/enable-whitesource-bolt
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 389 bytes | 194.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Analyzing objects... (3/3) (781 ms)
remote: Storing packfile... done (191 ms)
remote: Storing index... done (68 ms)
remote: We noticed you're using an older version of Git. For the best experience, upgrade to a newer version.
To https://princessking.visualstudio.com/ghost-blog/_git/ghost-blog
 * [new branch]        feature/enable-whitesource-bolt -> feature/enable-whitesource-bolt
Branch 'feature/enable-whitesource-bolt' set up to track remote branch 'feature/enable-whitesource-bolt' from 'origin'.

Next, we can either create a PR, which will trigger a build, or we can manually invoke a build on this branch (and our release stages should be skipped due to the branch restrictions).
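
My pipeline’s exact conditions aren’t shown here, but a typical branch restriction on a release stage is a sketch like this:

# only run the release stage for builds off master
- stage: Release
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))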

Manual Build

Choose Run Pipeline:

Whitesource Bolt for AzDO and Github

Then change the branch and choose run:

Whitesource Bolt for AzDO and Github

If we want, we can click “Stages to run” and manually remove the release stages (though I trust the conditions not to run, so I’ll leave them be):

Whitesource Bolt for AzDO and Github

If we just run it now, we’ll get an error:

2020-05-25T14:45:02.7683324Z ##[section]Starting: WhiteSource Bolt
2020-05-25T14:45:02.7690177Z ==============================================================================
2020-05-25T14:45:02.7690474Z Task         : WhiteSource Bolt
2020-05-25T14:45:02.7690790Z Description  : Detect & fix security vulnerabilities, problematic open source licenses.
2020-05-25T14:45:02.7691093Z Version      : 20.5.1
2020-05-25T14:45:02.7691290Z Author       : WhiteSource
2020-05-25T14:45:02.7691539Z Help         : http://www.whitesourcesoftware.com
2020-05-25T14:45:02.7691832Z ==============================================================================
2020-05-25T14:45:03.0790674Z (node:4269) Warning: Use Cipheriv for counter mode of aes-256-ctr
2020-05-25T14:45:03.0791264Z (node:4269) Warning: Use Cipheriv for counter mode of aes-256-ctr
2020-05-25T14:45:03.0791753Z (node:4269) Warning: Use Cipheriv for counter mode of aes-256-ctr
2020-05-25T14:45:03.0792255Z (node:4269) Warning: Use Cipheriv for counter mode of aes-256-ctr
2020-05-25T14:45:03.0792844Z (node:4269) Warning: Use Cipheriv for counter mode of aes-256-ctr
2020-05-25T14:45:03.0793344Z (node:4269) Warning: Use Cipheriv for counter mode of aes-256-ctr
2020-05-25T14:45:03.5787750Z WhiteSource Bolt hasn't been activated in collection 97b8c412-5dbf-4532-ba2d-4c54232b59b3. To activate please place a valid code in the WhiteSource Bolt hub under "Build and Release" tab.
2020-05-25T14:45:03.5788741Z Exiting Bolt build task.
2020-05-25T14:45:03.5837257Z ##[section]Finishing: WhiteSource Bolt

It’s not actually in "Build and Release" but rather under Pipelines where we need to set up the plugin:

Whitesource Bolt for AzDO and Github

When done

Whitesource Bolt for AzDO and Github

Now we can see results after a run:

Whitesource Bolt for AzDO and Github

Let’s take a look at some:

Whitesource Bolt for AzDO and Github

I then tried to mitigate these.  For example, I upgraded lodash to ^4.17.12:

Whitesource Bolt for AzDO and Github
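
For reference, the bump itself is just a package.json change; a sketch of doing it from the CLI (assuming npm):

$ npm install "lodash@^4.17.12" --save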

I even removed any and all “node_modules” that had errantly been checked in (which did speed up the build, but didn’t fix the error).

The problem is that without line numbers, it’s rather hard to figure out where to make the change.

License Checks

Another really good feature of Whitesource Bolt is the license check:

Whitesource Bolt for AzDO and Github

If my software were commercial, I would need to honor the GPL and open-source it.

That’s where the Inventory section can help identify which software is using the GPL:

Whitesource Bolt for AzDO and Github

This would mean I would need to hunt down that JS in my app, and since only the minified version is GPL’ed, I could minify it myself.

Go Lang

I tried to point it at a Golang build I have.  I tried both the parent dir and the folder containing my Go code, but neither would show results:

Whitesource Bolt for AzDO and Github
Whitesource Bolt for AzDO and Github

And lest you think my Go code has no OSS components:

Whitesource Bolt for AzDO and Github

You can see several imports from github.com that I would have expected to be flagged.

Dotnet Core

Let’s build a simple dotnet core app.

It’s easy to instantiate a new project

$ dotnet new console -o myApp
 
Welcome to .NET Core 3.1!
---------------------
SDK Version: 3.1.300
 
Telemetry
---------
The .NET Core tools collect usage data in order to help us improve your experience. The data is anonymous. It is collected by Microsoft and shared with the community. You can opt-out of telemetry by setting the DOTNET_CLI_TELEMETRY_OPTOUT environment variable to '1' or 'true' using your favorite shell.
 
Read more about .NET Core CLI Tools telemetry: https://aka.ms/dotnet-cli-telemetry
 
----------------
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
--------------------------------------------------------------------------------------
Getting ready...
The template "Console Application" was created successfully.
 
Processing post-creation actions...
Running 'dotnet restore' on myApp/myApp.csproj...
  Determining projects to restore...
  Restored /Users/johnsi10/Workspaces/Dotnet-Core/myApp/myApp.csproj (in 113 ms).
 
Restore succeeded.

And then add a third-party library:

$ dotnet add package YamlDotNet --version 8.1.1
  Determining projects to restore...
  Writing /var/folders/dp/wgg0qtcs2lv7j0vwx4fnrgq80000gp/T/tmp4UgGOT.tmp
info : Adding PackageReference for package 'YamlDotNet' into project '/Users/johnsi10/Workspaces/Dotnet-Core/myApp/myApp.csproj'.
info : Restoring packages for /Users/johnsi10/Workspaces/Dotnet-Core/myApp/myApp.csproj...
info :   GET https://api.nuget.org/v3-flatcontainer/yamldotnet/index.json
info :   OK https://api.nuget.org/v3-flatcontainer/yamldotnet/index.json 36ms
info :   GET https://api.nuget.org/v3-flatcontainer/yamldotnet/8.1.1/yamldotnet.8.1.1.nupkg
info :   OK https://api.nuget.org/v3-flatcontainer/yamldotnet/8.1.1/yamldotnet.8.1.1.nupkg 24ms
info : Installing YamlDotNet 8.1.1.
info : Package 'YamlDotNet' is compatible with all the specified frameworks in project '/Users/johnsi10/Workspaces/Dotnet-Core/myApp/myApp.csproj'.
info : PackageReference for package 'YamlDotNet' version '8.1.1' added to file '/Users/johnsi10/Workspaces/Dotnet-Core/myApp/myApp.csproj'.
info : Committing restore...
info : Writing assets file to disk. Path: /Users/johnsi10/Workspaces/Dotnet-Core/myApp/obj/project.assets.json
log  : Restored /Users/johnsi10/Workspaces/Dotnet-Core/myApp/myApp.csproj (in 835 ms).

To use this library we can update Program.cs:

$ cat Program.cs 
using System;
using System.IO;
using YamlDotNet.Serialization;
 
namespace myApp
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
			Console.WriteLine("The current time is " + DateTime.Now);
 
            var r = new StringReader(@"
scalar: my scalar
sequence:
  - one
  - two
  - three
");
            var deserializer = new DeserializerBuilder().Build();
            var yamlObject = deserializer.Deserialize(r);
 
            var serializer = new SerializerBuilder()
                .JsonCompatible()
                .Build();
 
            var json = serializer.Serialize(yamlObject);
 
            Console.WriteLine(json);
 
        }
    }
}
$ dotnet run
Hello World!
The current time is 5/26/2020 9:19:38 AM
{"scalar": "my scalar", "sequence": ["one", "two", "three"]}

We can now fire a build with Whitesource Bolt:

$ cat azure-pipelines.yml 
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
 
trigger:
- master
 
pool:
  vmImage: 'ubuntu-latest'
 
steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'
 
- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'
 
- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    projects: '**/*.csproj'
 
- task: WhiteSource Bolt@20
  inputs:
    cwd: 'myApp'

Results

Whitesource Bolt for AzDO and Github

I did need to run WS Bolt after my build step; run that way, it found the library and reported no issues.

Moving the scan to before the compile step proves the point:

Whitesource Bolt for AzDO and Github

Summary

The addition of WS Bolt scanning seemed to add between 47s and 1'40" to the build. Even if all one watches for is GPL licenses, I think that's a pretty good value.

The question is whether the "free license" is for commercial use or not (or if it's free for open-source/'self employed' developers).  

The commercial pricing:

Whitesource Bolt for AzDO and Github

Seems rather high for an annually licensed product. But again, I'll reserve judgement until I can see a demo. (At those prices we are in JFrog Enterprise and PRISMA territory, and with PRISMA I get Twistlock, while JFrog includes Xray as part of a full artifact-management suite.)

]]>
<![CDATA[Azure Arc for Kubernetes]]>Azure Arc for Kubernetes has finally gone out to public preview.  Like Arc for Servers, Arc for Kubernetes lets us add our external Kubernetes clusters, be they on-prem or in other clouds, into Azure to be managed with Policies, RBAC and GitOps deployments.

What I hope to discover is:

]]>
http://localhost:2368/azure-arc-for-kubernetes/5ec5e7e289856e5ccc52a4dfThu, 21 May 2020 03:19:52 GMT

Azure Arc for Kubernetes has finally gone out to public preview.  Like Arc for Servers, Arc for Kubernetes lets us add our external Kubernetes clusters, be they on-prem or in other clouds, into Azure to be managed with Policies, RBAC and GitOps deployments.

What I hope to discover is:

  1. What kind of clusters will it support? Old versions/new versions of k8s? ARM?
  2. Can i use Azure Pipelines Kubernetes Tasks with these?

Creating a CIVO K8s cluster

First, let’s get the CIVO client.  We need to install Ruby if we don’t have it already:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ sudo apt update && sudo apt install ruby-full
[sudo] password for builder:
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:3 https://packages.microsoft.com/repos/azure-cli bionic InRelease [3965 B]
Get:4 https://packages.microsoft.com/ubuntu/18.04/prod bionic InRelease [4003 B]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:6 https://packages.microsoft.com/repos/azure-cli bionic/main amd64 Packages [9314 B]
Get:7 https://packages.microsoft.com/ubuntu/18.04/prod bionic/main amd64 Packages [112 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:9 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [717 kB]
Get:10 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [947 kB]
Get:11 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [227 kB]
Get:12 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [41.9 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [322 kB]
Get:14 http://security.ubuntu.com/ubuntu bionic-security/restricted Translation-en [10.5 kB]
Get:15 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [54.9 kB]
Get:16 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [665 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic-updates/restricted Translation-en [13.7 kB]
Get:18 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1075 kB]
Get:19 http://security.ubuntu.com/ubuntu bionic-security/universe Translation-en [221 kB]
Get:20 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [7596 B]
Get:21 http://archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [334 kB]
Get:22 http://security.ubuntu.com/ubuntu bionic-security/multiverse Translation-en [2824 B]
Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [15.7 kB]
Get:24 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse Translation-en [6384 B]
Get:25 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [7516 B]
Get:26 http://archive.ubuntu.com/ubuntu bionic-backports/main Translation-en [4764 B]
Get:27 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [7484 B]
Get:28 http://archive.ubuntu.com/ubuntu bionic-backports/universe Translation-en [4436 B]
Fetched 5068 kB in 3s (1856 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
69 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  fonts-lato javascript-common libgmp-dev libgmpxx4ldbl libjs-jquery libruby2.5 rake ri ruby ruby-dev ruby-did-you-mean ruby-minitest ruby-net-telnet ruby-power-assert ruby-test-unit
  ruby2.5 ruby2.5-dev ruby2.5-doc rubygems-integration
Suggested packages:
  apache2 | lighttpd | httpd gmp-doc libgmp10-doc libmpfr-dev bundler
The following NEW packages will be installed:
  fonts-lato javascript-common libgmp-dev libgmpxx4ldbl libjs-jquery libruby2.5 rake ri ruby ruby-dev ruby-did-you-mean ruby-full ruby-minitest ruby-net-telnet ruby-power-assert
  ruby-test-unit ruby2.5 ruby2.5-dev ruby2.5-doc rubygems-integration
0 upgraded, 20 newly installed, 0 to remove and 69 not upgraded.
Need to get 8208 kB/8366 kB of archives.
After this operation, 48.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu bionic/main amd64 fonts-lato all 2.0-2 [2698 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 libgmpxx4ldbl amd64 2:6.1.2+dfsg-2 [8964 B]
Get:3 http://archive.ubuntu.com/ubuntu bionic/main amd64 libgmp-dev amd64 2:6.1.2+dfsg-2 [316 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic/main amd64 rubygems-integration all 1.11 [4994 B]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 ruby2.5 amd64 2.5.1-1ubuntu1.6 [48.6 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby amd64 1:2.5.1 [5712 B]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 rake all 12.3.1-1ubuntu0.1 [44.9 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby-did-you-mean all 1.2.0-2 [9700 B]
Get:9 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby-minitest all 5.10.3-1 [38.6 kB]
Get:10 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby-net-telnet all 0.1.1-2 [12.6 kB]
Get:11 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby-power-assert all 0.3.0-1 [7952 B]
Get:12 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby-test-unit all 3.2.5-1 [61.1 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libruby2.5 amd64 2.5.1-1ubuntu1.6 [3069 kB]
Get:14 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 ruby2.5-doc all 2.5.1-1ubuntu1.6 [1806 kB]
Get:15 http://archive.ubuntu.com/ubuntu bionic/universe amd64 ri all 1:2.5.1 [4496 B]
Get:16 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 ruby2.5-dev amd64 2.5.1-1ubuntu1.6 [63.7 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic/main amd64 ruby-dev amd64 1:2.5.1 [4604 B]
Get:18 http://archive.ubuntu.com/ubuntu bionic/universe amd64 ruby-full all 1:2.5.1 [2716 B]
Fetched 8208 kB in 3s (2550 kB/s)
Selecting previously unselected package fonts-lato.
(Reading database ... 78441 files and directories currently installed.)
Preparing to unpack .../00-fonts-lato_2.0-2_all.deb ...
Unpacking fonts-lato (2.0-2) ...
Selecting previously unselected package javascript-common.
Preparing to unpack .../01-javascript-common_11_all.deb ...
Unpacking javascript-common (11) ...
Selecting previously unselected package libgmpxx4ldbl:amd64.
Preparing to unpack .../02-libgmpxx4ldbl_2%3a6.1.2+dfsg-2_amd64.deb ...
Unpacking libgmpxx4ldbl:amd64 (2:6.1.2+dfsg-2) ...
Selecting previously unselected package libgmp-dev:amd64.
Preparing to unpack .../03-libgmp-dev_2%3a6.1.2+dfsg-2_amd64.deb ...
Unpacking libgmp-dev:amd64 (2:6.1.2+dfsg-2) ...
Selecting previously unselected package libjs-jquery.
Preparing to unpack .../04-libjs-jquery_3.2.1-1_all.deb ...
Unpacking libjs-jquery (3.2.1-1) ...
Selecting previously unselected package rubygems-integration.
Preparing to unpack .../05-rubygems-integration_1.11_all.deb ...
Unpacking rubygems-integration (1.11) ...
Selecting previously unselected package ruby2.5.
Preparing to unpack .../06-ruby2.5_2.5.1-1ubuntu1.6_amd64.deb ...
Unpacking ruby2.5 (2.5.1-1ubuntu1.6) ...
Selecting previously unselected package ruby.
Preparing to unpack .../07-ruby_1%3a2.5.1_amd64.deb ...
Unpacking ruby (1:2.5.1) ...
Selecting previously unselected package rake.
Preparing to unpack .../08-rake_12.3.1-1ubuntu0.1_all.deb ...
Unpacking rake (12.3.1-1ubuntu0.1) ...
Selecting previously unselected package ruby-did-you-mean.
Preparing to unpack .../09-ruby-did-you-mean_1.2.0-2_all.deb ...
Unpacking ruby-did-you-mean (1.2.0-2) ...
Selecting previously unselected package ruby-minitest.
Preparing to unpack .../10-ruby-minitest_5.10.3-1_all.deb ...
Unpacking ruby-minitest (5.10.3-1) ...
Selecting previously unselected package ruby-net-telnet.
Preparing to unpack .../11-ruby-net-telnet_0.1.1-2_all.deb ...
Unpacking ruby-net-telnet (0.1.1-2) ...
Selecting previously unselected package ruby-power-assert.
Preparing to unpack .../12-ruby-power-assert_0.3.0-1_all.deb ...
Unpacking ruby-power-assert (0.3.0-1) ...
Selecting previously unselected package ruby-test-unit.
Preparing to unpack .../13-ruby-test-unit_3.2.5-1_all.deb ...
Unpacking ruby-test-unit (3.2.5-1) ...
Selecting previously unselected package libruby2.5:amd64.
Preparing to unpack .../14-libruby2.5_2.5.1-1ubuntu1.6_amd64.deb ...
Unpacking libruby2.5:amd64 (2.5.1-1ubuntu1.6) ...
Selecting previously unselected package ruby2.5-doc.
Preparing to unpack .../15-ruby2.5-doc_2.5.1-1ubuntu1.6_all.deb ...
Unpacking ruby2.5-doc (2.5.1-1ubuntu1.6) ...
Selecting previously unselected package ri.
Preparing to unpack .../16-ri_1%3a2.5.1_all.deb ...
Unpacking ri (1:2.5.1) ...
Selecting previously unselected package ruby2.5-dev:amd64.
Preparing to unpack .../17-ruby2.5-dev_2.5.1-1ubuntu1.6_amd64.deb ...
Unpacking ruby2.5-dev:amd64 (2.5.1-1ubuntu1.6) ...
Selecting previously unselected package ruby-dev:amd64.
Preparing to unpack .../18-ruby-dev_1%3a2.5.1_amd64.deb ...
Unpacking ruby-dev:amd64 (1:2.5.1) ...
Selecting previously unselected package ruby-full.
Preparing to unpack .../19-ruby-full_1%3a2.5.1_all.deb ...
Unpacking ruby-full (1:2.5.1) ...
Setting up libjs-jquery (3.2.1-1) ...
Setting up fonts-lato (2.0-2) ...
Setting up ruby-did-you-mean (1.2.0-2) ...
Setting up ruby-net-telnet (0.1.1-2) ...
Setting up rubygems-integration (1.11) ...
Setting up ruby2.5-doc (2.5.1-1ubuntu1.6) ...
Setting up javascript-common (11) ...
Setting up libgmpxx4ldbl:amd64 (2:6.1.2+dfsg-2) ...
Setting up ruby-minitest (5.10.3-1) ...
Setting up ruby-power-assert (0.3.0-1) ...
Setting up libgmp-dev:amd64 (2:6.1.2+dfsg-2) ...
Setting up ruby2.5 (2.5.1-1ubuntu1.6) ...
Setting up ri (1:2.5.1) ...
Setting up ruby (1:2.5.1) ...
Setting up ruby-test-unit (3.2.5-1) ...
Setting up rake (12.3.1-1ubuntu0.1) ...
Setting up libruby2.5:amd64 (2.5.1-1ubuntu1.6) ...
Setting up ruby2.5-dev:amd64 (2.5.1-1ubuntu1.6) ...
Setting up ruby-dev:amd64 (1:2.5.1) ...
Setting up ruby-full (1:2.5.1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...

Next let’s install the CIVO client:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ sudo gem install civo_cli
Fetching: unicode-display_width-1.7.0.gem (100%)
Successfully installed unicode-display_width-1.7.0
Fetching: terminal-table-1.8.0.gem (100%)
Successfully installed terminal-table-1.8.0
Fetching: thor-1.0.1.gem (100%)
Successfully installed thor-1.0.1
Fetching: colorize-0.8.1.gem (100%)
Successfully installed colorize-0.8.1
Fetching: bundler-2.1.4.gem (100%)
Successfully installed bundler-2.1.4
Fetching: mime-types-data-3.2020.0512.gem (100%)
Successfully installed mime-types-data-3.2020.0512
Fetching: mime-types-3.3.1.gem (100%)
Successfully installed mime-types-3.3.1
Fetching: multi_json-1.14.1.gem (100%)
Successfully installed multi_json-1.14.1
Fetching: safe_yaml-1.0.5.gem (100%)
Successfully installed safe_yaml-1.0.5
Fetching: crack-0.4.3.gem (100%)
Successfully installed crack-0.4.3
Fetching: multipart-post-2.1.1.gem (100%)
Successfully installed multipart-post-2.1.1
Fetching: faraday-1.0.1.gem (100%)
Successfully installed faraday-1.0.1
Fetching: concurrent-ruby-1.1.6.gem (100%)
Successfully installed concurrent-ruby-1.1.6
Fetching: i18n-1.8.2.gem (100%)

HEADS UP! i18n 1.1 changed fallbacks to exclude default locale.
But that may break your application.

If you are upgrading your Rails application from an older version of Rails:

Please check your Rails app for 'config.i18n.fallbacks = true'.
If you're using I18n (>= 1.1.0) and Rails (< 5.2.2), this should be
'config.i18n.fallbacks = [I18n.default_locale]'.
If not, fallbacks will be broken in your app by I18n 1.1.x.

If you are starting a NEW Rails application, you can ignore this notice.

For more info see:
https://github.com/svenfuchs/i18n/releases/tag/v1.1.0

Successfully installed i18n-1.8.2
Fetching: thread_safe-0.3.6.gem (100%)
Successfully installed thread_safe-0.3.6
Fetching: tzinfo-1.2.7.gem (100%)
Successfully installed tzinfo-1.2.7
Fetching: zeitwerk-2.3.0.gem (100%)
Successfully installed zeitwerk-2.3.0
Fetching: activesupport-6.0.3.1.gem (100%)
Successfully installed activesupport-6.0.3.1
Fetching: flexirest-1.9.15.gem (100%)
Successfully installed flexirest-1.9.15
Fetching: parslet-1.8.2.gem (100%)
Successfully installed parslet-1.8.2
Fetching: toml-0.2.0.gem (100%)
Successfully installed toml-0.2.0
Fetching: highline-2.0.3.gem (100%)
Successfully installed highline-2.0.3
Fetching: commander-4.5.2.gem (100%)
Successfully installed commander-4.5.2
Fetching: civo-1.3.3.gem (100%)
Successfully installed civo-1.3.3
Fetching: civo_cli-0.5.9.gem (100%)
Successfully installed civo_cli-0.5.9
Parsing documentation for unicode-display_width-1.7.0
Installing ri documentation for unicode-display_width-1.7.0
Parsing documentation for terminal-table-1.8.0
Installing ri documentation for terminal-table-1.8.0
Parsing documentation for thor-1.0.1
Installing ri documentation for thor-1.0.1
Parsing documentation for colorize-0.8.1
Installing ri documentation for colorize-0.8.1
Parsing documentation for bundler-2.1.4
Installing ri documentation for bundler-2.1.4
Parsing documentation for mime-types-data-3.2020.0512
Installing ri documentation for mime-types-data-3.2020.0512
Parsing documentation for mime-types-3.3.1
Installing ri documentation for mime-types-3.3.1
Parsing documentation for multi_json-1.14.1
Installing ri documentation for multi_json-1.14.1
Parsing documentation for safe_yaml-1.0.5
Installing ri documentation for safe_yaml-1.0.5
Parsing documentation for crack-0.4.3
Installing ri documentation for crack-0.4.3
Parsing documentation for multipart-post-2.1.1
Installing ri documentation for multipart-post-2.1.1
Parsing documentation for faraday-1.0.1
Installing ri documentation for faraday-1.0.1
Parsing documentation for concurrent-ruby-1.1.6
Installing ri documentation for concurrent-ruby-1.1.6
Parsing documentation for i18n-1.8.2
Installing ri documentation for i18n-1.8.2
Parsing documentation for thread_safe-0.3.6
Installing ri documentation for thread_safe-0.3.6
Parsing documentation for tzinfo-1.2.7
Installing ri documentation for tzinfo-1.2.7
Parsing documentation for zeitwerk-2.3.0
Installing ri documentation for zeitwerk-2.3.0
Parsing documentation for activesupport-6.0.3.1
Installing ri documentation for activesupport-6.0.3.1
Parsing documentation for flexirest-1.9.15
Installing ri documentation for flexirest-1.9.15
Parsing documentation for parslet-1.8.2
Installing ri documentation for parslet-1.8.2
Parsing documentation for toml-0.2.0
Installing ri documentation for toml-0.2.0
Parsing documentation for highline-2.0.3
Installing ri documentation for highline-2.0.3
Parsing documentation for commander-4.5.2
Installing ri documentation for commander-4.5.2
Parsing documentation for civo-1.3.3
Installing ri documentation for civo-1.3.3
Parsing documentation for civo_cli-0.5.9
Installing ri documentation for civo_cli-0.5.9
Done installing documentation for unicode-display_width, terminal-table, thor, colorize, bundler, mime-types-data, mime-types, multi_json, safe_yaml, crack, multipart-post, faraday, concurrent-ruby, i18n, thread_safe, tzinfo, zeitwerk, activesupport, flexirest, parslet, toml, highline, commander, civo, civo_cli after 21 seconds
25 gems installed

Verification that it’s installed:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ civo help
Commands:
/var/lib/gems/2.5.0/gems/thor-1.0.1/lib/thor/shell/basic.rb:399: warning: Insecure world writable dir /mnt/c in PATH, mode 040777
  civo apikey          # manage API keys stored in the client
  civo applications    # list and add marketplace applications to Kubernetes clusters. Alias: apps, addons, marketplace, k8s-apps, k3s-apps
  civo blueprint       # manage blueprints
  civo domain          # manage DNS domains
  civo domainrecord    # manage domain name DNS records for a domain
  civo firewall        # manage firewalls
  civo help [COMMAND]  # Describe available commands or one specific command
  civo instance        # manage instances
  civo kubernetes      # manage Kubernetes. Aliases: k8s, k3s
  civo loadbalancer    # manage load balancers
  civo network         # manage networks
  civo quota           # view the quota for the active account
  civo region          # manage regions
  civo size            # manage sizes
  civo snapshot        # manage snapshots
  civo sshkey          # manage uploaded SSH keys
  civo template        # manage templates
  civo update          # update to the latest Civo CLI
  civo version         # show the version of Civo CLI used
  civo volume          # manage volumes

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ civo version
You are running the current v0.5.9 of Civo CLI

Next we need to log in to CIVO and get/set our API key. It’s now shown under Other/Security:

Azure Arc for Kubernetes

I have since reset it, but as you see, right now it’s listed as: I2fTuGRQYognBHZ0MdXhtJAlPD7ViU5q68yCepzS9OFLasbvWK

We now need to save that in the client:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ civo apikey add K8s I2fTuGRQYognBHZ0MdXhtJAlPD7ViU5q68yCepzS9OFLasbvWK
Saved the API Key I2fTuGRQYognBHZ0MdXhtJAlPD7ViU5q68yCepzS9OFLasbvWK as K8s
The current API Key is now K8s

We’ll need to refresh our memory on CIVO sizes before moving on:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ civo sizes
+------------+----------------------------------------------------+-----+----------+-----------+
| Name       | Description                                        | CPU | RAM (MB) | Disk (GB) |
+------------+----------------------------------------------------+-----+----------+-----------+
| g2.xsmall  | Extra Small - 1GB RAM, 1 CPU Core, 25GB SSD Disk   | 1   | 1024     | 25        |
| g2.small   | Small - 2GB RAM, 1 CPU Core, 25GB SSD Disk         | 1   | 2048     | 25        |
| g2.medium  | Medium - 4GB RAM, 2 CPU Cores, 50GB SSD Disk       | 2   | 4096     | 50        |
| g2.large   | Large - 8GB RAM, 4 CPU Cores, 100GB SSD Disk       | 4   | 8192     | 100       |
| g2.xlarge  | Extra Large - 16GB RAM, 6 CPU Core, 150GB SSD Disk | 6   | 16386    | 150       |
| g2.2xlarge | 2X Large - 32GB RAM, 8 CPU Core, 200GB SSD Disk    | 8   | 32768    | 200       |
+------------+----------------------------------------------------+-----+----------+-----------+

And the costs haven’t changed since the last time we blogged:

Azure Arc for Kubernetes

Lastly we’ll pick a version. Since our last post, they’ve added 1.17.2 in preview:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ civo kubernetes versions
+-------------+-------------+---------+
| Version     | Type        | Default |
+-------------+-------------+---------+
| 1.17.2+k3s1 | development |         |
| 1.0.0       | stable      | <=====  |
| 0.10.2      | deprecated  |         |
| 0.10.0      | deprecated  |         |
| 0.9.1       | deprecated  |         |
| 0.8.1       | legacy      |         |
+-------------+-------------+---------+

And like before, we can review applications we can install at the same time:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ civo applications list
+----------------------+-------------+--------------+-----------------+--------------+
| Name                 | Version     | Category     | Plans           | Dependencies |
+----------------------+-------------+--------------+-----------------+--------------+
| cert-manager         | v0.11.0     | architecture | Not applicable  | Helm         |
| Helm                 | 2.14.3      | management   | Not applicable  |              |
| Jenkins              | 2.190.1     | ci_cd        | 5GB, 10GB, 20GB | Longhorn     |
| KubeDB               | v0.12.0     | database     | Not applicable  | Longhorn     |
| Kubeless             | 1.0.5       | architecture | Not applicable  |              |
| kubernetes-dashboard | v2.0.0-rc7  | management   | Not applicable  |              |
| Linkerd              | 2.5.0       | architecture | Not applicable  |              |
| Longhorn             | 0.7.0       | storage      | Not applicable  |              |
| Maesh                | Latest      | architecture | Not applicable  | Helm         |
| MariaDB              | 10.4.7      | database     | 5GB, 10GB, 20GB | Longhorn     |
| metrics-server       | (default)   | architecture | Not applicable  |              |
| MinIO                | 2019-08-29  | storage      | 5GB, 10GB, 20GB | Longhorn     |
| MongoDB              | 4.2.0       | database     | 5GB, 10GB, 20GB | Longhorn     |
| OpenFaaS             | 0.18.0      | architecture | Not applicable  | Helm         |
| Portainer            | beta        | management   | Not applicable  |              |
| PostgreSQL           | 11.5        | database     | 5GB, 10GB, 20GB | Longhorn     |
| prometheus-operator  | 0.35.0      | monitoring   | Not applicable  |              |
| Rancher              | v2.3.0      | management   | Not applicable  |              |
| Redis                | 3.2         | database     | Not applicable  |              |
| Selenium             | 3.141.59-r1 | ci_cd        | Not applicable  |              |
| Traefik              | (default)   | architecture | Not applicable  |              |
+----------------------+-------------+--------------+-----------------+--------------+

I notice that since February they dropped Klum but added Portainer and upgraded prometheus-operator to 0.35.0 (from 0.34.0).

The last step is to create our cluster:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ civo kubernetes create MyNextCluster --size g2.medium --applications=Kubernetes-dashboard --nodes=2 --wait=true --save
Building new Kubernetes cluster MyNextCluster: Done
Created Kubernetes cluster MyNextCluster in 02 min 01 sec
KUBECONFIG=~/.kube/config:/tmp/import_kubeconfig20200520-3031-r5ix5y kubectl config view --flatten
/var/lib/gems/2.5.0/gems/civo_cli-0.5.9/lib/kubernetes.rb:350: warning: Insecure world writable dir /mnt/c in PATH, mode 040777
Saved config to ~/.kube/config
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
kube-node-8727     Ready    <none>   22s   v1.16.3-k3s.2
kube-master-e748   Ready    master   33s   v1.16.3-k3s.2

Azure Setup

First, log in to Azure:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az login
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code FF64MDUEP to authenticate.
…

Create a Resource Group we’ll use to contain our clusters:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az group create --name myArcClusters --location centralus
{
  "id": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8/resourceGroups/myArcClusters",
  "location": "centralus",
  "managedBy": null,
  "name": "myArcClusters",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

Next, let’s create a Service Principal for Arc:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az ad sp create-for-RBAC --skip-assignment --name "https://azure-arc-for-k8s-onboarding"
{
  "appId": "9f7e213f-3b03-411e-b874-ab272e46de41",
  "displayName": "azure-arc-for-k8s-onboarding",
  "name": "https://azure-arc-for-k8s-onboarding",
  "password": "4545458-aaee-3344-4321-12345678ad",
  "tenant": "d73a39db-6eda-495d-8000-7579f56d68b7"
}

Give it access to our subscription:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az role assignment create --role 34e09817-6cbe-4d01-b1a2-e0eac5743d41 --assignee 9f7e213f-3b03-411e-b874-ab272e46de41 --scope /subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8
{
  "canDelegate": null,
  "id": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8/providers/Microsoft.Authorization/roleAssignments/c2cc591e-adca-4f91-af83-bdcea56a56f2",
  "name": "c2cc591e-adca-4f91-af83-bdcea56a56f2",
  "principalId": "e3349b46-3444-4f4a-b9f8-c36a8cf0bf38",
  "principalType": "ServicePrincipal",
  "roleDefinitionId": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8/providers/Microsoft.Authorization/roleDefinitions/34e09817-6cbe-4d01-b1a2-e0eac5743d41",
  "scope": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8",
  "type": "Microsoft.Authorization/roleAssignments"
}

Next we need to log in as that Service Principal:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az login --service-principal -u 9f7e213f-3b03-411e-b874-ab272e46de41 -p 4545458-aaee-3344-4321-12345678ad --tenant d73a39db-6eda-495d-8000-7579f56d68b7
[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "d73a39db-6eda-495d-8000-7579f56d68b7",
    "id": "70b42e6a-asdf-asdf-asdf-9f3995b1aca8",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Visual Studio Enterprise Subscription",
    "state": "Enabled",
    "tenantId": "d73a39db-6eda-495d-8000-7579f56d68b7",
    "user": {
      "name": "9f7e213f-3b03-411e-b874-ab272e46de41",
      "type": "servicePrincipal"
    }
  }
]

If your Azure CLI is out of date, you may get an error:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az connectedk8s connect -n myCIVOCluster -g myArcClusters
az: 'connectedk8s' is not in the 'az' command group. See 'az --help'. If the command is from an extension, please make sure the corresponding extension is installed. To learn more about extensions, please visit https://docs.microsoft.com/en-us/cli/azure/azure-cli-extensions-overview
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az version
This command is in preview. It may be changed/removed in a future release.
{
  "azure-cli": "2.3.1",
  "azure-cli-command-modules-nspkg": "2.0.3",
  "azure-cli-core": "2.3.1",
  "azure-cli-nspkg": "3.0.4",
  "azure-cli-telemetry": "1.0.4",
  "extensions": {
    "azure-devops": "0.18.0"
  }
}

Updating is easy:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ sudo apt-get update && sudo apt-get install azure-cli
[sudo] password for builder:
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:3 https://packages.microsoft.com/repos/azure-cli bionic InRelease
Hit:4 https://packages.microsoft.com/ubuntu/18.04/prod bionic InRelease
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [334 kB]
Fetched 586 kB in 1s (396 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  azure-cli
1 upgraded, 0 newly installed, 0 to remove and 68 not upgraded.
Need to get 47.7 MB of archives.
After this operation, 23.5 MB of additional disk space will be used.
Get:1 https://packages.microsoft.com/repos/azure-cli bionic/main amd64 azure-cli all 2.6.0-1~bionic [47.7 MB]
Fetched 47.7 MB in 1s (32.0 MB/s)
(Reading database ... 94395 files and directories currently installed.)
Preparing to unpack .../azure-cli_2.6.0-1~bionic_all.deb ...
Unpacking azure-cli (2.6.0-1~bionic) over (2.3.1-1~bionic) ...
Setting up azure-cli (2.6.0-1~bionic) ...
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az version
{
  "azure-cli": "2.6.0",
  "azure-cli-command-modules-nspkg": "2.0.3",
  "azure-cli-core": "2.6.0",
  "azure-cli-nspkg": "3.0.4",
  "azure-cli-telemetry": "1.0.4",
  "extensions": {
    "azure-devops": "0.18.0"
  }
}

We need to register two providers. Note that I had permission issues here, so I had to elevate the role of the SP in my subscription to get it done (a sketch of that elevation follows the register commands):

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az provider register --namespace Microsoft.Kubernetes
Registering is still on-going. You can monitor using 'az provider show -n Microsoft.Kubernetes'
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az provider register --namespace Microsoft.KubernetesConfiguration
Registering is still on-going. You can monitor using 'az provider show -n Microsoft.KubernetesConfiguration'
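
For completeness, the elevation I mentioned above was along these lines (a sketch; the Contributor role is an assumption and a narrower role may suffice):

$ az role assignment create \
    --assignee 9f7e213f-3b03-411e-b874-ab272e46de41 \
    --role Contributor \
    --scope /subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8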

Verification

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az provider show -n Microsoft.Kubernetes -o table
Namespace             RegistrationPolicy    RegistrationState
--------------------  --------------------  -------------------
Microsoft.Kubernetes  RegistrationRequired  Registered
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az provider show -n Microsoft.KubernetesConfiguration -o table
Namespace                          RegistrationPolicy    RegistrationState
---------------------------------  --------------------  -------------------
Microsoft.KubernetesConfiguration  RegistrationRequired  Registered

Install and update to the latest version of the extensions:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az extension add --name connectedk8s
The installed extension 'connectedk8s' is in preview.
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az extension add --name k8sconfiguration
The installed extension 'k8sconfiguration' is in preview.
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az extension update --name connectedk8s
No updates available for 'connectedk8s'. Use --debug for more information.
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az extension update --name k8sconfiguration
No updates available for 'k8sconfiguration'. Use --debug for more information.

Onboarding to Arc

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az connectedk8s connect -n myCIVOCluster -g myArcClusters
Command group 'connectedk8s' is in preview. It may be changed/removed in a future release.
Ensure that you have the latest helm version installed before proceeding.
This operation might take a while...

Connected cluster resource creation is supported only in the following locations: eastus, westeurope. Use the --location flag to specify one of these locations.

That would have been good to know. Let’s create a new resource group in eastus and do it again:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az group create --name myArcClusters2 --location eastus
{
  "id": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8/resourceGroups/myArcClusters2",
  "location": "eastus",
  "managedBy": null,
  "name": "myArcClusters2",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

Let’s add it!

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az connectedk8s connect -n myCIVOCluster -g myArcClusters2
Command group 'connectedk8s' is in preview. It may be changed/removed in a future release.
Ensure that you have the latest helm version installed before proceeding.
This operation might take a while...

{- Finished ..
  "aadProfile": {
    "clientAppId": "",
    "serverAppId": "",
    "tenantId": ""
  },
  "agentPublicKeyCertificate": "MIICCgKCAgEAsux97vWSl97JnRSsED6UyXq7TtIOBSR/eSy8fGAfGHL1wUCYQQeYI+8bgqSN0n7fvIELalrQ8efvCLdNE3C8ocBC/0RpfjRrIpdg7LPXrCCAA1hvR2PymNMWSyAJcPjyCPu5jbKkRSj4XSSLk1sVgVeJpWXjl4ZKZluwVEGD1EWA3Xwd43fwI8gYNKu55qpyO/knUJngoMV3jwnE0UJuEMzbVyoeKWNQJJe4g0vflA5j4cvU7QYBW6XRGGKiFYgWq1053kzvVuucZIl0QPt9hZU8NbbMnU/b8NPLIue4S/w7cZobpYf46CV2ZzySZ0OIe+kcZ7T8v0yw+i+77C76OcDwDaojov+DBEMLJ7AJtoQ/dwf9JebVO92ZuWFtHbdT3M3222xF2R1KWGVXxmOnqEtB3D+zsNpcSp8abKOH8du9l07Jts1ss2j24Lqf/QFVDrBHUTxidvLENFSCh4qQ2npfJq/v/ndLUv4qPXyRmbNLiFn5/+4shhSQBPBj+GopzMdkZn6Zoh4uPGX77svBS5jmBjyDYj0OIJobt/cfPwQzn9iV3/t2ED7IHauT5WGrGPeqFyF/43YAkanPOQUWDaWy2YLHyrR12Hh2KHLfVq/9k9VvwlcJ0Md96QOhqicBqGDZRR9YDoQjVkOP1Gb11AStvvMmeumWfeCqkVsBy9ECAwEAAQ==",
  "agentVersion": null,
  "id": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8/resourceGroups/myArcClusters2/providers/Microsoft.Kubernetes/connectedClusters/myCIVOCluster",
  "identity": {
    "principalId": "23947e5a-0b1b-45fc-a265-7cad37625959",
    "tenantId": "d73a39db-6eda-495d-8000-7579f56d68b7",
    "type": "SystemAssigned"
  },
  "kubernetesVersion": null,
  "location": "eastus",
  "name": "myCIVOCluster",
  "provisioningState": "Succeeded",
  "resourceGroup": "myArcClusters2",
  "tags": null,
  "totalNodeCount": null,
  "type": "Microsoft.Kubernetes/connectedClusters"
}
Azure Arc for Kubernetes

The details are missing until we add at least one tag under Tags.

Azure Arc for Kubernetes

We can see the pods are running fine:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl get pods -n azure-arc
NAME                                         READY   STATUS    RESTARTS   AGE
flux-logs-agent-6989466ff6-f7znc             2/2     Running   0          20m
metrics-agent-bb65d876-gh7bd                 2/2     Running   0          20m
clusteridentityoperator-5649f66cf8-spngt     3/3     Running   0          20m
config-agent-dcf745b57-lsnj7                 3/3     Running   0          20m
controller-manager-98947d4f6-lg2hw           3/3     Running   0          20m
resource-sync-agent-75f6885587-d76mf         3/3     Running   0          20m
cluster-metadata-operator-5959f77f6d-7hqtf   2/2     Running   0          20m

The pods look good; the deployments and the Helm release check out too:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl get deployments -n azure-arc
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
flux-logs-agent             1/1     1            1           30m
metrics-agent               1/1     1            1           30m
clusteridentityoperator     1/1     1            1           30m
config-agent                1/1     1            1           30m
controller-manager          1/1     1            1           30m
resource-sync-agent         1/1     1            1           30m
cluster-metadata-operator   1/1     1            1           30m


builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ helm list
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                                                                                               APP VERSION
azure-arc       default         1               2020-05-20 08:45:13.2639004 -0500 CDT   deployed        azure-arc-k8sagents-0.2.2                                                                           1.0

Let’s stop here, and tear down this cluster.

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ civo kubernetes list
+--------------------------------------+---------------+---------+-----------+---------+--------+
| ID                                   | Name          | # Nodes | Size      | Version | Status |
+--------------------------------------+---------------+---------+-----------+---------+--------+
| 88f155ad-08e2-4935-8f2a-788deebaa451 | MyNextCluster | 2       | g2.medium | 1.0.0   | ACTIVE |
+--------------------------------------+---------------+---------+-----------+---------+--------+
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ civo kubernetes remove 88f155ad-08e2-4935-8f2a-788deebaa451
Removing Kubernetes cluster MyNextCluster

*update* The problem was not having tags.  For some reason, Arc for K8s needs at least one tag to function properly.  Once I set a tag (I used cloudProvider = CIVO), it loaded just fine.

Azure Arc for Kubernetes
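
The same tag could also be set from the CLI instead of the portal (a sketch; the subscription ID is abbreviated as elsewhere in this post, and note that az resource tag replaces the existing tag set):

$ az resource tag --tags cloudProvider=CIVO \
    --ids /subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8/resourceGroups/myArcClusters2/providers/Microsoft.Kubernetes/connectedClusters/myCIVOCluster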

AWS EKS

Let’s keep it simple and use a Quickstart guide to launch EKS into a new VPC: https://aws.amazon.com/quickstart/architecture/amazon-eks/

Azure Arc for Kubernetes

I did set Allow external access to 0.0.0.0/0:

Azure Arc for Kubernetes
Azure Arc for Kubernetes

And set the EKS public access endpoint to Enabled:

Azure Arc for Kubernetes

We can now see the EKS cluster via the CLI:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ aws eks list-clusters --region us-east-2
{
    "clusters": [
        "EKS-YVA1LL5Y"
    ]
}
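
As a quick sanity check on the public endpoint setting (a sketch; the JMESPath query shape is an assumption):

$ aws eks describe-cluster --name EKS-YVA1LL5Y --region us-east-2 \
    --query 'cluster.resourcesVpcConfig.endpointPublicAccess'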

We can now download the kubeconfig:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ aws --region us-east-2 eks update-kubeconfig --name EKS-YVA1LL5Y
Added new context arn:aws:eks:us-east-2:01234512345:cluster/EKS-YVA1LL5Y to /home/builder/.kube/config
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   73m
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl get nodes
NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-0-74.us-east-2.compute.internal    Ready    <none>   58m   v1.15.10-eks-bac369
ip-10-0-61-98.us-east-2.compute.internal   Ready    <none>   58m   v1.15.10-eks-bac369
ip-10-0-82-16.us-east-2.compute.internal   Ready    <none>   58m   v1.15.10-eks-bac369

Onboarding to Azure Arc

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az connectedk8s connect -n myEKSCluster -g myArcClusters2
Command group 'connectedk8s' is in preview. It may be changed/removed in a future release.
Ensure that you have the latest helm version installed before proceeding.
This operation might take a while...

{- Finished ..
  "aadProfile": {
    "clientAppId": "",
    "serverAppId": "",
    "tenantId": ""
  },
  "agentPublicKeyCertificate": "MIICCgKCAgEAlIaZLZ0eHd1zwSlXflD8L5tQcH8MAF/t3tHOewN4+6EsVW6wtwHL8rup+1i/1jbnArKHFrMCPpok+JCkxXvTgWsD5VSnZxDg/dkkACHw1I2RTq+9Q7E4SwT9QxreuxTejyLI/0w+kCnNwWlAvS6I4J/M709Bvz6KhxzFCWsX5fgRCoIO5zE0wjlxGG0s4c4f1hmHz3Mb4utPaX4gk6+Bo5MZs5EVNbhuHKdasDvJWe1itrtKVKQh7LfWICb1M34Kmo+TPhADKcoV4Ltaxb3VHnHLZpStGfcFI5SqTppn5dULOcGGob2wG8Mlw2INy17QUsyZ728qIKrBaNaJXgeQmZ93xUh7H0IBjV7VXuCOScsDBP7Cx5KZnXT7uWPRLjWCzYlOodCNeCMnBAabEGPECbUq3juvkmrbYiktjn6ZnWm32bDEYqiobqyFz3XMZVkK1zTnrVY0Bguc+n3RHXQeKKHZLyg84Cm1p83uadKFB7RZdCmBw5a8rS4PJEF+bpgPgS5FEcuNa5qp35BaWKTPifiDx9vo8uIADtEwSqpsrxqDnTf2gm8ORoww3vNfTO9WdiAzpBMDToJQamauVzGLB/HFO7pfrvjzRTdeKjamv8lzI/IyStr6Wf2QHAZVF95XfmmZ0iCHmkVHRwbQE4IABUeyZPzB6VKJD3yC9K8q4EsCAwEAAQ==",
  "agentVersion": null,
  "id": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8/resourceGroups/myArcClusters2/providers/Microsoft.Kubernetes/connectedClusters/myEKSCluster",
  "identity": {
    "principalId": "e5c729a1-ef9c-4839-89cd-a850536a400a",
    "tenantId": "d73a39db-6eda-495d-8000-7579f56d68b7",
    "type": "SystemAssigned"
  },
  "kubernetesVersion": null,
  "location": "eastus",
  "name": "myEKSCluster",
  "provisioningState": "Succeeded",
  "resourceGroup": "myArcClusters2",
  "tags": null,
  "totalNodeCount": null,
  "type": "Microsoft.Kubernetes/connectedClusters"
}

Linode LKE

Since my last blog entry, the LKE CLI can now create clusters.

First, let’s get a list of types:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ linode-cli linodes types
┌──────────────────┬──────────────────────────────────┬───────────┬─────────┬────────┬───────┬─────────────┬──────────┬────────┬─────────┬──────┐
│ id               │ label                            │ class     │ disk    │ memory │ vcpus │ network_out │ transfer │ hourly │ monthly │ gpus │
├──────────────────┼──────────────────────────────────┼───────────┼─────────┼────────┼───────┼─────────────┼──────────┼────────┼─────────┼──────┤
│ g6-nanode-1      │ Nanode 1GB                       │ nanode    │ 25600   │ 1024   │ 1     │ 1000        │ 1000     │ 0.0075 │ 5.0     │ 0    │
│ g6-standard-1    │ Linode 2GB                       │ standard  │ 51200   │ 2048   │ 1     │ 2000        │ 2000     │ 0.015  │ 10.0    │ 0    │
│ g6-standard-2    │ Linode 4GB                       │ standard  │ 81920   │ 4096   │ 2     │ 4000        │ 4000     │ 0.03   │ 20.0    │ 0    │
│ g6-standard-4    │ Linode 8GB                       │ standard  │ 163840  │ 8192   │ 4     │ 5000        │ 5000     │ 0.06   │ 40.0    │ 0    │
│ g6-standard-6    │ Linode 16GB                      │ standard  │ 327680  │ 16384  │ 6     │ 6000        │ 8000     │ 0.12   │ 80.0    │ 0    │
│ g6-standard-8    │ Linode 32GB                      │ standard  │ 655360  │ 32768  │ 8     │ 7000        │ 16000    │ 0.24   │ 160.0   │ 0    │
│ g6-standard-16   │ Linode 64GB                      │ standard  │ 1310720 │ 65536  │ 16    │ 9000        │ 20000    │ 0.48   │ 320.0   │ 0    │
│ g6-standard-20   │ Linode 96GB                      │ standard  │ 1966080 │ 98304  │ 20    │ 10000       │ 20000    │ 0.72   │ 480.0   │ 0    │
│ g6-standard-24   │ Linode 128GB                     │ standard  │ 2621440 │ 131072 │ 24    │ 11000       │ 20000    │ 0.96   │ 640.0   │ 0    │
│ g6-standard-32   │ Linode 192GB                     │ standard  │ 3932160 │ 196608 │ 32    │ 12000       │ 20000    │ 1.44   │ 960.0   │ 0    │
│ g6-highmem-1     │ Linode 24GB                      │ highmem   │ 20480   │ 24576  │ 1     │ 5000        │ 5000     │ 0.09   │ 60.0    │ 0    │
│ g6-highmem-2     │ Linode 48GB                      │ highmem   │ 40960   │ 49152  │ 2     │ 6000        │ 6000     │ 0.18   │ 120.0   │ 0    │
│ g6-highmem-4     │ Linode 90GB                      │ highmem   │ 92160   │ 92160  │ 4     │ 7000        │ 7000     │ 0.36   │ 240.0   │ 0    │
│ g6-highmem-8     │ Linode 150GB                     │ highmem   │ 204800  │ 153600 │ 8     │ 8000        │ 8000     │ 0.72   │ 480.0   │ 0    │
│ g6-highmem-16    │ Linode 300GB                     │ highmem   │ 348160  │ 307200 │ 16    │ 9000        │ 9000     │ 1.44   │ 960.0   │ 0    │
│ g6-dedicated-2   │ Dedicated 4GB                    │ dedicated │ 81920   │ 4096   │ 2     │ 4000        │ 4000     │ 0.045  │ 30.0    │ 0    │
│ g6-dedicated-4   │ Dedicated 8GB                    │ dedicated │ 163840  │ 8192   │ 4     │ 5000        │ 5000     │ 0.09   │ 60.0    │ 0    │
│ g6-dedicated-8   │ Dedicated 16GB                   │ dedicated │ 327680  │ 16384  │ 8     │ 6000        │ 6000     │ 0.18   │ 120.0   │ 0    │
│ g6-dedicated-16  │ Dedicated 32GB                   │ dedicated │ 655360  │ 32768  │ 16    │ 7000        │ 7000     │ 0.36   │ 240.0   │ 0    │
│ g6-dedicated-32  │ Dedicated 64GB                   │ dedicated │ 1310720 │ 65536  │ 32    │ 8000        │ 8000     │ 0.72   │ 480.0   │ 0    │
│ g6-dedicated-48  │ Dedicated 96GB                   │ dedicated │ 1966080 │ 98304  │ 48    │ 9000        │ 9000     │ 1.08   │ 720.0   │ 0    │
│ g1-gpu-rtx6000-1 │ Dedicated 32GB + RTX6000 GPU x1  │ gpu       │ 655360  │ 32768  │ 8     │ 10000       │ 16000    │ 1.5    │ 1000.0  │ 1    │
│ g1-gpu-rtx6000-2 │ Dedicated 64GB + RTX6000 GPU x2  │ gpu       │ 1310720 │ 65536  │ 16    │ 10000       │ 20000    │ 3.0    │ 2000.0  │ 2    │
│ g1-gpu-rtx6000-3 │ Dedicated 96GB + RTX6000 GPU x3  │ gpu       │ 1966080 │ 98304  │ 20    │ 10000       │ 20000    │ 4.5    │ 3000.0  │ 3    │
│ g1-gpu-rtx6000-4 │ Dedicated 128GB + RTX6000 GPU x4 │ gpu       │ 2621440 │ 131072 │ 24    │ 10000       │ 20000    │ 6.0    │ 4000.0  │ 4    │
└──────────────────┴──────────────────────────────────┴───────────┴─────────┴────────┴───────┴─────────────┴──────────┴────────┴─────────┴──────┘

Next, let’s create a cluster:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ linode-cli lke cluster-create --label mycluster --k8s_version 1.15 --node_pools.type g6-standard-2 --node_pools.count 2
┌──────┬───────────┬────────────┐
│ id   │ label     │ region     │
├──────┼───────────┼────────────┤
│ 5193 │ mycluster │ us-central │
└──────┴───────────┴────────────┘

The kubeconfig view is still screwy, so we can use our last hack to fetch and decode it:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ linode-cli lke kubeconfig-view 5193 > t.o &&  cat t.o | tail -n2 | head -n1 | sed 's/.\{8\}$//' | sed 's/^.\{8\}//' | base64 --decode > ~/.kube/config
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl get nodes
NAME                        STATUS     ROLES    AGE   VERSION
lke5193-6544-5ec54610dcac   NotReady   <none>   2s    v1.15.6
lke5193-6544-5ec54610e234   NotReady   <none>   9s    v1.15.6

Onboarding to Azure Arc

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ az connectedk8s connect -n myLKECluster -g myArcClusters2
Command group 'connectedk8s' is in preview. It may be changed/removed in a future release.
Ensure that you have the latest helm version installed before proceeding.
This operation might take a while...

{- Finished ..
  "aadProfile": {
    "clientAppId": "",
    "serverAppId": "",
    "tenantId": ""
  },
  "agentPublicKeyCertificate": "MIICCgKCAgEAqjk+8jskhpDa+O6cDqL37tad130qQCRBcCPimQ5wExwKPE0+BLrprfDGdXfAEbH9sQXTl6yWiXLiJkrV8PP/D+Qhkze4WINTY0pSu5yRT9x0MO/bu1E7ygcXjxuVvd7fJV0JF5fisWMaB/maY3FtmpoX1zOd0Sbz/yKg3pTrZE8R7QLDuNmKEkbHj5/ZmeZAmGYBUguKpqtnNM7++OE5BKjtHcI5H5A8fEV5xUJEA1+F7TTtrFTrFZdMPUR5QqQ+WQzsN3hrDakW51+SU5IkkppmzORHCehAn9TG/36mD7C8k05h/JKPzyU0UWlLb1yi7wOZqh9FksmEdqix6XcAMQCIp3CzdJc5M8KXa9rUNPmHf+KvPaqQtuZYPUiErMWHMiSqK/arZEj7N4NAToPjIjT1oPHB7g01yQFCmHcgs5gYt7H3b0ZSxWQJ78aZaYVc3HskegmYwL2qsg/P6uOIoS6+SqlPcb4U/ER+J+vKPLm71sOydDG2uMU02JiBJqrOqoxFRFtbrksOcL0zsFe3axjTrEJTz18NIhOJ98Af0en/q5r/h6zG7SR+xp/VkIk81UzYlCKPTMF01lbm1nKtrIdmYchp/ggC21bWqClhmOMPDw0eoiINvYWGoOfsugm0ceEbpxkKEv/avir+Z+krVnOrb/enVP5qli6leZRgSTsCAwEAAQ==",
  "agentVersion": null,
  "id": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8/resourceGroups/myArcClusters2/providers/Microsoft.Kubernetes/connectedClusters/myLKECluster",
  "identity": {
    "principalId": "0f31b18c-743c-4697-9aa2-b1d3585a401c",
    "tenantId": "d73a39db-6eda-495d-8000-7579f56d68b7",
    "type": "SystemAssigned"
  },
  "kubernetesVersion": null,
  "location": "eastus",
  "name": "myLKECluster",
  "provisioningState": "Succeeded",
  "resourceGroup": "myArcClusters2",
  "tags": null,
  "totalNodeCount": null,
  "type": "Microsoft.Kubernetes/connectedClusters"
}

Once I set a tag in the Azure Portal:

Azure Arc for Kubernetes

We can now see Kubernetes clusters hosted in two different cloud providers in my Azure Resource Group:

Azure Arc for Kubernetes
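
The same listing is available from the CLI (a sketch; connectedk8s was still in preview at the time):

$ az connectedk8s list -g myArcClusters2 -o table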

GitOps

First, we need a repo to store our YAML/Helm charts. We can create one in Azure Repos: https://princessking.visualstudio.com/_git/IAC-Demo

Azure Arc for Kubernetes

I’ll then initialize it with a README.md and create new Git credentials:

Azure Arc for Kubernetes

We’ll clone the repo, then add a directory for manifests. I could create a simple app, but I’ll just borrow Azure’s Hello World Voting app and put that YAML in my manifests folder.

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ mkdir manifests
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ cd manifests/
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ curl -O https://raw.githubusercontent.com/Azure-Samples/azure-voting-app-redis/master/azure-vote-all-in-one-redis.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1405  100  1405    0     0   9493      0 --:--:-- --:--:-- --:--:--  9493
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ cat azure-vote-all-in-one-redis.yaml | tail -n5
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front

We can now launch a configuration that will deploy the YAML in the manifests folder to the default namespace. Note that I have NOT yet pushed the manifests folder, so there is nothing to deploy:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ az k8sconfiguration create --name iam-demo-config --cluster-name myEKSCluster --resource-group myArcClusters2 --operator-instance-name app-config --operator-namespace default --repository-url https://princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo --cluster-type connectedClusters --scope cluster --operator-params="--git-readonly --git-path=manifests"
Command group 'k8sconfiguration' is in preview. It may be changed/removed in a future release.
{
  "complianceStatus": {
    "complianceState": "Pending",
    "lastConfigApplied": "0001-01-01T00:00:00",
    "message": "{\"OperatorMessage\":null,\"ClusterState\":null}",
    "messageLevel": "3"
  },
  "enableHelmOperator": "False",
  "helmOperatorProperties": null,
  "id": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8/resourceGroups/myArcClusters2/providers/Microsoft.Kubernetes/connectedClusters/myEKSCluster/providers/Microsoft.KubernetesConfiguration/sourceControlConfigurations/iam-demo-config",
  "name": "iam-demo-config",
  "operatorInstanceName": "app-config",
  "operatorNamespace": "default",
  "operatorParams": "--git-readonly --git-path=manifests",
  "operatorScope": "cluster",
  "operatorType": "Flux",
  "provisioningState": "Succeeded",
  "repositoryPublicKey": "",
  "repositoryUrl": "https://princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo",
  "resourceGroup": "myArcClusters2",
  "type": "Microsoft.KubernetesConfiguration/sourceControlConfigurations"
}
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$

And we see a deployment launched in EKS:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl get deployments --all-namespaces
NAMESPACE     NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
azure-arc     cluster-metadata-operator   1/1     1            1           50m
azure-arc     clusteridentityoperator     1/1     1            1           50m
azure-arc     config-agent                1/1     1            1           50m
azure-arc     controller-manager          1/1     1            1           50m
azure-arc     flux-logs-agent             1/1     1            1           50m
azure-arc     metrics-agent               1/1     1            1           50m
azure-arc     resource-sync-agent         1/1     1            1           50m
default       app-config                  0/1     1            0           4s
default       memcached                   0/1     1            0           4s
kube-system   coredns                     2/2     2            2           125m
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl describe deployment app-config
Name:               app-config
Namespace:          default
CreationTimestamp:  Wed, 20 May 2020 12:12:10 -0500
Labels:             <none>
Annotations:        deployment.kubernetes.io/revision: 1
Selector:           instanceName=app-config,name=flux
Replicas:           1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:           instanceName=app-config
                    name=flux
  Annotations:      prometheus.io/port: 3031
  Service Account:  app-config
  Containers:
   flux:
    Image:      docker.io/fluxcd/flux:1.18.0
    Port:       3030/TCP
    Host Port:  0/TCP
    Args:
      --k8s-secret-name=app-config-git-deploy
      --memcached-service=
      --ssh-keygen-dir=/var/fluxd/keygen
      --git-url=https://princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo
      --git-branch=master
      --git-path=manifests
      --git-label=flux
      --git-user=Flux
      --git-readonly
      --connect=ws://flux-logs-agent.azure-arc/default/iam-demo-config
      --listen-metrics=:3031
    Requests:
      cpu:        50m
      memory:     64Mi
    Liveness:     http-get http://:3030/api/flux/v6/identity.pub delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness:    http-get http://:3030/api/flux/v6/identity.pub delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/fluxd/ssh from git-key (ro)
      /var/fluxd/keygen from git-keygen (rw)
  Volumes:
   git-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  app-config-git-deploy
    Optional:    false
   git-keygen:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   app-config-768cc4fc89 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  18s   deployment-controller  Scaled up replica set app-config-768cc4fc89 to 1
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
app-config-768cc4fc89-f5n66   1/1     Running   0          45s
memcached-86869f57fd-r2kgj    1/1     Running   0          45s
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl describe app-config-768cc4fc89-f5n66
error: the server doesn't have a resource type "app-config-768cc4fc89-f5n66"
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl describe pod app-config-768cc4fc89-f5n66
Name:           app-config-768cc4fc89-f5n66
Namespace:      default
Priority:       0
Node:           ip-10-0-82-16.us-east-2.compute.internal/10.0.82.16
Start Time:     Wed, 20 May 2020 12:12:10 -0500
Labels:         instanceName=app-config
                name=flux
                pod-template-hash=768cc4fc89
Annotations:    kubernetes.io/psp: eks.privileged
                prometheus.io/port: 3031
Status:         Running
IP:             10.0.84.119
Controlled By:  ReplicaSet/app-config-768cc4fc89
Containers:
  flux:
    Container ID:  docker://d638ec677da94ca4d4c32fcd5ab9e35b390d33fa433b350976152f744f105244
    Image:         docker.io/fluxcd/flux:1.18.0
    Image ID:      docker-pullable://fluxcd/flux@sha256:8fcf24dccd7774b87a33d87e42fa0d9233b5c11481c8414fe93a8bdc870b4f5b
    Port:          3030/TCP
    Host Port:     0/TCP
    Args:
      --k8s-secret-name=app-config-git-deploy
      --memcached-service=
      --ssh-keygen-dir=/var/fluxd/keygen
      --git-url=https://princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo
      --git-branch=master
      --git-path=manifests
      --git-label=flux
      --git-user=Flux
      --git-readonly
      --connect=ws://flux-logs-agent.azure-arc/default/iam-demo-config
      --listen-metrics=:3031
    State:          Running
      Started:      Wed, 20 May 2020 12:12:18 -0500
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        50m
      memory:     64Mi
    Liveness:     http-get http://:3030/api/flux/v6/identity.pub delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness:    http-get http://:3030/api/flux/v6/identity.pub delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/fluxd/ssh from git-key (ro)
      /var/fluxd/keygen from git-keygen (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from app-config-token-rpzr5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  git-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  app-config-git-deploy
    Optional:    false
  git-keygen:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  app-config-token-rpzr5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  app-config-token-rpzr5
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                               Message
  ----    ------     ----  ----                                               -------
  Normal  Scheduled  63s   default-scheduler                                  Successfully assigned default/app-config-768cc4fc89-f5n66 to ip-10-0-82-16.us-east-2.compute.internal
  Normal  Pulling    62s   kubelet, ip-10-0-82-16.us-east-2.compute.internal  Pulling image "docker.io/fluxcd/flux:1.18.0"
  Normal  Pulled     55s   kubelet, ip-10-0-82-16.us-east-2.compute.internal  Successfully pulled image "docker.io/fluxcd/flux:1.18.0"
  Normal  Created    55s   kubelet, ip-10-0-82-16.us-east-2.compute.internal  Created container flux
  Normal  Started    55s   kubelet, ip-10-0-82-16.us-east-2.compute.internal  Started container flux

Let’s push it:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ git add manifests/
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

        new file:   manifests/azure-vote-all-in-one-redis.yaml

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ git commit -m "push changes for a redis app"
[master f9f49cc] push changes for a redis app
 1 file changed, 78 insertions(+)
 create mode 100644 manifests/azure-vote-all-in-one-redis.yaml
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ git push
Counting objects: 4, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 815 bytes | 815.00 KiB/s, done.
Total 4 (delta 0), reused 0 (delta 0)
remote: Analyzing objects... (4/4) (41 ms)
remote: Storing packfile... done (143 ms)
remote: Storing index... done (51 ms)
remote: We noticed you're using an older version of Git. For the best experience, upgrade to a newer version.
To https://princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo
   d11c1e8..f9f49cc  master -> master

However, I did not see it change. That’s when I realized I needed to pack the credentials into the URL:

Azure Arc for Kubernetes
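
In other words, instead of the plain repo URL, the configuration's repository URL needs the Git credentials embedded, along the lines of the following (username and PAT are placeholders; this is the same form I use for the LKE cluster below):

https://&lt;username&gt;:&lt;personal-access-token&gt;@princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo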

Once saved, I saw it start to update:

Azure Arc for Kubernetes
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
app-config-8c4b5fd6f-jj995          1/1     Running             0          13s
azure-vote-back-5775d78ff5-s2mh9    0/1     ContainerCreating   0          4s
azure-vote-front-559d85d4f7-jhb84   0/1     ContainerCreating   0          4s
memcached-86869f57fd-r2kgj          1/1     Running             0          20m

And we can see it launched:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc$ kubectl get svc
NAME               TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)        AGE
azure-vote-back    ClusterIP      172.20.46.228    <none>                                                                    6379/TCP       6m55s
azure-vote-front   LoadBalancer   172.20.208.101   a2863a6eaeda34906a8583a1b4175842-1241817646.us-east-2.elb.amazonaws.com   80:30262/TCP   6m55s
kubernetes         ClusterIP      172.20.0.1       <none>                                                                    443/TCP        153m
memcached          ClusterIP      172.20.171.75    <none>                                                                    11211/TCP      27m
Azure Arc for Kubernetes

Let’s then launch the same app on LKE:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ az k8sconfiguration create --name iam-demo-config --cluster-name myLKECluster  --resource-group myArcClusters2 --operator-instance-name app-config --operator-namespace default --repository-url https://isaac.johnson:***********************************@princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo --cluster-type connectedClusters --scope cluster --operator-params="--git-readonly --git-path=manifests"
Command group 'k8sconfiguration' is in preview. It may be changed/removed in a future release.
{
  "complianceStatus": {
    "complianceState": "Pending",
    "lastConfigApplied": "0001-01-01T00:00:00",
    "message": "{\"OperatorMessage\":null,\"ClusterState\":null}",
    "messageLevel": "3"
  },
  "enableHelmOperator": "False",
  "helmOperatorProperties": null,
  "id": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8/resourceGroups/myArcClusters2/providers/Microsoft.Kubernetes/connectedClusters/myLKECluster/providers/Microsoft.KubernetesConfiguration/sourceControlConfigurations/iam-demo-config",
  "name": "iam-demo-config",
  "operatorInstanceName": "app-config",
  "operatorNamespace": "default",
  "operatorParams": "--git-readonly --git-path=manifests",
  "operatorScope": "cluster",
  "operatorType": "Flux",
  "provisioningState": "Succeeded",
  "repositoryPublicKey": "",
  "repositoryUrl": "https://isaac.johnson:abcdef12345abcdef12345abcdef12345abcdef12345@princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo",
  "resourceGroup": "myArcClusters2",
  "type": "Microsoft.KubernetesConfiguration/sourceControlConfigurations"
}

Verification:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ !1675
linode-cli lke kubeconfig-view 5193 > t.o &&  cat t.o | tail -n2 | head -n1 | sed 's/.\{8\}$//' | sed 's/^.\{8\}//' | base64 --decode > ~/.kube/config
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ kubectl get svc
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
azure-vote-back    ClusterIP      10.128.43.37   <none>         6379/TCP       2m
azure-vote-front   LoadBalancer   10.128.85.35   45.79.243.16   80:32408/TCP   2m
kubernetes         ClusterIP      10.128.0.1     <none>         443/TCP        160m
memcached          ClusterIP      10.128.82.70   <none>         11211/TCP      2m24s
Azure Arc for Kubernetes

Updating

Let’s now change the values:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ git diff
diff --git a/manifests/azure-vote-all-in-one-redis.yaml b/manifests/azure-vote-all-in-one-redis.yaml
index 80a8edb..d64cfce 100644
--- a/manifests/azure-vote-all-in-one-redis.yaml
+++ b/manifests/azure-vote-all-in-one-redis.yaml
@@ -20,6 +20,9 @@ spec:
         ports:
         - containerPort: 6379
           name: redis
+        env:
+        - name: VOTE1VALUE
+          value: "Elephants"
 ---
 apiVersion: v1
 kind: Service
@@ -65,6 +68,8 @@ spec:
         env:
         - name: REDIS
           value: "azure-vote-back"
+        - name: VOTE1VALUE
+          value: "Elephants"
 ---
 apiVersion: v1
 kind: Service
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ git add azure-vote-all-in-one-redis.yaml
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ git commit -m "Change the Cats option to Elephants"
[master 46cdb15] Change the Cats option to Elephants
 1 file changed, 5 insertions(+)
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ git push
Counting objects: 4, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 430 bytes | 430.00 KiB/s, done.
Total 4 (delta 2), reused 0 (delta 0)
remote: Analyzing objects... (4/4) (169 ms)
remote: Storing packfile... done (175 ms)
remote: Storing index... done (204 ms)
remote: We noticed you're using an older version of Git. For the best experience, upgrade to a newer version.
To https://princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo
   f9f49cc..46cdb15  master -> master

Deleting

Azure Arc for Kubernetes

I watched for a while, but deleting the configuration did not remove the deployments it had created:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
app-config         1/1     1            1           52m
azure-vote-back    1/1     1            1           32m
azure-vote-front   1/1     1            1           32m
memcached          1/1     1            1           52m

What happens when we relaunch?

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ az k8sconfiguration create --name iam-demo-config --cluster-name myEKSCluster --resource-group myArcClusters2 --operator-instance-name app-config --operator-namespace default --repository-url https://isaac.johnson:abcdef12345abcdef12345abcdef12345abcdef12345@princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo --cluster-type connectedClusters --scope cluster --operator-params="--git-path=manifests"
Command group 'k8sconfiguration' is in preview. It may be changed/removed in a future release.
{
  "complianceStatus": {
    "complianceState": "Pending",
    "lastConfigApplied": "0001-01-01T00:00:00",
    "message": "{\"OperatorMessage\":null,\"ClusterState\":null}",
    "messageLevel": "3"
  },
  "enableHelmOperator": "False",
  "helmOperatorProperties": null,
  "id": "/subscriptions/70b42e6a-asdf-asdf-asdf-9f3995b1aca8/resourceGroups/myArcClusters2/providers/Microsoft.Kubernetes/connectedClusters/myEKSCluster/providers/Microsoft.KubernetesConfiguration/sourceControlConfigurations/iam-demo-config",
  "name": "iam-demo-config",
  "operatorInstanceName": "app-config",
  "operatorNamespace": "default",
  "operatorParams": "--git-readonly --git-path=manifests",
  "operatorScope": "cluster",
  "operatorType": "Flux",
  "provisioningState": "Succeeded",
  "repositoryPublicKey": "",
  "repositoryUrl": "https://isaac.johnson:abcdef12345abcdef12345abcdef12345abcdef12345@princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo",
  "resourceGroup": "myArcClusters2",
  "type": "Microsoft.KubernetesConfiguration/sourceControlConfigurations"
}

Within moments, I did see the former deployment get removed:

Every 2.0s: kubectl get deployments                                                                                         DESKTOP-JBA79RT: Wed May 20 13:08:36 2020
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
azure-vote-back    1/1     1            1           36m
azure-vote-front   1/1     1            1           36m

I noticed an error in the portal:

{"OperatorMessage":"unable to add the configuration with configId {iam-demo-config} due to error: {error while adding the CRD configuration: error {object is being deleted: gitconfigs.clusterconfig.azure.com \"iam-demo-config\" already exists}}","ClusterState":"Failed","LastGitCommitInformation":"","ErrorsInTheLastSynced":""}
Every 2.0s: kubectl get deployments                                                                                         DESKTOP-JBA79RT: Wed May 20 13:09:36 2020
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
app-config         1/1     1            1           31s
azure-vote-back    1/1     1            1           37m
azure-vote-front   1/1     1            1           37m
memcached          1/1     1            1           31s

It seemed that while it did clean up old deployments, it did not relaunch and got hung up on the --readonly flag.

No matter what, it keeps that flag in there. I recreated it in new namespaces and ensured I set only --operator-params="--git-path=manifests", but it still shows in Azure as "--git-readonly --git-path=manifests":

Azure Arc for Kubernetes

I tried new namespaces, deleting deployments, and updating the Git token and re-applying it. Nothing seemed to relaunch. This is in preview, so I would expect these issues to get sorted out.
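
For reference, the delete-and-recreate cycle I was iterating on looked roughly like this (k8sconfiguration is a preview command group, so the flags may well change; the create is the same one shown above):

az k8sconfiguration delete --name iam-demo-config --cluster-name myEKSCluster --resource-group myArcClusters2 --cluster-type connectedClusters
az k8sconfiguration create --name iam-demo-config --cluster-name myEKSCluster --resource-group myArcClusters2 --operator-instance-name app-config --operator-namespace default --repository-url &lt;repo-url-with-PAT&gt; --cluster-type connectedClusters --scope cluster --operator-params="--git-path=manifests"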

It was slow… but eventually it came back:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
azure-arc     cluster-metadata-operator-57fc656db4-zdf5p   2/2     Running   0          157m
azure-arc     clusteridentityoperator-7598664f64-5cvk2     3/3     Running   0          157m
azure-arc     config-agent-798648fbd5-g74gb                3/3     Running   0          157m
azure-arc     controller-manager-64985bcb6b-7kwl9          3/3     Running   0          157m
azure-arc     flux-logs-agent-6f8fdb7d58-4hbmc             2/2     Running   0          157m
azure-arc     metrics-agent-5f55ccddfd-pmcc8               2/2     Running   0          157m
azure-arc     resource-sync-agent-95c997779-rnpf8          3/3     Running   0          157m
default       app-config-55c7974448-rwdkb                  1/1     Running   0          15m
default       azure-vote-back-567c66c8d8-5xqp8             1/1     Running   0          30s
default       azure-vote-front-fd7d8d4d7-wvhnx             1/1     Running   0          30s
default       memcached-86869f57fd-t7g5k                   1/1     Running   0          49m

Indeed, I can verify I passed in the SHOWHOST var:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ kubectl describe pod azure-vote-front-fd7d8d4d7-wvhnx
Name:           azure-vote-front-fd7d8d4d7-wvhnx
Namespace:      default
Priority:       0
Node:           ip-10-0-61-98.us-east-2.compute.internal/10.0.61.98
Start Time:     Wed, 20 May 2020 13:58:23 -0500
Labels:         app=azure-vote-front
                pod-template-hash=fd7d8d4d7
Annotations:    kubernetes.io/psp: eks.privileged
Status:         Running
IP:             10.0.52.147
Controlled By:  ReplicaSet/azure-vote-front-fd7d8d4d7
Containers:
  azure-vote-front:
    Container ID:   docker://41ddabfcab73e3e4d9c1e8e0d78aac5427cdf91df3d8fc56167b3e55e3fe71a1
    Image:          microsoft/azure-vote-front:v1
    Image ID:       docker-pullable://microsoft/azure-vote-front@sha256:9ace3ce43db1505091c11d15edce7b520cfb598d38402be254a3024146920859
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 20 May 2020 13:58:24 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:  500m
    Requests:
      cpu:  250m
    Environment:
      REDIS:     azure-vote-back
      SHOWHOST:  true
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7c4vq (ro)

Let’s make an AKS cluster in the same RG:

Azure Arc for Kubernetes
Azure Arc for Kubernetes
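
The portal works fine for this, but for completeness, something like the following should do the same from the CLI (cluster name and node count are just placeholders):

az aks create --resource-group myArcClusters2 --name myAKSCluster --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group myArcClusters2 --name myAKSCluster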

I then created a new YAML pipeline and verified that, indeed, I cannot see Arc clusters listed via ARM connections:

Azure Arc for Kubernetes

One last check. I added another helloworld YAML (a different app); here is the v2 deployment from it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v2
  labels:
    app: helloworld
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
      version: v2
  template:
    metadata:
      labels:
        app: helloworld
        version: v2
    spec:
      containers:
      - name: helloworld
        image: docker.io/istio/examples-helloworld-v2
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent #Always
        ports:
        - containerPort: 5000

I saw that indeed it was deployed, albeit via the first configuration we had applied.

EKS:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ kubectl get pods | grep hello
helloworld-v1-6757db4ff5-26h26     1/1     Running   0          23m
helloworld-v2-85bc988875-lh7cg     1/1     Running   0          23m

LKE:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ kubectl get pods --all-namespaces | grep hello
default       helloworld-v1-6757db4ff5-vw7zq               1/1     Running   0          27m
default       helloworld-v2-85bc988875-k698z               1/1     Running   0          27m

I then removed it:

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ git commit -m "remove helloworld"
[master 2c12de4] remove helloworld
 1 file changed, 1 insertion(+), 1 deletion(-)
 rename manifests/helloworld.yaml => helloworld.md (98%)
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo$ git push
Counting objects: 4, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 704 bytes | 704.00 KiB/s, done.
Total 4 (delta 1), reused 0 (delta 0)
remote: Analyzing objects... (4/4) (189 ms)
remote: Storing packfile... done (252 ms)
remote: Storing index... done (92 ms)
remote: We noticed you're using an older version of Git. For the best experience, upgrade to a newer version.
To https://princessking.visualstudio.com/IAC-Demo/_git/IAC-Demo
   5b88894..2c12de4  master -> master

However, it was not removed from the EKS or LKE clusters.

Azure Arc for Kubernetes

I did one more test. I added a sleep deployment to ensure content was still getting pushed. Within about 4 minutes, on both clusters, I could see the sleep deployment, but importantly, the helloworld deployments still existed.

Azure Arc for Kubernetes

So it’s clearly additive (i.e. kubectl create) and does not delete anything.
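
Which also means that removing something Flux has already created is on us; a manual cleanup on each cluster would be something like:

kubectl delete deployment helloworld-v1 helloworld-v2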

Cleanup

LKE

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ linode-cli lke clusters-list
┌──────┬───────────┬────────────┐
│ id   │ label     │ region     │
├──────┼───────────┼────────────┤
│ 5193 │ mycluster │ us-central │
└──────┴───────────┴────────────┘
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ linode-cli lke cluster-delete 5193
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ linode-cli lke clusters-list
┌────┬───────┬────────┐
│ id │ label │ region │
└────┴───────┴────────┘

EKS

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ aws cloudformation list-stacks --region us-east-2 | grep Amazon-EKS
            "StackId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-EKSStack-SDZCUE9193SC-BastionStack-75ZZA7D9Y8VZ/5df70170-9aad-11ea-aa5c-02582149edd0",
            "StackName": "Amazon-EKS-EKSStack-SDZCUE9193SC-BastionStack-75ZZA7D9Y8VZ",
            "ParentId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-EKSStack-SDZCUE9193SC/7ab4d9f0-9aa7-11ea-81b8-064e3beebd9c",
            "RootId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS/e35fb390-9aa6-11ea-9ffc-021e2396e2cc",
            "StackId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-EKSStack-SDZCUE9193SC-NodeGroupStack-14OM4SDFRQ1HZ/5dd48550-9aad-11ea-856e-063c30684966",
            "StackName": "Amazon-EKS-EKSStack-SDZCUE9193SC-NodeGroupStack-14OM4SDFRQ1HZ",
            "ParentId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-EKSStack-SDZCUE9193SC/7ab4d9f0-9aa7-11ea-81b8-064e3beebd9c",
            "RootId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS/e35fb390-9aa6-11ea-9ffc-021e2396e2cc",
            "StackId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-EKSStack-SDZCUE9193SC-EKSControlPlane-RLECTP00VA5Z/4c73eb70-9aa8-11ea-bc6c-026789fff09e",
            "StackName": "Amazon-EKS-EKSStack-SDZCUE9193SC-EKSControlPlane-RLECTP00VA5Z",
            "ParentId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-EKSStack-SDZCUE9193SC/7ab4d9f0-9aa7-11ea-81b8-064e3beebd9c",
            "RootId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS/e35fb390-9aa6-11ea-9ffc-021e2396e2cc",
            "StackId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-EKSStack-SDZCUE9193SC-FunctionStack-J0MM85F3VKX3/ec211b30-9aa7-11ea-9fe1-0696b95a38c4",
            "StackName": "Amazon-EKS-EKSStack-SDZCUE9193SC-FunctionStack-J0MM85F3VKX3",
            "ParentId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-EKSStack-SDZCUE9193SC/7ab4d9f0-9aa7-11ea-81b8-064e3beebd9c",
            "RootId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS/e35fb390-9aa6-11ea-9ffc-021e2396e2cc",
            "StackId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-EKSStack-SDZCUE9193SC-IamStack-144D49GJXUSOB/8bb803d0-9aa7-11ea-a843-0a9fc5c4303c",
            "StackName": "Amazon-EKS-EKSStack-SDZCUE9193SC-IamStack-144D49GJXUSOB",
            "ParentId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-EKSStack-SDZCUE9193SC/7ab4d9f0-9aa7-11ea-81b8-064e3beebd9c",
            "RootId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS/e35fb390-9aa6-11ea-9ffc-021e2396e2cc",
            "StackId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-EKSStack-SDZCUE9193SC/7ab4d9f0-9aa7-11ea-81b8-064e3beebd9c",
            "StackName": "Amazon-EKS-EKSStack-SDZCUE9193SC",
            "ParentId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS/e35fb390-9aa6-11ea-9ffc-021e2396e2cc",
            "RootId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS/e35fb390-9aa6-11ea-9ffc-021e2396e2cc",
            "StackId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS-VPCStack-1LUZPCD2BF1J/e78197e0-9aa6-11ea-9392-062398501aa6",
            "StackName": "Amazon-EKS-VPCStack-1LUZPCD2BF1J",
            "ParentId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS/e35fb390-9aa6-11ea-9ffc-021e2396e2cc",
            "RootId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS/e35fb390-9aa6-11ea-9ffc-021e2396e2cc",
            "StackId": "arn:aws:cloudformation:us-east-2:01234512345:stack/Amazon-EKS/e35fb390-9aa6-11ea-9ffc-021e2396e2cc",
            "StackName": "Amazon-EKS",

builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ aws cloudformation delete-stack --stack-name "Amazon-EKS" --region us-east-2
Azure Arc for Kubernetes
builder@DESKTOP-JBA79RT:~/Workspaces/AzureArc/IAC-Demo/manifests$ aws cloudformation list-stacks --region us-east-2 | grep StackStatus
            "StackStatus": "DELETE_IN_PROGRESS",
            "StackStatus": "DELETE_IN_PROGRESS",
            "StackStatus": "UPDATE_COMPLETE",
            "StackStatus": "CREATE_COMPLETE",
            "StackStatus": "CREATE_COMPLETE",
            "StackStatus": "DELETE_IN_PROGRESS",
            "StackStatus": "CREATE_COMPLETE",
            "StackStatus": "DELETE_IN_PROGRESS",
            "StackStatus": "DELETE_COMPLETE",
            "StackStatus": "DELETE_COMPLETE",
            "StackStatus": "DELETE_COMPLETE",

Cleaning up the Azure RG:

Azure Arc for Kubernetes

Summary

Azure Arc for Kubernetes is a really solid offering from Microsoft for managing all your Kubernetes clusters’ configurations in one place.  It is in public preview, so there are some issues I experienced - namely around handling bad YAML and the lack of a method to remove deployments (Add and Update, but no Deletion).

We did not test Helm support, but one can enable Helm chart deployments as well, as sketched below.
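
Per the documentation at the time, enabling it is just an extra flag or two on the same preview command - roughly like the following (the config name, repo URL and paths are placeholders, and since k8sconfiguration is in preview the flags may change):

az k8sconfiguration create --name helm-demo-config --cluster-name myEKSCluster --resource-group myArcClusters2 --operator-instance-name helm-config --operator-namespace helm-demo --repository-url &lt;repo-url&gt; --cluster-type connectedClusters --scope namespace --operator-params="--git-path=releases" --enable-helm-operator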

]]>
<![CDATA[ClickUp and Gitlab: Let's do OAuth2 (Part 2)]]>In our first blog entry we showed how we could create a ClickUp project, Gitlab repo and setup basic notifications.  We now want to actually create a basic Oauth2 flow using NodeJS.  We will create a ClickUp Integration App while showcasing the features of CU, Gitab (with GL CICD) and

]]>
http://localhost:2368/clickup-and-gitabl-part-2/5ec2747d29b70c02f476a5d1Mon, 18 May 2020 12:24:13 GMT

In our first blog entry we showed how we could create a ClickUp project and Gitlab repo and set up basic notifications.  We now want to actually create a basic OAuth2 flow using NodeJS.  We will create a ClickUp Integration App while showcasing the features of ClickUp, Gitlab (with GL CICD) and Google App Engine.

Creating an App in ClickUp

Let's go to ClickUp / Create App under Custom Apps.

You can find it under Integrations / ClickUp API - Create an App:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
ClickUp and Gitlab: Let's do OAuth2 (Part 2)

Clicking Create App will give us a Client ID and Secret (if you don't have a GCP app yet, you can put whatever you want for the redirect URL - our first test just needs the code we'll get later):

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

We will want to store these details for later. While the Client ID cannot be changed (without deleting), the Secret can be rotated at any point:

Client ID: 7NIVSWQA16KPF6SSX5JPB6X5R0E6WUR6
Client secret: 4HQTDF1W4UQMEP2JQ77H0T2PTVDC2A17V84KJNGWKC8NNHII51HHEOXXF8SWC6SU

Testing

Let’s now onboard this app to our specific project (this would represent the flow from our final application):

https://app.clickup.com/api?client_id=7NIVSWQA16KPF6SSX5JPB6X5R0E6WUR6&redirect_uri=https://myexpressappnew.uc.r.appspot.com/

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
Select your Workspace (for me, click Princess King)
ClickUp and Gitlab: Let's do OAuth2 (Part 2)
Then click connect

The code returned (which we can see in the URL) identifies this particular onboarding. We can pair it with our Client ID and Secret to generate a session access token.

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
basic nodejs app returning querydata

Don't worry if at this point you don't have a GCP app running. You may well get a 404 or an error without a server, but the URL has the code= value we need.
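
The redirect back to our app looks something like this (that code value is the one we use next):

https://myexpressappnew.uc.r.appspot.com/?code=A3TL5FN76U7U0DXSBFFG6WYBZWJKEXY4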

Now we can test with that code:

$ curl -d code=A3TL5FN76U7U0DXSBFFG6WYBZWJKEXY4 -d client_id=7NIVSWQA16KPF6SSX5JPB6X5R0E6WUR6 -d client_secret=4HQTDF1W4UQMEP2JQ77H0T2PTVDC2A17V84KJNGWKC8NNHII51HHEOXXF8SWC6SU https://app.clickup.com/api/v2/oauth/token
{"access_token":"6311817_e762b30fe6c9a0684b4b4fc83594371bae9297f5"}

That Access Token should now grant us access.

$ curl -X GET --header "Accept: application/json" --header "x-api-key: 7NIVSWQA16KPF6SSX5JPB6X5R0E6WUR6" --header "Authorization: 6311817_e762b30fe6c9a0684b4b4fc83594371bae9297f5" https://app.clickup.com/api/v2/user
{"user":{"id":6311817,"username":"Isaac Johnson","email":"isaac@freshbrewed.science","color":"#ff00fc","profilePicture":"https://attachments.clickup.com/profilePictures/6311817_ZzW.jpg","initials":"IJ","week_start_day":null,"global_font_support":false,"timezone":"America/Chicago"}}

Now that we see some data we can pull down, let’s update our node app to do something.

builder@DESKTOP-2SQ9NQM:~/Workspaces/myapp_express$ cat app.js
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
const port = process.env.PORT || 8080;
const debug = 1

// app.js

// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: false }))

// parse application/json
app.use(bodyParser.json())

app.use(function (req, res) {
  res.setHeader('Content-Type', 'text/plain')
  res.write('you posted:\n')
  var postedData= JSON.stringify(req.body, null, 2);
  console.log("req.body");
  console.log(req.body);
  var queryData= JSON.stringify(req.query, null, 2);
  console.log("req.query");
  console.log(req.query);
  res.end(postedData + "\n QueryData: " + queryData);


})

app.get('/', function(req, res){
            //console.log(req.query.name);
            res.send('Response send to client::'+req.query.name);

});

app.listen(port, () => {
        if (debug == 1) {
                const request = require('request');

                // var apiKey = process.env.API_KEY;
                var apiKey = "7NIVSWQA16KPF6SSX5JPB6X5R0E6WUR6";
                // Code back from API Server
                var authz = "6311817_e762b30fe6c9a0684b4b4fc83594371bae9297f5";

                const options = {
                    url: 'https://app.clickup.com/api/v2/user',
                    method: 'GET',
                    headers: {
                            'Accept': 'application/json',
                            'X-Api-Key': apiKey,
                            'Authorization': authz
                        }
                };

                request(options, function(err, res, body) {
                            let json = JSON.parse(body);
                            console.log(json);
                });

        }

        console.log(`Example app listening on port ${port}!`)

})
ClickUp and Gitlab: Let's do OAuth2 (Part 2)

Once we get our code good enough to test, we can commit it to git.

Gitignore

It’s a good time to set up your .gitignore file to avoid committing binary, generated, and temporary files.  I use https://gitignore.io. And since my candidate code expected secrets set as env vars, I avoided checking in a setup.sh file.  See my gitignore here: https://gitlab.com/princessking/gcpclickupdemo/-/blob/master/.gitignore
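
Abbreviated, the relevant parts of mine boil down to something like this (the full generated file is at the link above):

node_modules/
npm-debug.log*
*.log
setup.sh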

CICD in Gitlab

Gitlab has a great Continuous Integration / Deployment system.  I first became familiar with it as GitLab Runners, a plugin that is now part of their standard offering.

To use CICD, you need to create a .gitlab-ci.yml file:

image: node:latest
 
stages:
  - build
  - test
 
cache:
  paths:
    - node_modules/
 
install_dependencies:
  stage: build
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/
 
testing_testing:
  stage: test
  script: npm test

This should build our app.js and run the basic unit tests in our package.json.

{
  "name": "myapp-express",
  "description": "Simple Hello World Node.js sample for GCP App Engine Flexible Environment.",
  "version": "0.0.1",
  "private": true,
  "license": "MIT",
  "author": "Isaac Johnson",
  "repository": {
    "type": "git",
    "url": "https://gitlab.com/princessking/gcpclickupdemo.git"
  },
  "engines": {
    "node": ">=8.0.0"
  },
  "scripts": {
    "deploy": "gcloud app deploy",
    "start": "node app.js",
    "system-test": "repo-tools test app",
    "test": "npm run system-test",
    "e2e-test": "repo-tools test deploy"
  },
  "dependencies": {
    "express": "^4.16.3"
  },
  "devDependencies": {
    "@google-cloud/nodejs-repo-tools": "^3.3.0",
    "body-parser": "^1.19.0",
    "request": "^2.88.2"
  },
  "cloud-repo-tools": {
    "test": {
      "app": {
        "msg": "Hello, world from Express!"
      }
    },
    "requiresKeyFile": true,
    "requiresProjectId": true
  }
}

Once you commit the gitlab-ci file, it will fire a build almost immediately:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
https://gitlab.com/princessking/gcpclickupdemo/pipelines

As you would expect, you can click any stage and get the logs from that stage:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

We can also download the artifacts as defined in our gitlab ci file:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

When you have multiple stages, you can see each stage as it’s run and get at the logs:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
click "testing_testing" to see those logs

Deploying to GCP

We’ll need to set up a GCP Project for App Engine Express (if you haven’t already).

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

Next we’ll want to add an IAM user (service account) we can use to deploy, from the iam-admin page.

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

Next we add some Roles (permissions) - namely App Engine Admin and App Engine Deployer.

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
Note: Ultimately I had to expand these. I used Project Owner at one point, so expect to play around with Roles.

Next create a JSON key we can use in our Runner:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
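
If you prefer the CLI, the equivalent is roughly the following (the service account email is a placeholder - use the one you created above):

gcloud iam service-accounts keys create ./gcp-deployer.json --iam-account deployer@myexpressappnew.iam.gserviceaccount.com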

Next we head to the storage buckets for the App Engine service to grant permissions to this service user.  Pick the 3 buckets and choose “+Add Member” on the right.

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

We’ll then add our service account with creator and reader permissions:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
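
The same grant can be scripted with gsutil; a sketch for one of the three buckets (the bucket name and service account email are placeholders based on my project):

gsutil iam ch serviceAccount:deployer@myexpressappnew.iam.gserviceaccount.com:objectCreator,objectViewer gs://staging.myexpressappnew.appspot.com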

Note: if you neglected to save the user ID to the clipboard as I had, the email is in the JSON you downloaded, in the “client_email” field.

We can then see it reflected in those roles:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

Lastly, we need to enable the App Engine Admin API against our Project.

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
https://console.developers.google.com/apis/library/appengine.googleapis.com?project=myexpressappnew
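
Or from the CLI (assuming the project ID from above):

gcloud services enable appengine.googleapis.com --project myexpressappnew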

Gitlab Secrets

Next we go to GL Settings > CICD and variables to add PROJECT_ID and SERVICE_ACCOUNT

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

SERVICE_ACCOUNT should be the contents of the JSON key file, e.g.

{
  "type": "service_account",
  "project_id": "myexpressappnew",
 ....

As you iterate on the gitlab-ci file, you’ll likely want to review current and past jobs.  Job details are listed under “Jobs” in CICD:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
https://gitlab.com/princessking/gcpclickupdemo/-/jobs

I needed to tweak the Permissions of my service user.  You can edit that later.

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
https://console.cloud.google.com/iam-admin/iam?project=myexpressappnew

When it’s working, you can see success in the build output:

$ gcloud --quiet --project $PROJECT_ID app deploy app.yaml
Services to deploy:
 
descriptor:      [/builds/princessking/gcpclickupdemo/app.yaml]
source:          [/builds/princessking/gcpclickupdemo]
target project:  [myexpressappnew]
target service:  [default]
target version:  [20200517t101805]
target url:      [https://myexpressappnew.uc.r.appspot.com]
 
 
Beginning deployment of service [default]...
Building and pushing image for service [default]
Started cloud build [2cca1953-538f-4b52-b83f-52ceed61b801].
To see logs in the Cloud Console: https://console.cloud.google.com/cloud-build/builds/2cca1953-538f-4b52-b83f-52ceed61b801?project=542481619083
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "2cca1953-538f-4b52-b83f-52ceed61b801"
 
FETCHSOURCE
Fetching storage object: gs://staging.myexpressappnew.appspot.com/us.gcr.io/myexpressappnew/appengine/default.20200517t101805:latest#1589710687030855
Copying gs://staging.myexpressappnew.appspot.com/us.gcr.io/myexpressappnew/appengine/default.20200517t101805:latest#1589710687030855...
/ [0 files][    0.0 B/ 60.2 KiB]                                                
/ [1 files][ 60.2 KiB/ 60.2 KiB]                                                
Operation completed over 1 objects/60.2 KiB.                                     
BUILD
Starting Step #0
Step #0: Pulling image: gcr.io/gcp-runtimes/nodejs/gen-dockerfile@sha256:d5293a583629252c12c478b572103b557ee26235c6b03a57d1f18d52882fb937
Step #0: sha256:d5293a583629252c12c478b572103b557ee26235c6b03a57d1f18d52882fb937: Pulling from gcp-runtimes/nodejs/gen-dockerfile
Step #0: c4272b5b18a1: Pulling fs layer
Step #0: 99f31d2d2127: Pulling fs layer
Step #0: 3c2cba919283: Pulling fs layer
Step #0: e83dbc3576f7: Pulling fs layer
Step #0: 9efe1b0c1e82: Pulling fs layer
Step #0: 83001cb10871: Pulling fs layer
Step #0: 1162305be692: Pulling fs layer
Step #0: 9519340f6b21: Pulling fs layer
Step #0: 85ea7b697da2: Pulling fs layer
Step #0: 6403c14e19e4: Pulling fs layer
Step #0: d9919eba9859: Pulling fs layer
Step #0: 8665cabad70f: Pulling fs layer
Step #0: e83dbc3576f7: Waiting
Step #0: 9efe1b0c1e82: Waiting
Step #0: 83001cb10871: Waiting
Step #0: 1162305be692: Waiting
Step #0: 9519340f6b21: Waiting
Step #0: 85ea7b697da2: Waiting
Step #0: 6403c14e19e4: Waiting
Step #0: d9919eba9859: Waiting
Step #0: 8665cabad70f: Waiting
Step #0: 3c2cba919283: Verifying Checksum
Step #0: 3c2cba919283: Download complete
Step #0: 99f31d2d2127: Verifying Checksum
Step #0: 99f31d2d2127: Download complete
Step #0: 9efe1b0c1e82: Verifying Checksum
Step #0: 9efe1b0c1e82: Download complete
Step #0: c4272b5b18a1: Verifying Checksum
Step #0: c4272b5b18a1: Download complete
Step #0: 83001cb10871: Verifying Checksum
Step #0: 83001cb10871: Download complete
Step #0: 1162305be692: Verifying Checksum
Step #0: 1162305be692: Download complete
Step #0: e83dbc3576f7: Verifying Checksum
Step #0: e83dbc3576f7: Download complete
Step #0: 85ea7b697da2: Verifying Checksum
Step #0: 85ea7b697da2: Download complete
Step #0: 9519340f6b21: Verifying Checksum
Step #0: 9519340f6b21: Download complete
Step #0: 6403c14e19e4: Verifying Checksum
Step #0: 6403c14e19e4: Download complete
Step #0: d9919eba9859: Verifying Checksum
Step #0: d9919eba9859: Download complete
Step #0: 8665cabad70f: Verifying Checksum
Step #0: 8665cabad70f: Download complete
Step #0: c4272b5b18a1: Pull complete
Step #0: 99f31d2d2127: Pull complete
Step #0: 3c2cba919283: Pull complete
Step #0: e83dbc3576f7: Pull complete
Step #0: 9efe1b0c1e82: Pull complete
Step #0: 83001cb10871: Pull complete
Step #0: 1162305be692: Pull complete
Step #0: 9519340f6b21: Pull complete
Step #0: 85ea7b697da2: Pull complete
Step #0: 6403c14e19e4: Pull complete
Step #0: d9919eba9859: Pull complete
Step #0: 8665cabad70f: Pull complete
Step #0: Digest: sha256:d5293a583629252c12c478b572103b557ee26235c6b03a57d1f18d52882fb937
Step #0: Status: Downloaded newer image for gcr.io/gcp-runtimes/nodejs/gen-dockerfile@sha256:d5293a583629252c12c478b572103b557ee26235c6b03a57d1f18d52882fb937
Step #0: gcr.io/gcp-runtimes/nodejs/gen-dockerfile@sha256:d5293a583629252c12c478b572103b557ee26235c6b03a57d1f18d52882fb937
Step #0: Checking for Node.js.
Finished Step #0
Starting Step #1
Step #1: Already have image (with digest): gcr.io/kaniko-project/executor@sha256:f87c11770a4d3ed33436508d206c584812cd656e6ed08eda1cff5c1ee44f5870
Step #1: INFO[0000] Removing ignored files from build context: [node_modules .dockerignore Dockerfile npm-debug.log yarn-error.log .git .hg .svn app.yaml]
Step #1: INFO[0000] Downloading base image gcr.io/google-appengine/nodejs@sha256:ef8be7b4dc77c3e71fbc85ca88186b13214af8f83b8baecc65e8ed85bb904ad5
Step #1: INFO[0016] Taking snapshot of full filesystem...
Step #1: INFO[0027] Using files from context: [/workspace]
Step #1: INFO[0027] COPY . /app/
Step #1: INFO[0027] Taking snapshot of files...
Step #1: INFO[0027] RUN /usr/local/bin/install_node '>=8.0.0'
Step #1: INFO[0027] cmd: /bin/sh
Step #1: INFO[0027] args: [-c /usr/local/bin/install_node '>=8.0.0']
Step #1: INFO[0028] Taking snapshot of full filesystem...
Step #1: INFO[0032] No files were changed, appending empty layer to config. No layer added to image.
Step #1: INFO[0032] RUN npm install --unsafe-perm ||   ((if [ -f npm-debug.log ]; then       cat npm-debug.log;     fi) && false)
Step #1: INFO[0032] cmd: /bin/sh
Step #1: INFO[0032] args: [-c npm install --unsafe-perm ||   ((if [ -f npm-debug.log ]; then       cat npm-debug.log;     fi) && false)]
Step #1: added 50 packages from 37 contributors and audited 689 packages in 3.265s
Step #1: found 1 low severity vulnerability
Step #1:   run `npm audit fix` to fix them, or `npm audit` for details
Step #1: INFO[0036] Taking snapshot of full filesystem...
Step #1: INFO[0041] CMD npm start
Step #1: 2020/05/17 10:19:21 existing blob: sha256:95016fdb07dce7dc6d6ff2746f985d6787da05314ff8180602b868a53b437c46
Step #1: 2020/05/17 10:19:21 existing blob: sha256:912fe0956b48e9e5d140e4947e09be36c1bb2d2186a5ca7b6e57f31e423a4404
Step #1: 2020/05/17 10:19:21 existing blob: sha256:81b9a32df3efd40d591bf4f1257c68c2de442c43466c8b5b151561d62fee1da2
Step #1: 2020/05/17 10:19:21 existing blob: sha256:81bb7127678a82e8e9131e2f47bbae5853ba33b89c6ade91201c0fca8a1c8dd9
Step #1: 2020/05/17 10:19:21 existing blob: sha256:40a5c2875f881a931f7b7340904bbc939b8bcebe131e058501659ee24597571f
Step #1: 2020/05/17 10:19:21 existing blob: sha256:72be9390242a715ac08d82becdbae8c3e13d7b893a384156d63dad87c099478b
Step #1: 2020/05/17 10:19:21 existing blob: sha256:4ad0e69275819c37c5a3ff6f9c3579f4ae19657c0ea40580a347cfa3fc32d7f6
Step #1: 2020/05/17 10:19:21 existing blob: sha256:537932092991e0c9f705cdfad7cdc5bb6fe83fda36af0452de2f5d2fac6eb6b1
Step #1: 2020/05/17 10:19:21 existing blob: sha256:3c2cba919283a210665e480bcbf943eaaf4ed87a83f02e81bb286b8bdead0e75
Step #1: 2020/05/17 10:19:21 existing blob: sha256:3597d82c78f6eeb6cd66bd1ec35f5e8bcdad9f8cd392cd3dc6cbfc0111aff2da
Step #1: 2020/05/17 10:19:23 pushed blob sha256:62e107a936646d81157986785657dbb973c9e72187378b2c8e9394f16b721298
Step #1: 2020/05/17 10:19:23 pushed blob sha256:549080961992845c22e4059918c18189a86e36a76ec37efee821daeffe30cae8
Step #1: 2020/05/17 10:19:23 pushed blob sha256:08e7e16a828f73d5ce2bc664b5e3f25f7962d24464061b9ee86e4bd969bd07d4
Step #1: 2020/05/17 10:19:24 us.gcr.io/myexpressappnew/appengine/default.20200517t101805:latest: digest: sha256:e13ceb06967fa4bd2fc4c5a3bb8f5e95b6ee8a37141caa9bdb72528c0055b29a size: 2219
Finished Step #1
PUSH
DONE
--------------------------------------------------------------------------------
 

Let’s finish our app.  

We need to:

  1. Set some secrets for our API Key and Secret (so we can set it in the app.yml)
  2. Update our build steps to do that
  3. Move some dependencies from build to runtime.

First, we can go to Gitlab Settings / CICD and add variables for the ClickUp API Key and Secret (cuAPIKey and cuAPISecret, respectively):

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

Next, we need to refer to them in the app.yml (last 3 lines):

runtime: nodejs
api_version: '1.0'
env: flexible
threadsafe: true
manual_scaling:
  instances: 1
network: {}
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
liveness_check:
  initial_delay_sec: 300
  check_interval_sec: 30
  timeout_sec: 4
  failure_threshold: 4
  success_threshold: 2
readiness_check:
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 300
env_variables:
  cuAPIKey: "MYCUAPIKEY"
  cuAPISecret: "MYCUAPISECRET"

Next, we’ll update the .gitlab-ci.yaml to sed the values in at production deployment time:

image: node:latest
 
stages:
  - build
  - test
 
cache:
  paths:
    - node_modules/
 
install_dependencies:
  stage: build
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/
 
testing_testing:
  stage: test
  script: npm test
 
deploy_production:
  image: google/cloud-sdk:alpine
  stage: .post
  environment: Production
  only:
  - master
  script:
  - cp app.yaml app.bak
  - sed -i "s/MYCUAPIKEY/`echo $cuAPIKey`/g" app.yaml
  - sed -i "s/MYCUAPISECRET/`echo $cuAPISecret`/g" app.yaml
  - echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
  - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
  - gcloud --quiet --project $PROJECT_ID app deploy app.yaml
  - cp -f app.bak app.yaml
 
after_script:
- rm /tmp/$CI_PIPELINE_ID.json

Lastly, we should ensure our request module is moved into production (runtime) dependencies.

{
  "name": "myapp-express",
  "description": "Simple Hello World Node.js sample for GCP App Engine Flexible Environment.",
  "version": "0.0.1",
  "private": true,
  "license": "MIT",
  "author": "Isaac Johnson",
  "repository": {
    "type": "git",
    "url": "https://gitlab.com/princessking/gcpclickupdemo.git"
  },
  "engines": {
    "node": ">=8.0.0"
  },
  "scripts": {
    "deploy": "gcloud app deploy",
    "start": "node app.js",
    "system-test": "repo-tools test app",
    "test": "npm run system-test",
    "e2e-test": "repo-tools test deploy"
  },
  "dependencies": {
    "express": "^4.16.3",
    "request": "^2.88.2"
  },
  "devDependencies": {
    "@google-cloud/nodejs-repo-tools": "^3.3.0",
    "body-parser": "^1.19.0"
  },
  "cloud-repo-tools": {
    "test": {
      "app": {
        "msg": "Hello, world from Express!"
      }
    },
    "requiresKeyFile": true,
    "requiresProjectId": true
  }
}

Let’s revisit once more the app.js that does the work.  You’ll see that we catch the query params; for code, we use our secret to exchange it for the client authtoken.  And with the authtoken paired with our app ID, we can run queries against the API.

const express = require('express')
const bodyParser = require('body-parser')
const app = express()
const port = process.env.PORT || 8080;
const apiKey = process.env.cuAPIKey;
const apiSecret = process.env.cuAPISecret;
const debug = 0
 
const request = require('request');
// app.js
 
// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: false }))
 
// parse application/json
app.use(bodyParser.json())
 
app.use(function (req, res) {
  res.setHeader('Content-Type', 'text/plain')
  res.write('you posted:\n')
  var postedData= JSON.stringify(req.body, null, 2);
  console.log("req.body");
  console.log(req.body);
  var queryData= JSON.stringify(req.query, null, 2);
  console.log("req.query");
  console.log(req.query);
  // check for Auth Code
  for (const key in req.query) {
     if (key == "authtoken") {
      var options = {
        url: 'https://app.clickup.com/api/v2/user',
        method: 'GET',
        headers: {
          'Accept': 'application/json',
          'X-Api-Key': apiKey,
          'Authorization': req.query[key]
        }
      }
      request(options, function(err, res, body) {
        let json = JSON.parse(body);
        console.log(json);
        });
      console.log(key, req.query[key])
    }
    if (key == "code") {
      request.post(
        'https://app.clickup.com/api/v2/oauth/token',
        {
          json: {
            client_id: apiKey,
            client_secret: apiSecret,
            code: req.query[key],
          }
        },
        (error, res, body) => {
          if (error) {
            console.error(error)
            return
          }
          console.log(`statusCode: ${res.statusCode}`)
          console.log(body)
        }
      )
      console.log(key, req.query[key])
    }
  };
  res.end(postedData + "\n QueryData: " + queryData);
})
 
app.get('/', function(req, res){
            //console.log(req.query.name);
            res.send('Response send to client::'+req.query.name);
 
});
 
app.listen(port, () => {
 
  console.log(`Example app listening on port ${port}!`)
 
})

And we can see it work locally first:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

Once you have your NodeJS code working, the pipeline should confirm the deployment to GCP App Engine:

Updating service [default] (this may take several minutes)...
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.
Setting traffic split for service [default]...
..................................done.
Stopping version [myexpressappnew/default/20200510t211122].
Sent request to stop version [myexpressappnew/default/20200510t211122]. This operation may take some time to complete. If you would like to verify that it succeeded, run:
  $ gcloud app versions describe -s default 20200510t211122
until it shows that the version has stopped.
Deployed service [default] to [https://myexpressappnew.uc.r.appspot.com]

You can stream logs from the command line by running:
  $ gcloud app logs tail -s default

To view your application in the web browser run:
  $ gcloud app browse --project=myexpressappnew
$ cp -f app.bak app.yaml
Running after_script
Running after script...
$ rm /tmp/$CI_PIPELINE_ID.json
Saving cache
Creating cache default...
node_modules/: found 10386 matching files
Uploading cache.zip to https://storage.googleapis.com/gitlab-com-runners-cache/project/18732837/default
Created cache
Uploading artifacts for successful job
Job succeeded

We can confirm it works:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
https://myexpressappnew.uc.r.appspot.com/?authtoken=6311817_e762b30fe6c9a0684b4b4fc83594371bae9297f5

And we can verify in the logs that it pulled user details. You’ll find “Logs” by the current running version:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
See drop down on far right

But you can also get there from Operations/Logs in your project:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
ClickUp and Gitlab: Let's do OAuth2 (Part 2)

We can see the results for my user (zooming in on the block above):

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

I should take a moment to mention that this manual flow is really just good for demoing how OAuth2 works.  In a real-world app we would likely use one of many prebuilt libraries, such as node-openid-client, e.g.

const { Issuer } = require('openid-client');

Issuer.discover('https://app.clickup.com/') // => Promise
  .then(function (cuIssuer) {
    console.log('Discovered issuer %s %O', cuIssuer.issuer, cuIssuer.metadata);

    // build the client inside the callback so cuIssuer is in scope
    const client = new cuIssuer.Client({
      client_id: '7NIVSWQA16KPF6SSX5JPB6X5R0E6WUR6',
      client_secret: '4HQTDF1W4UQMEP2JQ77H0T2PTVDC2A17V84KJNGWKC8NNHII51HHEOXXF8SWC6SU',
      redirect_uris: ['https://myexpressappnew.uc.r.appspot.com/'],
      response_types: ['code'],
    }); // => Client
  });

Cleanup

Resetting our GCP app’s token: in ClickUp, under settings / My Apps / Apps, you can regenerate your API token:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
https://app.clickup.com/2365098/settings/apps

We can also reset our Client Secret in Settings / Integrations / ClickUp API:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
Just click Regenerate

Also, even though we’ve reset the secret, the pairing of API Client ID and auth token would still give access to the API.  To reset that granted access, I need to remove it (and re-add it to keep working on this) in Settings / My Apps / Apps:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
ClickUp and Gitlab: Let's do OAuth2 (Part 2)

And we can verify it’s now been expired by re-running our URL and checking the logs:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
https://myexpressappnew.uc.r.appspot.com/?authtoken=6311817_e762b30fe6c9a0684b4b4fc83594371bae9297f5
ClickUp and Gitlab: Let's do OAuth2 (Part 2)
Note the OAUTH_019 error proving our token was revoked
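
You could also poke the ClickUp API directly with the revoked token (a hedged sketch; this assumes the v2 /user endpoint and passes the token in the Authorization header the way ClickUp expects):

$ curl -s -H "Authorization: 6311817_e762b30fe6c9a0684b4b4fc83594371bae9297f5" https://api.clickup.com/api/v2/user
# Expect an error payload carrying an OAUTH_* ECODE rather than your user details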

If we want to re-add, we can just authorize with the Client ID again and select our project: https://app.clickup.com/api?client_id=7NIVSWQA16KPF6SSX5JPB6X5R0E6WUR6&redirect_uri=https://myexpressappnew.uc.r.appspot.com

I checked my GCP costs.  For the week or so of running this version, I totaled $9.24 in charges.  I prefer App Engine Standard over Flex for cost reasons; charges like these would add up over time:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

To prevent further charges, you just need to stop the running version (you’ll only incur pennies in storage charges):

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
Stopping a running version to stop incurring charges
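
The same thing can be done from the CLI if you prefer (a sketch; substitute the version id that is currently serving):

# List versions for the default service, then stop whichever one is serving
$ gcloud app versions list --service=default
$ gcloud app versions stop <VERSION_ID> --service=default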

Lastly, just in case we echoed any secrets or left any dirty files, we can clear the Gitlab runner caches on the Pipelines page of Gitlab:

Documentation

We can write docs in Markdown for all users (such as the Readme), but we can also write guides within ClickUp.

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

Go to Docs to use the built-in UI.

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

You can then share to the team or even publicly (say, to embed in user guides or websites):

https://doc.clickup.com/d/h/285na-189/77b1a2b5c0878ca

Gitlab metrics

I should take a moment to revisit Gitlab and the metrics it can provide.  For instance, we can go to CI/CD metrics to view our build success metrics: https://gitlab.com/princessking/gcpclickupdemo/pipelines/charts

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

Gitlab also offers an alternative to ClickUp for issue tracking.  We can have a simple Kanban board and track issues in Gitlab itself:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

GCP App Engine Flex vs Standard

I sat on it a bit and then thought: what would it take to change from App Engine Flex to the much cheaper App Engine Standard?  It was surprisingly simple; all it took was changing the app.yaml:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
https://gitlab.com/princessking/gcpclickupdemo/-/commit/30a0d22384b00caf3a499735ce5279da0dce5960
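
For reference, a hypothetical sketch of the file after the switch (this assumes a Node.js app, and the exact Standard runtime version may differ; on Flex the same file carried runtime: nodejs plus an env: flex line):

$ cat app.yaml
runtime: nodejs10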

Which fired a build:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

And deployed as a standard app.  Note the type changed from Flexible to Standard:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)
See change in Environment Column

One advantage of Standard, besides cost, is that under Tools we can view the source:

ClickUp and Gitlab: Let's do OAuth2 (Part 2)

Summary

We had a simple goal.  We wanted to use ClickUp to create a project and manage work.  We then needed a revision control tool with CI/CD, and for that we leveraged a public project on Gitlab.com.  We used the flexibility of Google App Engine Flex to host our API endpoint. Considering GCP pricing, we switched to GCP App Engine Standard by simply changing our app.yaml.  Lastly, we looked at Documentation and Metrics and touched on Issues that can be handled in Gitlab itself.

I believe ClickUp shows promise as a workstream management tool.  It has a lot of integrations, but for me to really commit to it, I would want tighter CI/CD integration.  I plan to explore it more.  Gitlab also impressed me with how far it’s come in the last couple of years.  I compare it to how we used it at Thomson Reuters and it has so much more in its current offering.  I might do a full blog just on Gitlab.

I don’t think I’ll continue on with FB @Workplace, however.  The free offering is too limited, and perhaps I’m just old and not with it anymore, but I don't see the value beyond yet another video chat and chat board.

Referenced Guides:

  1. Clickup API: https://clickup.com/api / https://docs.clickup.com/en/articles/2171168-api-create-your-own-app
  2. Medium post on deploying to GAE from GL CI (it was a bit older, so I updated the steps in this guide): https://medium.com/google-cloud/automatically-deploy-to-google-app-engine-with-gitlab-ci-d1c7237cbe11
]]>
<![CDATA[ClickUp, FB Workplace and Gitlab: (Part 1)]]>While I’ve made no bones about being an Azure DevOps fan and use it for all my work, not everyone has the warm feelies for Microsoft that I do.  So what other options are there outside the Microsoft-verse for development?

ClickUp caught my attention recently at a little webinar

]]>
http://localhost:2368/clickup-fb-workplace-and-gitlab-free-cheap-azdo-solutions-part-1/5ebbee70bfcc8338561b27d4Wed, 13 May 2020 15:29:18 GMT

While I’ve made no bones about being an Azure DevOps fan and use it for all my work, not everyone has the warm feelies for Microsoft that I do.  So what other options are there outside the Microsoft-verse for development?

ClickUp caught my attention recently at a little webinar on Front + Clickup.  Front is their new email solution and it ties nicely into ClickUp, their work management suite.  Since it was free and seemed pretty slick, I felt it warranted a deeper dive.

The other tool that caught my eye in a Facebook ad is Facebook @Workplace, their work portal that feels a bit like Google Wave merged with Slack.  It’s a bit all over the place (feels a lot like something Yahoo! would have put out in the mid-2000s).

And just to round things out, we will use Gitlab instead of Github for some of the source indexing projects.

Let’s dig in!

Setting Up

First, I want to gather all this work around new email addresses.  Gandi.net, which I’ve been using for about 20 years, now includes a basic mail server and webmail.

ClickUp, FB Workplace and Gitlab: (Part 1)
Gandi Mail

But as you know, I host the site out of AWS and use Route53.  This caused a bit of trouble.  In case others do the same, to route emails back to Gandi, you’ll need to update some settings in R53:

I changed the MX record from "10 inbound-smtp.us-east-1.amazonaws.com" to "50 fb.mail.gandi.net" and then changed the old GCP verification TXT record "google-site-verification=moJg-tawlSzIpg0epZ7f1caWc1BPdRAyxAhh4PXuaNA" to the SPF value "v=spf1 include:_mailcust.gandi.net ?all"

ClickUp, FB Workplace and Gitlab: (Part 1)
A Snippet from my Route53 settings on freshbrewed.com
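
Once the records are updated, you can confirm DNS has propagated before testing mail delivery (a quick sketch using dig; nslookup works just as well):

# MX should now point at Gandi
$ dig +short MX freshbrewed.com

# TXT should show the SPF value
$ dig +short TXT freshbrewed.com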

We can now get emails either in our own mail client or their webmail:

ClickUp, FB Workplace and Gitlab: (Part 1)
Gandi offers SOGo and Roundcube
ClickUp, FB Workplace and Gitlab: (Part 1)
Snippet for the SOGo webmail

ClickUp

The next step is to sign up for ClickUp.

ClickUp, FB Workplace and Gitlab: (Part 1)

Once you get past the basics it will take you through the features. You can find those training videos again under Videos in Help. You can also bookmark docs (I did so on the API Docs):

ClickUp, FB Workplace and Gitlab: (Part 1)
Where to find help and videos

Invite Others

We can go to People and add others

ClickUp, FB Workplace and Gitlab: (Part 1)

Users can then accept the invite and join your workspace:

ClickUp, FB Workplace and Gitlab: (Part 1)

When they accept, you can see them switch from Pending, along with the date they logged in:

ClickUp, FB Workplace and Gitlab: (Part 1)

The Integrations section is pretty expansive.

I added Slack and Github. However, Teams required the O365-level version of Teams (not just a user token), so I needed to skip that:

ClickUp, FB Workplace and Gitlab: (Part 1)

Let’s test that notification.  I’ve enabled Slack notifications on all activities:

ClickUp, FB Workplace and Gitlab: (Part 1)

Note: you could be more restrictive on notifications:

ClickUp, FB Workplace and Gitlab: (Part 1)

But for now, I’ll leave it as “All”.

ClickUp, FB Workplace and Gitlab: (Part 1)
Loading some Sprint stories and assignments (also to trigger notifications)

As you can see, I added 4 tasks and assigned one to myself, and I see that reflected in Slack:

ClickUp, FB Workplace and Gitlab: (Part 1)

Gitlab integration

Choose Gitlab and Add integration:

ClickUp, FB Workplace and Gitlab: (Part 1)
ClickUp, FB Workplace and Gitlab: (Part 1)
You'll find the activate in lower right

Tip: the first time through, I realized it’s best to have already set up a “Group” (akin to an AzDO Organization) and a Project (akin to a “Repo”).

So I did that, creating https://gitlab.com/princessking/mygcpclickupdemo

ClickUp, FB Workplace and Gitlab: (Part 1)

I also made a Public repo (which we’ll use) you can view: https://gitlab.com/princessking/gcpclickupdemo

ClickUp, FB Workplace and Gitlab: (Part 1)

We can now add them to ClickUp.

Note, if you don’t see freshly made repos, you may need to remove authorization and re-add them (I did):

ClickUp, FB Workplace and Gitlab: (Part 1)
Where to remove auth to re-add

Adding is a two-step process: you’ll want to add them to ClickUp, then tie them to the right “Spaces” area.  This is where I’m liking ClickUp’s organization, as I could easily see myself breaking projects into various spaces (some for home, some for church, some for the blog, etc.).

ClickUp, FB Workplace and Gitlab: (Part 1)
Linking Repos to Spaces

Google Calendar

Google Calendar integration is nice because we can tie to various calendars. This way I can avoid polluting my family calendar. For now, I’ll use my main calendar.

ClickUp, FB Workplace and Gitlab: (Part 1)
Picking which of my Google Calendars to which to sync

You can also pick what tasks to sync:

ClickUp, FB Workplace and Gitlab: (Part 1)

ClickUp, FB Workplace and Gitlab: (Part 1)

And I can see that now (not sure I’ll want to keep this, as it’s not how I work):

ClickUp, FB Workplace and Gitlab: (Part 1)

Facebook at Workplace:

First we have to use a real business email to create an account. Gmail, Yahoo, etc. won’t suffice:

ClickUp, FB Workplace and Gitlab: (Part 1)

Once we verify our email, we set the groups to create:


ClickUp, FB Workplace and Gitlab: (Part 1)

You can add more people in your domain (e.g. user_chuck@fb.s).  However, only users that have an email ending in your domain can join.  This prevents outside entities from easily gaining access.  The consequence, of course, is everyone needs to have an @yourdomain address to get in.

ClickUp, FB Workplace and Gitlab: (Part 1)
Error you get if you try to use a non @yourdomain address

You can add other domains, but they will be mastered to this “Workplace” so adding gmail, for instance, won’t work:

ClickUp, FB Workplace and Gitlab: (Part 1)

FB @Workplace isn’t necessarily free, however.

You can see that when you enroll, you are in a 30-day Advanced trial:

ClickUp, FB Workplace and Gitlab: (Part 1)

The “Essential” free one includes a lot of things:

ClickUp, FB Workplace and Gitlab: (Part 1)

But notably, the API for bots is lacking, which deflates my sails:

ClickUp, FB Workplace and Gitlab: (Part 1)
Boo! No Bots?

Creating an integration is easy. Just go to Integrations, create a new app, and click Create:

ClickUp, FB Workplace and Gitlab: (Part 1)

You then create an Access Token you can use:

ClickUp, FB Workplace and Gitlab: (Part 1)
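
With that token, a custom integration can post on behalf of the bot.  A hypothetical sketch (GROUP_ID and ACCESS_TOKEN are placeholders, and this assumes Workplace's standard Graph API group feed endpoint):

# Post a simple message to a Workplace group feed
$ curl -X POST "https://graph.facebook.com/GROUP_ID/feed" \
    -H "Authorization: Bearer ACCESS_TOKEN" \
    -d "message=Build 123 completed successfully"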

We can also create a Publishing Bot. This is the easiest way to create some build notifications.

ClickUp, FB Workplace and Gitlab: (Part 1)
ClickUp, FB Workplace and Gitlab: (Part 1)

These post like this:

ClickUp, FB Workplace and Gitlab: (Part 1)

and show up in the news feed as such:

ClickUp, FB Workplace and Gitlab: (Part 1)

The notification will then come through your email:

ClickUp, FB Workplace and Gitlab: (Part 1)

Working in ClickUp

Let’s move that task into In Progress and populate a repo to start.

First, I’ll clone the repo and set the license type in the README.md as a simple change:

$ git clone https://gitlab.com/princessking/gcpclickupdemo.git
Cloning into 'gcpclickupdemo'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (3/3), 226 bytes | 113.00 KiB/s, done.
$ cd gcpclickupdemo/
$ ls
README.md
$ vi README.md 
$ git add README.md 
$ git commit -m "set License Type to MIT"
[master c93c8a4] set License Type to MIT
 1 file changed, 3 insertions(+), 1 deletion(-)
$ git push
Username for 'https://gitlab.com': isaac.johnson@gmail.com
Password for 'https://isaac.johnson@gmail.com@gitlab.com': 
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 12 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 301 bytes | 301.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://gitlab.com/princessking/gcpclickupdemo.git
   41670fe..c93c8a4  master -> master

Now I can go to the task and manually associate the commit:

ClickUp, FB Workplace and Gitlab: (Part 1)

Which will now show under the story:

ClickUp, FB Workplace and Gitlab: (Part 1)

We can also mention our ticket ID (e.g. #6ya7d7) in the commit message:

$ vi CONTRIBUTING.md
$ git add CONTRIBUTING.md 
$ git commit -m "#6ya7d7 : Create Contributing"
[master e623339] #6ya7d7 : Create Contributing
 1 file changed, 92 insertions(+)
 create mode 100644 CONTRIBUTING.md
$ git push
Enumerating objects: 4, done.
Counting objects: 100% (4/4), done.
Delta compression using up to 12 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 2.15 KiB | 2.15 MiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://gitlab.com/princessking/gcpclickupdemo.git
   c93c8a4..e623339  master -> master

Which will automatically associate the commit with the task.  You can get the ticket ID from the Add window (upper right, see the manual image above) or from the URL:

ClickUp, FB Workplace and Gitlab: (Part 1)

Summary

We took some time to get an overview of ClickUp and its integrations with Google Calendar, Gitlab and Slack.  We also reviewed Facebook's "@Workplace" suite and showed what it can do.  We wrapped up by creating some stories in a Sprint and checking in code associated with a Story.

In our next post we'll dig into ClickUp integrations, Gitlab build pipelines and show an end-to-end build and deploy with GCP App Engine Flex.

]]>
<![CDATA[Operationalization in Day 2 Operations]]>In the world of Operations, there is a tenet of supportability called Operationalization.  This is a method of measurement used when a tool or system moves from Proof of Concept (POC), past Early Adoption/Pilot, on to “Supported”.  To dig into Day 2 considerations, we first need to lay the groundwork

]]>
http://localhost:2368/operationalization-in-day-2-operations/5eb00e80489df9937cd956fcMon, 04 May 2020 13:17:24 GMT

In the world of Operations, there is a tenet of supportability called Operationalization.  This is a method of measurement used when a tool or system moves from Proof of Concept (POC), past Early Adoption/Pilot, on to “Supported”.  To dig into Day 2 considerations, we first need to lay the groundwork by discussing ROI considerations and the Solution Lifecycle.

Return on Investment (ROI) Considerations:

When considering software (or hardware) solutions, fundamentally there are three things we must consider that form the basis for ROI:

  • Cost: What is the Total Cost of Ownership (TCO)? We will dig into this, but cost goes beyond sticker price.
  • Benefit: How does this solution improve business processes? Whom does it affect (both positively and negatively)?
  • Lifespan: All solutions have a lifespan - what does this one look like, and how can we maintain data portability and avoid vendor lock-in?

Cost

First, let’s tackle cost.  This is where developers can easily get caught in the concept of “free is best”.  While there is a lot of fantastic Open Source software out there, and general purpose hardware, there is a cost associated with rolling it out and maintaining it.

An Example:  Frank is a fantastic developer and loves working on Acme’s latest line of southwestern land-based bird elimination devices; however, the source code repository, Source Safe, is long past its prime and only supports older Windows operating systems. Current laptops cannot use it, and he would like to move the company to the popular GIT-based tooling.

Frank proposes the company adopt Open Source GIT and host it on a Linux computer locally.  His boss Chuck insists on a pilot and shortly thereafter has the following observations:

  1. The GIT server requires Frank to create new Linux accounts with SSH access
  2. The server requires holes punched in the company's firewall for remote development (Frank suggests using a cloud-based computer, which would cost $20/month for their current scale)
  3. The current Linux VM Frank used is on older hardware and is not backed up.

Chuck determines that the annual cost would be half of Frank’s salary (assuming it takes half of Frank’s time) and at least $40 a month in hosting costs (as Frank tends to underestimate).  There is also the security risk of hand-maintaining the access (both for users’ attrition and general SSH security access).

Comparatively, Github.com would be $9/user/month and Azure DevOps would be $8/user/month and could tie to their AAD and/or SAML backends.

Benefit

The second characteristic we consider is Benefit - who benefits from this solution. There are the obvious direct impacts, but there are downstream effects with any solution we consider.

In the case of source code control, one could argue it's the developers, but the benefit goes further - the IT staff who have to find and manage deprecated software will benefit from a reduced maintenance effort.

Using a SaaS option takes hosting and networking out of the picture, reducing load on the datacenter.  The CSO knows a large company (like Microsoft in the case of Github and AzDO) will manage the security vulnerabilities, and that benefit goes directly to lowering risk.  Lastly, HR benefits - finding talent for the company willing to use Source Safe has proven difficult.  Having current technology attracts talent.

An Example: Chuck, Frank’s boss, suggests Github - it’s popular and well supported.  But Frank balks at a product now owned by Microsoft. Many of the developers have a negative view of MS and would prefer to steer clear of its properties; furthermore, Frank points out that the company currently has JIRA for ticketing and Crowd for user management.  Adding Atlassian Bitbucket might tie in nicely with the rest of their tool suite. Chuck starts a conversation with sourcing to determine the pricing of various Bitbucket options to see if it’s feasible.

Lifespan

The third component we need to discuss is lifespan.  After a product is rolled out from a pilot into early adoption, it will likely grow legs and take off on its own.  However, all tools eventually get old (such as our source safe) and need to be replaced.  It’s key to understand what can be done for data portability.

In the case of GIT, the repository is self-contained with branches and history - but the policies that manage Pull Requests/Merge Requests vary by vendor.  Chuck ensures proper process will be put in place (operationalization) to track the policies and ensure they can be reapplied.

Solution Lifecycles

Before we can tackle Operationalization, we also must review the solution lifecycle.  There are many variants of this, but we argue the basics hold true: first a reason to change comes in - be it a fresh idea (do it different), a problem that is affecting the business or an opportunity (like a cost savings deal on premium equipment).  

Operationalization in Day 2 Operations
Solution Lifecycle

This then often translates into a proposal, but sometimes if it’s straightforward enough, it will be a proposal and a POC to demonstrate the usefulness at the same time.  This Proof of Concept will get stakeholder buy-in enough to drive a pilot - a real working example.  

Once the pilot has sign off by interested parties (and often paperwork is in place and a Purchase Order approved), we move into the Phased Rollout - where the solution is put in place while the prior system (if relevant) is retired. Lastly, when the solution is eventually deemed End of Life, we retire and archive any data.  If relevant, we migrate old data (as would be the case in revision control systems).

Operationalization

The term Operationalization is actually borrowed from social research.  In social research, when there is a concept one wishes to test, such as Socioeconomic Status or Parenting Abilities, one uses Operationalization to measure it.  Whether it’s measured at the nominal, ordinal, interval or ratio level, the point is to measure in a consistent way.

Operationalization has another meaning as well.  To operationalize something is to make it operational, functional, running.  The process of doing so is operationalization.  In our lifecycle above, we see operationalization come into play in the Day 2 side of operations:

Operationalization in Day 2 Operations
Solution Lifecycle lined up with Day 0/1/2

In Day 2 Operationalization we need to focus on:

  1. Who is doing the work - and how are they supported (usually over time - that is, often a cadre of consultants starts a project, but frontline support and operations teams must maintain the systems after rollout).
  2. Security and Maintenance - who is monitoring security and applying patches? What is the process for upgrades, and what does our support contract detail (do we have 24/7 live support or a community BBS)?
  3. What is our Service Level Agreement (SLA)?  What is our promised uptime, performance, response times - and who handles that, and how?
  4. Who pays for expansion and scaling out/up - often, as is the case for software solutions, other departments can see 'the shiny new' and want in - who will pay for their seats, is there a chargeback model, does the solution scale (both in licensing and performance)?

Who is doing the work:

Often the pilot is well staffed and the vendors (or engaged developers around Open Source solutions) drive the pilot, but it’s in the phased rollout things get interesting.  Will the key developers who drove the OSS pilot help onboard other teams?

Who is handling teaching and training the consumers of the solution - be it the business, developers or operations?  Perhaps the solution was dead simple to the Erlang and Rust developers, but the Source Safe VisualBasic users are completely confused. Perhaps the developers have a handle on the new MarkDown syntax, but the requirements writers only know MS Word.  

When considering training, look to existing online resources and vendor classes.  Often a level of training can be bundled in with more expensive solution build-outs.  

Security and Upgrades

With cloud solutions and SaaS/PaaS offerings, there is the concept of the shared responsibility model.  Cloud vendors like to hammer home that they are responsible for the cloud, while you are responsible for what's in the cloud.

It’s still important to know who will be monitoring for downtimes, unexpected usage that could indicate intrusions, CVEs and security patches. Too often security vulnerabilities go unpatched simply because there isn't a defined process to monitor for, and subsequently plan for and address, security issues.

Documentation

When looking at rolling out, all users of the system must be considered.  A plan must be put in place on how to onboard and train them.  And if training is not direct and in-house, materials must be sourced and presented (this is a good use of recorded training with live followups).

Once rolled out, Front Line Support teams will need to maintain an SLA, and the only way to do so is with Runbooks and Maintenance Guides.  Runbooks are a critical requirement of any phased rollout.  They are the solution's equivalent of FAQs.  During the pilot, the implementation team should take a critical ear to all issues and write documentation as they onboard.

Documentation needs to be created for security and maintenance updates (who do we call, how and when can we reach them)? Documentation must also be created for the user base of the system.  Live documentation systems such as wiki, markdown and confluence are great tools to capture and address issues with the new solution.

SLAs and Scaling

We must define what support the operations team will be providing. SLAs define what is covered (and what is not), when and how the FLS and Operations team can be reached, and methods for escalation.  Using an email DL is not enough.  SLAs should have scoping statements and clear pointers to documentation. SLAs must include time frames, ticketing system backends and methods to escalate.

In the long tail of Day 2, we lastly need to consider Expansions and how we Scale, both up/down and in/out.  That is, if the solution takes legs and needs to scale out (expand to other BUs/Departments), how can we address that?  Will we support it (and if so, do we charge back those business units)?  All the questions we raised in our own rollout apply - SLAs, support, documentation, users of the system.  We also should consider how we scale in.  That is, if our solution usage goes down, how do we consolidate hardware or VMs?  Can we reduce our license in future versions?

Scaling up refers to demands on the system.  Did we budget properly? Is our usage driving larger VMs, or requiring more robust hardware? How do we measure performance?  Are we meeting that performance? If we need to scale up to address performance, how do we do that - what is the process and who must sign off?

With the case of cloud providers, knowing your estimated usage can save the business money by buying in advance (reserved) or scaling up during off peak (spot instances).

Scaling down is similar: if we realize we overprovisioned, or chose a class of equipment that is too large, how do we know, and how do we scale down?  Focusing on these things saves the company considerably and will often lead to more trust within the business, by considering cost-saving measures in advance.

End of Life

Lastly, when it comes time to end-of-life a solution, are we able to migrate our data out? Often the question of data persistence and portability comes up when we examine our solution for Disaster Recovery (HA/DR).  Where are our backups going, in what format, and how can we transform them for a new solution?

An example: Rational ClearCase was ahead of its time.  It was (is) a fantastic revision control system that could handle all sorts of source code, document and binary types. It was so good that IBM bought the company and increased the license cost beyond $10k a seat.  Companies I was involved with sought to move their data out to save money.

But trying to export from complicated configspecs and a multitude of VOBs, especially if CC UCM was involved with ClearQuest, was a huge undertaking.  More than once, I saw a migration to Subversion over tools similar to CC (like Perforce and its Workspace specifications) because Subversion was Open Source and portable.

It was because of this lack of data portability and high license cost, I put forward, that ClearCase lost market share over time.  In fact, there is a well-known term for when a superior solution is overtaken just because of marketing and cost: Betamaxed.

Summary

We covered a lot of ground in this article, but hopefully we laid some fundamentals you can take away as you consider solutions.  We first covered the three considerations of ROI - cost, benefit and lifespan. Next we dug into the solution lifecycle from reasons for change, through pilot into phased rollout and lastly, how we end of life solutions.  

We talked about key considerations for Day 2 operations: Who is implementing, updating and supporting the tool. Support considerations covered Security, Maintenance and Updates.  We also talked about Performance monitoring, SLAs and scaling (both out/in and up/down).  Lastly we spoke about Data Portability and the need for Documentation at all phases of a solution.

Topics we didn’t cover but that should be on deck for highly efficient operations: automated runbooks and monitoring (solutions like BigPanda and Datadog), alerting and escalation (PagerDuty and VictorOps) and how to apply CMMI and Lean Engineering principles for continuous improvement.

]]>
<![CDATA[Azure DevOps : Templates and Integrating Teams Notifications]]>Now that we have YAML pipelines, one powerful feature we’ll want to leverage is the use of templates.  Templates are the YAML equivalent of Task Groups - you can take common sets of functionality and move them into separate source files.  This enables code-reuse and can make pipelines more

]]>
http://localhost:2368/azure-devops-templates-and-integrating-teams-notifications/5eac0bfbad9f5806b8ad282fFri, 01 May 2020 12:09:58 GMT

Now that we have YAML pipelines, one powerful feature we’ll want to leverage is the use of templates.  Templates are the YAML equivalent of Task Groups - you can take common sets of functionality and move them into separate source files.  This enables code-reuse and can make pipelines more manageable.

The other thing we want to do is tackle Teams.  Microsoft Teams is free (either as an individual or bundled in your organization’s O365 subscription).  I will dig into both ways of integrating MS Teams into your pipelines.

Teams

You’ll want to bring up Teams and go to your “Team”.  By default, you’ll have a “General” Channel.  What we will want to do is add a Build_Notifications channel:

Azure DevOps : Templates and Integrating Teams Notifications
Adding a channel
Azure DevOps : Templates and Integrating Teams Notifications

Next we are going to show how to add notifications two ways.

Connectors

The first is through connectors:

Azure DevOps : Templates and Integrating Teams Notifications
Connectors in menu on Channel in Teams

If you have an Office365 subscription, you’ll see Azure DevOps as a connector:

Azure DevOps : Templates and Integrating Teams Notifications
Channel selector

If you do not, Azure DevOps won’t be listed:

Azure DevOps : Templates and Integrating Teams Notifications

We could go through the steps of marrying our AzDO with AAD - but I shared my AzDO outside my organization and I don’t want to yank access for the non-AD members:

Azure DevOps : Templates and Integrating Teams Notifications
AD Setup in Azure DevOps

However, I am a member in a different Teams/AzDO so I can take you through the integration as one might experience in an O365/AAD environment.

Azure DevOps : Templates and Integrating Teams Notifications
Adding Azure DevOps connector

There is a nuance though.  Configure will always launch a new wizard:

Azure DevOps : Templates and Integrating Teams Notifications
Azure DevOps : Templates and Integrating Teams Notifications

Despite being blank above, I know the integration is working, as I see PR notifications just fine:

Azure DevOps : Templates and Integrating Teams Notifications
Shows that even though configure above suggests we haven't set it up, my notifications in Teams prove it is working

If we wish to see the connector that was created, we will find the Teams integration actually under Project Settings / Service Hooks in AzDO:

Azure DevOps : Templates and Integrating Teams Notifications

So can we add the Service Hook to Teams from the AzDO side if we are blocked on the Teams side?

While so far the answer is no, I can take you through what I tried:

Azure DevOps : Templates and Integrating Teams Notifications
Adding Service Hook

First, in service hooks, click the “+” and navigate to Microsoft Teams, and choose Next.. and...

Azure DevOps : Templates and Integrating Teams Notifications
Azure DevOps blocks adding the Teams Service Hook from the AzDO side...

Nuts.  But it does make sense.

If I dig into the WebHook URL, the part that is underlined in red actually matches my AAD tenant:

Azure DevOps : Templates and Integrating Teams Notifications
Azure DevOps : Templates and Integrating Teams Notifications
zoom in on the connector part that matters

Teams Notifications with Web Hooks

Another way we can do this is using webhooks in Teams

Azure DevOps : Templates and Integrating Teams Notifications
Use Incoming Webhook connector in Teams Channels menu

Click Add

Azure DevOps : Templates and Integrating Teams Notifications

Next, we’ll add an image (any icon works) and give it a name:

Azure DevOps : Templates and Integrating Teams Notifications
Note i just used an AzDO icon - you can make your notifications icon anything you like

When we click create, we’ll get a URL

Azure DevOps : Templates and Integrating Teams Notifications
Copy Webhook URL under icon

And we can click done..

That URL looks like :

https://outlook.office.com/webhook/782a3e08-de5f-437c-98ec-6b52458908a2@782a3e08-de5f-437c-98ec-6b52458908a2/IncomingWebhook/24cb02989ad246fba5d6e19ceade1fad/782a3e08-de5f-437c-98ec-6b52458908a2

That URL requires no authentication, so it’s worth protecting.
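
One simple way to keep it out of source control (a sketch, assuming you define a secret pipeline variable such as TEAMS_WEBHOOK_URL and map it into the step's environment) is to reference the variable rather than hardcoding the URL:

# TEAMS_WEBHOOK_URL is a hypothetical secret variable mapped into the script's env
curl -X POST -H "Content-Type: application/json" -d @post.json "$TEAMS_WEBHOOK_URL"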

Next we'll need a "card", which is how we craft our message. To fashion a card, you can use this MessageCard playground:

Let’s try an example:

builder@DESKTOP-JBA79RT:~$ cat post.json
{
        "@type": "MessageCard",
        "@context": "https://schema.org/extensions",
        "summary": "1 new build message",
        "themeColor": "0078D7",
        "sections": [
                {
                        "activityImage": "https://ca.slack-edge.com/T4AQPQN8M-U4AL2JFC3-g45c8854734a-48",
                        "activityTitle": "Isaac Johnson",
                        "activitySubtitle": "2 hours ago",
                        "facts": [
                                {
                                        "name": "Keywords:",
                                        "value": "AzDO Ghost-Blog Pipeline"
                                },
                                {
                                        "name": "Group:",
                                        "value": "Build 123456.1 Succeeded"
                                }
                        ],
                        "text": "Build 123456.1 has completed.  Freshbrewed.com is updated.  The world now has just that much more joy in it.\n<br/>\n  Peace and love.",
                        "potentialAction": [
                                {
                                        "@type": "OpenUri",
                                        "name": "View conversation"
                                }
                        ]
                }
        ]
}
builder@DESKTOP-JBA79RT:~$ curl -X POST -H "Content-Type: application/json" -d @post.json https://outlook.office.com/webhook/782a3e08-de5f-437c-98ec-6b52458908a2@782a3e08-de5f-437c-98ec-6b52458908a2/IncomingWebhook/24cb02989ad246fba5d6e19ceade1fad/782a3e08-de5f-437c-98ec-6b52458908a2
Azure DevOps : Templates and Integrating Teams Notifications
example notification

Now let’s add that into our Azure-Pipelines yaml:

  - stage: build_notification
    jobs:
      - job: release
        pool:
          vmImage: 'ubuntu-latest'
        steps:
        - bash: |
           #!/bin/bash
           set -x
           umask 0002

           cat > ./post.json <<'endmsg'
           {
                   "@type": "MessageCard",
                   "@context": "https://schema.org/extensions",
                   "summary": "1 new build message",
                   "themeColor": "0078D7",
                   "sections": [
                           {
                                   "activityImage": "https://ca.slack-edge.com/T4AQPQN8M-U4AL2JFC3-g45c8854734a-48",
                                   "activityTitle": "$(Build.SourceVersionAuthor)",
                                   "activitySubtitle": "$(Build.SourceVersionMessage) - $(Build.SourceBranchName) - $(Build.SourceVersion)",
                                   "facts": [
                                           {
                                                   "name": "Keywords:",
                                                   "value": "$(System.DefinitionName)"
                                           },
                                           {
                                                   "name": "Group:",
                                                   "value": "$(Build.BuildNumber)"
                                           }
                                   ],
                                   "text": "Build $(Build.BuildNumber) has completed. Pipeline started at $(System.PipelineStartTime). Freshbrewed.com is updated.  The world now has just that much more joy in it.\n<br/>\n  Peace and love.",
                                   "potentialAction": [
                                           {
                                                   "@type": "OpenUri",
                                                   "name": "View conversation"
                                           }
                                   ]
                           }
                   ]
           }
           endmsg

           curl -X POST -H "Content-Type: application/json" -d @post.json https://outlook.office.com/webhook/782a3e08-de5f-437c-98ec-6b52458908a2@782a3e08-de5f-437c-98ec-6b52458908a2/IncomingWebhook/24cb02989ad246fba5d6e19ceade1fad/782a3e08-de5f-437c-98ec-6b52458908a2
          displayName: 'Bash Script'

That should now call the Teams REST API when our build completes:

Azure DevOps : Templates and Integrating Teams Notifications

Which produced:

Azure DevOps : Templates and Integrating Teams Notifications

And note, our REST Call for slack is still in our Service Hooks:

Azure DevOps : Templates and Integrating Teams Notifications
Azure DevOps : Templates and Integrating Teams Notifications

I should also point out that calling Slack or Teams can be done via a Maven pom.xml as well.  I have some Golang projects today that use mvn-golang-wrapper from com.igormaznitsa.  There is a notification profile I use on those builds to update Slack directly:

<profile>
  <id>slack</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <version>1.5.0</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>exec</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <executable>wget</executable>
          <!-- optional -->
          <arguments>
            <argument>https://slack.com/api/chat.postMessage?token=${slacktoken}&amp;channel=%23builds&amp;text=Just%20completed%20a%20new%20build:%20${env.SYSTEM_TEAMPROJECT}%20${env.BUILD_DEFINITIONNAME}%20build%20${env.BUILD_BUILDNUMBER}%20${env.AGENT_JOBSTATUS}%20of%20branch%20${env.BUILD_SOURCEBRANCHNAME}%20(${env.BUILD_SOURCEVERSION}).%20get%20audl-go-client-${env.BUILD_BUILDNUMBER}.zip%20from%20https://drive.google.com/open?id=0B_fxXZ0azjzTLWZTcGFtNGIzMG8&amp;icon_url=http%3A%2F%2F68.media.tumblr.com%2F69189dc4f6f5eb98477f9e72cc6c6a81%2Ftumblr_inline_nl3y29qlqD1rllijo.png&amp;pretty=1</argument>
            <!-- argument>"https://slack.com/api/chat.postMessage?token=xoxp-123123123-12312312312-123123123123-123asd123asd123asd123asd&channel=%23builds&text=just%20a%20test3&icon_url=http%3A%2F%2F68.media.tumblr.com%2F69189dc4f6f5eb98477f9e72cc6c6a81%2Ftumblr_inline_nl3y29qlqD1rllijo.png&pretty=1"</argument -->
          </arguments>
        </configuration>
      </plugin>
   </plugins>
 </build>
</profile>
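
A build agent then just needs to activate that profile and supply the token property (a sketch; the token value is a placeholder):

# Activate the slack profile during packaging and pass in the Slack token
$ mvn clean package -Pslack -Dslacktoken=xoxp-REDACTED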

Using YAML Templates

The next step we will want to do with YAML pipelines is start to leverage the equivalent of task groups, template files.

Template files are how we create reusable subroutines - bundled common functionality that can be used by more than one pipeline and managed separately.

Let’s take for instance my release sections:

  - stage: release_prod
    dependsOn: build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    jobs:
      - job: release
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: DownloadBuildArtifacts@0
            inputs:
              buildType: "current"
              downloadType: "single"
              artifactName: "drop"
              downloadPath: "_drop"
          - task: ExtractFiles@1
            displayName: 'Extract files '
            inputs:
              archiveFilePatterns: '**/drop/*.zip'
              destinationFolder: '$(System.DefaultWorkingDirectory)/out'
 
          - bash: |
              #!/bin/bash
              export
              set -x
              export DEBIAN_FRONTEND=noninteractive
              sudo apt-get -yq install tree
              cd ..
              pwd
              tree .
            displayName: 'Bash Script Debug'
 
          - task: AmazonWebServices.aws-vsts-tools.S3Upload.S3Upload@1
            displayName: 'S3 Upload: freshbrewed.science html'
            inputs:
              awsCredentials: freshbrewed
              regionName: 'us-east-1'
              bucketName: freshbrewed.science
              sourceFolder: '$(System.DefaultWorkingDirectory)/o'
              globExpressions: '**/*.html'
              filesAcl: 'public-read'
 
          - task: AmazonWebServices.aws-vsts-tools.S3Upload.S3Upload@1
            displayName: 'S3 Upload: freshbrewed.science rest'
            inputs:
              awsCredentials: freshbrewed
              regionName: 'us-east-1'
              bucketName: freshbrewed.science
              sourceFolder: '$(System.DefaultWorkingDirectory)/o'
              filesAcl: 'public-read'
 
 
  - stage: release_test
    dependsOn: build
    condition: and(succeeded(), ne(variables['Build.SourceBranch'], 'refs/heads/master'))
    jobs:
      - job: release
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: DownloadBuildArtifacts@0
            inputs:
              buildType: "current"
              downloadType: "single"
              artifactName: "drop"
              downloadPath: "_drop"
          - task: ExtractFiles@1
            displayName: 'Extract files '
            inputs:
              archiveFilePatterns: '**/drop/*.zip'
              destinationFolder: '$(System.DefaultWorkingDirectory)/out'
 
          - bash: |
              #!/bin/bash
              export
              set -x
              export DEBIAN_FRONTEND=noninteractive
              sudo apt-get -yq install tree
              cd ..
              pwd
              tree .
            displayName: 'Bash Script Debug'
 
          - task: AmazonWebServices.aws-vsts-tools.S3Upload.S3Upload@1
            displayName: 'S3 Upload: freshbrewed.science rest'
            inputs:
              awsCredentials: freshbrewed
              regionName: 'us-east-1'
              bucketName: freshbrewed-test
              sourceFolder: '$(System.DefaultWorkingDirectory)/o'
              filesAcl: 'public-read'

You’ll notice that they are similar - really, just the bucket names are different. We could bundle those into templates.  First, let’s create a task:

Azure DevOps : Templates and Integrating Teams Notifications

Next we can make a branch and create the files we need:

builder@DESKTOP-JBA79RT:~/Workspaces/ghost-blog$ git checkout -b feature/56-make-template

The first thing we’ll do is create a templates folder and add files for the AWS upload and the notifications, as these are the parts we want to re-use right away.

templates/aws-release.yaml:

jobs:
- job:
  pool:
    vmImage: 'ubuntu-latest'
 
  steps:
  - task: DownloadBuildArtifacts@0
    inputs:
      buildType: "current"
      downloadType: "single"
      artifactName: "drop"
      downloadPath: "_drop"
  - task: ExtractFiles@1
    displayName: 'Extract files '
    inputs:
      archiveFilePatterns: '**/drop/*.zip'
      destinationFolder: '$(System.DefaultWorkingDirectory)/out'
 
  - bash: |
      #!/bin/bash
      export
      set -x
      export DEBIAN_FRONTEND=noninteractive
      sudo apt-get -yq install tree
      cd ..
      pwd
      tree .
    displayName: 'Bash Script Debug'
 
  - task: AmazonWebServices.aws-vsts-tools.S3Upload.S3Upload@1
    displayName: 'S3 Upload: ${{ parameters.awsBucket }} html'
    inputs:
      awsCredentials: ${{ parameters.awsCreds }}
      regionName: 'us-east-1'
      bucketName: ${{ parameters.awsBucket }}
      sourceFolder: '$(System.DefaultWorkingDirectory)/o'
      globExpressions: '**/*.html'
      filesAcl: 'public-read'
 
  - task: AmazonWebServices.aws-vsts-tools.S3Upload.S3Upload@1
    displayName: 'S3 Upload: ${{ parameters.awsBucket }} rest'
    inputs:
      awsCredentials: ${{ parameters.awsCreds }}
      regionName: 'us-east-1'
      bucketName: ${{ parameters.awsBucket }}
      sourceFolder: '$(System.DefaultWorkingDirectory)/o'
      filesAcl: 'public-read'

templates/notifications.yaml:

jobs:
- job:
  pool:
    vmImage: 'ubuntu-latest'
    
  steps:
  - bash: |
      #!/bin/bash
      set -x
      umask 0002
      
      cat > ./post.json <<'endmsg'
      {
              "@type": "MessageCard",
              "@context": "https://schema.org/extensions",
              "summary": "1 new build message",
              "themeColor": "0078D7",
              "sections": [
                      {
                              "activityImage": "https://ca.slack-edge.com/T4AQPQN8M-U4AL2JFC3-g45c8854734a-48",
                              "activityTitle": "$(Build.SourceVersionAuthor)",
                              "activitySubtitle": "$(Build.SourceVersionMessage) - $(Build.SourceBranchName) - $(Build.SourceVersion)",
                              "facts": [
                                      {
                                              "name": "Keywords:",
                                              "value": "$(System.DefinitionName)"
                                      },
                                      {
                                              "name": "Group:",
                                              "value": "$(Build.BuildNumber)"
                                      }
                              ],
                              "text": "Build $(Build.BuildNumber) has completed. Pipeline started at $(System.PipelineStartTime). ${{ parameters.awsBucket }} bucket updated and ${{ parameters.siteUrl }} is live.  The world now has just that much more joy in it.\n<br/>\n  Peace and love.",
                              "potentialAction": [
                                      {
                                              "@type": "OpenUri",
                                              "name": "View conversation"
                                      }
                              ]
                      }
              ]
      }
      endmsg
      
      curl -X POST -H "Content-Type: application/json" -d @post.json https://outlook.office.com/webhook/ad403e91-2f21-4c3c-92e0-02d7dce003ff@ad403e91-2f21-4c3c-92e0-02d7dce003ff/IncomingWebhook/ad403e91-2f21-4c3c-92e0-02d7dce003ff/ad403e91-2f21-4c3c-92e0-02d7dce003ff
    displayName: 'Bash Script'

Now we can change the bottom of our YAML file to use those files:

trigger:
  branches:
    include:
    - master
    - develop
    - feature/*
 
pool:
  vmImage: 'ubuntu-latest'
 
stages:
  - stage: build
    jobs:
      - job: start_n_sync
        displayName: start_n_sync
        continueOnError: false
        steps:
          - task: DownloadPipelineArtifact@2
            displayName: 'Download Pipeline Artifact'
            inputs:
              buildType: specific
              project: 'f83a3d41-3696-4b1a-9658-88cecf68a96b'
              definition: 12
              buildVersionToDownload: specific
              pipelineId: 484
              artifactName: freshbrewedsync
              targetPath: '$(Pipeline.Workspace)/s/freshbrewed5b'
 
          - task: NodeTool@0
            displayName: 'Use Node 8.16.2'
            inputs:
              versionSpec: '8.16.2'
 
          - task: Npm@1
            displayName: 'npm run setup'
            inputs:
              command: custom
              verbose: false
              customCommand: 'run setup'
 
          - task: Npm@1
            displayName: 'npm install'
            inputs:
              command: custom
              verbose: false
              customCommand: 'install --production'
 
          - bash: |
              #!/bin/bash
              set -x
              npm start &
              wget https://freshbrewed.science
              which httrack
              export DEBIAN_FRONTEND=noninteractive
              sudo apt-get -yq install httrack tree curl || true
              wget https://freshbrewed.science
              which httrack
              httrack https://freshbrewed.science  -c16 -O "./freshbrewed5b" --quiet -%s
            displayName: 'install httrack and sync'
            timeoutInMinutes: 120
 
          - task: PublishBuildArtifacts@1
            displayName: 'Publish artifacts: sync'
            inputs:
              PathtoPublish: '$(Build.SourcesDirectory)/freshbrewed5b'
              ArtifactName: freshbrewedsync
 
          - task: Bash@3
            displayName: 'run static fix script'
            inputs:
              targetType: filePath
              filePath: './static_fix.sh'
              arguments: freshbrewed5b
 
          - task: ArchiveFiles@2
            displayName: 'Archive files'
            inputs:
              rootFolderOrFile: '$(Build.SourcesDirectory)/freshbrewed5b'
              includeRootFolder: false
 
          - task: PublishBuildArtifacts@1
            displayName: 'Publish artifacts: drop'
 
  - stage: release_prod
    dependsOn: build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    jobs:
    - template: templates/aws-release.yaml
      parameters:
        awsCreds: freshbrewed
        awsBucket: freshbrewed.science
 
  - stage: build_notification_prod
    dependsOn: release_prod
    jobs:
    - template: templates/notifications.yaml
      parameters:
        siteUrl: https://freshbrewed.com
        awsBucket: freshbrewed.com
        
  - stage: release_test
    dependsOn: build
    condition: and(succeeded(), ne(variables['Build.SourceBranch'], 'refs/heads/master'))
    jobs:
    - template: templates/aws-release.yaml
      parameters:
        awsCreds: freshbrewed
        awsBucket: freshbrewed-test
 
  - stage: build_notification_test
    dependsOn: release_test
    jobs:
    - template: templates/notifications.yaml
      parameters:
        siteUrl: http://freshbrewed-test.s3-website-us-east-1.amazonaws.com/
        awsBucket: freshbrewed-test

You’ll also notice the addition of dependsOn in the notifications so we can fork properly.

Now let's test

builder@DESKTOP-JBA79RT:~/Workspaces/ghost-blog$ git commit -m "Update Azure Pipelines to use Templates"
[feature/56-make-template af7f3759] Update Azure Pipelines to use Templates
 3 files changed, 217 insertions(+), 224 deletions(-)
 rewrite azure-pipelines.yml (65%)
 create mode 100644 templates/aws-release.yaml
 create mode 100644 templates/notifications.yaml

builder@DESKTOP-JBA79RT:~/Workspaces/ghost-blog$ git push --set-upstream origin feature/56-make-template
Counting objects: 8, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (8/8), done.
Writing objects: 100% (8/8), 1.58 KiB | 1.58 MiB/s, done.
Total 8 (delta 5), reused 0 (delta 0)
remote: Analyzing objects... (8/8) (1699 ms)
remote: Storing packfile... done (215 ms)
remote: Storing index... done (107 ms)
remote: We noticed you're using an older version of Git. For the best experience, upgrade to a newer version.
To https://princessking.visualstudio.com/ghost-blog/_git/ghost-blog
 * [new branch]        feature/56-make-template -> feature/56-make-template
Branch 'feature/56-make-template' set up to track remote branch 'feature/56-make-template' from 'origin'.

We can immediately see our push is working:

Azure DevOps : Templates and Integrating Teams Notifications
Azure DevOps : Templates and Integrating Teams Notifications
Azure DevOps : Templates and Integrating Teams Notifications

Summary

Teams is making significant headway against others like Slack today (as most of us are home during the COVID-19 crisis).  Integrating automated build notifications is a great way to keep your development team up to date without having to rely on emails.  

YAML Templates are quite powerful.  In fact, there is a lot more we can do with templates (see full documentation here) such as referring to another GIT repo and tag:

resources:
  repositories:
  - repository: templates
    name: PrincessKing/CommonTemplates
    endpoint: myServiceConnection # AzDO service connection
jobs:
- template: common.yml@templates

The idea that we can use non-public templates across organizations is a very practical way to scale Azure DevOps.

]]>
<![CDATA[DataDog: Kubernetes and AzDO]]>In our exploration of Logging and Metrics tooling, one company/product worth checking out is Datadog. It's been around since 2010 and is one of the larger offerings with over 350 integrations out of the box.  

Let's see what it can do on a variety of systems.

Create a Kubernetes

]]>
https://freshbrewed.science/datadog-kubernetes-and-azdo/5e9fabeb45cd380538d24217Wed, 22 Apr 2020 12:00:36 GMT

In our exploration of Logging and Metrics tooling, one company/product worth checking out is Datadog. It's been around since 2010 and is one of the larger offerings with over 350 integrations out of the box.  

Let's see what it can do on a variety of systems.

Create a Kubernetes Cluster (AKS)

builder@DESKTOP-2SQ9NQM:~$ az aks list
[]
builder@DESKTOP-2SQ9NQM:~$ az group create --name idjaks06rg --location centralus
{
  "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/idjaks06rg",
  "location": "centralus",
  "managedBy": null,
  "name": "idjaks06rg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
builder@DESKTOP-2SQ9NQM:~$ az ad sp create-for-rbac -n idjaks06sp --skip-assignment --output json > my_dev6_sp.json
Changing "idjaks06sp" to a valid URI of "http://idjaks06sp", which is the required format used for service principal names

We need to check the current versions available to us in Azure

builder@DESKTOP-2SQ9NQM:~$ az aks get-versions --location centralus -o table
KubernetesVersion    Upgrades
-------------------  -----------------------
1.17.3(preview)      None available
1.16.7               1.17.3(preview)
1.15.10              1.16.7
1.15.7               1.15.10, 1.16.7
1.14.8               1.15.7, 1.15.10
1.14.7               1.14.8, 1.15.7, 1.15.10

Now we can create the cluster:


builder@DESKTOP-2SQ9NQM:~$ export SP_ID=`cat my_dev6_sp.json | jq -r .appId`
builder@DESKTOP-2SQ9NQM:~$ export SP_PW=`cat my_dev6_sp.json | jq -r .password`

builder@DESKTOP-2SQ9NQM:~$ az aks create --resource-group idjaks06rg --name idjaks06 --location centralus --kubernetes-version 1.15.10 --enable-rbac --node-count 3 --enable-cluster-autoscaler --min-count 2 --max-count 5 --generate-ssh-keys --network-plugin azure --service-principal $SP_ID --client-secret $SP_PW
Argument 'enable_rbac' has been deprecated and will be removed in a future release. Use '--disable-rbac' instead.
Argument 'enable_rbac' has been deprecated and will be removed in a future release. Use '--disable-rbac' instead.
{
  "aadProfile": null,
  "addonProfiles": null,
  "agentPoolProfiles": [
    {
      "availabilityZones": null,
      "count": 3,
      "enableAutoScaling": true,
      "enableNodePublicIp": null,
      "maxCount": 5,
      "maxPods": 30,
      "minCount": 2,
      "name": "nodepool1",
      "nodeLabels": null,
      "nodeTaints": null,
      "orchestratorVersion": "1.15.10",
      "osDiskSizeGb": 100,
      "osType": "Linux",
      "provisioningState": "Succeeded",
      "scaleSetEvictionPolicy": null,
      "scaleSetPriority": null,
      "tags": null,
      "type": "VirtualMachineScaleSets",
      "vmSize": "Standard_DS2_v2",
      "vnetSubnetId": null
    }
  ],
  "apiServerAccessProfile": null,
  "dnsPrefix": "idjaks06-idjaks06rg-70b42e",
  "enablePodSecurityPolicy": null,
  "enableRbac": true,
  "fqdn": "idjaks06-idjaks06rg-70b42e-6cede587.hcp.centralus.azmk8s.io",
  "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourcegroups/idjaks06rg/providers/Microsoft.ContainerService/managedClusters/idjaks06",
  "identity": null,
  "identityProfile": null,
  "kubernetesVersion": "1.15.10",
  "linuxProfile": {
    "adminUsername": "azureuser",
    "ssh": {
      "publicKeys": [
        {
          "keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHZ3iOnMMLkiltuikXSjqudfCHmQvIjBGMOuGk6wedwG8Xai3uv0M/X3Z2LS6Ac8tComKEKg7Zje2KFBnvBJvU5JqkTwNHnmp682tXf15EYgn4tB7MDz5DUARpcUXJbYfUg8yPUDveYHw8PEm1n+1MvLJN0ftvdORG5CQQEl/m7jErbJJQI70xg7C8/HG5GmJpIQjDl7UVsJANKab/2/bbUlG1Sqp4cQ/LwxKxQ6/QK/HVauxDkudoTkFLqukLWVjHvNZD37MC/wygSsEVYF+yrkNJySlNbMk4ZNmMwva1yLX8Shhr8G4wWe8QI9Ska8B0keSIu8fzRWxXAv2gB3xB"
        }
      ]
    }
  },
  "location": "centralus",
  "maxAgentPools": 10,
  "name": "idjaks06",
  "networkProfile": {
    "dnsServiceIp": "10.0.0.10",
    "dockerBridgeCidr": "172.17.0.1/16",
    "loadBalancerProfile": {
      "allocatedOutboundPorts": null,
      "effectiveOutboundIps": [
        {
          "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/MC_idjaks06rg_idjaks06_centralus/providers/Microsoft.Network/publicIPAddresses/1fb4eb02-0fb6-49cc-82a9-a3b17faf6a2f",
          "resourceGroup": "MC_idjaks06rg_idjaks06_centralus"
        }
      ],
      "idleTimeoutInMinutes": null,
      "managedOutboundIps": {
        "count": 1
      },
      "outboundIpPrefixes": null,
      "outboundIps": null
    },
    "loadBalancerSku": "Standard",
    "networkPlugin": "azure",
    "networkPolicy": null,
    "outboundType": "loadBalancer",
    "podCidr": null,
    "serviceCidr": "10.0.0.0/16"
  },
  "nodeResourceGroup": "MC_idjaks06rg_idjaks06_centralus",
  "privateFqdn": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "idjaks06rg",
  "servicePrincipalProfile": {
    "clientId": "e6ea5845-0c01-4ccb-854b-70f946aeeccd",
    "secret": null
  },
  "tags": null,
  "type": "Microsoft.ContainerService/ManagedClusters",
  "windowsProfile": {
    "adminPassword": null,
    "adminUsername": "azureuser"
  }
}

We can now see it's been created, then log in:

builder@DESKTOP-2SQ9NQM:~$ az aks list -o table
Name      Location    ResourceGroup    KubernetesVersion    ProvisioningState    Fqdn
--------  ----------  ---------------  -------------------  -------------------  -----------------------------------------------------------
idjaks06  centralus   idjaks06rg       1.15.10              Succeeded            idjaks06-idjaks06rg-70b42e-6cede587.hcp.centralus.azmk8s.io
builder@DESKTOP-2SQ9NQM:~$ az aks get-credentials -n idjaks06 -g idjaks06rg --admin
Merged "idjaks06-admin" as current context in /home/builder/.kube/config

Next we need to add some content.  We'll add an NGINX ingress controller:

builder@DESKTOP-2SQ9NQM:~$ kubectl create namespace ingress-basic
namespace/ingress-basic created
builder@DESKTOP-2SQ9NQM:~$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
"stable" has been added to your repositories
builder@DESKTOP-2SQ9NQM:~$ helm install nginx-ingress stable/nginx-ingress \
> --namespace ingress-basic \
> --set controller.replicaCount=2 \
> --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
> --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
NAME: nginx-ingress
LAST DEPLOYED: Mon Apr 13 21:28:21 2020
NAMESPACE: ingress-basic
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-controller'

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls


builder@DESKTOP-2SQ9NQM:~$ kubectl get service -l app=nginx-ingress --namespace ingress-basic
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-controller        LoadBalancer   10.0.140.79    13.89.106.195   80:30160/TCP,443:32342/TCP   92s
nginx-ingress-default-backend   ClusterIP      10.0.234.153   <none>          80/TCP                       92s

Next we can add a sample app and expose it via the ingress

builder@DESKTOP-2SQ9NQM:~$ helm repo add azure-samples https://azure-samples.github.io/helm-charts/
"azure-samples" has been added to your repositories

builder@DESKTOP-2SQ9NQM:~$ helm install aks-helloworld azure-samples/aks-helloworld --namespace ingress-basic
NAME: aks-helloworld
LAST DEPLOYED: Mon Apr 13 21:42:29 2020
NAMESPACE: ingress-basic
STATUS: deployed
REVISION: 1
TEST SUITE: None

builder@DESKTOP-2SQ9NQM:~$ helm install aks-helloworld-two azure-samples/aks-helloworld \
>     --namespace ingress-basic \
>     --set title="AKS Ingress Demo" \
>     --set serviceName="aks-helloworld-two"
NAME: aks-helloworld-two
LAST DEPLOYED: Mon Apr 13 21:43:15 2020
NAMESPACE: ingress-basic
STATUS: deployed
REVISION: 1
TEST SUITE: None


builder@DESKTOP-2SQ9NQM:~$ kubectl apply -f hello-world-ingress.yaml
ingress.extensions/hello-world-ingress unchanged
ingress.extensions/hello-world-ingress-static unchanged
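
The contents of hello-world-ingress.yaml aren't shown here; it follows the standard AKS ingress sample, so a minimal sketch would look something like this (the path layout and the second "static" ingress are assumptions; the service names come from the two helm installs above):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: ingress-basic
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      # route the root to the first sample app
      - backend:
          serviceName: aks-helloworld
          servicePort: 80
        path: /
      # and a sub-path to the second one
      - backend:
          serviceName: aks-helloworld-two
          servicePort: 80
        path: /hello-world-two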

We can verify the app

DataDog: Kubernetes and AzDO

Installing Datadog

We can use the Helm chart to install it

builder@DESKTOP-2SQ9NQM:~$ helm install idjaks06 --set datadog.apiKey=d7xxxxxxxxxxxxxxxxxxxxxxxxx1d stable/datadog
NAME: idjaks06
LAST DEPLOYED: Mon Apr 13 22:09:52 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Datadog agents are spinning up on each node in your cluster. After a few
minutes, you should see your agents starting in your event stream:
    https://app.datadoghq.com/event/stream

We can immediately see it's working in Datadog:

DataDog: Kubernetes and AzDO

We can look up our Infrastructure List.

DataDog: Kubernetes and AzDO

We can filter by containers:

DataDog: Kubernetes and AzDO
you can filter on any tag

That covers metrics.  To also capture logs, APM traces and the system probe, let's delete the release and reinstall with a few more settings:

builder@DESKTOP-2SQ9NQM:~$ helm delete idjaks06
release "idjaks06" uninstalled
builder@DESKTOP-2SQ9NQM:~$ helm install idjaks06 --set datadog.apiKey=d7xxxxxxxxxxxxxxxxx21d --set logs.enabled=true --set apm.enabled=true --set systemProbe.enabled=true stable/datadog
NAME: idjaks06
LAST DEPLOYED: Mon Apr 13 22:23:51 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Datadog agents are spinning up on each node in your cluster. After a few
minutes, you should see your agents starting in your event stream:
    https://app.datadoghq.com/event/stream

However, try as I might, the Helm settings would not enable the logging.  So I followed the guide on the logging page to create the YAML and apply it directly:

builder@DESKTOP-2SQ9NQM:~$ kubectl create -f datadog-agent.yaml
daemonset.apps/datadog-agent created
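
For reference, the pieces of that datadog-agent.yaml that actually turn on log collection are a couple of environment variables on the agent container (names are from the Datadog Kubernetes log collection docs) plus a read-only mount of the pod log directory - a trimmed sketch of just that section:

        env:
          - name: DD_LOGS_ENABLED
            value: "true"
          - name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
            value: "true"
        volumeMounts:
          # the agent tails pod logs straight from the host
          # (the matching hostPath volume for /var/log/pods is omitted here)
          - name: logpodpath
            mountPath: /var/log/pods
            readOnly: true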

After a few minutes, I could start to see logs rolling in from the ingress-basic namespace that has the helloworld app.

DataDog: Kubernetes and AzDO

Let’s get a closer look:

DataDog: Kubernetes and AzDO

I let the cluster and DataDog run for three and a half days to collect some details. I had a simple unsecured app running. What would DD find?

The first dashboard I want to dig into is the Container Dashboard (“Docker Dashboard”):

DataDog: Kubernetes and AzDO

There is a lot to break down.

The midsection shows us our cluster is oversized for our needs. We can see that our largest container by RAM usage is NGINX, and that CPU by image never really spikes above 6%:

DataDog: Kubernetes and AzDO

The top half shows us events.  All containers are running (nothing hung or scheduled), and of the 38 running containers, most of them (19) are just "Pause" containers.  The only events happened yesterday (Thursday) when AKS killed some Pause containers:

DataDog: Kubernetes and AzDO

Not all dashboards were useful.  The NGINX dashboard (likely due to a configuration issue) clearly wasn't watching the ingress controllers I had launched in the ingress-basic namespace.

I even tried murdering the pods and still no data showed up:

builder@DESKTOP-JBA79RT:~$ kubectl delete pod nginx-ingress-controller-5c64f4c5d4-28xc8 -n ingress-basic && kubectl delete pod nginx-ingress-controller-5c64f4c5d4-8nw5s -n ingress-basic && kubectl delete pod nginx-ingress-default-backend-76b8f499cb-6jflx -n ingress-basic
pod "nginx-ingress-controller-5c64f4c5d4-28xc8" deleted
pod "nginx-ingress-controller-5c64f4c5d4-8nw5s" deleted
pod "nginx-ingress-default-backend-76b8f499cb-6jflx" deleted

And the dashboard picker confirmed that suspicion

DataDog: Kubernetes and AzDO
empty nginx dashboard
DataDog: Kubernetes and AzDO
no data received

Let’s revisit the Infra dashboard

DataDog: Kubernetes and AzDO

We can pick a node and look at metrics over time:

DataDog: Kubernetes and AzDO

We can see the largest space in processes (purple) is “free” and the CPU barely registers above 1%.  

Cloud Integrations

Another way we can look at our hosts is via the Azure integrations.  We can monitor the VMSS behind our cluster directly as well:

DataDog: Kubernetes and AzDO

What we see above are the combined metrics over our entire VMSS as captured by DataDog.

Another great feature is log capture. One thing my user base often asks me for is to see the logs of their containers.  Really, what they want is a live feed in their namespace while they work.  We can watch a live feed on any container right in Datadog (no need for the k8s dashboard):

DataDog: Kubernetes and AzDO
DataDog: Kubernetes and AzDO
Live tail on logs to view them as they happen

Monitor Azure DevOps

We can follow this guide to install hooks into our AzDO.

Let’s create a service connection in our project:

DataDog: Kubernetes and AzDO

We can then click + to add and pick datadog:

DataDog: Kubernetes and AzDO

Let’s pick to update on PR updates:

DataDog: Kubernetes and AzDO

Then just enter your API key to test and add:

DataDog: Kubernetes and AzDO

If you need to get your key, it's under Integrations in DD:

DataDog: Kubernetes and AzDO

I added PR, Build and Work Item service hooks:

DataDog: Kubernetes and AzDO

Let’s try it out.  We can make a work item and assign it to ourselves:

DataDog: Kubernetes and AzDO
DataDog: Kubernetes and AzDO

We can then make a meaningless change and test it:

DataDog: Kubernetes and AzDO

That will also trigger some builds:

DataDog: Kubernetes and AzDO

Right away we can see events being captured:

DataDog: Kubernetes and AzDO
DataDog: Kubernetes and AzDO
close up of events

How about tracking of builds?

DataDog: Kubernetes and AzDO

ARM

Let’s log into my local Pi cluster:

pi@raspberrypi:~ $ kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
retropie      Ready    <none>   81d    v1.17.2+k3s1
raspberrypi   Ready    master   237d   v1.17.2+k3s1

Apply the first set of YAMLs from the setup page.

Then we’ll need to use an ARM64 image from https://hub.docker.com/r/datadog/agent-arm64/tags

That means changing the default x64 container to an ARM-based one:

      containers:
      - image: datadog/agent:7

to

      containers:
      - image: datadog/agent-arm64:7.19.0-rc.5

I also tweaked it based on this page to enable the network map.

However, on launch, I couldn’t schedule the pods due to insufficient memory:

Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/2 nodes are available: 2 Insufficient memory.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/2 nodes are available: 2 Insufficient memory.

I tried other agents as well.

The “official” arm64 gave me:

pi@raspberrypi:~ $ kubectl logs datadog-agent-ftpp8 -c datadog-agent
standard_init_linux.go:211: exec user process caused "exec format error"

That error usually means a binary built for the wrong architecture.
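
A quick way to confirm which architecture you're actually dealing with before grabbing another image (32-bit Raspbian reports arm / armv7l, which is exactly why an arm64 binary throws that error):

# what the kubelet advertises for each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.architecture}{"\n"}{end}'

# or straight from the node itself
uname -m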

This one https://hub.docker.com/r/gridx/datadog-agent-arm32v7/tags gave me

  Warning  Failed     21m (x3 over 21m)    kubelet, raspberrypi  Error: failed to create containerd task: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/opt/datadog-agent/embedded/bin/system-probe\": stat /opt/datadog-agent/embedded/bin/system-probe: no such file or directory": unknown
  Warning  Unhealthy  20m (x3 over 21m)    kubelet, raspberrypi  Liveness probe failed: Get http://10.42.0.107:5555/health: dial tcp 10.42.0.107:5555: connect: connection refused
  Normal   Killing    20m                  kubelet, raspberrypi  Container datadog-agent failed liveness probe, will be restarted

I tried a different guide:

https://docs.datadoghq.com/agent/kubernetes/?tab=daemonset

And just changed the image:

      containers:
        - image: 'gridx/datadog-agent-arm32v7'

Alas, that failed as well

datadog-agent-mhl9c                                 0/1     CrashLoopBackOff   7          16m
datadog-agent-kfm6g                                 0/1     CrashLoopBackOff   7          16m
pi@raspberrypi:~ $ kubectl logs datadog-agent-mhl9c
No handlers could be found for logger "utils.dockerutil"
2020-04-18 13:41:27,594 CRIT Supervisor running as root (no user in config file)
2020-04-18 13:41:27,649 INFO RPC interface 'supervisor' initialized
2020-04-18 13:41:27,650 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2020-04-18 13:41:27,651 INFO supervisord started with pid 1
2020-04-18 13:41:28,656 INFO spawned: 'dogstatsd' with pid 23
2020-04-18 13:41:28,661 INFO spawned: 'forwarder' with pid 24
2020-04-18 13:41:28,667 INFO spawned: 'collector' with pid 25
2020-04-18 13:41:28,678 INFO spawned: 'jmxfetch' with pid 26
2020-04-18 13:41:31,019 INFO success: collector entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
2020-04-18 13:41:31,919 INFO success: dogstatsd entered RUNNING state, process has stayed up for > than 3 seconds (startsecs)
2020-04-18 13:41:31,919 INFO success: forwarder entered RUNNING state, process has stayed up for > than 3 seconds (startsecs)
2020-04-18 13:41:31,923 INFO success: jmxfetch entered RUNNING state, process has stayed up for > than 3 seconds (startsecs)
2020-04-18 13:41:34,924 INFO exited: jmxfetch (exit status 0; expected)
2020-04-18 13:42:23,852 WARN received SIGTERM indicating exit request
2020-04-18 13:42:23,854 INFO waiting for dogstatsd, forwarder, collector to die
2020-04-18 13:42:24,005 INFO stopped: forwarder (exit status 0)
2020-04-18 13:42:24,013 INFO stopped: collector (exit status 0)
2020-04-18 13:42:24,047 INFO stopped: dogstatsd (exit status 0)

What Worked

In the end, what did work was the vanilla YAML with just the image swapped in (note the image: line):

pi@raspberrypi:~ $ cat datadog-agent-vanilla.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: datadog-agent
spec:
  selector:
    matchLabels:
      app: datadog-agent
  template:
    metadata:
      labels:
        app: datadog-agent
      name: datadog-agent
    spec:
      serviceAccountName: datadog-agent
      containers:
      - image: 'gridx/datadog-agent-arm32v7'
        imagePullPolicy: Always
        name: datadog-agent
        ports:
          - containerPort: 8125
            # Custom metrics via DogStatsD - uncomment this section to enable custom metrics collection
            # hostPort: 8125
            name: dogstatsdport
            protocol: UDP
          - containerPort: 8126
            # Trace Collection (APM) - uncomment this section to enable APM
            # hostPort: 8126
            name: traceport
            protocol: TCP
        env:
          - name: DD_API_KEY
            valueFrom:
              secretKeyRef:
                name: datadog-secret
                key: api-key
          - name: DD_COLLECT_KUBERNETES_EVENTS
            value: "true"
          - name: DD_LEADER_ELECTION
            value: "true"
          - name: KUBERNETES
            value: "true"
          - name: DD_HEALTH_PORT
            value: "5555"
          - name: DD_KUBERNETES_KUBELET_HOST
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP

          ## For secure communication with the Cluster Agent (required to use the Cluster Agent)
          # - name: DD_CLUSTER_AGENT_AUTH_TOKEN
          #
          ## If using a simple env var uncomment this section
          #
          #   value: <THIRTY_2_CHARACTERS_LONG_TOKEN>
          #
          ## If using a secret uncomment this section
          #
          #   valueFrom:
          #     secretKeyRef:
          #       name: datadog-auth-token
          #       key: token

        ## Note these are the minimum suggested values for requests and limits.
        ## The amount of resources required by the Agent varies depending on:
        ## * The number of checks
        ## * The number of integrations enabled
        ## * The number of features enabled
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
          limits:
            memory: "256Mi"
            cpu: "200m"
        volumeMounts:
          - name: dockersocket
            mountPath: /var/run/docker.sock
          - name: procdir
            mountPath: /host/proc
            readOnly: true
          - name: cgroups
            mountPath: /host/sys/fs/cgroup
            readOnly: true
        livenessProbe:
          httpGet:
            path: /health
            port: 5555
          initialDelaySeconds: 15
          periodSeconds: 15
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
      volumes:
        - hostPath:
            path: /var/run/docker.sock
          name: dockersocket
        - hostPath:
            path: /proc
          name: procdir
        - hostPath:
            path: /sys/fs/cgroup
          name: cgroups
DataDog: Kubernetes and AzDO

Now one issue I found myself having was that I now had a mix of Pi cluster nodes and AKS in my dashboard:

DataDog: Kubernetes and AzDO

I wanted to tag resources in a reliable way so I could tell them apart.

I added a block to the “env” section of the yaml:

DataDog: Kubernetes and AzDO
          - name: DD_TAGS
            value: "cluster-name:pi-cluster cluster-type:arm"

Then I did an apply

pi@raspberrypi:~ $ vi datadog-agent-vanilla.yaml
pi@raspberrypi:~ $ kubectl apply -f datadog-agent-vanilla.yaml
daemonset.apps/datadog-agent configured

The pods recycled:

pi@raspberrypi:~ $ kubectl get pods
NAME                                                READY   STATUS             RESTARTS   AGE
svclb-nginxingress-nginx-ingress-controller-x6chs   0/2     Pending            0          81d
nginxingress-nginx-ingress-controller-v8ck9         0/1     Pending            0          82d
svclb-nginxingress-nginx-ingress-controller-v9rht   2/2     Running            12         81d
nginxingress-nginx-ingress-controller-sd24n         0/1     CrashLoopBackOff   22924      81d
datadog-agent-tf5tw                                 1/1     Running            3          3m56s
datadog-agent-q6ffk                                 1/1     Running            3          3m49s

And now we can filter on cluster-name:

DataDog: Kubernetes and AzDO


I did try to enable container tracing following the steps here: https://docs.datadoghq.com/infrastructure/livecontainers/?tab=kubernetes

However the pods started to crash:

datadog-agent-gzx58                                 0/1     CrashLoopBackOff   7          14m
datadog-agent-swmt4                                 0/1     CrashLoopBackOff   7          14m
pi@raspberrypi:~ $ kubectl describe pod datadog-agent-gzx58 | tail -n 5
  Normal   Started    14m (x3 over 16m)   kubelet, retropie  Started container datadog-agent
  Normal   Killing    13m (x3 over 15m)   kubelet, retropie  Container datadog-agent failed liveness probe, will be restarted
  Normal   Pulling    13m (x4 over 16m)   kubelet, retropie  Pulling image "gridx/datadog-agent-arm32v7"
  Warning  Unhealthy  10m (x16 over 15m)  kubelet, retropie  Liveness probe failed: Get http://10.42.1.26:5555/health: dial tcp 10.42.1.26:5555: connect: connection refused
  Warning  BackOff    67s (x37 over 10m)  kubelet, retropie  Back-off restarting failed container
pi@raspberrypi:~ $ kubectl logs datadog-agent-gzx58 -c datadog-agent
No handlers could be found for logger "utils.dockerutil"
2020-04-18 14:23:42,119 CRIT Supervisor running as root (no user in config file)
2020-04-18 14:23:42,180 INFO RPC interface 'supervisor' initialized
2020-04-18 14:23:42,181 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2020-04-18 14:23:42,182 INFO supervisord started with pid 1
2020-04-18 14:23:43,186 INFO spawned: 'dogstatsd' with pid 25
2020-04-18 14:23:43,193 INFO spawned: 'forwarder' with pid 26
2020-04-18 14:23:43,199 INFO spawned: 'collector' with pid 27
2020-04-18 14:23:43,214 INFO spawned: 'jmxfetch' with pid 28
2020-04-18 14:23:45,219 INFO success: collector entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
2020-04-18 14:23:46,302 INFO success: dogstatsd entered RUNNING state, process has stayed up for > than 3 seconds (startsecs)
2020-04-18 14:23:46,302 INFO success: forwarder entered RUNNING state, process has stayed up for > than 3 seconds (startsecs)
2020-04-18 14:23:46,304 INFO success: jmxfetch entered RUNNING state, process has stayed up for > than 3 seconds (startsecs)
2020-04-18 14:23:49,888 INFO exited: jmxfetch (exit status 0; expected)
2020-04-18 14:24:33,713 WARN received SIGTERM indicating exit request
2020-04-18 14:24:33,714 INFO waiting for dogstatsd, forwarder, collector to die
2020-04-18 14:24:33,839 INFO stopped: forwarder (exit status 0)
2020-04-18 14:24:33,861 INFO stopped: collector (exit status 0)
2020-04-18 14:24:33,956 INFO stopped: dogstatsd (exit status 0)

I also tried the JMX image, but that is one large container (over 220 MB), so it took my wee Pi cluster a while to pull and still did not work:

pi@raspberrypi:~ $ kubectl logs datadog-agent-jtr7w
standard_init_linux.go:211: exec user process caused "exec format error"

I tried a few more containers I thought might be current arm32 images, such as careof/datadog-agent. That one did spin up the agent, but it did not collect container info, just metrics - possibly due to its version:

# ./agent version
Agent 6.14.1 - Commit: fa227f0 - Serialization version: 4.12.0 - Go version: go1.13.1

I could move on to building from source myself, but I didn’t want to take it that far (yet).

Monitoring Windows

I installed the Datadog agent on my desktop.

It runs in the system tray:

DataDog: Kubernetes and AzDO

And we can see it collects some basic details:

DataDog: Kubernetes and AzDO

Alerting.

We can define monitors on any metric and then set alerts that send emails.

For instance we can monitor events tracked by Azure DevOps:

DataDog: Kubernetes and AzDO
DataDog: Kubernetes and AzDO
DataDog: Kubernetes and AzDO
sample email

Summary

Datadog is a seasoned player.  They have a myriad of integrations; we barely scratched the surface. Of the APM / ALM tools I’ve looked at thus far, they are the first to have a solid mobile app as well:

DataDog: Kubernetes and AzDO
DataDog: Kubernetes and AzDO
DataDog: Kubernetes and AzDO
]]>
<![CDATA[Azure DevOps: Debt, Pipelines and YAML]]>Let’s talk about why I spent $40 this month because of DevOps debt.  That’s right, even yours truly can kick the can on some processes for a while and eventually that bit me in the tuchus.  

What it isn't

First, let’s cover what it isn't: The releases.

]]>
http://localhost:2368/azure-devops-debt-pipelines-and-yaml/5e96186b33c01602ef80c8e6Tue, 14 Apr 2020 20:43:49 GMT

Let’s talk about why I spent $40 this month because of DevOps debt.  That’s right, even yours truly can kick the can on some processes for a while and eventually that bit me in the tuchus.  

What it isn't

First, let’s cover what it isn't: the releases.  The releases have been cake, taking between 1 and 2 minutes.  

Azure DevOps: Debt, Pipelines and YAML

All they really have to do is unpack a zip of the rendered website and upload it to AWS.

Azure DevOps: Debt, Pipelines and YAML

And yes, while I really do put a lot of love out there for Azure and Linode, the fact is, to host a fast, global website with HTTPS and proper DNS, I do use AWS.  Seriously, can you beat these costs to host the volume of content this blog has?

Azure DevOps: Debt, Pipelines and YAML

And lest you think this site gets little traffic since I don’t monetize, I’ll gladly share some details on traffic.

This is just for the first half of the month:

Azure DevOps: Debt, Pipelines and YAML

That’s roughly 2 GB of data and 20k requests for half of April - for $2.  Find me similar hosting elsewhere...

Okay, enough of that.  As I said, the problem was not my release pipelines.

So where did I tuck debt under the rug?  

The CI pipeline.  The fact is Azure DevOps gives you so many free hours of compute a month in your pipelines, but with a couple of tiny weeny itsy bitsy caveats.  The one that snuck up slowly but surely was the time limit.

Azure DevOps: Debt, Pipelines and YAML

Each push of my blog was calling httrack to fully sync the website.  Back when I had a few articles, that just took a few minutes.  But now, with around 68 articles linked off the main page, each with lengthy writeups and images, that just doesn’t scale:

Azure DevOps: Debt, Pipelines and YAML

So I had one of three choices to make.  

  1. Redesign the website and archive old content
  2. Pay Microsoft for a pipeline agent ($40/mo/agent)
  3. Fix the pipeline

At first, I planned on the last option, but with COVID and Easter, I just wanted the blog out.  Also, I like my layout and I like having all this content easy to find.

Once I was WFH’ed like the rest with a bunch of PTO to use up, I took the time to fix the thing.

Azure DevOps: Debt, Pipelines and YAML
Pipeline runs dropped from over 1hr to 8m

As you can see, with a few httrack tricks (and I had to flesh them out with other pipelines), I managed to reduce the time from 1h 1m 32s to 4m 39s.

But how, you may ask?

First, let’s look at my old steps:

Azure DevOps: Debt, Pipelines and YAML
Original ghost blog build

One issue here is that we started Ghost ( npm start & ), then synced all the things to a new folder (freshbrewed5b).  We then, as a post step, crawled that and changed http://localhost:2368 to https://freshbrewed.science in the “run static fix script”.

This meant that with every single build, I slowly used a web crawler to sync and sync and sync everything over and over - even though I rarely update the older blog writeups.

The first step I realized I needed was to archive a sync.  I would need to do this before I crawled and modified every page for the URL.

Azure DevOps: Debt, Pipelines and YAML
snapping the crawl after

The next step would be to download it at the start of the next build.  The part that honestly threw me for a loop was the “Pipeline.Workspace” variable.  VSTS variables have changed over time as Microsoft moves to a more unified pipeline approach.  We used to talk about source and artifact dirs.  I assumed this was the source dir and was mistaken - it’s the root of both, so I needed the “/s/” in there.
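
If the layout trips you up too, a throwaway bash step that just dumps the predefined directory variables (Azure Pipelines exposes them to scripts as upper-cased environment variables) makes the structure obvious:

#!/bin/bash
set -x
# Pipeline.Workspace is the root; sources land under .../s and staged artifacts under .../a
echo "Pipeline.Workspace:             ${PIPELINE_WORKSPACE}"
echo "Build.SourcesDirectory:         ${BUILD_SOURCESDIRECTORY}"
echo "Build.ArtifactStagingDirectory: ${BUILD_ARTIFACTSTAGINGDIRECTORY}"
ls -la "${PIPELINE_WORKSPACE}"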

Azure DevOps: Debt, Pipelines and YAML

You will see above an area I’ll need to optimize later - I’ll want to change the fixed source build to the last released one. This means I’ll need to add some tagging and change the above step (“Build version to download”) from “Specific version” to a specific branch and build tags:

Azure DevOps: Debt, Pipelines and YAML
Proposed future change

The step I’ll gloss over is the “cache of old build” which is some bash debug I used to try and figure out where the heck my files were going and how they were laid out.

And if you want an idea of how big a sync we are talking about, the website as a whole is about 100 MB:

Azure DevOps: Debt, Pipelines and YAML
a sample pipeline artifact set

The final change was to work out some optimizations on httrack:

Azure DevOps: Debt, Pipelines and YAML
added "-%s"

We can look more carefully at those steps

#!/bin/bash

set -x

npm start &

wget https://freshbrewed.science

which httrack

export DEBIAN_FRONTEND=noninteractive

sudo apt-get -yq install httrack tree curl || true

wget https://freshbrewed.science

which httrack

httrack https://freshbrewed.science  -c16 -O "./freshbrewed5b" --quiet -%s
# httrack https://freshbrewed.science  -c16 -O "$BUILD_SOURCESDIRECTORY/freshbrewed5b" --quite -i

#tree .

cd freshbrewed
pwd

So what is really going on here?  It’s that “-%s” flag combined with pointing httrack at a sync folder pre-populated with the previous sync.

-%s is also “--updatehack”.  We can see all the options for httrack here: https://www.httrack.com/html/fcguide.html

Azure DevOps: Debt, Pipelines and YAML
httrack documentation
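
Putting the pieces together, the incremental crawl boils down to restoring the previous mirror into the output folder and re-running httrack over it with the update flag - roughly:

#!/bin/bash
set -x
# the DownloadPipelineArtifact step has already restored the last crawl into ./freshbrewed5b,
# so this run only has to fetch new or changed pages (and their images)
ls freshbrewed5b | head || true

# -%s / --updatehack limits re-transfers of content that hasn't changed between runs
httrack https://freshbrewed.science -c16 -O "./freshbrewed5b" --quiet -%s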

So to summarize, what we changed was:

  1. Save a full sync of the site
  2. Use that sync as a basis in subsequent pipelines
  3. Update httrack, the website syncer, to avoid redownloading things (for us, that is images)

We can see the results both in the times and bash output:

Azure DevOps: Debt, Pipelines and YAML

Since we are fixing pipelines, let's tackle some Release Pipeline debt while we are at it.

Release Pipelines

First let’s fix some agent issues.  Sometimes Azure DevOps deprecates agents or, in our case, changes how they structure the pools. Slightly.

The little red bangs tell us that Azure DevOps needs a bit of TLC:

Azure DevOps: Debt, Pipelines and YAML

The first thing I’ll do is a quick check of the last run and see what agent was used.

Azure DevOps: Debt, Pipelines and YAML

We see that we’ve been using an older Ubuntu Xenial LTS instance.

Now, back in my release pipeline, I’ll find the “settings that need attention” and change the pool:

Azure DevOps: Debt, Pipelines and YAML

Here we just need to pick the Ubuntu 16.04 agent specification in the Azure Pipelines agent pool.

Azure DevOps: Debt, Pipelines and YAML

When done, we’ll just save and make a comment (for the pipeline history):

Azure DevOps: Debt, Pipelines and YAML

And now our pipeline graphical view is happy again:

Azure DevOps: Debt, Pipelines and YAML

Switching to YAML

We will now do something I swore I wouldn’t - move our pipelines to YAML.  

Azure DevOps: Debt, Pipelines and YAML

We’ll need to go through our steps and view the YAML on each step:

Azure DevOps: Debt, Pipelines and YAML

So what does this translate to when done?

# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/javascript
 
trigger:
- master
 
pool:
  vmImage: 'ubuntu-latest'
 
steps:
- task: DownloadPipelineArtifact@2
  displayName: 'Download Pipeline Artifact'
  inputs:
    buildType: specific
    project: 'f83a3d41-3696-4b1a-9658-88cecf68a96b'
    definition: 12
    buildVersionToDownload: specific
    pipelineId: 484
    artifactName: freshbrewedsync
    targetPath: '$(Pipeline.Workspace)/s/freshbrewed5b'
 
- task: NodeTool@0
  displayName: 'Use Node 8.16.2'
  inputs:
    versionSpec: '8.16.2'
 
- task: Npm@1
  displayName: 'npm run setup'
  inputs:
    command: custom
    verbose: false
    customCommand: 'run setup'
 
- task: Npm@1
  displayName: 'npm install'
  inputs:
    command: custom
    verbose: false
    customCommand: 'install --production'
 
- bash: |
   #!/bin/bash
   set -x
   npm start &
   wget https://freshbrewed.science
   which httrack
   export DEBIAN_FRONTEND=noninteractive
   sudo apt-get -yq install httrack tree curl || true
   wget https://freshbrewed.science
   which httrack
   httrack https://freshbrewed.science  -c16 -O "./freshbrewed5b" --quiet -%s
   
  displayName: 'install httrack and sync'
  timeoutInMinutes: 120
 
- task: PublishBuildArtifacts@1
  displayName: 'Publish artifacts: sync'
  inputs:
    PathtoPublish: '$(Build.SourcesDirectory)/freshbrewed5b'
    ArtifactName: freshbrewedsync
 
- task: Bash@3
  displayName: 'run static fix script'
  inputs:
    targetType: filePath
    filePath: './static_fix.sh'
    arguments: freshbrewed5b
 
- task: ArchiveFiles@2
  displayName: 'Archive files'
  inputs:
    rootFolderOrFile: '$(Build.SourcesDirectory)/freshbrewed5b'
    includeRootFolder: false
 
- task: PublishBuildArtifacts@1
  displayName: 'Publish artifacts: drop'

It’s a pretty simple azure-pipelines.yaml file, and it built without issue:

Azure DevOps: Debt, Pipelines and YAML

But what if we want to really make this complete and handle Build and Release in a single pipeline? It would also be nice if we could support branches - no “master only” (I’m a fan of gitflow).

Something that might look like this:

Azure DevOps: Debt, Pipelines and YAML

We can do exactly that with multi-stage pipelines:

trigger:
  branches:
    include:
    - master
    - develop
    - feature/*
 
pool:
  vmImage: 'ubuntu-latest'
 
stages:
  - stage: build
    jobs:
      - job: start_n_sync
        displayName: start_n_sync
        continueOnError: false
        steps:
          - task: DownloadPipelineArtifact@2
            displayName: 'Download Pipeline Artifact'
            inputs:
              buildType: specific
              project: 'f83a3d41-3696-4b1a-9658-88cecf68a96b'
              definition: 12
              buildVersionToDownload: specific
              pipelineId: 484
              artifactName: freshbrewedsync
              targetPath: '$(Pipeline.Workspace)/s/freshbrewed5b'
 
          - task: NodeTool@0
            displayName: 'Use Node 8.16.2'
            inputs:
              versionSpec: '8.16.2'
 
          - task: Npm@1
            displayName: 'npm run setup'
            inputs:
              command: custom
              verbose: false
              customCommand: 'run setup'
 
          - task: Npm@1
            displayName: 'npm install'
            inputs:
              command: custom
              verbose: false
              customCommand: 'install --production'
 
          - bash: |
              #!/bin/bash
              set -x
              npm start &
              wget https://freshbrewed.science
              which httrack
              export DEBIAN_FRONTEND=noninteractive
              sudo apt-get -yq install httrack tree curl || true
              wget https://freshbrewed.science
              which httrack
              httrack https://freshbrewed.science  -c16 -O "./freshbrewed5b" --quiet -%s
            displayName: 'install httrack and sync'
            timeoutInMinutes: 120
 
          - task: PublishBuildArtifacts@1
            displayName: 'Publish artifacts: sync'
            inputs:
              PathtoPublish: '$(Build.SourcesDirectory)/freshbrewed5b'
              ArtifactName: freshbrewedsync
 
          - task: Bash@3
            displayName: 'run static fix script'
            inputs:
              targetType: filePath
              filePath: './static_fix.sh'
              arguments: freshbrewed5b
 
          - task: ArchiveFiles@2
            displayName: 'Archive files'
            inputs:
              rootFolderOrFile: '$(Build.SourcesDirectory)/freshbrewed5b'
              includeRootFolder: false
 
          - task: PublishBuildArtifacts@1
            displayName: 'Publish artifacts: drop'
 
 
  - stage: release_prod
    dependsOn: build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    jobs:
      - job: release
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: DownloadBuildArtifacts@0
            inputs:
              buildType: "current"
              downloadType: "single"
              artifactName: "drop"
              downloadPath: "_drop"
          - task: ExtractFiles@1
            displayName: 'Extract files '
            inputs:
              archiveFilePatterns: '**/drop/*.zip'
              destinationFolder: '$(System.DefaultWorkingDirectory)/out'
 
          - bash: |
              #!/bin/bash
              export
              set -x
              export DEBIAN_FRONTEND=noninteractive
              sudo apt-get -yq install tree
              cd ..
              pwd
              tree .
            displayName: 'Bash Script Debug'
 
          - task: AmazonWebServices.aws-vsts-tools.S3Upload.S3Upload@1
            displayName: 'S3 Upload: freshbrewed.science html'
            inputs:
              awsCredentials: freshbrewed
              regionName: 'us-east-1'
              bucketName: freshbrewed.science
              sourceFolder: '$(System.DefaultWorkingDirectory)/o'
              globExpressions: '**/*.html'
              filesAcl: 'public-read'
 
          - task: AmazonWebServices.aws-vsts-tools.S3Upload.S3Upload@1
            displayName: 'S3 Upload: freshbrewed.science rest'
            inputs:
              awsCredentials: freshbrewed
              regionName: 'us-east-1'
              bucketName: freshbrewed.science
              sourceFolder: '$(System.DefaultWorkingDirectory)/o'
              filesAcl: 'public-read'
 
 
  - stage: release_test
    dependsOn: build
    condition: and(succeeded(), ne(variables['Build.SourceBranch'], 'refs/heads/master'))
    jobs:
      - job: release
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: DownloadBuildArtifacts@0
            inputs:
              buildType: "current"
              downloadType: "single"
              artifactName: "drop"
              downloadPath: "_drop"
          - task: ExtractFiles@1
            displayName: 'Extract files '
            inputs:
              archiveFilePatterns: '**/drop/*.zip'
              destinationFolder: '$(System.DefaultWorkingDirectory)/out'
 
          - bash: |
              #!/bin/bash
              export
              set -x
              export DEBIAN_FRONTEND=noninteractive
              sudo apt-get -yq install tree
              cd ..
              pwd
              tree .
            displayName: 'Bash Script Debug'
 
          - task: AmazonWebServices.aws-vsts-tools.S3Upload.S3Upload@1
            displayName: 'S3 Upload: freshbrewed.science rest'
            inputs:
              awsCredentials: freshbrewed
              regionName: 'us-east-1'
              bucketName: freshbrewed-test
              sourceFolder: '$(System.DefaultWorkingDirectory)/o'
              filesAcl: 'public-read'


trigger:
  branches:
    include:
    - develop
    - feature/*

So what we’ve done is created three stages…

stages:
  - stage: build
    jobs:
      - job: start_n_sync
        displayName: start_n_sync
        continueOnError: false

...

- stage: release_prod
    dependsOn: build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))

...

  - stage: release_test
    dependsOn: build
    condition: and(succeeded(), ne(variables['Build.SourceBranch'], 'refs/heads/master'))

While it’s technically a “fan out” pipeline, the release_prod and release_test stages are logically mutually exclusive - either the branch is master or it is not.

We can see this behaviour in action by manually triggering a pipeline:

Azure DevOps: Debt, Pipelines and YAML
(see “azure-pipelines” branch forked to “release_test”)

I also, as denoted in the YAML above, created a “test” site to receive content:

Azure DevOps: Debt, Pipelines and YAML

With a public ACL and index file, this means the “pre-validation” site shows up here: http://freshbrewed-test.s3-website-us-east-1.amazonaws.com/
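
For completeness, standing the test bucket up is only a couple of AWS CLI calls - something along these lines (bucket name from the pipeline above; the public-read bucket policy is left out here):

# create the bucket and enable static website hosting
aws s3 mb s3://freshbrewed-test --region us-east-1
aws s3 website s3://freshbrewed-test --index-document index.html --error-document index.html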

I might re-organize later, but this proves the point - I can combine pipelines into a single live one and let the build definition live in code.

I can now complete my PR to make the change live (after disabling the old job’s triggers):

Azure DevOps: Debt, Pipelines and YAML

Note: once completed:

Azure DevOps: Debt, Pipelines and YAML
Azure DevOps: Debt, Pipelines and YAML
Completed PR

This will of course trigger a build/release:

Azure DevOps: Debt, Pipelines and YAML

Which includes our Slack notifications:

Azure DevOps: Debt, Pipelines and YAML

Summary

Ignoring your problems doesn’t make them go away.  DevOps pipelines are not Ronco machines - you don’t ‘set it and forget it’.  They require a certain amount of TLC and hydration.  In my case, I let some debt build until I had to pay to work through it.  

Additionally, as much as we love the graphical tools Azure DevOps offers for creating pipelines, the world has moved to YAML.  And if we want to stay on top of the latest, we need to move as well.  It has allowed us to merge two pipelines into a single one and made it easier to share the steps in our pipeline with others.

]]>
<![CDATA[AKS and NewRelic]]>As we look at various APM and Logging tools, one suite that deserves attention is NewRelic.  New Relic is not a new player, having been founded in 2008 by Lew Cirne (anagram of 'new relic') and going public in 2014.  

Let's do two things - let's check out the very

]]>
http://localhost:2368/aks-and-newrelic/5e94a4922ca1ae2cc5888b9dMon, 13 Apr 2020 18:42:02 GMT

As we look at various APM and Logging tools, one suite that deserves attention is NewRelic.  New Relic is not a new player, having been founded in 2008 by Lew Cirne (anagram of 'new relic') and going public in 2014.  

Let's do two things - let's check out the very latest Kubernetes Azure offers, v1.17.3 (in preview) and see if we can apply New Relic monitoring.

AKS Setup.

Since we've covered setting up Azure Kubernetes Service a multitude of times, we'll just keep it to a summary below:

builder@DESKTOP-2SQ9NQM:~$ az aks list
[]
builder@DESKTOP-2SQ9NQM:~$ az group create --name idjaks05rg --location centralus
{
  "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/idjaks05rg",
  "location": "centralus",
  "managedBy": null,
  "name": "idjaks05rg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
builder@DESKTOP-2SQ9NQM:~$ az ad sp create-for-rbac -n idjaks05sp --skip-assignment --output json > my_sp.json
Changing "idjaks05sp" to a valid URI of "http://idjaks05sp", which is the required format used for service principal names
builder@DESKTOP-2SQ9NQM:~$ cat my_sp.json | jq -r .appId
1878c818-33e7-4ea9-996c-afc80d57001c

builder@DESKTOP-2SQ9NQM:~$ export SP_ID=`cat my_sp.json | jq -r .appId`
builder@DESKTOP-2SQ9NQM:~$ export SP_PASS=`cat my_sp.json | jq -r .password`

builder@DESKTOP-2SQ9NQM:~$ az aks create --resource-group idjaks05rg --name idjaks05 --location centralus --kubernetes-version 1.17.3 --enable-rbac --node-count 3 --enable-cluster-autoscaler --min-count 2 --max-count 5 --generate-ssh-keys --network-plugin azure --network-policy azure --service-principal $SP_ID --client-secret $SP_PASS
Argument 'enable_rbac' has been deprecated and will be removed in a future release. Use '--disable-rbac' instead.
 - Running ..

One of the things you'll notice above is that we're explicitly using Kubernetes version 1.17.3.
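
One hedge worth noting: preview Kubernetes versions generally require the aks-preview CLI extension, so if the create call rejects 1.17.3, adding it is usually the missing piece:

az extension add --name aks-preview
az aks get-versions --location centralus -o table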

New Relic Prerequisites

We'll need to create a New Relic account if we haven't already.  What we need is the License Key which will tie our agents to this account.

You can find your New Relic key in the account settings page:

AKS and NewRelic

Setting up Agents in AKS:

Next, let’s log into our cluster

builder@DESKTOP-JBA79RT:~$ az aks list -o table
Name      Location    ResourceGroup    KubernetesVersion    ProvisioningState    Fqdn
--------  ----------  ---------------  -------------------  -------------------  -----------------------------------------------------------
idjaks05  centralus   idjaks05rg       1.17.3               Succeeded            idjaks05-idjaks05rg-70b42e-8037c484.hcp.centralus.azmk8s.io
builder@DESKTOP-JBA79RT:~$ az aks get-credentials -n idjaks05 -g idjaks05rg  --admin
Merged "idjaks05-admin" as current context in /home/builder/.kube/config

Following this guide, we’re going to install the New Relic agents.

A quick note. The guide will have you download releases to match your cluster version:

AKS and NewRelic

For master, what you’ll want to do is clone the repo and use the examples/standard folder.  This is necessary for the very latest (1.17).

builder@DESKTOP-JBA79RT:~/Workspaces$ git clone https://github.com/kubernetes/kube-state-metrics.git
Cloning into 'kube-state-metrics'...
remote: Enumerating objects: 429, done.
remote: Counting objects: 100% (429/429), done.
remote: Compressing objects: 100% (318/318), done.
remote: Total 18587 (delta 103), reused 238 (delta 87), pack-reused 18158
Receiving objects: 100% (18587/18587), 16.47 MiB | 16.81 MiB/s, done.
Resolving deltas: 100% (11491/11491), done.
builder@DESKTOP-JBA79RT:~/Workspaces$ cd kube-state-metrics
builder@DESKTOP-JBA79RT:~/Workspaces/kube-state-metrics$ kubectl delete -f examples/standard
clusterrolebinding.rbac.authorization.k8s.io "kube-state-metrics" deleted
clusterrole.rbac.authorization.k8s.io "kube-state-metrics" deleted
deployment.apps "kube-state-metrics" deleted
serviceaccount "kube-state-metrics" deleted
service "kube-state-metrics" deleted
builder@DESKTOP-JBA79RT:~/Workspaces/kube-state-metrics$ kubectl apply -f examples/standard
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
service/kube-state-metrics created

Pro-tip: If you don’t first remove (kubectl delete -f) the existing resources, you’ll get errors (as some of this is already on your cluster). E.g.

builder@DESKTOP-JBA79RT:~/Workspaces/kube-state-metrics$ kubectl apply -f examples/standard
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics configured
clusterrole.rbac.authorization.k8s.io/kube-state-metrics configured
serviceaccount/kube-state-metrics configured
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/name\":\"kube-state-metrics\",\"app.kubernetes.io/version\":\"1.9.5\"},\"name\":\"kube-state-metrics\",\"namespace\":\"kube-system\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"app.kubernetes.io/name\":\"kube-state-metrics\"}},\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/name\":\"kube-state-metrics\",\"app.kubernetes.io/version\":\"1.9.5\"}},\"spec\":{\"containers\":[{\"image\":\"quay.io/coreos/kube-state-metrics:v1.9.5\",\"livenessProbe\":{\"httpGet\":{\"path\":\"/healthz\",\"port\":8080},\"initialDelaySeconds\":5,\"timeoutSeconds\":5},\"name\":\"kube-state-metrics\",\"ports\":[{\"containerPort\":8080,\"name\":\"http-metrics\"},{\"containerPort\":8081,\"name\":\"telemetry\"}],\"readinessProbe\":{\"httpGet\":{\"path\":\"/\",\"port\":8081},\"initialDelaySeconds\":5,\"timeoutSeconds\":5},\"securityContext\":{\"runAsUser\":65534}}],\"nodeSelector\":{\"kubernetes.io/os\":\"linux\"},\"serviceAccountName\":\"kube-state-metrics\"}}}}\n"},"labels":{"app.kubernetes.io/name":"kube-state-metrics","app.kubernetes.io/version":"1.9.5","k8s-app":null}},"spec":{"selector":{"matchLabels":{"app.kubernetes.io/name":"kube-state-metrics","k8s-app":null}},"template":{"metadata":{"labels":{"app.kubernetes.io/name":"kube-state-metrics","app.kubernetes.io/version":"1.9.5","k8s-app":null}},"spec":{"$setElementOrder/containers":[{"name":"kube-state-metrics"}],"containers":[{"image":"quay.io/coreos/kube-state-metrics:v1.9.5","livenessProbe":{"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":5,"timeoutSeconds":5},"name":"kube-state-metrics","readinessProbe":{"httpGet":{"path":"/","port":8081}},"securityContext":{"runAsUser":65534}}],"nodeSelector":{"kubernetes.io/os":"linux"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "kube-state-metrics", Namespace: "kube-system"
Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"kube-state-metrics\"},\"name\":\"kube-state-metrics\",\"namespace\":\"kube-system\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"k8s-app\":\"kube-state-metrics\"}},\"template\":{\"metadata\":{\"labels\":{\"k8s-app\":\"kube-state-metrics\"}},\"spec\":{\"containers\":[{\"image\":\"quay.io/coreos/kube-state-metrics:v1.7.2\",\"name\":\"kube-state-metrics\",\"ports\":[{\"containerPort\":8080,\"name\":\"http-metrics\"},{\"containerPort\":8081,\"name\":\"telemetry\"}],\"readinessProbe\":{\"httpGet\":{\"path\":\"/healthz\",\"port\":8080},\"initialDelaySeconds\":5,\"timeoutSeconds\":5}}],\"serviceAccountName\":\"kube-state-metrics\"}}}}\n"] "creationTimestamp":"2020-04-11T17:47:46Z" "generation":'\x01' "labels":map["k8s-app":"kube-state-metrics"] "name":"kube-state-metrics" "namespace":"kube-system" "resourceVersion":"149430" "selfLink":"/apis/apps/v1/namespaces/kube-system/deployments/kube-state-metrics" "uid":"46e8c97c-6dd1-408d-a7cb-6dc8329b65f2"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x01' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["k8s-app":"kube-state-metrics"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["k8s-app":"kube-state-metrics"]] "spec":map["containers":[map["image":"quay.io/coreos/kube-state-metrics:v1.7.2" "imagePullPolicy":"IfNotPresent" "name":"kube-state-metrics" "ports":[map["containerPort":'\u1f90' "name":"http-metrics" "protocol":"TCP"] map["containerPort":'\u1f91' "name":"telemetry" "protocol":"TCP"]] "readinessProbe":map["failureThreshold":'\x03' "httpGet":map["path":"/healthz" "port":'\u1f90' "scheme":"HTTP"] "initialDelaySeconds":'\x05' "periodSeconds":'\n' "successThreshold":'\x01' "timeoutSeconds":'\x05'] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "serviceAccount":"kube-state-metrics" "serviceAccountName":"kube-state-metrics" "terminationGracePeriodSeconds":'\x1e']]] "status":map["availableReplicas":'\x01' "conditions":[map["lastTransitionTime":"2020-04-11T17:48:04Z" "lastUpdateTime":"2020-04-11T17:48:04Z" "message":"Deployment has minimum availability." "reason":"MinimumReplicasAvailable" "status":"True" "type":"Available"] map["lastTransitionTime":"2020-04-11T17:47:46Z" "lastUpdateTime":"2020-04-11T17:48:04Z" "message":"ReplicaSet \"kube-state-metrics-85cf88c7fb\" has successfully progressed." "reason":"NewReplicaSetAvailable" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "readyReplicas":'\x01' "replicas":'\x01' "updatedReplicas":'\x01']]}
for: "examples/standard/deployment.yaml": Deployment.apps "kube-state-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"kube-state-metrics"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/name\":\"kube-state-metrics\",\"app.kubernetes.io/version\":\"1.9.5\"},\"name\":\"kube-state-metrics\",\"namespace\":\"kube-system\"},\"spec\":{\"clusterIP\":\"None\",\"ports\":[{\"name\":\"http-metrics\",\"port\":8080,\"targetPort\":\"http-metrics\"},{\"name\":\"telemetry\",\"port\":8081,\"targetPort\":\"telemetry\"}],\"selector\":{\"app.kubernetes.io/name\":\"kube-state-metrics\"}}}\n","prometheus.io/scrape":null},"labels":{"app.kubernetes.io/name":"kube-state-metrics","app.kubernetes.io/version":"1.9.5","k8s-app":null}},"spec":{"$setElementOrder/ports":[{"port":8080},{"port":8081}],"clusterIP":"None","ports":[{"port":8080,"protocol":null},{"port":8081,"protocol":null}],"selector":{"app.kubernetes.io/name":"kube-state-metrics","k8s-app":null}}}
to:
Resource: "/v1, Resource=services", GroupVersionKind: "/v1, Kind=Service"
Name: "kube-state-metrics", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "kind":"Service" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{\"prometheus.io/scrape\":\"true\"},\"labels\":{\"k8s-app\":\"kube-state-metrics\"},\"name\":\"kube-state-metrics\",\"namespace\":\"kube-system\"},\"spec\":{\"ports\":[{\"name\":\"http-metrics\",\"port\":8080,\"protocol\":\"TCP\",\"targetPort\":\"http-metrics\"},{\"name\":\"telemetry\",\"port\":8081,\"protocol\":\"TCP\",\"targetPort\":\"telemetry\"}],\"selector\":{\"k8s-app\":\"kube-state-metrics\"}}}\n" "prometheus.io/scrape":"true"] "creationTimestamp":"2020-04-11T17:47:47Z" "labels":map["k8s-app":"kube-state-metrics"] "name":"kube-state-metrics" "namespace":"kube-system" "resourceVersion":"149366" "selfLink":"/api/v1/namespaces/kube-system/services/kube-state-metrics" "uid":"94b7ceb7-a14c-424c-8a90-7acd858bf494"] "spec":map["clusterIP":"10.0.215.149" "ports":[map["name":"http-metrics" "port":'\u1f90' "protocol":"TCP" "targetPort":"http-metrics"] map["name":"telemetry" "port":'\u1f91' "protocol":"TCP" "targetPort":"telemetry"]] "selector":map["k8s-app":"kube-state-metrics"] "sessionAffinity":"None" "type":"ClusterIP"] "status":map["loadBalancer":map[]]]}
for: "examples/standard/service.yaml": Service "kube-state-metrics" is invalid: spec.clusterIP: Invalid value: "None": field is immutable

Then download the New Relic DaemonSet YAML:

curl -O https://download.newrelic.com/infrastructure_agent/integrations/kubernetes/newrelic-infrastructure-k8s-latest.yaml

Once downloaded, change that YAML to contain your license key (it should end in NRAL) and give “CLUSTER_NAME” a name you’ll remember.

AKS and NewRelic
newrelic-infrastructure-k8s-latest.yaml
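
The two edits inside that file look roughly like this (exact indentation and surrounding structure vary between versions of the manifest; the cluster name is just whatever label you want to see in New Relic):

            - name: NRIA_LICENSE_KEY
              value: "<your license key ending in NRAL>"
            - name: CLUSTER_NAME
              value: "idjaks05"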

Once applied:

kubectl create -f newrelic-infrastructure-k8s-latest.yaml

We can log into NewRelic to check out some of our metrics.  This is a 3 node cluster by default and we can see the load already tracked:

AKS and NewRelic

Clicking on hosts gives us details on each host as well:

AKS and NewRelic

One detail in the tables that is really useful - one my developers often go digging through the Kubernetes dashboard for - is per-container CPU and memory usage.  Going to Processes, we can look at those stats by process:

AKS and NewRelic

We can also click on “Kubernetes” to examine pods by node:

AKS and NewRelic

Let’s see how we can trigger some changes and actually watch them show up.

First, let’s go ahead and limit our view to just the kubernetes-dashboard deployment. This will filter to the node(s) and pod(s) that were deployed with it:

AKS and NewRelic

Next, in my shell, let’s check if it’s the old 1.x dashboard or the newer 2.x:

builder@DESKTOP-JBA79RT:~/Workspaces/kube-state-metrics$ kubectl describe pod kubernetes-dashboard-7f7676f7b5-whdwc -n kube-system | grep Image
    Image:         mcr.microsoft.com/oss/kubernetes/dashboard:v2.0.0-beta8
    Image ID:      docker-pullable://mcr.microsoft.com/oss/kubernetes/dashboard@sha256:85fbaa5c8fd7ffc5723965685d9467a89e2148206019d1a8979cec1f2d25faef

Which we can confirm is the 8443 port (not the old 9090):

builder@DESKTOP-JBA79RT:~/Workspaces/kube-state-metrics$ kubectl describe pod kubernetes-dashboard-7f7676f7b5-whdwc -n kube-system | grep Port:
    Port:          8443/TCP
    Host Port:     0/TCP

I’ll copy over my kubeconfig (which I’ll use to log in) and then port-forward to the dashboard pod:

builder@DESKTOP-JBA79RT:~$ cp ~/.kube/config /mnt/c/Users/isaac/Downloads/aksdev5config

builder@DESKTOP-JBA79RT:~/Workspaces/kube-state-metrics$ kubectl port-forward kubernetes-dashboard-7f7676f7b5-whdwc -n kube-system 8443:8443
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443

I usually use Firefox for this (since Chrome hates self-signed certificates):

AKS and NewRelic

Use kubeconfig and pick the file you copied:

AKS and NewRelic

Let’s now browse to our kubernetes-dashboard deployment and scale it up to something high - like 100 replicas - to see how it impacts our cluster:

AKS and NewRelic
AKS and NewRelic
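
The same experiment from the command line would just be a scale call against the deployment (name taken from the pod we inspected above):

kubectl scale deployment kubernetes-dashboard -n kube-system --replicas=100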

While AKS eventually caught wind and scaled the dashboard back down to one replica, we can see that when the cluster attempted to fulfill our scaling request, it did indeed scale out by two more nodes.  The event shows up at the top:

AKS and NewRelic

And if we click on the event, we can get more details:

AKS and NewRelic

Let’s try another change. Let’s say we want to save some money and scale our cluster all the way down to one node:

builder@DESKTOP-JBA79RT:/tmp$ az aks scale -g idjaks05rg -n idjaks05 --node-count 1
 - Running ..

We will now see a new event (light blue) showing agents disconnecting:

AKS and NewRelic


builder@DESKTOP-JBA79RT:/tmp$ kubectl get nodes
NAME                                STATUS     ROLES   AGE   VERSION
aks-nodepool1-25027797-vmss000000   Ready      agent   42h   v1.17.3
aks-nodepool1-25027797-vmss000001   Ready      agent   42h   v1.17.3
aks-nodepool1-25027797-vmss000002   Ready      agent   42h   v1.17.3
aks-nodepool1-25027797-vmss000003   NotReady   agent   18m   v1.17.3
aks-nodepool1-25027797-vmss000004   NotReady   agent   18m   v1.17.3

Unfortunately, AKS won’t let us scale below our minimum from the command line:

builder@DESKTOP-JBA79RT:/tmp$ kubectl get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-25027797-vmss000000   Ready    agent   43h   v1.17.3
aks-nodepool1-25027797-vmss000001   Ready    agent   43h   v1.17.3
aks-nodepool1-25027797-vmss000002   Ready    agent   43h   v1.17.3

However, we can switch the node pool to manual scaling in the portal and reduce the count below that minimum.

AKS and NewRelic
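
If you’d rather not click through the portal, a rough CLI equivalent (assuming the pool is named nodepool1, matching the node names above) is to turn off the autoscaler on the pool and then scale it:

$ az aks nodepool update -g idjaks05rg --cluster-name idjaks05 -n nodepool1 --disable-cluster-autoscaler
$ az aks scale -g idjaks05rg -n idjaks05 --node-count 2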


Dropping to two nodes:

builder@DESKTOP-JBA79RT:/tmp$ kubectl get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-25027797-vmss000000   Ready    agent   43h   v1.17.3
aks-nodepool1-25027797-vmss000001   Ready    agent   43h   v1.17.3

We can see that event captured as well:

AKS and NewRelic

We can check the version of the New Relic agent:

builder@DESKTOP-JBA79RT:/tmp$ kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
newrelic-infra-m89l5   1/1     Running   0          3h49m
newrelic-infra-z5dmr   1/1     Running   0          3h49m
builder@DESKTOP-JBA79RT:/tmp$ kubectl exec -it newrelic-infra-z5dmr /bin/sh
/ # newrelic-infra --version
New Relic Infrastructure Agent version: 1.11.20

And see we are up to date: https://docs.newrelic.com/docs/release-notes/infrastructure-release-notes/infrastructure-agent-release-notes
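
Another way to check, without exec'ing into a pod, is to read the image tag off the DaemonSet (assuming it’s named newrelic-infra, matching the pod prefix above; note the image tag is the integration version and may differ from the version the agent binary reports):

$ kubectl get daemonset newrelic-infra -o jsonpath='{.spec.template.spec.containers[0].image}'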

Note: I could not figure out why (perhaps because I’m using the latest 1.17 preview), but I was unable to look at container details specifically.

AKS and NewRelic

NewRelic also has APM via NewRelic ONE:

AKS and NewRelic

At the bottom are pods in latest deployments:

AKS and NewRelic

We can pipe logs into New Relic using Logstash, Fluent Bit, etc.

Setting up Fluent Logging to NewRelic

Following (to a degree) their guide: https://docs.newrelic.com/docs/logs/enable-logs/enable-logs/fluent-bit-plugin-logs#fluentbit-plugin

builder@DESKTOP-JBA79RT:~/Workspaces$ git clone https://github.com/newrelic/newrelic-fluent-bit-output.git
Cloning into 'newrelic-fluent-bit-output'...
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 1118 (delta 0), reused 1 (delta 0), pack-reused 1113
Receiving objects: 100% (1118/1118), 2.35 MiB | 9.53 MiB/s, done.
Resolving deltas: 100% (410/410), done.
builder@DESKTOP-JBA79RT:~/Workspaces$ cd newrelic-fluent-bit-output/

We need the Go toolchain first:

builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ sudo apt-get update
[sudo] password for builder:
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Hit:2 https://packages.microsoft.com/repos/azure-cli bionic InRelease
Get:3 https://packages.microsoft.com/ubuntu/18.04/prod bionic InRelease [4003 B]
Hit:4 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:5 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [692 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:7 https://packages.microsoft.com/ubuntu/18.04/prod bionic/main amd64 Packages [105 kB]
Get:8 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [221 kB]
Get:9 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [34.3 kB]
Get:10 http://security.ubuntu.com/ubuntu bionic-security/restricted Translation-en [8924 B]
Get:11 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [657 kB]
Get:12 http://security.ubuntu.com/ubuntu bionic-security/universe Translation-en [218 kB]
Get:13 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [7176 B]
Get:14 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:15 http://security.ubuntu.com/ubuntu bionic-security/multiverse Translation-en [2764 B]
Get:16 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [914 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [314 kB]
Get:18 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [43.9 kB]
Get:19 http://archive.ubuntu.com/ubuntu bionic-updates/restricted Translation-en [11.0 kB]
Get:20 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1065 kB]
Get:21 http://archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [330 kB]
Get:22 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [10.8 kB]
Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse Translation-en [4728 B]
Fetched 4896 kB in 3s (1780 kB/s)
Reading package lists... Done
builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ sudo apt-get -y upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
  netplan.io
The following packages will be upgraded:
  apport azure-cli containerd gcc-8-base libatomic1 libcc1-0 libgcc1 libgomp1 libitm1 liblsan0 libmpx2 libquadmath0 libstdc++6 libtsan0 linux-libc-dev python3-apport
  python3-problem-report
17 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
Need to get 70.6 MB of archives.
After this operation, 15.5 MB of additional disk space will be used.
Get:1 https://packages.microsoft.com/repos/azure-cli bionic/main amd64 azure-cli all 2.3.1-1~bionic [46.5 MB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libquadmath0 amd64 8.4.0-1ubuntu1~18.04 [134 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libitm1 amd64 8.4.0-1ubuntu1~18.04 [27.9 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 gcc-8-base amd64 8.4.0-1ubuntu1~18.04 [18.7 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libstdc++6 amd64 8.4.0-1ubuntu1~18.04 [400 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libmpx2 amd64 8.4.0-1ubuntu1~18.04 [11.6 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 liblsan0 amd64 8.4.0-1ubuntu1~18.04 [133 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libtsan0 amd64 8.4.0-1ubuntu1~18.04 [288 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcc1-0 amd64 8.4.0-1ubuntu1~18.04 [39.4 kB]
Get:10 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libatomic1 amd64 8.4.0-1ubuntu1~18.04 [9192 B]
Get:11 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libgomp1 amd64 8.4.0-1ubuntu1~18.04 [76.5 kB]
Get:12 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libgcc1 amd64 1:8.4.0-1ubuntu1~18.04 [40.6 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 python3-problem-report all 2.20.9-0ubuntu7.14 [10.7 kB]
Get:14 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 python3-apport all 2.20.9-0ubuntu7.14 [82.1 kB]
Get:15 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 apport all 2.20.9-0ubuntu7.14 [124 kB]
Get:16 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 containerd amd64 1.3.3-0ubuntu1~18.04.2 [21.7 MB]
Get:17 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 linux-libc-dev amd64 4.15.0-96.97 [1021 kB]
Fetched 70.6 MB in 5s (15.1 MB/s)
(Reading database ... 77910 files and directories currently installed.)
Preparing to unpack .../libquadmath0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libquadmath0:amd64 (8.4.0-1ubuntu1~18.04) over (8.3.0-26ubuntu1~18.04) ...
Preparing to unpack .../libitm1_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libitm1:amd64 (8.4.0-1ubuntu1~18.04) over (8.3.0-26ubuntu1~18.04) ...
Preparing to unpack .../gcc-8-base_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking gcc-8-base:amd64 (8.4.0-1ubuntu1~18.04) over (8.3.0-26ubuntu1~18.04) ...
Setting up gcc-8-base:amd64 (8.4.0-1ubuntu1~18.04) ...
(Reading database ... 77910 files and directories currently installed.)
Preparing to unpack .../libstdc++6_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libstdc++6:amd64 (8.4.0-1ubuntu1~18.04) over (8.3.0-26ubuntu1~18.04) ...
Setting up libstdc++6:amd64 (8.4.0-1ubuntu1~18.04) ...
(Reading database ... 77910 files and directories currently installed.)
Preparing to unpack .../0-libmpx2_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libmpx2:amd64 (8.4.0-1ubuntu1~18.04) over (8.3.0-26ubuntu1~18.04) ...
Preparing to unpack .../1-liblsan0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking liblsan0:amd64 (8.4.0-1ubuntu1~18.04) over (8.3.0-26ubuntu1~18.04) ...
Preparing to unpack .../2-libtsan0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libtsan0:amd64 (8.4.0-1ubuntu1~18.04) over (8.3.0-26ubuntu1~18.04) ...
Preparing to unpack .../3-libcc1-0_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libcc1-0:amd64 (8.4.0-1ubuntu1~18.04) over (8.3.0-26ubuntu1~18.04) ...
Preparing to unpack .../4-libatomic1_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libatomic1:amd64 (8.4.0-1ubuntu1~18.04) over (8.3.0-26ubuntu1~18.04) ...
Preparing to unpack .../5-libgomp1_8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libgomp1:amd64 (8.4.0-1ubuntu1~18.04) over (8.3.0-26ubuntu1~18.04) ...
Preparing to unpack .../6-libgcc1_1%3a8.4.0-1ubuntu1~18.04_amd64.deb ...
Unpacking libgcc1:amd64 (1:8.4.0-1ubuntu1~18.04) over (1:8.3.0-26ubuntu1~18.04) ...
Setting up libgcc1:amd64 (1:8.4.0-1ubuntu1~18.04) ...
(Reading database ... 77910 files and directories currently installed.)
Preparing to unpack .../0-python3-problem-report_2.20.9-0ubuntu7.14_all.deb ...
Unpacking python3-problem-report (2.20.9-0ubuntu7.14) over (2.20.9-0ubuntu7.12) ...
Preparing to unpack .../1-python3-apport_2.20.9-0ubuntu7.14_all.deb ...
Unpacking python3-apport (2.20.9-0ubuntu7.14) over (2.20.9-0ubuntu7.12) ...
Preparing to unpack .../2-apport_2.20.9-0ubuntu7.14_all.deb ...
invoke-rc.d: could not determine current runlevel
 * Stopping automatic crash report generation: apport                                                                                                                             [ OK ]
Unpacking apport (2.20.9-0ubuntu7.14) over (2.20.9-0ubuntu7.12) ...
Preparing to unpack .../3-containerd_1.3.3-0ubuntu1~18.04.2_amd64.deb ...
Unpacking containerd (1.3.3-0ubuntu1~18.04.2) over (1.3.3-0ubuntu1~18.04.1) ...
Preparing to unpack .../4-linux-libc-dev_4.15.0-96.97_amd64.deb ...
Unpacking linux-libc-dev:amd64 (4.15.0-96.97) over (4.15.0-91.92) ...
Preparing to unpack .../5-azure-cli_2.3.1-1~bionic_all.deb ...
Unpacking azure-cli (2.3.1-1~bionic) over (2.2.0-1~bionic) ...
Setting up libquadmath0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up libgomp1:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up libatomic1:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up libcc1-0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up azure-cli (2.3.1-1~bionic) ...
Setting up libtsan0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up linux-libc-dev:amd64 (4.15.0-96.97) ...
Setting up containerd (1.3.3-0ubuntu1~18.04.2) ...
Setting up liblsan0:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up python3-problem-report (2.20.9-0ubuntu7.14) ...
Setting up libmpx2:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up libitm1:amd64 (8.4.0-1ubuntu1~18.04) ...
Setting up python3-apport (2.20.9-0ubuntu7.14) ...
Setting up apport (2.20.9-0ubuntu7.14) ...
invoke-rc.d: could not determine current runlevel
Processing triggers for ureadahead (0.100.0-21) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.39) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...

Then we can set up Go (so we can run make). While their guide suggests Go 1.11, the Git repo has moved to 1.12.

$ cd /tmp
$ wget https://dl.google.com/go/go1.12.linux-amd64.tar.gz
$ umask 0002
$ sudo tar -xvf go1.12.linux-amd64.tar.gz
$ sudo mv go /usr/local
$ vi ~/.bashrc

builder@DESKTOP-JBA79RT:/tmp$ cat ~/.bashrc | tail -n3
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH

$ source ~/.bashrc

We need a Fluent Bit container with the New Relic output plugin included.  We could build our own, but the documentation tells us:

AKS and NewRelic
This handy message with no actual container name or link given :|

However, nowhere was it listed where.  Browsing Docker Hub, I found the image: https://hub.docker.com/r/newrelic/newrelic-fluentbit-output ... but then we need one more piece of info - the path to the built shared object (.so). I figured it out from the Dockerfile here: https://github.com/newrelic/newrelic-fluent-bit-output/blob/master/Dockerfile#L15

We now have enough to set up Fluent Bit with New Relic.

The first four steps (namespace, service account, role and role binding) come right from the Fluent Bit documentation:

https://docs.fluentbit.io/manual/installation/kubernetes

builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ kubectl create namespace logging
namespace/logging created
builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
serviceaccount/fluent-bit created
builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role.yaml
clusterrole.rbac.authorization.k8s.io/fluent-bit-read created
builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/fluent-bit-read created
builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$

Next we want to wget (or curl -O) the configmap and ds files:

$ wget https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-configmap.yaml
$ cp fluent-bit-configmap.yaml fluent-bit-configmap.yaml.orig
$ wget https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds.yaml
$ cp fluent-bit-ds.yaml fluent-bit-ds.yaml.orig

The configmap needs our license key added. We’ll also add hostname and service_name keys:

builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ diff fluent-bit-configmap.yaml fluent-bit-configmap.yaml.orig
11,14d10
<   plugins.conf: |
<     [PLUGINS]
<         Path /fluent-bit/bin/out_newrelic.so
<
21d16
<         Plugins_File  plugins.conf
53,54d47
<         Add hostname        ${FLUENT_NEWRELIC_HOST}
<         Add service_name    ${FLUENT_NEWRELIC_SERVICE}
58c51
<         Name            newrelic
---
>         Name            es
60c53,54
<         licenseKey      c8xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxNRAL
---
>         Host            ${FLUENT_ELASTICSEARCH_HOST}
>         Port            ${FLUENT_ELASTICSEARCH_PORT}
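
Stitching the added ("<") lines of that diff back together, the relevant pieces of the modified configmap end up looking roughly like this (license key redacted):

  plugins.conf: |
    [PLUGINS]
        Path /fluent-bit/bin/out_newrelic.so

    # added to the [SERVICE] section
        Plugins_File  plugins.conf

    # added so each record carries our host and service tags
        Add hostname        ${FLUENT_NEWRELIC_HOST}
        Add service_name    ${FLUENT_NEWRELIC_SERVICE}

    # and the [OUTPUT] section swaps the es plugin for newrelic
        Name            newrelic
        licenseKey      c8xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxNRAL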

Similarly, we’ll change the DaemonSet YAML to point to the New Relic image and add the environment variables we referenced above:

builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ diff fluent-bit-ds.yaml fluent-bit-ds.yaml.orig
1c1
< apiVersion: apps/v1
---
> apiVersion: extensions/v1beta1
7c7,9
<     app.kubernetes.io/name: fluent-bit
---
>     k8s-app: fluent-bit-logging
>     version: v1
>     kubernetes.io/cluster-service: "true"
9,11d10
<   selector:
<     matchLabels:
<       app.kubernetes.io/name: fluent-bit
15c14,16
<         app.kubernetes.io/name: fluent-bit
---
>         k8s-app: fluent-bit-logging
>         version: v1
>         kubernetes.io/cluster-service: "true"
22,23c23,24
<       - name: nr-fluent-bit
<         image: newrelic/newrelic-fluentbit-output:1.1.5
---
>       - name: fluent-bit
>         image: fluent/fluent-bit:1.3.11
28,31c29,32
<         - name: FLUENT_NEWRELIC_HOST
<           value: "aks"
<         - name: FLUENT_NEWRELIC_SERVICE
<           value: "dev5"
---
>         - name: FLUENT_ELASTICSEARCH_HOST
>           value: "elasticsearch"
>         - name: FLUENT_ELASTICSEARCH_PORT
>           value: "9200"
60d60
<

For those that don’t dig on diff:

AKS and NewRelic
configmap changes (modified on right)
AKS and NewRelic
daemonset changes (modified on right)

Now apply both:

builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ kubectl apply -f fluent-bit-configmap.yaml
configmap/fluent-bit-config created
builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ kubectl apply -f fluent-bit-ds.yaml
daemonset.apps/fluent-bit created

Lastly, we can now check that it’s working:

builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ kubectl get pods -n logging
NAME               READY   STATUS    RESTARTS   AGE
fluent-bit-tztbx   1/1     Running   0          8m36s
fluent-bit-z5g7g   1/1     Running   0          8m36s
builder@DESKTOP-JBA79RT:~/Workspaces/newrelic-fluent-bit-output$ kubectl logs fluent-bit-tztbx -n logging
Fluent Bit v1.0.3
Copyright (C) Treasure Data

[2020/04/13 13:15:38] [ info] [storage] initializing...
[2020/04/13 13:15:38] [ info] [storage] in-memory
[2020/04/13 13:15:38] [ info] [storage] normal synchronization mode, checksum disabled
[2020/04/13 13:15:38] [ info] [engine] started (pid=1)
[2020/04/13 13:15:38] [ info] [filter_kube] https=1 host=kubernetes.default.svc port=443
[2020/04/13 13:15:38] [ info] [filter_kube] local POD info OK
[2020/04/13 13:15:38] [ info] [filter_kube] testing connectivity with API server...
[2020/04/13 13:15:43] [ info] [filter_kube] API server connectivity OK
[2020/04/13 13:15:43] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020

Almost immediately, we see results in the New Relic One Logs UI:

AKS and NewRelic

And this means we can query any kind of standard logging we would expect, such as all logs from a particular container:

AKS and NewRelic
AKS and NewRelic

If we wanted to, we could go a step further and capture logs from the nodes themselves (following this guide).

Also, because AKS uses VMSS, we can use New Relic’s integration with Azure. (Granted, I followed the wizard to create an SP and add it as a Reader on my subscription.  There were no issues, so I'm not covering that here.)

AKS and NewRelic

This allows us to do things like directly track VMSS events via NewRelic:

AKS and NewRelic
AKS and NewRelic

We won’t get too deep into alerts, but one can create notification channels to subscribe to alert policies.  For instance, to set an alert on the default Kubernetes policy, I can just subscribe my email in a Notification Channel.

AKS and NewRelic

And we can then create alerts, for instance on logs on the New Relic One side:

AKS and NewRelic
AKS and NewRelic
Example NewRelic Email

As an aside, we can see Metrics under Monitoring from Azure itself:

AKS and NewRelic

We can track CPUs, for instance.

AKS and NewRelic

If we enable logging to an Azure Log Analytics workspace, we can see logs there as well:

AKS and NewRelic
AKS and NewRelic

Summary

It’s hard to figure out what is and is not included with our Demo environment.  

For instance, Infrastructure monitoring is $7.20/mo for essentials and $14.40/mo for “Pro” with 12k CUs.  What’s a CU? Well it’s some unit.  How much are more CUs?  You can ask us…

Then for Logging, it’s a bit more straightforward with plans between $55/mo and $75/mo.

AKS and NewRelic
Log Pricing as of Apr-13-2020

They have a calculator, which gives some examples what that means:

  • 8 Days Retention: $55 X 10 GB Daily = $550 per month or $6,600 per year
  • 15 Days Retention: $65 X 10 GB Daily = $650 per month or $7,800 per year
  • 30 Days Retention: $75 X 10 GB Daily = $750 per month or $9,000 per year

I’m not sure I am doing this right, but this is close to what Azure will charge for similar tracking:

AKS and NewRelic

However, with Azure Monitoring (which includes logs) you pay per alerting rule.

I think NewRelic has a very compelling full stack offering.  As they are a seasoned player in the space, there are great integrations to expand on it, such as BigPanda and PagerDuty for on-call support.

]]>
<![CDATA[Azure Arc for Servers (Preview)]]>Azure Arc is a preview feature for Azure that will let you manage in Azure resources outside of Azure.  So far they’ve discussed Arc for databases, servers and Kubernetes.  I was recently given access to the Preview of Arc for servers.  Let’s check out what’s there and

]]>
https://freshbrewed.science/azure-arc-for-servers-preview/5e8b4f70223ea70ebae4107dMon, 06 Apr 2020 16:06:29 GMT

Azure Arc is a preview feature that lets you manage, from Azure, resources that live outside of Azure.  So far they’ve discussed Arc for databases, servers and Kubernetes.  I was recently given access to the Preview of Arc for servers.  Let’s check out what’s there and what it can do at present.

Adding an Arc Server

We can start by adding a Linode instance into our Resource Group.  Start in Azure with the Create Wizard.

Let’s add Azure Arc for servers:

Azure Arc for Servers (Preview)
From Azure portal, click Add+

We will need to pick our Subscription and Resource Group as well as OS:

Azure Arc for Servers (Preview)

When we click create, we can see the Install script:

Azure Arc for Servers (Preview)

Next, I’ll hop over to Linode and create a new nano node to test.

Azure Arc for Servers (Preview)

We can ssh in and run the script:

Azure Arc for Servers (Preview)

You’ll need to login for that last line:

root@localhost:~# azcmagent connect --resource-group "idjaks04rg" --tenant-id "xxxxx-xxxxxx-xxxxxxx-xxxxxxx" --location "westus2" --subscription-id "xxxxxxxx-xxxxxxxx-xxxxxxx-xxxxxxx"
level=info msg="Onboarding Machine. It usually takes a few minutes to complete. Sometimes it may take longer depending on network and server load status."
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code FFW3RGW9H to authenticate.
level=info msg="Successfully Onboarded Resource to Azure" VM Id=cb9b5edf-a16a-49c6-acbf-a08127bf6bef
root@localhost:~#

One issue I did see is that the “hostname” is used for the machine name, so after adding it we see this listed as “localhost” in the Azure Portal.  The other entry was from a CentOS host I also added.
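
If you want a friendlier name showing up, set the machine’s hostname before running azcmagent connect (the name here is just an example):

# pick something memorable before onboarding; "arc-linode-01" is just an example
sudo hostnamectl set-hostname arc-linode-01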

Azure Arc for Servers (Preview)

As you can see, I have Azure Arc loaded with my Linode server.  It’s been there about a week.

From Linode:

Azure Arc for Servers (Preview)

And from Azure Portal:

Azure Arc for Servers (Preview)
Azure Arc for Servers (Preview)

By default, we can’t do much without adding Log Analytics:

Azure Arc for Servers (Preview)

What happens when we power off?

Azure Arc for Servers (Preview)

Now that it's offline:

Azure Arc for Servers (Preview)

At first, I still saw it connected in the Azure Portal:

Azure Arc for Servers (Preview)

However after a few minutes, it noted that it had gone offline:

Azure Arc for Servers (Preview)

Even with Azure Log Analytics on, we really can only see whether it’s powered on or off.

Azure Arc for Servers (Preview)

Arc Policies

Let’s add a policy for Azure Log Analytics.

Azure Arc for Servers (Preview)

We can check on policies.  For Azure Log Analytics for Linux, we can set a policy that shows whether the agent is missing and (supposedly) installs it:

Azure Arc for Servers (Preview)

But I didn't install it manually.  We could install it ourselves and validate (https://docs.microsoft.com/en-us/azure/azure-monitor/platform/agent-linux#install-the-agent-using-wrapper-script) if we desired.
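
For reference, the wrapper-script install from that doc boils down to something like this - the workspace ID and key come from the Log Analytics workspace's agent settings, and you should follow the linked page for the current syntax rather than trusting this sketch:

# <YOUR_WORKSPACE_ID> and <YOUR_WORKSPACE_KEY> come from the Log Analytics workspace
wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh
sh onboard_agent.sh -w <YOUR_WORKSPACE_ID> -s <YOUR_WORKSPACE_KEY>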

Windows

What about Windows?  We will use AWS to test this one (since Linode, as the name suggests, just does Linux).

First, let’s create a key-pair for logging in later: https://us-east-2.console.aws.amazon.com/ec2/v2/home?region=us-east-2#KeyPairs:

Azure Arc for Servers (Preview)

We’ll create a PEM key pair.

Azure Arc for Servers (Preview)

I want to launch a set of EC2 Windows servers quickly.  I found a decent Quick Start for creating an HA cluster of RD Gateway bastion hosts here: https://docs.aws.amazon.com/quickstart/latest/rd-gateway/welcome.html

Click “Launch Quick Start”

Azure Arc for Servers (Preview)

In the template, you’ll want to use your CIDR block and the Key Pair we created.

Azure Arc for Servers (Preview)

If that confuses you, just google “what is my IP”, drop the last octet and slap a “0/24” on it.  That covers .0 through .255, which for me would likely cover my outgoing connections from Xfinity.
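
Or, from a shell (this makes the same assumption the trick above does - that your ISP keeps you inside a single /24):

MYIP=$(curl -s https://ifconfig.me)
echo "${MYIP%.*}.0/24"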

Azure Arc for Servers (Preview)

Lower down, we’ll need a password and domain name.  I don’t think it needs a corresponding Route 53 zone; I assume you could put anything you want there.

Azure Arc for Servers (Preview)

My Parameters look something like this:

Azure Arc for Servers (Preview)

Once we accept the IAM Creation notification and click create, we should see Cloudformation kick in to create our stack:

Azure Arc for Servers (Preview)

Once done:

Azure Arc for Servers (Preview)

We can use StackAdmin (the admin account we specified in the template) to log in:

Azure Arc for Servers (Preview)

Now, back in the Azure Portal, let’s create a new Arc server, this time setting it to Windows:

Azure Arc for Servers (Preview)

In our Remote Desktop session, launch an admin PowerShell.  I found the clipboard wasn’t pasting, so I saved the contents into Notepad and ran that:

Azure Arc for Servers (Preview)
Azure Arc for Servers (Preview)
Azure Arc for Servers (Preview)

Going back to our RG, we should see the new machine added:

Azure Arc for Servers (Preview)
Resource Group view in Azure Portal
Azure Arc for Servers (Preview)
Instance Details

But again, our logs are not populated just by adding the agent, so let’s create a Log Analytics workspace in our RG:

Azure Arc for Servers (Preview)

I’ll then go to the workspace to download the client and see the settings:

Azure Arc for Servers (Preview)

We’ll connect the agent to the workspace, but not to System Center Operations Manager, for this demo.

Azure Arc for Servers (Preview)
Azure Arc for Servers (Preview)

If we look up the Microsoft Monitoring Agent in the Control Panel, we can see it’s alive in the second tab:

Azure Arc for Servers (Preview)

We can use the Heartbeat table to see if it’s alive: Heartbeat | where TimeGenerated > ago(30m)
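
If you have several machines reporting in, a slightly expanded query over the same Heartbeat table gives the last check-in per machine:

Heartbeat
| where TimeGenerated > ago(30m)
| summarize LastHeartbeat = max(TimeGenerated) by Computer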

Azure Arc for Servers (Preview)

However, I can do the same in the root workspace as well:

Azure Arc for Servers (Preview)

So what is this buying us?  I like the idea of compliance checks...

For instance, we set one to install the log agent, if absent, on Linux.  However, we can go back and see there are still no logs (after a couple of days):

Azure Arc for Servers (Preview)
Azure Arc for Servers (Preview)

So either it didn’t actually install the agent and the check is in error, or it installed it but didn’t configure or start it - equally useless to me.

Cleanup

AWS

Let’s go to Stack in CloudFormation and delete the root stack:

Azure Arc for Servers (Preview)
Azure Arc for Servers (Preview)

In Linode, you’ll want to delete your instances.  I learned the hard way with DigitalOcean that you still get billed for powered-off machines (DO charges full price, too):

Azure Arc for Servers (Preview)

In Azure, you can either delete the whole RG or just the instances in it:

Azure Arc for Servers (Preview)

In my case, I still want to experiment more with the Ubuntu host.

Summary

Azure Arc is interesting, and I think there’s real potential to do some useful things with it.  I have to assume it’s either the earliness of the preview or settings on my end that prevented me from doing anything useful with it.  For now, I’ll just say it needs some time to bake before it’s worth using for anything other than monitoring whether machines are powered on.

]]>
<![CDATA[AKS and Ingress : Constraining Access]]>In my last missive I detailed out AKS and Ingress using NGinx Ingress Controllers. For public facing services this can be sufficient.  However, many businesses require tighter controls - either for canary style deployments, hybrid cloud mixed systems or geofencing using CIDR blocks.

While we have a public IP that

]]>
http://localhost:2368/aks-and-ingress-constraining-access/5e868da53ca42c775c09192dFri, 03 Apr 2020 02:36:29 GMT

In my last missive I detailed out AKS and Ingress using NGinx Ingress Controllers. For public facing services this can be sufficient.  However, many businesses require tighter controls - either for canary style deployments, hybrid cloud mixed systems or geofencing using CIDR blocks.

While we have a public IP that can be reached globally, here are two methods we can explore to restrict traffic.

Network Security Groups

One way is to restrict traffic at the Network Security Group (NSG) level. This maintains the perimeter around the VNet that holds our cluster’s VMs. We can see rules have been added for us that allow all of the internet to access our Nginx.

AKS and Ingress : Constraining Access
Traffic checked and blocked at the network perimeter
AKS and Ingress : Constraining Access

However, I can reach that from anywhere.

AKS and Ingress : Constraining Access
Hitting from home
AKS and Ingress : Constraining Access
Sending my traffic through Sweden, I can still hit it

Let’s head to the Inbound rules and first add a rule.

AKS and Ingress : Constraining Access
Existing rules created for us

We are going to give it a priority above 500 but below the 65000 rules, then remove the old ones created by Nginx…

AKS and Ingress : Constraining Access

Which means, this:

AKS and Ingress : Constraining Access

becomes:

AKS and Ingress : Constraining Access
While I set the source port to "*", I did limit the destination ports
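
The portal works fine for this, but if you’d rather script it, the CLI equivalent looks roughly like the following (the NSG lives in the MC_ node resource group; the NSG name and CIDR here are placeholders):

az network nsg rule create -g <MC_resource_group> --nsg-name <aks_nsg_name> \
  -n allow-my-cidr --priority 550 --access Allow --protocol Tcp \
  --source-address-prefixes <your_cidr>/24 --destination-port-ranges 80 443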

One thing I discovered in testing is that my external-facing IP address was not enough.  I used traceroute to find the hops from Comcast and added CIDR rules for all of them.

This now means my network range can see the site, but others would get an unreachable page:

AKS and Ingress : Constraining Access
Timeouts

From my AKS cluster, I now see:

AKS and Ingress : Constraining Access
(times out)

Restricting via NGinx

Perhaps you want to set controls on the Nginx ingress itself instead of the NSG.  In other words, you want to allow traffic to hit the ingress but reject anything outside our allowable ranges. This can be controlled by annotations on the ingress definition, which can be parameterized and updated via pipelines, allowing our CI/CD systems to control it.  This could be particularly useful for slow rollouts (canary deployments) or allowing limited private access.  The disadvantage is that your Nginx can get hammered from anywhere, so you are more exposed from a security standpoint.

AKS and Ingress : Constraining Access
Traffic Checked by NGinx Ingress Controller and rejected with HTTP/S error

Let’s install our Helloworld example:

$ helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
$ helm repo add azure-samples https://azure-samples.github.io/helm-charts/
"azure-samples" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "azure-samples" chart repository
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "banzaicloud-stable" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
$ helm install aks-helloworld azure-samples/aks-helloworld --namespace shareddev
NAME: aks-helloworld
LAST DEPLOYED: Thu Apr  2 10:49:46 2020
NAMESPACE: shareddev
STATUS: deployed
REVISION: 1
TEST SUITE: None

We can list our services there and add an ingress:

$ kubectl get svc -n shareddev
NAME             TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
aks-helloworld   ClusterIP      10.0.200.75   <none>           80/TCP                       21s
ingress-nginx    LoadBalancer   10.0.19.138   xxx.xxx.xxx.xxx   80:30466/TCP,443:30090/TCP   24h
$ vi helloworld-ingress.yaml
$ kubectl apply -f helloworld-ingress.yaml 
error: error validating "helloworld-ingress.yaml": error validating data: ValidationError(Ingress.spec.rules[0].http.paths[0]): unknown field "pathType" in io.k8s.api.networking.v1beta1.HTTPIngressPath; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl apply -f helloworld-ingress.yaml --validate=false
ingress.networking.k8s.io/aks-helloworld created
AKS and Ingress : Constraining Access

We can now restrict at the controller level.

Let’s look at the Nginx configmap:

$ kubectl get cm nginx-configuration -n shareddev
NAME                  DATA   AGE
nginx-configuration   0      24h
$ kubectl get cm nginx-configuration -n shareddev -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"},"name":"nginx-configuration","namespace":"shareddev"}}
  creationTimestamp: "2020-04-01T15:29:50Z"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: nginx-configuration
  namespace: shareddev
  resourceVersion: "177523"
  selfLink: /api/v1/namespaces/shareddev/configmaps/nginx-configuration
  uid: a0c5ef9f-742d-11ea-a4a0-bac2a8381a43

Here we have no settings. Let’s add a data block to lock down access.

$ kubectl edit cm nginx-configuration -n shareddev
configmap/nginx-configuration edited
$ kubectl get cm nginx-configuration -n shareddev -o yaml
apiVersion: v1
data:
  whitelist-source-range: 10.2.2.2/24
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"},"name":"nginx-configuration","namespace":"shareddev"}}
  creationTimestamp: "2020-04-01T15:29:50Z"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: nginx-configuration
  namespace: shareddev
  resourceVersion: "177827"
  selfLink: /api/v1/namespaces/shareddev/configmaps/nginx-configuration
  uid: a0c5ef9f-742d-11ea-a4a0-bac2a8381a43

Note that I added a data block with a whitelist-source-range. I used 10.2.2.2/24 knowing it won't match any of my systems.

We can now see it’s immediately blocked.

AKS and Ingress : Constraining Access
Nginx validates our IP and rejects with a 403 error page

But what if we want to enable access to some endpoints but not all?

Let’s restore our configmap (kubectl edit configmap nginx-configuration -n shareddev) and remove the data block:

$ kubectl get cm nginx-configuration -n shareddev -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx"},"name":"nginx-configuration","namespace":"shareddev"}}
  creationTimestamp: "2020-04-01T15:29:50Z"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: nginx-configuration
  namespace: shareddev
  resourceVersion: "178232"
  selfLink: /api/v1/namespaces/shareddev/configmaps/nginx-configuration
  uid: a0c5ef9f-742d-11ea-a4a0-bac2a8381a43

Let’s make two endpoints: one that is open, and another with an annotation blocking access:

$ cat helloworld-ingress.yaml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: aks-helloworld
  namespace: shareddev
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /mysimplehelloworld
        pathType: Prefix
        backend:
          serviceName: aks-helloworld
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: aks-helloworld-blocked
  namespace: shareddev
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/whitelist-source-range: '10.2.2.0/24'
spec:
  rules:
  - http:
      paths:
      - path: /mysimplehelloworldb
        pathType: Prefix
        backend:
          serviceName: aks-helloworld
          servicePort: 80

$ kubectl apply -f helloworld-ingress.yaml --validate=false
ingress.networking.k8s.io/aks-helloworld configured
ingress.networking.k8s.io/aks-helloworld-blocked created

We can now see that the “b” (blocked) endpoint is blocked by Nginx but the other remains open:

AKS and Ingress : Constraining Access
You can see the top is rejected but the lower is allowed

Summary

When it comes to controlling access to the resources in our cluster, we have illustrated two ways we can constrain access.  The first uses cloud-native perimeter security techniques that would apply to any cloud compute; this is the stricter approach, but it requires access to modify Network Security Groups.  The other technique uses Kubernetes-native annotations to selectively limit and enable access to specific ingress pathways.

]]>
<![CDATA[AKS and Ingress, again]]>I wrote most of my Azure Kubernetes guides a while back in a time where Helm 2 was standard and RBAC was an optional add-in.  Recently when doing AKS work for work, I realized really how much has changed.  For instance, all AKS clusters now have RBAC turned on by

]]>
https://freshbrewed.science/aks-and-ingress-again/5e7ebe84c02f8717744bd1e8Sat, 28 Mar 2020 03:45:20 GMT

I wrote most of my Azure Kubernetes guides a while back, in a time when Helm 2 was standard and RBAC was an optional add-in.  Recently, when doing AKS work for work, I realized just how much has changed.  For instance, all AKS clusters now have RBAC turned on by default, and furthermore most Helm charts, including the standard “stable” ones, have moved to Helm 3.  AKS 1.16 fundamentally changed things by expiring so many “beta” and pre-release APIs (favouring annotations instead).  So let’s get back to AKS and Ingress.

The reason I wish to sort out Ingress is that many of the guides I’ve seen from Microsoft tech blogs still reference pre-RBAC clusters and Helm 2. They don’t work with the current charts that are out there, and ingress is probably the key piece of actually using a cluster to host applications.

Creating an AKS cluster

Az CLI and Login

First, we had better upgrade our Azure CLI (if you lack it, you can use apt-get install instead of upgrade):

$ sudo apt-get update
$ sudo apt-get upgrade azure-cli

You can use --version to see if you are out of date:

AKS and Ingress, again
builder@DESKTOP-JBA79RT:~$ az --version
azure-cli                          2.2.0

command-modules-nspkg              2.0.3
core                               2.2.0
nspkg                              3.0.4
telemetry                          1.0.4

Python location '/opt/az/bin/python3'
Extensions directory '/home/builder/.azure/cliextensions'

Python (Linux) 3.6.5 (default, Mar  6 2020, 14:41:24)
[GCC 7.4.0]

Legal docs and information: aka.ms/AzureCliLegal



Your CLI is up-to-date.

Please let us know how we are doing: https://aka.ms/clihats

You’ll want to log in:

$ az login
$ az account set --subscription (your sub id)

Let’s just verify by showing what clusters are there now:

$ az aks list
[]

Cluster create

$ az group create --name idjaks02rg --location centralus
{
  "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/idjaks02rg",
  "location": "centralus",
  "managedBy": null,
  "name": "idjaks02rg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

Next, we should create an SP (so we can later handle creating an ACR or enabling RBAC-based IAM to Azure resources):

builder@DESKTOP-JBA79RT:~$ az ad sp create-for-rbac -n idjaks02sp --skip-assignment --output json > my_sp.json
Changing "idjaks02sp" to a valid URI of "http://idjaks02sp", which is the required format used for service principal names
builder@DESKTOP-JBA79RT:~$ cat my_sp.json | jq -r .appId
5bbad7af-0559-411c-a5c6-c33874cbbd5b

We can save the ID and password for the next step:

builder@DESKTOP-JBA79RT:~$ export SP_PASS=`cat my_sp.json | jq -r .password`
builder@DESKTOP-JBA79RT:~$ export SP_ID=`cat my_sp.json | jq -r .appId`

Now we can create the cluster:

builder@DESKTOP-JBA79RT:~$ az aks create --resource-group idjaks02rg --name idjaks02 --location centralus --node-count 3 --enable-cluster-autoscaler --min-count 2 --max-count 4 --generate-ssh-keys --network-plugin azure --network-policy azure --service-principal $SP_ID --client-secret $SP_PASS
{
  "aadProfile": null,
  "addonProfiles": null,
  "agentPoolProfiles": [
    {
      "availabilityZones": null,
      "count": 3,
      "enableAutoScaling": true,
      "enableNodePublicIp": null,
      "maxCount": 4,
      "maxPods": 30,
      "minCount": 2,
      "name": "nodepool1",
      "nodeLabels": null,
      "nodeTaints": null,
      "orchestratorVersion": "1.15.10",
      "osDiskSizeGb": 100,
      "osType": "Linux",
      "provisioningState": "Succeeded",
      "scaleSetEvictionPolicy": null,
      "scaleSetPriority": null,
      "tags": null,
      "type": "VirtualMachineScaleSets",
      "vmSize": "Standard_DS2_v2",
      "vnetSubnetId": null
    }
  ],
  "apiServerAccessProfile": null,
  "dnsPrefix": "idjaks02-idjaks02rg-70b42e",
  "enablePodSecurityPolicy": null,
  "enableRbac": true,
  "fqdn": "idjaks02-idjaks02rg-70b42e-bd99caae.hcp.centralus.azmk8s.io",
  "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourcegroups/idjaks02rg/providers/Microsoft.ContainerService/managedClusters/idjaks02",
  "identity": null,
  "identityProfile": null,
  "kubernetesVersion": "1.15.10",
  "linuxProfile": {
    "adminUsername": "azureuser",
    "ssh": {
      "publicKeys": [
        {
          "keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLzysqDWJpJ15Sho/NYk3ZHzC36LHw5zE1gyxhEQCH53BSbgA39XVXs/8TUjrkoVi6/YqlliYVg7TMQSjG51d3bLuelMh7IGIPGqSnT5rQe4x9ugdi+rLeFgP8+rf9aGYwkKMd98Aj2i847/deNLFApDoTtI54obZDuhu2ySW23BiQqV3lXuIe/0WwKpG0MFMoXU9JrygPXyNKbgJHR7pLR9U8WVLMF51fmUEeKb5johgrKeIrRMKBtiijaJO8NP6ULuOcQ+Z0VpUUbZZpIqeo8wqdMbDHkyFqh5a5Z1qrY5uDSpqcElqR5SiVesumUfMTBxz83/oprz23e747h8rP"
        }
      ]
    }
  },
  "location": "centralus",
  "maxAgentPools": 10,
  "name": "idjaks02",
  "networkProfile": {
    "dnsServiceIp": "10.0.0.10",
    "dockerBridgeCidr": "172.17.0.1/16",
    "loadBalancerProfile": {
      "allocatedOutboundPorts": null,
      "effectiveOutboundIps": [
        {
          "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/MC_idjaks02rg_idjaks02_centralus/providers/Microsoft.Network/publicIPAddresses/2ad8aacf-3c25-4eea-922e-24b1341b0f87",
          "resourceGroup": "MC_idjaks02rg_idjaks02_centralus"
        }
      ],
      "idleTimeoutInMinutes": null,
      "managedOutboundIps": {
        "count": 1
      },
      "outboundIpPrefixes": null,
      "outboundIps": null
    },
    "loadBalancerSku": "Standard",
    "networkPlugin": "azure",
    "networkPolicy": "azure",
    "outboundType": "loadBalancer",
    "podCidr": null,
    "serviceCidr": "10.0.0.0/16"
  },
  "nodeResourceGroup": "MC_idjaks02rg_idjaks02_centralus",
  "privateFqdn": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "idjaks02rg",
  "servicePrincipalProfile": {
    "clientId": "5bbad7af-0559-411c-a5c6-c33874cbbd5b",
    "secret": null
  },
  "tags": null,
  "type": "Microsoft.ContainerService/ManagedClusters",
  "windowsProfile": {
    "adminPassword": null,
    "adminUsername": "azureuser"
  }
}

I wanted to verify both kubenet and Azure CNI networking, so I'll create another cluster that we'll use later:

builder@DESKTOP-JBA79RT:~$ az group create --name idjaks03rg --location centralus
{
  "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/idjaks03rg",
  "location": "centralus",
  "managedBy": null,
  "name": "idjaks03rg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

And I'll just make a quick SP for this cluster as well:

builder@DESKTOP-JBA79RT:~$ az ad sp create-for-rbac -n idjaks03sp --skip-assignment
Changing "idjaks03sp" to a valid URI of "http://idjaks03sp", which is the required format used for service principal names
{
  "appId": "9a5eac0f-ea43-4791-9dc1-d2226b35de7d",
  "displayName": "idjaks03sp",
  "name": "http://idjaks03sp",
  "password": "159ef2f8-xxxx-xxxx-xxxx-4ff051177429",
  "tenant": "d73a39db-6eda-495d-8000-7579f56d68b7"
}

And now create it:

builder@DESKTOP-JBA79RT:~$ az aks create -g idjaks03rg -n idjaks03 --location centralus --node-count 3 --generate-ssh-keys --network-plugin kubenet --service-principal 9a5eac0f-xxxx-xxxx-xxxx-d2226b35de7d --client-secret 159ef2f8-384f-45bd-9a20-4ff051177429
{
  "aadProfile": null,
  "addonProfiles": null,
  "agentPoolProfiles": [
    {
      "availabilityZones": null,
      "count": 3,
      "enableAutoScaling": null,
      "enableNodePublicIp": null,
      "maxCount": null,
      "maxPods": 110,
      "minCount": null,
      "name": "nodepool1",
      "nodeLabels": null,
      "nodeTaints": null,
      "orchestratorVersion": "1.15.10",
      "osDiskSizeGb": 100,
      "osType": "Linux",
      "provisioningState": "Succeeded",
      "scaleSetEvictionPolicy": null,
      "scaleSetPriority": null,
      "tags": null,
      "type": "VirtualMachineScaleSets",
      "vmSize": "Standard_DS2_v2",
      "vnetSubnetId": null
    }
  ],
  "apiServerAccessProfile": null,
  "dnsPrefix": "idjaks03-idjaks03rg-70b42e",
  "enablePodSecurityPolicy": null,
  "enableRbac": true,
  "fqdn": "idjaks03-idjaks03rg-70b42e-4ef6b7dc.hcp.centralus.azmk8s.io",
  "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourcegroups/idjaks03rg/providers/Microsoft.ContainerService/managedClusters/idjaks03",
  "identity": null,
  "identityProfile": null,
  "kubernetesVersion": "1.15.10",
  "linuxProfile": {
    "adminUsername": "azureuser",
    "ssh": {
      "publicKeys": [
        {
          "keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLzysqDWJpJ15Sho/NYk3ZHzC36LHw5zE1gyxhEQCH53BSbgA39XVXs/8TUjrkoVi6/YqlliYVg7TMQSjG51d3bLuelMh7IGIPGqSnT5rQe4x9ugdi+rLeFgP8+rf9aGYwkKMd98Aj2i847/deNLFApDoTtI54obZDuhu2ySW23BiQqV3lXuIe/0WwKpG0MFMoXU9JrygPXyNKbgJHR7pLR9U8WVLMF51fmUEeKb5johgrKeIrRMKBtiijaJO8NP6ULuOcQ+Z0VpUUbZZpIqeo8wqdMbDHkyFqh5a5Z1qrY5uDSpqcElqR5SiVesumUfMTBxz83/oprz23e747h8rP"
        }
      ]
    }
  },
  "location": "centralus",
  "maxAgentPools": 10,
  "name": "idjaks03",
  "networkProfile": {
    "dnsServiceIp": "10.0.0.10",
    "dockerBridgeCidr": "172.17.0.1/16",
    "loadBalancerProfile": {
      "allocatedOutboundPorts": null,
      "effectiveOutboundIps": [
        {
          "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/MC_idjaks03rg_idjaks03_centralus/providers/Microsoft.Network/publicIPAddresses/ef81397e-f752-4b8e-9b20-842aa72fbc9e",
          "resourceGroup": "MC_idjaks03rg_idjaks03_centralus"
        }
      ],
      "idleTimeoutInMinutes": null,
      "managedOutboundIps": {
        "count": 1
      },
      "outboundIpPrefixes": null,
      "outboundIps": null
    },
    "loadBalancerSku": "Standard",
    "networkPlugin": "kubenet",
    "networkPolicy": null,
    "outboundType": "loadBalancer",
    "podCidr": "10.244.0.0/16",
    "serviceCidr": "10.0.0.0/16"
  },
  "nodeResourceGroup": "MC_idjaks03rg_idjaks03_centralus",
  "privateFqdn": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "idjaks03rg",
  "servicePrincipalProfile": {
    "clientId": "9a5eac0f-ea43-4791-9dc1-d2226b35de7d",
    "secret": null
  },
  "tags": null,
  "type": "Microsoft.ContainerService/ManagedClusters",
  "windowsProfile": null
}

Verify they both stood up just fine:

builder@DESKTOP-JBA79RT:~$ az aks list -o table
Name      Location    ResourceGroup    KubernetesVersion    ProvisioningState    Fqdn
--------  ----------  ---------------  -------------------  -------------------  -----------------------------------------------------------
idjaks02  centralus   idjaks02rg       1.15.10              Succeeded            idjaks02-idjaks02rg-70b42e-bd99caae.hcp.centralus.azmk8s.io
idjaks03  centralus   idjaks03rg       1.15.10              Succeeded            idjaks03-idjaks03rg-70b42e-4ef6b7dc.hcp.centralus.azmk8s.io

Ingress with Nginx

We will now demonstrate ingress on both network types.  For Nginx, since the Microsoft docs are a bit out of date (at least with regards to Helm), let's use the current GitHub page for Nginx: https://kubernetes.github.io/ingress-nginx/deploy/

First, we need to log in to the cluster with admin credentials:

builder@DESKTOP-JBA79RT:~$ az aks get-credentials -n idjaks02 -g idjaks02rg --admin
Merged "idjaks02-admin" as current context in /home/builder/.kube/config

And if we want the sanity check that our kubectl is pointed at the right cluster, we can always check the provider ID (note the resource group):

builder@DESKTOP-JBA79RT:~$ kubectl get nodes -o json | jq '.items[0].spec | .providerID'
"azure:///subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/mc_idjaks02rg_idjaks02_centralus/providers/Microsoft.Compute/virtualMachineScaleSets/aks-nodepool1-13062035-vmss/virtualMachines/0"

First, we apply the mandatory manifest (which applies to all k8s providers except minikube):

builder@DESKTOP-JBA79RT:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created

Then the cloud-generic one (for AKS):

builder@DESKTOP-JBA79RT:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml
service/ingress-nginx created

We should see an Nginx controller now running:

builder@DESKTOP-JBA79RT:~$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-7fcf8df75d-vrwft   1/1     Running   0          102s

And we can see an LB was created for us:

builder@DESKTOP-JBA79RT:~$ kubectl get svc -n ingress-nginx
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.0.74.42   52.141.208.74   80:31389/TCP,443:30406/TCP   6m47s

Hitting that endpoint should show a 404 page:

AKS and Ingress, again
404 shows Nginx is up and replying
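
Or check it from the terminal - the IP is the EXTERNAL-IP from the service above, and a 404 from the default backend is the expected answer:

$ curl -I http://52.141.208.74/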

Next, let’s install a hello world app:

builder@DESKTOP-JBA79RT:~$ helm repo add azure-samples https://azure-samples.github.io/helm-charts/
"azure-samples" has been added to your repositories
builder@DESKTOP-JBA79RT:~$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "azure-samples" chart repository
Update Complete. ⎈ Happy Helming!⎈
builder@DESKTOP-JBA79RT:~$ kubectl create ns ingress-basic
namespace/ingress-basic created
builder@DESKTOP-JBA79RT:~$ helm install aks-helloworld azure-samples/aks-helloworld --namespace ingress-basic
NAME: aks-helloworld
LAST DEPLOYED: Fri Mar 27 11:29:40 2020
NAMESPACE: ingress-basic
STATUS: deployed
REVISION: 1
TEST SUITE: None

Let's get the pod and forward traffic to it:

builder@DESKTOP-JBA79RT:~$ kubectl get pods -n ingress-basic
NAME                                             READY   STATUS    RESTARTS   AGE
acs-helloworld-aks-helloworld-5d6f57bdb5-5nx95   1/1     Running   0          42s
builder@DESKTOP-JBA79RT:~$ kubectl port-forward acs-helloworld-aks-helloworld-5d6f57bdb5-5nx95 -n ingress-basic 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
Handling connection for 8080
AKS and Ingress, again

Create the Ingress and apply it.  Note we had to turn off validation since this API version doesn't recognize the pathType field:

builder@DESKTOP-JBA79RT:~$ cat simple-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: aks-helloworld
  namespace: ingress-basic
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /mysimplehelloworld
        pathType: Prefix
        backend:
          serviceName: aks-helloworld
          servicePort: 80

builder@DESKTOP-JBA79RT:~$ kubectl apply -f simple-ingress.yaml
error: error validating "simple-ingress.yaml": error validating data: ValidationError(Ingress.spec.rules[0].http.paths[0]): unknown field "pathType" in io.k8s.api.networking.v1beta1.HTTPIngressPath; if you choose to ignore these errors, turn validation off with --validate=false
builder@DESKTOP-JBA79RT:~$ !v
vi simple-ingress.yaml
builder@DESKTOP-JBA79RT:~$ kubectl apply -f simple-ingress.yaml --validate=false
ingress.networking.k8s.io/aks-helloworld created
AKS and Ingress, again
Properly serves traffic

This works, but no matter what we pass on the URL, it will just redirect to “/”

If we wanted to pass subpaths (more common with web services), we would do a rewrite:

builder@DESKTOP-JBA79RT:~$ cat simple-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: aks-helloworld
  namespace: ingress-basic
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /mysimplehelloworld(/|$)(.*)
        backend:
          serviceName: aks-helloworld
          servicePort: 80
builder@DESKTOP-JBA79RT:~$ kubectl apply -f simple-ingress.yaml --validate=false
ingress.networking.k8s.io/aks-helloworld configured

We can see that reflected now:

AKS and Ingress, again
Showing the base URL directing, but the Pod doesn't serve /asdf

I should point out that the Cluster-IP can be used by containers inside the cluster. We can test that with a local VNC host.

builder@DESKTOP-JBA79RT:~$ kubectl apply -f https://raw.githubusercontent.com/ConSol/docker-headless-vnc-container/master/kubernetes/kubernetes.headless-vnc.example.deployment.yaml
deployment.apps/headless-vnc created
$ kubectl port-forward headless-vnc-54f58c69f9-zxwbd 5901:5901
(password is “vncpassword”)
AKS and Ingress, again
Here we see traffic from Pod to Pod
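
A quicker way to prove the same point, without standing up a VNC pod, is a throwaway curl pod hitting the service's cluster DNS name (the standard <service>.<namespace>.svc.cluster.local form):

$ kubectl run curl-test -n ingress-basic --rm -it --restart=Never --image=curlimages/curl -- curl -s http://aks-helloworld.ingress-basic.svc.cluster.local/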

Another question we want answered: what happens when we upgrade? Which pods are going to change?

Let’s move the cluster to 1.16 (and throw caution to the wind)

AKS and Ingress, again
AKS and Ingress, again
Showing upgrade process

And the node pool

AKS and Ingress, again

We can use $ watch kubectl get nodes to see when the nodes flip over to the new version…

Every 2.0s: kubectl get nodes --all-namespaces                                                                                            DESKTOP-JBA79RT: Fri Mar 27 12:48:34 2020
NAME                                STATUS   ROLES   AGE    VERSION
aks-nodepool1-13062035-vmss000000   Ready    agent   117m   v1.15.10
aks-nodepool1-13062035-vmss000002   Ready    agent   117m   v1.15.10
builder@DESKTOP-JBA79RT:~$ kubectl get nodes --all-namespaces
NAME                                STATUS   ROLES   AGE    VERSION
aks-nodepool1-13062035-vmss000000   Ready    agent   130m   v1.16.7
aks-nodepool1-13062035-vmss000002   Ready    agent   130m   v1.16.7

Let's get the services

builder@DESKTOP-JBA79RT:~$ kubectl get svc --all-namespaces
NAMESPACE       NAME                        TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                         AGE
default         headless-vnc                NodePort       10.0.100.216   <none>          6901:32001/TCP,5901:32002/TCP   21m
default         kubernetes                  ClusterIP      10.0.0.1       <none>          443/TCP                         133m
ingress-basic   aks-helloworld              ClusterIP      10.0.128.75    <none>          80/TCP                          91m
ingress-nginx   ingress-nginx               LoadBalancer   10.0.74.42     52.141.208.74   80:31389/TCP,443:30406/TCP      106m
kube-system     dashboard-metrics-scraper   ClusterIP      10.0.121.188   <none>          8000/TCP                        13m
kube-system     kube-dns                    ClusterIP      10.0.0.10      <none>          53/UDP,53/TCP                   132m
kube-system     kubernetes-dashboard        ClusterIP      10.0.148.139   <none>          443/TCP                         12m
kube-system     metrics-server              ClusterIP      10.0.35.19     <none>          443/TCP                         132m
builder@DESKTOP-JBA79RT:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                             READY   STATUS              RESTARTS   AGE
default         headless-vnc-54f58c69f9-s8l9g                    0/1     ContainerCreating   0          66s
ingress-basic   acs-helloworld-aks-helloworld-5d6f57bdb5-vmwkt   1/1     Running             0          3m40s
ingress-nginx   nginx-ingress-controller-7fcf8df75d-fjpql        1/1     Running             0          3m40s
kube-system     azure-cni-networkmonitor-mj45l                   1/1     Running             0          130m
kube-system     azure-cni-networkmonitor-whvxf                   1/1     Running             0          130m
kube-system     azure-ip-masq-agent-5vqvl                        1/1     Running             0          130m
kube-system     azure-ip-masq-agent-x6xx7                        1/1     Running             0          130m
kube-system     azure-npm-cpk6m                                  1/1     Running             0          130m
kube-system     azure-npm-zwv9x                                  1/1     Running             0          130m
kube-system     coredns-698c77c5d7-4grnl                         1/1     Running             0          3m40s
kube-system     coredns-698c77c5d7-hqpc6                         0/1     ContainerCreating   0          66s
kube-system     coredns-autoscaler-7dcd5c4456-4xh28              1/1     Running             0          3m40s
kube-system     dashboard-metrics-scraper-69d57d47b8-24bkx       1/1     Running             0          3m40s
kube-system     kube-proxy-fxl25                                 1/1     Running             0          8m30s
kube-system     kube-proxy-ndc9v                                 1/1     Running             0          8m16s
kube-system     kubernetes-dashboard-7f7676f7b5-zg575            1/1     Running             0          66s
kube-system     metrics-server-ff58ffc74-v67sx                   1/1     Running             0          66s
kube-system     tunnelfront-557955c4df-98mp5                     1/1     Running             0          66s

We can actually see they kept the same cluster IPs - the services stay put.

Another way to think about it: if a scaling event moves the pods around, they’ll get new IPs, but the services stay put.  Let's delete a pod and verify that...

builder@DESKTOP-JBA79RT:~$ kubectl describe pod nginx-ingress-controller-7fcf8df75d-fjpql -n ingress-nginx | grep IP
IP:             10.240.0.14
builder@DESKTOP-JBA79RT:~$ kubectl delete pod nginx-ingress-controller-7fcf8df75d-fjpql -n ingress-nginx
pod "nginx-ingress-controller-7fcf8df75d-fjpql" deleted
builder@DESKTOP-JBA79RT:~$ kubectl get pods --all-namespaces | grep nginx
ingress-nginx   nginx-ingress-controller-7fcf8df75d-sr5t5        1/1     Running   0          43s
builder@DESKTOP-JBA79RT:~$ kubectl describe pod nginx-ingress-controller-7fcf8df75d-sr5t5 -n ingress-nginx | grep IP
IP:             10.240.0.10

Services still work

builder@DESKTOP-JBA79RT:~$ kubectl get svc --all-namespaces
NAMESPACE       NAME                        TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                         AGE
default         headless-vnc                NodePort       10.0.100.216   <none>          6901:32001/TCP,5901:32002/TCP   26m
default         kubernetes                  ClusterIP      10.0.0.1       <none>          443/TCP                         138m
ingress-basic   aks-helloworld              ClusterIP      10.0.128.75    <none>          80/TCP                          96m
ingress-nginx   ingress-nginx               LoadBalancer   10.0.74.42     52.141.208.74   80:31389/TCP,443:30406/TCP      111m
kube-system     dashboard-metrics-scraper   ClusterIP      10.0.121.188   <none>          8000/TCP                        18m
kube-system     kube-dns                    ClusterIP      10.0.0.10      <none>          53/UDP,53/TCP                   138m
kube-system     kubernetes-dashboard        ClusterIP      10.0.148.139   <none>          443/TCP                         18m
kube-system     metrics-server              ClusterIP      10.0.35.19     <none>          443/TCP                         138m
AKS and Ingress, again

Using Native Load Balancers directly

First, let’s fire up that sample service:

builder@DESKTOP-2SQ9NQM:~$ helm repo add azure-samples https://azure-samples.github.io/helm-charts/
"azure-samples" has been added to your repositories
builder@DESKTOP-2SQ9NQM:~$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "azure-samples" chart repository
...Successfully got an update from the "kedacore" chart repository
...Successfully got an update from the "banzaicloud-stable" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
builder@DESKTOP-2SQ9NQM:~$ kubectl create ns ingress-basic
namespace/ingress-basic created
builder@DESKTOP-2SQ9NQM:~$  helm install aks-helloworld azure-samples/aks-helloworld --namespace ingress-basic

NAME: aks-helloworld
LAST DEPLOYED: Fri Mar 27 19:11:55 2020
NAMESPACE: ingress-basic
STATUS: deployed
REVISION: 1
TEST SUITE: None

Let’s check it out

builder@DESKTOP-2SQ9NQM:~$ kubectl get pods -n ingress-basic
NAME                                             READY   STATUS    RESTARTS   AGE
acs-helloworld-aks-helloworld-5d6f57bdb5-kjx2t   1/1     Running   0          32m
builder@DESKTOP-2SQ9NQM:~$ kubectl port-forward acs-helloworld-aks-helloworld-5d6f57bdb5-kjx2t -n ingress-basic 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
AKS and Ingress, again

Now let's check our service

builder@DESKTOP-2SQ9NQM:~$ kubectl get svc --all-namespaces
NAMESPACE       NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default         kubernetes             ClusterIP   10.0.0.1      <none>        443/TCP         8h
ingress-basic   aks-helloworld         ClusterIP   10.0.17.88    <none>        80/TCP          60s
kube-system     kube-dns               ClusterIP   10.0.0.10     <none>        53/UDP,53/TCP   8h
kube-system     kubernetes-dashboard   ClusterIP   10.0.45.224   <none>        80/TCP          8h
kube-system     metrics-server         ClusterIP   10.0.15.11    <none>        443/TCP         8h

We can force the aks-helloworld service to attach to an internal load balancer.  As you recall, when we checked our nodes earlier they were 10.240.0.4, .5, and .6, so let’s pick a private IP in that CIDR.
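
(If you need to double-check, the node addresses show up in the INTERNAL-IP column of the wide node listing.)

$ kubectl get nodes -o wide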

builder@DESKTOP-2SQ9NQM:~$ cat ingress2.yaml
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25
  ports:
  - port: 80
  selector:
    app: aks-helloworld

We can force the internal load balancer onto the service by redefining the service outside of the chart (though Kubernetes will warn us):

builder@DESKTOP-2SQ9NQM:~$ kubectl apply -f ingress2.yaml -n ingress-basic
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/aks-helloworld configured

We can see that this did apply an internal load balancer:

builder@DESKTOP-2SQ9NQM:~$ kubectl get svc --all-namespaces
NAMESPACE       NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default         kubernetes             ClusterIP      10.0.0.1      <none>        443/TCP         8h
ingress-basic   aks-helloworld         LoadBalancer   10.0.17.88    10.240.0.25   80:30462/TCP    40m
kube-system     kube-dns               ClusterIP      10.0.0.10     <none>        53/UDP,53/TCP   8h
kube-system     kubernetes-dashboard   ClusterIP      10.0.45.224   <none>        80/TCP          8h
kube-system     metrics-server         ClusterIP      10.0.15.11    <none>        443/TCP         8h

We can also see a load balancer with that private IP in the resource group in the Azure portal:

AKS and Ingress, again

But since this is an internal LB, we’ll need a pod inside the cluster to check it.

builder@DESKTOP-2SQ9NQM:~$ kubectl apply -f https://raw.githubusercontent.com/ConSol/docker-headless-vnc-container/master/kubernetes/kubernetes.headless-vnc.example.deployment.yaml
deployment.apps/headless-vnc created
service/headless-vnc created
builder@DESKTOP-2SQ9NQM:~$ kubectl get pods --all-namespaces | grep vnc
default         headless-vnc-54f58c69f9-w5n26                    0/1     ContainerCreating   0          51s

But that didn’t work.

AKS and Ingress, again

The key issue is twofold: the namespace needs to match the app’s namespace, and the selector must match the labels on the pod, not the service.
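
A quick way to confirm the right selector value is to look at the labels the chart actually put on the pod (using the pod name from the listing above):

$ kubectl get pod acs-helloworld-aks-helloworld-5d6f57bdb5-kjx2t -n ingress-basic --show-labels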

builder@DESKTOP-2SQ9NQM:~$ cat ingress2.yaml
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
  namespace: ingress-basic
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  selector:
    app: acs-helloworld-aks-helloworld
builder@DESKTOP-2SQ9NQM:~$ kubectl apply -f ingress2.yaml
service/aks-helloworld created

builder@DESKTOP-2SQ9NQM:~$ kubectl get svc --all-namespaces
NAMESPACE       NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
default         echoserver-lb          LoadBalancer   10.0.146.124   10.240.0.8    8080:30702/TCP                  24m
default         headless-vnc           NodePort       10.0.180.196   <none>        6901:32001/TCP,5901:32002/TCP   85m
default         kubernetes             ClusterIP      10.0.0.1       <none>        443/TCP                         10h
ingress-basic   aks-helloworld         LoadBalancer   10.0.99.210    10.240.0.25   8080:31206/TCP                  2m47s
kube-system     kube-dns               ClusterIP      10.0.0.10      <none>        53/UDP,53/TCP                   10h
kube-system     kubernetes-dashboard   ClusterIP      10.0.45.224    <none>        80/TCP                          10h
kube-system     metrics-server         ClusterIP      10.0.15.11     <none>        443/TCP                         10h
AKS and Ingress, again
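
From any pod inside the cluster we can also hit the internal load balancer directly (a sketch, using a throwaway busybox pod and the IP and port from the listing above):

$ kubectl run lb-test --image=busybox --restart=Never -it --rm -- wget -qO- http://10.240.0.25:8080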

We can do the same for external IPs:

builder@DESKTOP-2SQ9NQM:~$ cat ingress2.yaml
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
  namespace: ingress-basic
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  selector:
    app: acs-helloworld-aks-helloworld

Now let's get the services

builder@DESKTOP-2SQ9NQM:~$ kubectl get svc --all-namespaces
NAMESPACE       NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
default         echoserver-lb          LoadBalancer   10.0.146.124   10.240.0.8    8080:30702/TCP                  37m
default         headless-vnc           NodePort       10.0.180.196   <none>        6901:32001/TCP,5901:32002/TCP   98m
default         kubernetes             ClusterIP      10.0.0.1       <none>        443/TCP                         10h
ingress-basic   aks-helloworld         LoadBalancer   10.0.9.144     13.86.3.38    8080:31193/TCP                  7m19s
kube-system     kube-dns               ClusterIP      10.0.0.10      <none>        53/UDP,53/TCP                   10h
kube-system     kubernetes-dashboard   ClusterIP      10.0.45.224    <none>        80/TCP                          10h
kube-system     metrics-server         ClusterIP      10.0.15.11     <none>        443/TCP                         10h
AKS and Ingress, again
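
Since this one has a public IP, we can verify it from anywhere (the IP and port come from the listing above):

$ curl -s -o /dev/null -w "%{http_code}\n" http://13.86.3.38:8080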

Cleaning up

builder@DESKTOP-2SQ9NQM:~$ az aks delete -n idjaks03 -g idjaks03rg
Are you sure you want to perform this operation? (y/n): y
 - Running 

Summary

There are multiple ways one can route ingress traffic into your cluster: you can serve traffic through Nginx or tie a service directly to the load balancer. The advantage of the Nginx approach is that it is more cloud agnostic - that setup will work with minor changes across various clouds and on-premise.  It does mean having an Nginx controller serving up Layer 7 traffic, but it can also serve as a TLS termination endpoint.

Using the Azure Load Balancer via a Kubernetes service with annotations cuts out the Nginx middleman.  It should serve Layer 4 and Layer 7 traffic equally well.  However, the annotations are Azure specific, so they will need to change per provider (not too hard; for instance, the AWS documentation has things like service.beta.kubernetes.io/aws-load-balancer-type: nlb).
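
For comparison, here is a minimal sketch of what the same direct-to-load-balancer service might look like on EKS, swapping the Azure annotation for the AWS one mentioned above (everything else carried over from the manifest earlier in this post):

apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
  namespace: ingress-basic
  annotations:
    # AWS equivalent of the Azure-specific annotation (from the AWS documentation)
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  selector:
    app: acs-helloworld-aks-helloworld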

The key things to recall are to verify your spec selectors and to match namespaces.

]]>