Automating AKS Deployments like a Boss: Part 5 (AppD)

Published: Apr 7, 2019 by Isaac Johnson

In Part 4 we explored setting up EFK, both OSS and commercial. But when considering commercial offerings, there are other great options as well, including AppDynamics (AppD).

We are going to use the same cluster from before, so if you haven’t set up an AKS auto-scaling cluster, please zip back to Part 3 and follow the guide there.
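For orientation, the cluster creation from Part 3 boils down to something like the following. This is a sketch only, not the exact command: the resource group and cluster names match what we use later, the region is a placeholder, and the autoscaler flags were preview features at the time.

# sketch: create an auto-scaling AKS cluster (see Part 3 for the full walkthrough)
az group create --name idj-aks-monitoring --location eastus
az aks create --resource-group idj-aks-monitoring --name idj-aks-monitoring-aks1 \
  --node-count 1 --enable-cluster-autoscaler --min-count 1 --max-count 3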

First, we need to build the AppD Docker image. We could also use a provided one, but chances are you are doing this for a production deployment with a container registry available.

Clone the repo: https://github.com/michaelenglert/docker.appd_agents

Next, following its readme.md, we need to download the Machine Agent and the Java Agent from the downloads page (https://download.appdynamics.com/download/):

Download the Machine Agent
Download the Java Agent for the Sun and JRockit JVM (which I found on page three of the downloads)

Now let’s clone the repo and then copy the downloads into place in preparation for building the image:

AHD-MBP13-048:~ isaac.johnson$ cd Workspaces/
AHD-MBP13-048:Workspaces isaac.johnson$ git clone https://github.com/michaelenglert/docker.appd_agents.git
Cloning into 'docker.appd_agents'...
remote: Enumerating objects: 66, done.
remote: Counting objects: 100% (66/66), done.
remote: Compressing objects: 100% (48/48), done.
remote: Total 257 (delta 28), reused 53 (delta 17), pack-reused 191
Receiving objects: 100% (257/257), 34.64 KiB | 4.95 MiB/s, done.
Resolving deltas: 100% (119/119), done.
AHD-MBP13-048:Workspaces isaac.johnson$ cd docker.appd_agents/
AHD-MBP13-048:docker.appd_agents isaac.johnson$ cd docker-offline/
AHD-MBP13-048:docker-offline isaac.johnson$ mv ~/Downloads/AppServerAgent-4.5.8.25346.zip java-agent.zip
AHD-MBP13-048:docker-offline isaac.johnson$ mv ~/Downloads/MachineAgent-4.5.9.2096.zip machine-agent.zip

Next, from the docker-offline build folder in the repo, let’s create the Docker image.

AHD-MBP13-048:docker-offline isaac.johnson$ docker build -t myappdagent .
Sending build context to Docker daemon 108.5MB
Step 1/22 : FROM alpine AS builder
latest: Pulling from library/alpine
8e402f1a9c57: Pull complete 
Digest: sha256:644fcb1a676b5165371437feaa922943aaf7afcfa8bfee4472f6860aad1ef2a0
Status: Downloaded newer image for alpine:latest
 ---> 5cb3aa00f899
Step 2/22 : LABEL maintainer="Michael Englert <michi.eng@gmail.com>"
 ---> Running in 01d4dff75acd
Removing intermediate container 01d4dff75acd
 ---> 2da1df4fd5e0
Step 3/22 : ENV APPD_HOME="/opt/appdynamics"
 ---> Running in 9bc343fe46dc
Removing intermediate container 9bc343fe46dc
 ---> 2539c49eac6c
Step 4/22 : ADD agents.sh ${APPD_HOME}/
 ---> 183aab716af9
Step 5/22 : ADD machine-agent.zip /tmp/
 ---> 0b0befbd19df
Step 6/22 : ADD java-agent.zip /tmp/
 ---> f39e2e61577e
Step 7/22 : RUN apk update
 ---> Running in 9c0eaa1408d6
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
v3.9.2-71-g2c18af98e5 [http://dl-cdn.alpinelinux.org/alpine/v3.9/main]
v3.9.2-71-g2c18af98e5 [http://dl-cdn.alpinelinux.org/alpine/v3.9/community]
OK: 9758 distinct packages available
Removing intermediate container 9c0eaa1408d6
 ---> a91c2e21fdd8
Step 8/22 : RUN apk upgrade
 ---> Running in 661a6dc825a8
(1/4) Upgrading musl (1.1.20-r3 -> 1.1.20-r4)
(2/4) Upgrading libcrypto1.1 (1.1.1a-r1 -> 1.1.1b-r1)
(3/4) Upgrading libssl1.1 (1.1.1a-r1 -> 1.1.1b-r1)
(4/4) Upgrading musl-utils (1.1.20-r3 -> 1.1.20-r4)
Executing busybox-1.29.3-r10.trigger
OK: 6 MiB in 14 packages
Removing intermediate container 661a6dc825a8
 ---> d70f3ea1ed62
Step 9/22 : RUN apk add unzip
 ---> Running in 0368eee00152
(1/1) Installing unzip (6.0-r4)
Executing busybox-1.29.3-r10.trigger
OK: 6 MiB in 15 packages
Removing intermediate container 0368eee00152
 ---> 9261ace50b09
Step 10/22 : RUN chmod +x ${APPD_HOME}/agents.sh
 ---> Running in 343187a982d7
Removing intermediate container 343187a982d7
 ---> d52b3de1f108
Step 11/22 : RUN mkdir -p ${APPD_HOME}/machine-agent
 ---> Running in 83e22845d957
Removing intermediate container 83e22845d957
 ---> 1adb707b49f7
Step 12/22 : RUN mkdir -p ${APPD_HOME}/java-agent
 ---> Running in eab558c08eba
Removing intermediate container eab558c08eba
 ---> 6c595c24a441
Step 13/22 : RUN mkdir -p ${APPD_HOME}/java-agenttemp
 ---> Running in e61f3a381bd8
Removing intermediate container e61f3a381bd8
 ---> 4632a320586d
Step 14/22 : RUN unzip /tmp/machine-agent.zip -d ${APPD_HOME}/machine-agent
 ---> Running in 30e1763ae21f
... SNIP ...
Need to get 559 kB of archives.
After this operation, 1789 kB of additional disk space will be used.
Get:1 http://cdn-fastly.deb.debian.org/debian stretch/main amd64 libprocps6 amd64 2:3.3.12-3+deb9u1 [58.5 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian stretch/main amd64 libncurses5 amd64 6.0+20161126-1+deb9u2 [93.4 kB]
Get:3 http://cdn-fastly.deb.debian.org/debian stretch/main amd64 procps amd64 2:3.3.12-3+deb9u1 [250 kB]
Get:4 http://cdn-fastly.deb.debian.org/debian stretch/main amd64 libgpm2 amd64 1.20.4-6.2+b1 [34.2 kB]
Get:5 http://cdn-fastly.deb.debian.org/debian stretch/main amd64 psmisc amd64 22.21-2.1+b2 [123 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 559 kB in 0s (1513 kB/s)
Selecting previously unselected package libprocps6:amd64.
(Reading database ... 7771 files and directories currently installed.)
Preparing to unpack .../libprocps6_2%3a3.3.12-3+deb9u1_amd64.deb ...
Unpacking libprocps6:amd64 (2:3.3.12-3+deb9u1) ...
Selecting previously unselected package libncurses5:amd64.
Preparing to unpack .../libncurses5_6.0+20161126-1+deb9u2_amd64.deb ...
Unpacking libncurses5:amd64 (6.0+20161126-1+deb9u2) ...
Selecting previously unselected package procps.
Preparing to unpack .../procps_2%3a3.3.12-3+deb9u1_amd64.deb ...
Unpacking procps (2:3.3.12-3+deb9u1) ...
Selecting previously unselected package libgpm2:amd64.
Preparing to unpack .../libgpm2_1.20.4-6.2+b1_amd64.deb ...
Unpacking libgpm2:amd64 (1.20.4-6.2+b1) ...
Selecting previously unselected package psmisc.
Preparing to unpack .../psmisc_22.21-2.1+b2_amd64.deb ...
Unpacking psmisc (22.21-2.1+b2) ...
Setting up libncurses5:amd64 (6.0+20161126-1+deb9u2) ...
Setting up psmisc (22.21-2.1+b2) ...
Setting up libgpm2:amd64 (1.20.4-6.2+b1) ...
Setting up libprocps6:amd64 (2:3.3.12-3+deb9u1) ...
Setting up procps (2:3.3.12-3+deb9u1) ...
update-alternatives: using /usr/bin/w.procps to provide /usr/bin/w (w) in auto mode
update-alternatives: warning: skip creation of /usr/share/man/man1/w.1.gz because associated file /usr/share/man/man1/w.procps.1.gz (of link group w) doesn't exist
Processing triggers for libc-bin (2.24-11+deb9u4) ...
Removing intermediate container 65fcb0581417
 ---> aba7de02ef26
Step 22/22 : CMD ["/bin/bash", "-c", "${APPD_HOME}/agents.sh"]
 ---> Running in 4f0777c1650a
Removing intermediate container 4f0777c1650a
 ---> 19ca06e2e21e
Successfully built 19ca06e2e21e
Successfully tagged myappdagent:latest
AHD-MBP13-048:docker-offline isaac.johnson$ 

At this point you could docker tag and docker push to your container registry (see steps 3 and 4 here). I am not going to do that at the moment, but you are welcome to replace the image: reference later with your own.
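If you do want to push to your own registry, the flow is roughly the following (a sketch; myacr is a hypothetical Azure Container Registry name):

# log in to a hypothetical ACR named myacr
az acr login --name myacr

# tag the locally built image for that registry and push it
docker tag myappdagent myacr.azurecr.io/myappdagent:latest
docker push myacr.azurecr.io/myappdagent:latest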

Next, we need some AppD settings that, frankly, I could only find in the download zip of an agent one might install on a VM.

Download an agent through the AppD portal:

Agent Download page

You’ll find the settings you need in the xml file:

vi ~/Downloads/machineagent-bundle-64bit-linux-4.5.9.2096/conf/controller-info.xml

Copy the settings you see there into the local k8s YAML file:

AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ vi /Users/isaac.johnson/Workspaces/kubernetes.appd-agents/appdynamics-agent.yaml 

It’s here, by the way, that we can also change the Docker image to one our cluster can see (e.g. image: “michaelenglert/appd-agent:latest”).

While I will use the author’s image for this tutorial, I could likely set up an ACR and create a pull secret in the cluster (or launch this YAML through an AzDO pipeline).
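For reference, wiring the cluster to a private ACR might look something like this (a sketch with placeholder names and credentials; the secret name acr-auth is arbitrary):

# hypothetical: create a pull secret for an ACR named myacr
kubectl create secret docker-registry acr-auth \
  --docker-server=myacr.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password> \
  --docker-email=<any-email>

# then reference "acr-auth" under imagePullSecrets in the DaemonSet spec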

The YAML file needs double quotes on all values; otherwise you’ll get errors launching the DaemonSet:

my settings used in the daemonset yaml

I only show you the above because the repo suggested true could be left bare, and in testing I realized that all values need quotes.
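To illustrate the quoting, the env section of the DaemonSet ends up shaped like this. This is a sketch: the variable names are the standard AppD machine agent ones and the values are placeholders; check the repo’s appdynamics-agent.yaml for the exact keys.

env:
  - name: APPDYNAMICS_CONTROLLER_HOST_NAME
    value: "mycompany.saas.appdynamics.com"
  - name: APPDYNAMICS_CONTROLLER_PORT
    value: "443"
  - name: APPDYNAMICS_CONTROLLER_SSL_ENABLED
    value: "true"    # even booleans must be quoted
  - name: APPDYNAMICS_AGENT_ACCOUNT_NAME
    value: "mycompany"
  - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
    value: "<access-key-from-controller-info.xml>"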

Launching:

First, we need to log in to our cluster; we can check for pods to verify the connection is working. Since we are re-using some names, I needed to remove the old kube config file.

AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ rm ~/.kube/config 
AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ az aks get-credentials --name idj-aks-monitoring-aks1 --resource-group idj-aks-monitoring 
Merged "idj-aks-monitoring-aks1" as current context in /Users/isaac.johnson/.kube/config
AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ kubectl get pods
No resources found.
AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-754f947b4-4jqrj                 1/1     Running   0          109m
kube-system   coredns-754f947b4-s5kbw                 1/1     Running   0          103m
kube-system   coredns-autoscaler-6fcdb7d64-6gqvc      1/1     Running   0          109m
kube-system   heapster-5fb7488d97-ccsdx               2/2     Running   0          103m
kube-system   kube-proxy-hqhhb                        1/1     Running   0          102m
kube-system   kube-svc-redirect-rwhrl                 2/2     Running   0          103m
kube-system   kubernetes-dashboard-847bb4ddc6-n9bt8   1/1     Running   0          109m
kube-system   metrics-server-7b97f9cd9-472h9          1/1     Running   0          109m
kube-system   tunnelfront-8488dbc967-l68rm            1/1     Running   0          109m

The volume mounts in the main YAML gave me issues, but launching the minimal variant worked:

AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ kubectl create -f appdynamics-agent_minimal.yaml 
daemonset.extensions/appdynamics-agent created
AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ kubectl get pods --all-namespaces
NAMESPACE   NAME                      READY   STATUS    RESTARTS   AGE
default     appdynamics-agent-9ltk6   1/1     Running   0          37s

Installing Helm

Note: with RBAC, we need to set up the service account and role binding; otherwise Tiller will say it’s installed, but a subsequent check with helm version will show that it’s not there.

AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find tiller
AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller 
clusterrolebinding.rbac.authorization.k8s.io/tiller created
AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/isaac.johnson/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ 

Installing a simple guestbook app

AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ helm repo add ibm-repo https://ibm.github.io/helm101/
"ibm-repo" has been added to your repositories
AHD-MBP13-048:kubernetes.appd-agents isaac.johnson$ helm install ibm-repo/guestbook
NAME: ponderous-indri
LAST DEPLOYED: Sat Apr 6 19:09:22 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
ponderous-indri-guestbook   0/2     2            0           1s
redis-master                0/1     1            0           1s
redis-slave                 0/2     2            0           1s

==> v1/Pod(related)
NAME                                        READY   STATUS              RESTARTS   AGE
ponderous-indri-guestbook-64cd64b4f-8v7db   0/1     ContainerCreating   0          1s
ponderous-indri-guestbook-64cd64b4f-cjc77   0/1     ContainerCreating   0          1s
redis-master-7b5cc58fc8-r79k5               0/1     ContainerCreating   0          1s
redis-slave-5db5dcfdfd-qg69k                0/1     ContainerCreating   0          1s
redis-slave-5db5dcfdfd-qwrhr                0/1     ContainerCreating   0          1s

==> v1/Service
NAME                        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
ponderous-indri-guestbook   LoadBalancer   10.0.207.27    <pending>     3000:30581/TCP   1s
redis-master                ClusterIP      10.0.101.222   <none>        6379/TCP         1s
redis-slave                 ClusterIP      10.0.128.193   <none>        6379/TCP         1s


NOTES:
1. Get the application URL by running these commands:
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status of by running 'kubectl get svc -w ponderous-indri-guestbook --namespace default'
  export SERVICE_IP=$(kubectl get svc --namespace default ponderous-indri-guestbook -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:3000

Let’s head back to AppDynamics and see what has been collected:

We can see our node is already showing data:

An AKS Node dashboard

If we head over to Containers, we can see all the containers presently on this node:

Containers listed on a node

Following steps from Part 3, let’s launch the dashboard, find our guestbook and scale it up a bit:

Scaling a guestbook to 80 pods
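If you prefer the CLI to the dashboard, the same scale-up works with kubectl, using the deployment name Helm generated above:

# scale the guestbook deployment to 80 replicas
kubectl scale deployment ponderous-indri-guestbook --replicas=80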

As we scale up, we can see the load reflected in the standard Kubernetes dashboard:

Node statistics in the Kubernetes Dashboard

We can also see the load reflected in the AppDynamics server dashboard for the AKS node:

Scaling to 80 pods, load reflected

Next, if we haven’t set up our AKS to autoscale, let’s head to the VMSS behind the cluster and force a scale-up to 3 nodes:

Forcing a scaling condition to scale out to 3 nodes
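For the CLI-inclined, the same forced scale-out looks roughly like this. It’s a sketch: AKS keeps the node resources in an MC_-prefixed resource group, the region segment is a placeholder, and so is the scale set name.

# list the scale sets in the cluster's node resource group
az vmss list --resource-group MC_idj-aks-monitoring_idj-aks-monitoring-aks1_<region> -o table

# force the instance count to 3 (placeholder scale set name)
az vmss scale --resource-group MC_idj-aks-monitoring_idj-aks-monitoring-aks1_<region> \
  --name aks-nodepool1-00000000-vmss --new-capacity 3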

Side Note: As I was waiting for our nodes to come online, I was impressed at how well our single-node cluster was handling the burden of 80 guestbook apps.  The memory had yet to exceed 50%:

The little cluster that could (note the number of network interfaces)

After the scaling event completed, we can see 3 nodes now in the Kubernetes Dashboard:

Nodes in the kubernetes dashboard

So too can we track them in AppDynamics (if we move fast enough; see the note later):

Servers listed in AppD

One of the nice things we can confirm is that as our cluster scales out and in, the DaemonSet will spread our AppD agent across the cluster as a whole.
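Since the agent runs as a DaemonSet, a quick CLI check confirms the spread (the DaemonSet name comes from the create output earlier):

# desired/current counts should match the node count
kubectl get daemonset appdynamics-agent

# -o wide shows each agent pod landing on a different node
kubectl get pods -o wide | grep appdynamics-agent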

We can see the agent reflected in the Kubernetes dashboard here:

Scale out in the k8s dashboard

And in AppD as well:

A new server in AppD

Note: I have a minor issue with AppDynamics and hope it’s either user error or a transient problem.

Shortly after a machine is shown in the server list, it disappears.  So while I see just one server now in the list (and none associated with this AKS):

Servers list showing only one real VM and none of my AKS nodes.

While I cannot reference them in the servers list, I can clearly get to the data from the running nodes (from my Home/History). If I get a resolution later, I’ll update this blog post.

I should also note that when a node is removed due to a scale-in policy, you can still see data in AppD (provided you have a saved link somewhere):

A deprecated agent after a scale-in event

Summary

So far we have explored Azure’s own monitoring offerings, EFK, and now AppDynamics.  Clearly the Kubernetes ecosystem is rich with options.  And while I did encounter some minor issues looking up existing servers in AppD, I’m certain those issues will be resolved in the future.

I would like to see more unified container indexing in AppD.  We also did not explore adding agents into the containers themselves, which is another way AppD can monitor applications in AKS.

k8s tutorial

Have something to add? Feedback? Try our new forums

Isaac Johnson


Cloud Solutions Architect

Isaac is a CSA and DevOps engineer who focuses on cloud migrations and devops processes. He also is a dad to three wonderful daughters (hence the references to Princess King sprinkled throughout the blog).
