Distributed Artifactory and Kubernetes

I sat in a meeting recently discussing the merits of an artifact deployment strategy.  One of the leads of the team on the phone wasn’t sure if this strategy would actually work and asked, “Why don’t you just write a blog entry about it?”  This took me aback - I was honored, both because I think they are pretty sharp DevOps engineers and because clearly they have read this blog.  While I rarely take requests directly, in this case, challenge accepted, “Futures” team!

One question often asked about CI/CD is how to properly distribute artifacts.  It seems like a fairly obvious problem, and indeed there have been solutions going back to the early days of shared volumes and NFS.  I recall solving this many times at a variety of companies in my past using a distributed Subversion network, which handled the behind-the-scenes syncing to remote repositories, but at the cost of an ever-growing versioned object base/repository.  And clearly this doesn’t scale; I recall my colleague Chad pinging me one day, months after I left a site, to say the artifact svn repo had exceeded half a TB.  Ironically, that company had purchased Artifactory, but it was just used for jars and Maven dependencies.

Today artifact storage has matured, with solid offerings from the leaders JFrog Artifactory and Sonatype Nexus as well as challengers like Microsoft Azure Artifacts (from Azure DevOps/VSTS) and Inedo ProGet.  Some folks are even comfortable just versioning their binaries directly in GitHub, AWS S3 or Azure Blob storage.  I personally have used Dropbox and S3 to distribute binaries.

The key issue for companies who wish to track binaries in a secure and safe way revolves around the following questions:

  1. Who supports this? (i.e. open source forums, vendor support, enterprise agreements?)
  2. How do we ensure the artifacts are secured? (e.g. access policies, federated identity, MD5 checksums, logs)
  3. How do we distribute these in the multi-cloud/hybrid-cloud safely?

If your business has PHI or PII, ensuring artifacts are secured is that much more important.

From a DevOps perspective, these are the key features we need to satisfy that goal:

  1. Idempotent pushes - files aren’t silently overwritten and carry checksums to verify integrity - uploads succeed completely or not at all (see the curl sketch after this list).
  2. Artifacts are organized in packages with revisions - releasable things persist for as long as required by the business.
  3. Artifacts are intelligently distributed - this is key - we don’t necessarily want *all* packages distributed to all locations; often we want only the subset that matters for a location.
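
As a concrete (and entirely hypothetical) sketch of point 1: Artifactory, for example, supports checksum-verified deploys over its REST API - you compute the checksums locally, send them as headers, and the server rejects the upload if the bytes don’t match.  The host, repo and credentials below are made up:

FILE=myapp-1.2.3.tar.gz
SHA1=$(shasum -a 1 "$FILE" | awk '{print $1}')   # sha1sum on Linux
MD5=$(md5 -q "$FILE")                            # md5sum on Linux

# Artifactory checks these headers against the uploaded bytes and
# fails the deploy if they don't match
curl -u admin:"$ART_PASS" \
  -H "X-Checksum-Sha1: $SHA1" \
  -H "X-Checksum-Md5: $MD5" \
  -T "$FILE" \
  "https://artifactory.example.com/artifactory/libs-generic-local/myapp/1.2.3/$FILE"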

A Strategy with JFrog Artifactory

Artifactory can host Docker containers, which makes it a possible solution for a Kubernetes environment.  The commercial versions include XRay for artifact scanning.  The “replication” feature of Artifactory (akin to the Smart Proxy feature of Nexus) can proactively sync a repository to downstream instances.
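
As a rough sketch of what that replication config looks like under the hood, push replication for a repo can also be set via the REST API rather than the UI.  Field names may vary by Artifactory version, and the hosts/credentials here are made up (and, as we’ll see later, this needs more than a Pro license):

# PUT the replication config for a hypothetical "docker-local" repo
curl -u admin:"$ART_PASS" -X PUT \
  -H "Content-Type: application/json" \
  "https://source-artifactory.example.com/artifactory/api/replications/docker-local" \
  -d '{
        "url": "https://target-artifactory.example.com/artifactory/docker-local",
        "username": "replicator",
        "password": "REPLACE_ME",
        "enabled": true,
        "cronExp": "0 0 0/4 * * ?",
        "enableEventReplication": true
      }'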

Let’s get started

First, let’s spin up a cluster in LKE to host our chart.
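
If you prefer the CLI over the Cloud Manager, cluster creation looks roughly like this - a sketch, as flags and node types may differ slightly by linode-cli version:

linode-cli lke cluster-create \
  --label ArtifactoryCluster \
  --region us-central \
  --k8s_version 1.16 \
  --node_pools.type g6-standard-2 \
  --node_pools.count 2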

We can now download the config and test it

$ linode-cli lke clusters-list
┌─────┬────────────────────┬────────────┐
│ id  │ label              │ region     │
├─────┼────────────────────┼────────────┤
│ 428 │ ArtifactoryCluster │ us-central │
└─────┴────────────────────┴────────────┘
$ linode-cli lke kubeconfig-view 428 | tail -n2 | head -n1 | sed 's/.\{8\}$//' | sed 's/^.\{8\}//' | base64 --decode > ~/.kube/config
$ kubectl get nodes
NAME                      STATUS   ROLES    AGE     VERSION
lke428-542-5de17fd81370   Ready    <none>   3m13s   v1.16.2
lke428-542-5de17fd817aa   Ready    <none>   3m5s    v1.16.2

Pro-tip: Installing the linode-cli.  While there does exist a “CLI” you can install with apt-get install linode-cli, that’s just an older CLI (“linode”) for managing some core features.  To install the linode-cli we use below, first ensure you have pip installed (sudo apt-get install python-pip) and then install with pip (sudo pip install linode-cli).  You can also download and build from source (GitHub).

Set up Helm

As you may recall, Helm/Tiller 2.x doesn’t work out of the box with K8s 1.16, so we have to install Tiller manually:

$ cat ~/Documents/rbac-config.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
$ kubectl create -f /Users/johnsi10/Documents/rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created


$ cat helm_init.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "200"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}

$ kubectl apply -f helm_init.yaml 
deployment.apps/tiller-deploy created
service/tiller-deploy created

$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
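
For reference, the helm_init.yaml above is essentially what the commonly cited workaround produces - render the helm init manifests and patch the deprecated apiVersion.  A sketch, assuming Helm 2.14.x:

helm init --service-account tiller --history-max 200 \
  --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' \
  --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' > helm_init.yaml
kubectl apply -f helm_init.yaml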

Install Artifactory

We want to install Artifactory on this cluster.  We’ll first add the JFrog repo and update, then install the chart.

$ helm repo add jfrog https://charts.jfrog.io
"jfrog" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "adwerx" chart repository
...Successfully got an update from the "jfrog" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
$ helm install --name my-release jfrog/artifactory
NAME:   my-release
LAST DEPLOYED: Fri Nov 29 14:40:45 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                           DATA  AGE
my-release-artifactory-installer-info          1     2s
my-release-artifactory-nginx-artifactory-conf  1     2s
my-release-artifactory-nginx-conf              1     2s
my-release-postgresql-configuration            1     2s

==> v1/Deployment
NAME                          READY  UP-TO-DATE  AVAILABLE  AGE
my-release-artifactory-nginx  0/1    1           0          2s

==> v1/NetworkPolicy
NAME                                              AGE
my-release-artifactory-artifactory-networkpolicy  1s

==> v1/Pod(related)
NAME                                         READY  STATUS    RESTARTS  AGE
my-release-artifactory-0                     0/1    Pending   0         1s
my-release-artifactory-nginx-d44f4567-flrmg  0/1    Init:0/1  0         1s
my-release-postgresql-0                      0/1    Pending   0         1s

==> v1/Role
NAME                    AGE
my-release-artifactory  2s

==> v1/RoleBinding
NAME                    AGE
my-release-artifactory  2s

==> v1/Secret
NAME                                      TYPE               DATA  AGE
my-release-artifactory                    Opaque             0     2s
my-release-artifactory-binarystore        Opaque             1     2s
my-release-artifactory-nginx-certificate  kubernetes.io/tls  2     2s
my-release-postgresql                     Opaque             1     2s

==> v1/Service
NAME                            TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
my-release-artifactory          ClusterIP     10.128.198.96  <none>       8081/TCP                    2s
my-release-artifactory-nginx    LoadBalancer  10.128.63.244  45.79.61.91  80:30528/TCP,443:31497/TCP  2s
my-release-postgresql           ClusterIP     10.128.161.84  <none>       5432/TCP                    2s
my-release-postgresql-headless  ClusterIP     None           <none>       5432/TCP                    2s

==> v1/ServiceAccount
NAME                    SECRETS  AGE
my-release-artifactory  1        2s

==> v1/StatefulSet
NAME                    READY  AGE
my-release-artifactory  0/1    1s
my-release-postgresql   0/1    1s


NOTES:
Congratulations. You have just deployed JFrog Artifactory!

1. Get the Artifactory URL by running these commands:

   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
         You can watch the status of the service by running 'kubectl get svc -w my-release-artifactory-nginx'
   export SERVICE_IP=$(kubectl get svc --namespace default my-release-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
   echo http://$SERVICE_IP/

2. Open Artifactory in your browser
   Default credential for Artifactory:
   user: admin
   password: password

We can get the IP right away, but we need to wait for the pods to come up:

$ export SERVICE_IP=$(kubectl get svc --namespace default my-release-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo http://$SERVICE_IP/
http://45.79.61.91/

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY   STATUS            RESTARTS   AGE
default       my-release-artifactory-0                      0/1     PodInitializing   0          94s
default       my-release-artifactory-nginx-d44f4567-flrmg   0/1     Running           0          94s
default       my-release-postgresql-0                       1/1     Running           0          94s

Now it will come up, but be aware that this is not the OSS version and you’ll need to get a demo key from the website: https://jfrog.com/artifactory/free-trial/

If you don’t apply a license you’ll end up with an instance running with “admin/password” and no way to modify the password, which clearly isn’t ideal.
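
If you do have a license in hand, the chart can take it at install time so you never land in that state.  A hedged sketch - the exact value names depend on the chart version, so check helm inspect values jfrog/artifactory before relying on them:

# apply the license to the existing release (Helm 2 syntax)
helm upgrade my-release jfrog/artifactory \
  --set artifactory.license.licenseKey="$(cat artifactory.lic)"

Newer chart versions can also reference a pre-created Kubernetes Secret instead of passing the key inline.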

So you can delete the deployment if you want to think it over rather than leave a running instance out there on a public IP:

$ kubectl get deployments
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
my-release-artifactory-nginx   1/1     1            1           9m41s
$ kubectl delete deployment my-release-artifactory-nginx 
deployment.apps "my-release-artifactory-nginx" deleted

Pro-Tip: With the Linode CLI as it stands today, you can use this one-liner to get the kubeconfig:

$  linode-cli lke kubeconfig-view `linode-cli lke clusters-list | tail -n 2 | head -n1 | sed 's/^.\{8\}//' | sed 's/ .*//'` | tail -n2 | head -n1 | sed 's/.\{8\}$//' | sed 's/^.\{8\}//' | base64 --decode > ~/.kube/config
$ kubectl get nodes
NAME                      STATUS   ROLES    AGE     VERSION
lke433-548-5de29a9a7c2d   Ready    <none>   4m44s   v1.16.2
lke433-548-5de29a9a802e   Ready    <none>   4m40s   v1.16.2

Once we have a license applied (you can get a demo license via automated email, provided you don’t choose multi-site in your request), we can see more options when we choose to create a new repo (/admin/repository/local/new)

Experiments:

The following tracks the ways in which I tried to set up syncing.  In the end, only Artifactory-to-Artifactory replication worked, and even that came with caveats.

ACR: Setting up a destination container registry in Azure:

Let’s take a pause and create an ACR in Azure to prove container syncing works.
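
If you’d rather script it than click through the portal, the setup is roughly as follows (the resource group name is made up; the registry name matches the one used below):

az group create --name lkedemo-rg --location centralus
az acr create --resource-group lkedemo-rg --name lkedemocr \
  --sku Basic --admin-enabled true
# admin user/password used by Artifactory and docker login below
az acr credential show --name lkedemocr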

Once created, we’ll need the admin user for pushing from Artifactory:

We should now be able to use this repo with:

Url: https://lkedemocr.azurecr.io
User: lkedemocr
Pass: h2gCMnqTgh67dpX3ZqJ0OwgKdN=cYAL5

Verify we can reach this repo:

$ az acr login --name lkedemocr
Unable to get AAD authorization tokens with message: An error occurred: CONNECTIVITY_REFRESH_TOKEN_ERROR
Access to registry 'lkedemocr.azurecr.io' was denied. Response code: 401. Please try running 'az login' again to refresh permissions.
Unable to get admin user credentials with message: The resource with name 'lkedemocr' and type 'Microsoft.ContainerRegistry/registries' could not be found in subscription 'xxxxxxxx’'.
Username: lkedemocr
Password: 
Login Succeeded
$ docker images | head -n5
REPOSITORY                                                   TAG                         IMAGE ID            CREATED             SIZE
freeboard                                                    latest                      3162f871ef95        2 weeks ago         928MB
freeboard1                                                   latest                      ff169793756b        2 weeks ago         928MB
uscdaksdev5cr.azurecr.io/skaffold-project/skaffold-example   v1.0.0-21-g4dd6b59e-dirty   b85e9ef75d32        2 weeks ago         7.55MB
gcr.io/api-project-799042244963/skaffold-example             v1.0.0-21-g4dd6b59e-dirty   b85e9ef75d32        2 weeks ago         7.55MB
$ docker tag freeboard:latest lkedemocr.azurecr.io/freeboard:testing
$ docker push lkedemocr.azurecr.io/freeboard:testing
The push refers to repository [lkedemocr.azurecr.io/freeboard]
b9be68ae8c02: Pushed 
7c4e4e0f1ff3: Pushed 
a0bbb150aa4a: Pushing [================================================>  ]  17.18MB/17.77MB
16e71426ee51: Pushing [==================================================>]  19.24MB

Back to Artifactory…

Create a new local docker repo:

And the rub ends up being that while we can put in a valid path, username and password, Artifactory rejects pushing to ACR:

Let’s try a JFrog Hosted SaaS instance:

I tried both directions - here, replicating from the SaaS instance to my own:

Artifactory to Artifactory syncing (what worked)

This took me a bit; in the end, I realized the URL used both by the SaaS offering (idjjfrogsastest-mysasdocker.jfrog.io) and by the containerized instance in k8s (http://45.79.61.98:80) is *NOT* the URL they want for replication.  They want the “full path URL” you can find in the repository browser:

E.g. https://idjjfrogsastest.jfrog.io/idjjfrogsastest/mysasdocker/

Using that, we can at least validate that it’s a licensing issue:

Replication isn't enabled for the Pro Edition

The SaaS offering *is* multi-site enabled, so I was able to sync to my k8s install (which was the goal in the first place):

setting replication from SaaS to our k8s based instance

Docker Login issues:

Even setting aside the fact that replication fails to our k8s instance, I couldn’t log in to the server either:

$ cat /etc/docker/daemon.json 
{
  "insecure-registries" : ["45.79.61.98"]
}
$ export DOCKER_OPTS+=" --insecure-registry 45.79.61.98"
$ docker login 45.79.61.98
Username: admin
Password: 
Error response from daemon: Get https://45.79.61.98/v2/: x509: cannot validate certificate for 45.79.61.98 because it doesn't contain any IP SANs

I was able to get past that error via the UI in the system tray - which leads me to believe this is a macOS issue.

You may need to be explicit about the port (as I was):

$ docker login http://45.79.61.98:80
Username: admin	
Password: 
Error response from daemon: Get https://45.79.61.98:80/v2/: http: server gave HTTP response to HTTPS client

$ docker login http://45.79.61.98:80
Username: admin	
Password: 
Login Succeeded

$ docker login 45.79.61.98
Username: admin
Password: 
Error response from daemon: Bad response from Docker engine

But finally specifying the protocol worked (which is good since Nginx is actually handing 443 TLS, albeit self-signed):

$ docker login https://45.79.61.98
Username: admin
Password: 
Login Succeeded

Though I logged in, it would seem that pushing still fails:

$ docker tag nginx:1.13 45.79.61.98/nginx:testing
$ docker push 45.79.61.98/nginx:testing
The push refers to repository [45.79.61.98/nginx]
7ab428981537: Retrying in 1 second 
82b81d779f83: Retrying in 1 second 
d626a8ad97a1: Retrying in 3 seconds 
unknown: Not Found

The other way fails too…

$ docker push 45.79.61.98:80/nginx:testing
The push refers to repository [45.79.61.98:80/nginx]
7ab428981537: Retrying in 1 second 
82b81d779f83: Retrying in 1 second 
d626a8ad97a1: Retrying in 1 second 

Lightbulb moment (what works)

Then I realized you need to specify the top-level repository (the repo key, myartcr here) in the tag:

$ docker tag nginx:1.13 45.79.61.98:80/myartcr/nginx:testing
$ docker push 45.79.61.98:80/myartcr/nginx:testing
The push refers to repository [45.79.61.98:80/myartcr/nginx]
7ab428981537: Pushed 
82b81d779f83: Pushed 
d626a8ad97a1: Pushed 
testing: digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90 size: 948

Circling back on the repo pushing: at this point we have a SaaS instance with replication enabled to our k8s instance.  Let’s try tagging and pushing an image and see if it actually ends up in both places!

Testing Replication

Let’s tag and push rancher/k3s:

$ docker images | grep rancher
rancher/k3s                                                  v0.7.0                      f1ec9d3fbf66        4 months ago        119MB
$ docker tag rancher/k3s:v0.7.0 idjjfrogsastest-mysasdocker.jfrog.io/rancher/k3s:v0.7.0
$ docker push idjjfrogsastest-mysasdocker.jfrog.io/rancher/k3s:v0.7.0
The push refers to repository [idjjfrogsastest-mysasdocker.jfrog.io/rancher/k3s]
bc417f5c35ae: Pushed 
f7fd657b18c0: Pushed 
v0.7.0: digest: sha256:008f8e15ab25714f2786b474b75a38fd8ed387c6e5764c65a737e632ce25690b size: 735

Next, let’s run a replication manually:

click the play button to start a replication now

We can see it, as we would expect, in the SaaS instance:

And in a few moments, we see them replicated to our k8s instance:

Using the OSS Artifactory Chart

While in this demo I didn’t dig too far into the open source version of Artifactory, it’s worth noting that I did test and install it.

In a new cluster, we can apply the same yamls to get helm going:

$ kubectl apply -f ~/Documents/rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$ kubectl apply -f ~/Documents/helm_init.yaml 
deployment.apps/tiller-deploy created
service/tiller-deploy created

Next, add the JFrog repo and install the OSS chart

$ helm repo add jfrog https://charts.jfrog.io
"jfrog" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "adwerx" chart repository
...Successfully got an update from the "jfrog" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.


$ helm install --name artifactory-oss jfrog/artifactory-oss
NAME:   artifactory-oss
LAST DEPLOYED: Sat Nov 30 10:51:17 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                                DATA  AGE
artifactory-oss-artifactory-installer-info          1     1s
artifactory-oss-artifactory-nginx-artifactory-conf  1     1s
artifactory-oss-artifactory-nginx-conf              1     1s
artifactory-oss-postgresql-configuration            1     1s

==> v1/Deployment
NAME                               READY  UP-TO-DATE  AVAILABLE  AGE
artifactory-oss-artifactory-nginx  0/1    1           0          1s

==> v1/NetworkPolicy
NAME                                                   AGE
artifactory-oss-artifactory-artifactory-networkpolicy  1s

==> v1/Pod(related)
NAME                                               READY  STATUS    RESTARTS  AGE
artifactory-oss-artifactory-0                      0/1    Pending   0         0s
artifactory-oss-artifactory-nginx-bfff44db9-kjs8j  0/1    Init:0/1  0         1s
artifactory-oss-postgresql-0                       0/1    Pending   0         1s

==> v1/Role
NAME                         AGE
artifactory-oss-artifactory  1s

==> v1/RoleBinding
NAME                         AGE
artifactory-oss-artifactory  1s

==> v1/Secret
NAME                                           TYPE               DATA  AGE
artifactory-oss-artifactory                    Opaque             0     1s
artifactory-oss-artifactory-binarystore        Opaque             1     1s
artifactory-oss-artifactory-nginx-certificate  kubernetes.io/tls  2     1s
artifactory-oss-postgresql                     Opaque             1     1s

==> v1/Service
NAME                                 TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
artifactory-oss-artifactory          ClusterIP     10.128.248.162  <none>       8081/TCP                    1s
artifactory-oss-artifactory-nginx    LoadBalancer  10.128.45.176   <pending>    80:30280/TCP,443:31218/TCP  1s
artifactory-oss-postgresql           ClusterIP     10.128.52.87    <none>       5432/TCP                    1s
artifactory-oss-postgresql-headless  ClusterIP     None            <none>       5432/TCP                    1s

==> v1/ServiceAccount
NAME                         SECRETS  AGE
artifactory-oss-artifactory  1        1s

==> v1/StatefulSet
NAME                         READY  AGE
artifactory-oss-artifactory  0/1    1s
artifactory-oss-postgresql   0/1    1s


NOTES:
Congratulations. You have just deployed JFrog Artifactory OSS!


We can get the pods and the LB public IP:

$ kubectl get pods
NAME                                                READY   STATUS    RESTARTS   AGE
artifactory-oss-artifactory-0                       1/1     Running   0          4m1s
artifactory-oss-artifactory-nginx-bfff44db9-kjs8j   1/1     Running   1          4m2s
artifactory-oss-postgresql-0                        1/1     Running   0          4m2s

$ kubectl get svc --namespace default artifactory-oss-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

One thing I found was that sometimes the k8s cluster would not come back with the IP, leaving it stuck in pending.  I’m not sure if this is a bug in the LKE beta.  However, checking the Linode Console for NodeBalancers, we can see the public IP:
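
If you hit that, you can also just watch the service until the IP shows up (or fall back to the NodeBalancer page as described):

kubectl get svc artifactory-oss-artifactory-nginx -w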

Configuring Artifactory:

The first thing you want to do is change the default admin password from “password”: http://45.79.62.99/artifactory/webapp/#/admin/security/users/admin/edit

Next, let’s create a new repository of type docker:

http://45.79.62.99/artifactory/webapp/#/admin/repository/local/new

Next set it to not block pushes and name it local repo:

Install Artifactory OSS locally in k3s

For most Linux hosts, we can use the standard k3s install: curl -sfL https://get.k3s.io | sh -

However, for my local Mac, we can use k3d as detailed in a former blog post.

$ k3d create
2019/11/30 11:27:19 ERROR: Cluster k3s-default already exists
$ k3d start
2019/11/30 11:27:29 Starting cluster [k3s-default]
2019/11/30 11:27:29 ...Starting server
2019/11/30 11:27:30 SUCCESS: Started cluster [k3s-default]
$ k3d get-kubeconfig
/Users/johnsi10/.config/k3d/k3s-default/kubeconfig.yaml

$ export KUBECONFIG=/Users/johnsi10/.config/k3d/k3s-default/kubeconfig.yaml
$ kubectl get nodes
NAME                     STATUS   ROLES    AGE   VERSION
k3d-k3s-default-server   Ready    master   66d   v1.14.4-k3s.1

If we use helm, we can see we have Tiller working (and a Vault OSS instance running from a prior project):

$ helm list
NAME 	REVISION	UPDATED                 	STATUS  	CHART      	APP VERSION	NAMESPACE
vault	1       	Tue Sep 24 22:17:31 2019	DEPLOYED	vault-0.1.0	           	default  

Now we can install Artifactory

$ helm repo add jfrog https://charts.jfrog.io
"jfrog" has been added to your repositories
$ helm install --name artifactory-oss-local jfrog/artifactory-oss
NAME:   artifactory-oss-local
LAST DEPLOYED: Sat Nov 30 11:40:12 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                                      DATA  AGE
artifactory-oss-local-artifactory-installer-info          1     0s
artifactory-oss-local-artifactory-nginx-artifactory-conf  1     0s
artifactory-oss-local-artifactory-nginx-conf              1     0s
artifactory-oss-local-postgresql-configuration            1     0s

==> v1/Deployment
NAME                                     READY  UP-TO-DATE  AVAILABLE  AGE
artifactory-oss-local-artifactory-nginx  0/1    1           0          0s

==> v1/NetworkPolicy
NAME                                                         AGE
artifactory-oss-local-artifactory-artifactory-networkpolicy  0s

==> v1/Pod(related)
NAME                                                      READY  STATUS    RESTARTS  AGE
artifactory-oss-local-artifactory-0                       0/1    Pending   0         0s
artifactory-oss-local-artifactory-nginx-768ccc78cb-d2jdd  0/1    Init:0/1  0         0s
artifactory-oss-local-postgresql-0                        0/1    Pending   0         0s

==> v1/Role
NAME                               AGE
artifactory-oss-local-artifactory  0s

==> v1/RoleBinding
NAME                               AGE
artifactory-oss-local-artifactory  0s

==> v1/Secret
NAME                                                 TYPE               DATA  AGE
artifactory-oss-local-artifactory                    Opaque             0     0s
artifactory-oss-local-artifactory-binarystore        Opaque             1     0s
artifactory-oss-local-artifactory-nginx-certificate  kubernetes.io/tls  2     0s
artifactory-oss-local-postgresql                     Opaque             1     0s

==> v1/Service
NAME                                       TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
artifactory-oss-local-artifactory          ClusterIP     10.43.76.89    <none>       8081/TCP                    0s
artifactory-oss-local-artifactory-nginx    LoadBalancer  10.43.104.130  <pending>    80:31120/TCP,443:30898/TCP  0s
artifactory-oss-local-postgresql           ClusterIP     10.43.207.171  <none>       5432/TCP                    0s
artifactory-oss-local-postgresql-headless  ClusterIP     None           <none>       5432/TCP                    0s

==> v1/ServiceAccount
NAME                               SECRETS  AGE
artifactory-oss-local-artifactory  1        0s

==> v1/StatefulSet
NAME                               READY  AGE
artifactory-oss-local-artifactory  0/1    0s
artifactory-oss-local-postgresql   0/1    0s


NOTES:
Congratulations. You have just deployed JFrog Artifactory OSS!

Because our local k3s via k3d doesn’t have anything to provide a public IP, we will not see the public IP.  Additionally, I would not expect to directly reach the cluster IP:

$ kubectl get svc
NAME                                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
artifactory-oss-local-artifactory           ClusterIP      10.43.76.89     <none>        8081/TCP                     2m41s
artifactory-oss-local-artifactory-nginx     LoadBalancer   10.43.104.130   <pending>     80:31120/TCP,443:30898/TCP   2m41s
artifactory-oss-local-postgresql            ClusterIP      10.43.207.171   <none>        5432/TCP                     2m41s
artifactory-oss-local-postgresql-headless   ClusterIP      None            <none>        5432/TCP                     2m41s
kubernetes                                  ClusterIP      10.43.0.1       <none>        443/TCP                      66d
vault                                       ClusterIP      None            <none>        8200/TCP,8201/TCP            66d

However, we should be able to access our Artifactory via kubectl proxy:
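
For example (a sketch; the service name and port are taken from the chart output above):

kubectl proxy &
# then browse to:
# http://localhost:8001/api/v1/namespaces/default/services/artifactory-oss-local-artifactory:8081/proxy/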

Pro-tip: k3d can easily scrub and recreate clusters. If you have problems and want to start over, just delete the cluster and start fresh:

$ k3d list
+-------------+------------------------------+---------+---------+
|    NAME     |            IMAGE             | STATUS  | WORKERS |
+-------------+------------------------------+---------+---------+
| k3s-default | docker.io/rancher/k3s:v0.7.0 | stopped |   0/0   |
+-------------+------------------------------+---------+---------+
$ k3d delete k3s-default
2019/11/30 11:51:47 Removing cluster [k3s-default]
2019/11/30 11:51:47 ...Removing server
2019/11/30 11:51:50 ...Removing docker image volume
2019/11/30 11:51:50 SUCCESS: removed cluster [k3s-default]
$ k3d create
2019/11/30 11:52:43 Created cluster network with ID a7ea845f4999ddaacccb2263f24b3235e56d0ba82fc30ec275fc710fdd3d1785
2019/11/30 11:52:43 Created docker volume  k3d-k3s-default-images
2019/11/30 11:52:43 Creating cluster [k3s-default]
2019/11/30 11:52:43 Creating server using docker.io/rancher/k3s:v0.7.0...
2019/11/30 11:52:44 SUCCESS: created cluster [k3s-default]
2019/11/30 11:52:44 You can now use the cluster with:

export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl cluster-info

Because I recreated the cluster, I need to create a storage class and set it as the default:

$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created
$ kubectl get storageclass
NAME         PROVISIONER             AGE
local-path   rancher.io/local-path   67s
$ kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/local-path patched
$ kubectl get storageclass
NAME                   PROVISIONER             AGE
local-path (default)   rancher.io/local-path   118s

Now we can install again:

$ helm install --name artifactory-oss-local jfrog/artifactory-oss
NAME:   artifactory-oss-local
LAST DEPLOYED: Sat Nov 30 11:59:07 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                                      DATA  AGE
artifactory-oss-local-artifactory-installer-info          1     1s
artifactory-oss-local-artifactory-nginx-artifactory-conf  1     1s
artifactory-oss-local-artifactory-nginx-conf              1     1s
artifactory-oss-local-postgresql-configuration            1     1s

==> v1/Deployment
NAME                                     READY  UP-TO-DATE  AVAILABLE  AGE
artifactory-oss-local-artifactory-nginx  0/1    1           0          1s

==> v1/NetworkPolicy
NAME                                                         AGE
artifactory-oss-local-artifactory-artifactory-networkpolicy  1s

==> v1/Pod(related)
NAME                                                      READY  STATUS    RESTARTS  AGE
artifactory-oss-local-artifactory-0                       0/1    Pending   0         1s
artifactory-oss-local-artifactory-nginx-768ccc78cb-gdx28  0/1    Init:0/1  0         1s
artifactory-oss-local-postgresql-0                        0/1    Pending   0         1s

==> v1/Role
NAME                               AGE
artifactory-oss-local-artifactory  1s

==> v1/RoleBinding
NAME                               AGE
artifactory-oss-local-artifactory  1s

==> v1/Secret
NAME                                                 TYPE               DATA  AGE
artifactory-oss-local-artifactory                    Opaque             0     1s
artifactory-oss-local-artifactory-binarystore        Opaque             1     1s
artifactory-oss-local-artifactory-nginx-certificate  kubernetes.io/tls  2     1s
artifactory-oss-local-postgresql                     Opaque             1     1s

==> v1/Service
NAME                                       TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
artifactory-oss-local-artifactory          ClusterIP     10.43.236.11   <none>       8081/TCP                    1s
artifactory-oss-local-artifactory-nginx    LoadBalancer  10.43.24.11    <pending>    80:31642/TCP,443:31966/TCP  1s
artifactory-oss-local-postgresql           ClusterIP     10.43.155.224  <none>       5432/TCP                    1s
artifactory-oss-local-postgresql-headless  ClusterIP     None           <none>       5432/TCP                    1s

==> v1/ServiceAccount
NAME                               SECRETS  AGE
artifactory-oss-local-artifactory  1        1s

==> v1/StatefulSet
NAME                               READY  AGE
artifactory-oss-local-artifactory  0/1    1s
artifactory-oss-local-postgresql   0/1    1s


NOTES:
Congratulations. You have just deployed JFrog Artifactory OSS!

But again, even trying Longhorn for the filesystem, I could not get my k3d to properly serve PVCs.
I went back and created a k3s (1.0.0) cluster with multipass (see guide here):

$ multipass list
Name                    State             IPv4             Image
foo3                    Running           192.168.64.5     Ubuntu 18.04 LTS
foo2                    Running           192.168.64.4     Ubuntu 18.04 LTS
foo1                    Running           192.168.64.22    Ubuntu 18.04 LTS
$ unset KUBECONFIG
$ kubectl get nodes
NAME   STATUS   ROLES    AGE     VERSION
foo2   Ready    <none>   109s    v1.16.3-k3s.2
foo3   Ready    <none>   72s     v1.16.3-k3s.2
foo1   Ready    master   8m14s   v1.16.3-k3s.2
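
For reference, the nodes above were created along these lines - a sketch, as VM sizes are arbitrary, flags vary by multipass version, and the join steps are abbreviated:

multipass launch --name foo1 --cpus 2 --mem 4G --disk 20G
multipass exec foo1 -- bash -c "curl -sfL https://get.k3s.io | sh -"
# grab the node token from the master, then join the workers (foo2, foo3):
multipass exec foo1 -- sudo cat /var/lib/rancher/k3s/server/node-token
multipass launch --name foo2 --cpus 2 --mem 4G --disk 20G
multipass exec foo2 -- bash -c \
  "curl -sfL https://get.k3s.io | K3S_URL=https://192.168.64.22:6443 K3S_TOKEN=<token> sh -"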

This time the PVCs were bound just fine:

NAME                                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-artifactory-oss-local-postgresql-0                  Bound    pvc-176e6821-1cc7-46e5-9249-86c46f99b877   50Gi       RWO            local-path     12s
artifactory-volume-artifactory-oss-local-artifactory-0   Bound    pvc-0e7cc89f-c516-414c-bd2c-782d8cfb2ee4   20Gi       RWO            local-path     12s

When the pods are up, we should be able to port forward to the Artifactory instance:
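
A quick sketch, assuming the same release name as above:

kubectl port-forward svc/artifactory-oss-local-artifactory 8081:8081
# then browse to http://localhost:8081/artifactory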

Proxying ACR:

First, let’s push a smaller image up to ACR so we have something to proxy:

$ docker tag nginx:1.13 lkedemocr.azurecr.io/nginx:1.13
$ docker push lkedemocr.azurecr.io/nginx:1.13
The push refers to repository [lkedemocr.azurecr.io/nginx]
7ab428981537: Pushed 
82b81d779f83: Pushed 
d626a8ad97a1: Pushed 
1.13: digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90 size: 948

Next, we can set up a “remote repository” of type Docker to proxy the registry.

Initially, I got this error:

$ docker pull 45.79.61.98/myacrproxy/nginx:1.13
Error response from daemon: manifest for 45.79.61.98/myacrproxy/nginx:1.13 not found: manifest unknown: The named manifest is not known to the registry.

But that is when I realized I had neglected to add my lkedemocr user/pass in the “advanced” section (by default, it tries to proxy anonymously, which ACR isn’t keen on).

On my second pass, that worked fine:

$ docker pull 45.79.61.98/myacrproxy/nginx:1.13
1.13: Pulling from myacrproxy/nginx
Digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90
Status: Downloaded newer image for 45.79.61.98/myacrproxy/nginx:1.13
45.79.61.98/myacrproxy/nginx:1.13

We then tried the SaaS offering and set up syncing:

We can now log in and prove we can sync with that remote repository as well:

$ docker login idjjfrogsastest-mysaasproxy.jfrog.io
Username: admin
Password: 
Login Succeeded
$ docker pull idjjfrogsastest-mysaasproxy.jfrog.io/nginx:1.13
1.13: Pulling from nginx
Digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90
Status: Downloaded newer image for idjjfrogsastest-mysaasproxy.jfrog.io/nginx:1.13
idjjfrogsastest-mysaasproxy.jfrog.io/nginx:1.13

I wanted to test XRay, but unfortunately it isn’t included in either the SaaS Demo or Pro editions.

As you can see, XRay is grayed out

Summary

What we learned in our testing is that the JFrog Artifactory commercial offering is quite complete.  The OSS version, however, is far more limited, offering only basic Maven repo hosting and restricting features like replication.  As Artifactory is, let’s face it, a fat Java binary, I have a hard time recommending the OSS version unless it’s a half step to the commercial product.

I was really quite disappointed to find that repository replication is limited to Artifactory instances only.  I was really hoping for an intelligent container registry solution I could use with ECR, ACR, or GCR, to name a few.  That said, I did like how easy it was to proxy back ACR, so if we had an Azure Container Registry already in place, we could easily pull images in for XRay scanning after the fact.  Well, that is, I assume one can, as XRay isn’t licensed in the SaaS demo or the Pro demo.

As far as pricing, as good as XRay might be, I am not sure it’s worth US$29,500/year, or $500/mo for a cloud instance.  The roughly $30k is just for the license - one needs to pay for compute beyond that.  However, if the CSO likes XRay, its price might compare favourably to tools like Prisma/Twistlock.

In our next blog post we’ll add Nexus and others into the mix to show how we can handle multiple artifact management products.  We can also show a tool-independent method of container image syncing using a pipeline, which is less elegant, but a strategy many employ to sync container images to different downstream registries.