Published: Feb 20, 2024 by Isaac Johnson
Last time we set up Spacelift and used it to deploy our OpenTofu (Terraform) IaC code. Today we will look at Ansible and Kubernetes deployments using Spacelift.io.
Ansible
Let’s create a new stack
I’ll pick my ansible playbooks repo
I’ll pick Ansible and type in a playbook path (I wish I could have used a picklist populated from the repo, but it’s just a free-form field)
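For context, the playbook field just wants a relative path like playbooks/ping.yml; something as trivial as this is enough to exercise it (an illustrative sketch, not the actual contents of my repo):
$ cat playbooks/ping.yml
---
- hosts: all
  tasks:
    - name: Ping all inventory hosts
      ansible.builtin.ping: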
I skipped the optional fields to get to the summary
I trigger a run and see it pull the Ansible runner image
It finished successfully, but if we look closely, it didn’t actually install anything anywhere
I tried making a Context for Ansible
Where I set the Ansible config
And created a hosts file and an ansible.cfg to point to it
I tried two formats for hosts
builder@LuiGi17:/mnt/c/Users/isaac/Downloads$ cat hosts
127.0.0.1
builder@LuiGi17:/mnt/c/Users/isaac/Downloads$ vi hosts
builder@LuiGi17:/mnt/c/Users/isaac/Downloads$ cat hosts
[all]
127.0.0.1
but neither of them seemed to get picked up
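For reference, the ansible.cfg pointing at that hosts file was along these lines (a minimal sketch):
$ cat ansible.cfg
[defaults]
inventory = ./hosts
host_key_checking = False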
I have a bit of a bone to pick here: in this UI, I cannot actually see where my Ansible run is taking place
The only way to really know is to use the ‘Behavior’ tab to check the pool
I’ve tried every way I can think of to dynamically add hosts via the context.
I used both hosts.ini and hosts as the inventory file name
and set an ansible.cfg
I also tried passing it via a variable
as well as via hooks
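One of those attempts, for instance, was pointing Ansible at the mounted files through its standard environment variables (if I read the docs right, Spacelift mounts context files under /mnt/workspace, but treat these paths as assumptions):
$ export ANSIBLE_CONFIG=/mnt/workspace/ansible.cfg
$ export ANSIBLE_INVENTORY=/mnt/workspace/hosts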
I’ll try adding the hosts file to source, something I would never really do for a real system
builder@DESKTOP-QADGF36:~/Workspaces/ansible-playbooks$ git checkout -b test-spacelift
Switched to a new branch 'test-spacelift'
builder@DESKTOP-QADGF36:~/Workspaces/ansible-playbooks$ vi hosts
builder@DESKTOP-QADGF36:~/Workspaces/ansible-playbooks$ cat hosts
mail.testfake.com
[webservers]
foo.testfake.com
bar.testfake.com
[dbservers]
one.testfake.com
two.testfake.com
three.testfake.com
builder@DESKTOP-QADGF36:~/Workspaces/ansible-playbooks$ git add hosts && git commit -m 'fake test'
[test-spacelift d2b2df0] fake test
1 file changed, 10 insertions(+)
create mode 100644 hosts
builder@DESKTOP-QADGF36:~/Workspaces/ansible-playbooks$ git push
fatal: The current branch test-spacelift has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin test-spacelift
builder@DESKTOP-QADGF36:~/Workspaces/ansible-playbooks$ git push --set-upstream origin test-spacelift
Enumerating objects: 4, done.
Counting objects: 100% (4/4), done.
Delta compression using up to 16 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 327 bytes | 327.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), completed with 1 local object.
remote:
remote: Create a pull request for 'test-spacelift' on GitHub by visiting:
remote: https://github.com/idjohnson/ansible-playbooks/pull/new/test-spacelift
remote:
To https://github.com/idjohnson/ansible-playbooks.git
* [new branch] test-spacelift -> test-spacelift
Branch 'test-spacelift' set up to track remote branch 'test-spacelift' from 'origin'.
Now that we have some kind of fake hosts file in source, we can use that branch in Spacelift
But even that was not found
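For comparison, this same layout works fine locally when the inventory is passed explicitly:
$ ansible all -i hosts --list-hosts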
Kubernetes
Let’s try adding a Kubernetes stack
I’ll pick a repo with helm or YAML manifests
I can go up to kubectl 1.29.1 at the time of this writing, but I’ll pick a version that matches my test cluster
I created it; however, I didn’t trigger it yet since I want to create a worker pool we can use for talking to K8s
I’ll go to “Worker Pools” and choose to “create” a new one
I’ll give it a name and description as well as some labels
We need a CSR to create the pool. I generated a new one using:
$ openssl req -new -newkey rsa:4096 -nodes -keyout luigi-spacelift.key -out spacelift-luigi.csr
.+.........+...+..+...+.+........+............+...+...........snip....
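As an aside, if you would rather skip openssl’s interactive subject prompts, you can pass the subject inline (the CN here is just a placeholder):
$ openssl req -new -newkey rsa:4096 -nodes -keyout luigi-spacelift.key -out spacelift-luigi.csr -subj "/CN=luigi-spacelift"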
I then picked that CSR in the Pool create widget
This automatically prompts us to save a config file
I’ll add and update the helm repo
builder@LuiGi17:~$ helm repo add spacelift https://downloads.spacelift.io/helm
"spacelift" has been added to your repositories
builder@LuiGi17:~$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "spacelift" chart repository
...Successfully got an update from the "openfunction" chart repository
...Successfully got an update from the "sonatype" chart repository
...Successfully got an update from the "backstage" chart repository
...Successfully got an update from the "frappe" chart repository
...Successfully got an update from the "ananace-charts" chart repository
...Successfully got an update from the "deliveryhero" chart repository
...Successfully got an update from the "gitea-charts" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
I have the two files I need to do the work
builder@LuiGi17:~$ ls -l /home/builder/luigi-spacelift.key
-rw------- 1 builder builder 3272 Feb 5 19:40 /home/builder/luigi-spacelift.key
builder@LuiGi17:~$ ls -ltra /mnt/c/Users/isaac/Downloads/ | tail -n3
-rwxrwxrwx 1 builder builder 1765 Feb 5 19:41 spacelift-luigi.csr
drwxrwxrwx 1 builder builder 4096 Feb 5 19:44 .
-rwxrwxrwx 1 builder builder 2740 Feb 5 19:44 worker-pool-01HNY1BNJ5S9R8BD6Q5G2VGKM6.config
So now I just install with helm
builder@LuiGi17:~$ helm upgrade spacelift-worker spacelift/spacelift-worker --install \
  --set "credentials.token=`cat /mnt/c/Users/isaac/Downloads/worker-pool-01HNY1BNJ5S9R8BD6Q5G2VGKM6.config | tr -d '\n'`,credentials.privateKey=`cat /home/builder/luigi-spacelift.key | base64 | tr -d '\n'`"
Release "spacelift-worker" does not exist. Installing it now.
NAME: spacelift-worker
LAST DEPLOYED: Mon Feb 5 19:48:37 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
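A quick sanity check that the worker pod actually came up (assuming the pods inherit the release name; yours may differ):
$ kubectl get pods -n default | grep -i spacelift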
I can now see a pool created
Now, back in my Stack, I’ll go to the Behavior tab to change pools
I went back and triggered the flow. I take a bit of issue with Spacelift here: even though I switched to an incognito browser just so I could use my personal GitHub account (idjohnson), it instead showed my corporate identity as the triggering user.
This failed because nothing sorts out the Kubeconfig on the agent; I need to provide that myself
This actually creates an interesting scenario: if I wanted to, I could use an externally facing Kubeconfig in my context
I’ll create a new context
I’ll attach it to the K8s stack
I’ll add a mounted file with a proper externally available Kube config
And ALSO a variable to set a KUBECONFIG path
I now have an associated context with both pieces we need
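In effect, the context should hand the runner something like this, assuming Spacelift mounts context files under /mnt/workspace and the config itself points at an externally reachable API endpoint:
$ export KUBECONFIG=/mnt/workspace/kubeconfig
$ kubectl get nodes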
Seems it still wants the dot path
I’ll add that
I actually got past the error by manually adding the context; it seems the “auto add” didn’t take. I knew it was fixed when I saw my mounted files listed in Resources
Realizing my errors were because that repo lacked any K8s manifests, I switched to one that had some. More importantly, I specified the subfolder containing the manifests
Because I had two “addapp” deployments in that folder, it complained (rightfully so)
I made a patch branch to fix it
Then switched to it and saved
I noticed it had tied itself to the repo when fixing another file automatically triggered a run
Once I cleaned up the YAML manifests, I got past planning and was prompted to apply
It created most of the objects but did get stuck on an imagePullSecrets reference in one of the deployments
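When the referenced pull secret simply doesn’t exist on the cluster yet, the usual fix is to create it out-of-band; every value below is a placeholder:
$ kubectl create secret docker-registry myregistrycred \
    --docker-server=harbor.example.com \
    --docker-username=builder \
    --docker-password='<redacted>' \
    -n default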
Here we can see a deployment in action
Summary
We weren’t able to get Ansible (which is marked Beta) to work. However, we did work out deploying to Kubernetes using private worker pools and an externally accessible Kubeconfig. I stewed on that and realized that if my Kubeconfig was externally reachable, there really was no need for a private pool, so I switched back to the provided shared pool and it worked just as well.
I’ll have to reserve judgment on Ansible. For now, I’ll assume it’s not ready for prime time and plan on keeping my AWX instance.