Published: May 7, 2024 by Isaac Johnson
We all know Pastebin is great for sharing code snippets, and many of us use GitHub Gists for much the same purpose. However, there are occasions when we may wish to self-host an equivalent service.
Opengist is a self-hosted Gist service written in Go. The README shows how to launch it in Docker quite easily:
version: "3"
services:
opengist:
image: ghcr.io/thomiceli/opengist:1.7
container_name: opengist
restart: unless-stopped
ports:
- "6157:6157" # HTTP port
- "2222:2222" # SSH port, can be removed if you don't use SSH
volumes:
- "$HOME/.opengist:/opengist"
Let’s look at how we might do the same using a Kubernetes manifest.
builder@DESKTOP-QADGF36:~/Workspaces/opengist$ vi opengist.yaml
builder@DESKTOP-QADGF36:~/Workspaces/opengist$ cat opengist.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: opengist-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: opengist-deployment
spec:
replicas: 1
selector:
matchLabels:
app: opengist
template:
metadata:
labels:
app: opengist
spec:
      containers:
      - name: opengist
        image: ghcr.io/thomiceli/opengist:1.7
        ports:
        - containerPort: 6157
        volumeMounts:
        # mount the PVC so Gist data survives pod restarts
        - name: opengist-volume
          mountPath: /opengist
      volumes:
      - name: opengist-volume
        persistentVolumeClaim:
          claimName: opengist-pvc
---
apiVersion: v1
kind: Service
metadata:
name: opengist-service
spec:
selector:
app: opengist
ports:
- protocol: TCP
port: 80
targetPort: 6157
type: ClusterIP
I can now apply it:
builder@DESKTOP-QADGF36:~/Workspaces/opengist$ kubectl apply -f ./opengist.yaml
persistentvolumeclaim/opengist-pvc created
deployment.apps/opengist-deployment created
service/opengist-service created
Let me next do a local test
$ kubectl get svc opengist-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
opengist-service ClusterIP 10.43.95.24 <none> 80/TCP 43m
$ kubectl port-forward svc/opengist-service 8888:80
Forwarding from 127.0.0.1:8888 -> 6157
Forwarding from [::1]:8888 -> 6157
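As a quick sanity check before opening a browser, a curl against the forwarded port should come back with the landing page (a 200, or possibly a redirect to the setup/login page):
$ curl -sI http://localhost:8888/ | head -n1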
I now see the main page
I’ll register a new account next
I can create a quick text Gist
And then make it a public Gist that anyone could view, provided they had access to the cluster
Ingress
I actually want to try this as a URL. To do that, I need to expose it publicly with TLS ingress to my cluster.
I’ll first create an A record. This time let’s use AWS Route53, as we’ve done a lot with Azure DNS lately.
$ cat r53-opengist.json
{
"Comment": "CREATE opengist fb.s A record ",
"Changes": [
{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "opengist.freshbrewed.science",
"Type": "A",
"TTL": 300,
"ResourceRecords": [
{
"Value": "75.73.224.240"
}
]
}
}
]
}
$ aws route53 change-resource-record-sets --hosted-zone-id Z39E8QFU0F9PZP --change-batch file://r53-opengist.json
{
"ChangeInfo": {
"Id": "/change/C06289992V6BMKD7V7424",
"Status": "PENDING",
"SubmittedAt": "2024-04-25T22:43:19.091Z",
"Comment": "CREATE opengist fb.s A record "
}
}
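The change batch comes back PENDING; if I want to confirm it has gone INSYNC and that the name resolves before moving on, something like this does the trick (the change ID is the one returned above):
$ aws route53 get-change --id /change/C06289992V6BMKD7V7424 --query 'ChangeInfo.Status'
$ dig +short opengist.freshbrewed.science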
I’ll next use that in an Nginx ingress manifest:
$ cat ingress.opengist.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.org/websocket-services: opengist-service
name: opengist
spec:
rules:
- host: opengist.freshbrewed.science
http:
paths:
- backend:
service:
name: opengist-service
port:
number: 80
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- opengist.freshbrewed.science
secretName: opengist-tls
Which I can apply:
$ kubectl apply -f ./ingress.opengist.yaml
ingress.networking.k8s.io/opengist created
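As an aside, the kubernetes.io/ingress.class annotation used above is deprecated on newer ingress-nginx releases in favor of the ingressClassName field; on a current controller the spec could carry that instead (a small sketch):
spec:
  ingressClassName: nginx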
Once I saw the cert was live
$ kubectl get cert opengist-tls
NAME READY SECRET AGE
opengist-tls True opengist-tls 3m51s
I went to test at https://opengist.freshbrewed.science/
As you can see, that lines up
One of the handy features of Opengist is being able to clone the Gist itself using Git
Next, I want to try creating a private Gist, which would be useful for things like API keys, passwords, or other credentials
I can see the URL as well as choose a Git option to clone my private secret using basic auth or SSH keys
My first check is to confirm the URL requires authentication by trying it in an Incognito/InPrivate window
Next, I tried the git clone with basic auth, which has me log in like any other Git HTTPS repo
If I were scripting and needed to pass the auth in the URL, I could do that with user:pass:
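Something along these lines, where the exact clone path comes from the Gist’s page and the user and ID shown here are just placeholders:
$ git clone https://isaac:MyS3cretPass@opengist.freshbrewed.science/isaac/<gist-id>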
The last option is using an SSH key
This means I would need to go to the settings for my user and add my public key
$ cat ~/.ssh/id_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFgO1T9X8+t/hjDA/e+HqIVcLufkhSUswa15cy1i3Rfz isaac.johnson@gmail.com
and save that to my settings
Which I can now see listed
Actually, here lies a bit of an issue: I’m using an Nginx Ingress serving HTTPS, so it cannot handle the SSH port, and I’m not about to expose SSH globally, so the clone will just time out
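If I ever do want SSH clones, ingress-nginx can proxy raw TCP through its tcp-services ConfigMap: the controller has to be started with --tcp-services-configmap, the port has to be opened on the controller’s Service, and opengist-service would need to expose the container’s SSH port (2222). A rough sketch, assuming the controller lives in the ingress-nginx namespace, and not something I’ve wired up here:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "2222": "default/opengist-service:2222"
For now, HTTPS cloning covers what I need.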
There are some Admin settings we could turn on
These include disabling signups or Gravatars, or requiring login to view Gists.
Linx Server
Let’s look at another containerized app for sharing everything from photos to code snippets, Linx.
Let’s look first at the Docker compose file
version: '2.2'
services:
linx-server:
container_name: linx-server
image: andreimarcu/linx-server
command: -config /data/linx-server.conf
volumes:
- /path/to/files:/data/files
- /path/to/meta:/data/meta
- /path/to/linx-server.conf:/data/linx-server.conf
network_mode: bridge
ports:
- "8080:8080"
restart: unless-stopped
https://github.com/andreimarcu/linx-server
I think we can turn that into a Kubernetes manifest without much trouble
builder@DESKTOP-QADGF36:~/Workspaces/linx$ cat manifest.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: linxfile-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: linxmeta-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: linx-server-configmap
data:
linx-server.conf: |
bind: 127.0.0.1:8080
sitename: myLinx
maxsize: 4294967296
maxexpiry: 86400
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: linx-server-deployment
spec:
replicas: 1
selector:
matchLabels:
app: linx-server
template:
metadata:
labels:
app: linx-server
spec:
containers:
- name: linx-server
image: andreimarcu/linx-server
command: ["-config", "/data/linx-server.conf"]
ports:
- containerPort: 8080
protocol: TCP
volumeMounts:
- name: config-volume
mountPath: /data/linx-server.conf
subPath: linx-server.conf
- name: linx-file-volume
mountPath: /data/files
- name: linx-meta-volume
mountPath: /data/meta
volumes:
- name: config-volume
configMap:
defaultMode: 420
name: linx-server-configmap
- name: linx-file-volume
persistentVolumeClaim:
claimName: linxfile-pvc
- name: linx-meta-volume
persistentVolumeClaim:
claimName: linxmeta-pvc
---
apiVersion: v1
kind: Service
metadata:
name: linx-server-service
spec:
selector:
app: linx-server
ports:
- protocol: TCP
port: 8080
targetPort: 8080
I can now create it all with a quick kubectl command
$ kubectl apply -f ./manifest.yaml
persistentvolumeclaim/linxfile-pvc created
persistentvolumeclaim/linxmeta-pvc created
configmap/linx-server-configmap created
deployment.apps/linx-server-deployment created
service/linx-server-service created
The first issue I noticed was that the command wasn’t quite right, causing the pod to fall over
builder@DESKTOP-QADGF36:~/Workspaces/linx$ kubectl get pods -l app=linx-server
NAME READY STATUS RESTARTS AGE
linx-server-deployment-58cf6bd897-vnmbt 0/1 CrashLoopBackOff 3 (28s ago) 76s
builder@DESKTOP-QADGF36:~/Workspaces/linx$ kubectl logs linx-server-deployment-58cf6bd897-vnmbt
builder@DESKTOP-QADGF36:~/Workspaces/linx$ kubectl describe pod linx-server-deployment-58cf6bd897-vnmbt | tail -n 10
---- ------ ---- ---- -------
Normal Scheduled 77s default-scheduler Successfully assigned default/linx-server-deployment-58cf6bd897-vnmbt to builder-hp-elitebook-745-g5
Normal Pulled 75s kubelet Successfully pulled image "andreimarcu/linx-server" in 1.970830284s (1.970857312s including waiting)
Normal Pulled 74s kubelet Successfully pulled image "andreimarcu/linx-server" in 628.179356ms (628.218327ms including waiting)
Normal Pulled 58s kubelet Successfully pulled image "andreimarcu/linx-server" in 403.825084ms (403.850088ms including waiting)
Normal Pulling 35s (x4 over 77s) kubelet Pulling image "andreimarcu/linx-server"
Normal Pulled 35s kubelet Successfully pulled image "andreimarcu/linx-server" in 403.600029ms (403.629153ms including waiting)
Normal Created 35s (x4 over 75s) kubelet Created container linx-server
Warning Failed 35s (x4 over 75s) kubelet Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-config": executable file not found in $PATH: unknown
Warning BackOff 8s (x7 over 74s) kubelet Back-off restarting failed container linx-server in pod linx-server-deployment-58cf6bd897-vnmbt_default(c57599bb-640e-47f9-b7a0-a1bc6914ff36)
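The root cause is that in Kubernetes, command replaces the image’s ENTRYPOINT, so "-config" was being treated as the executable itself. In hindsight, supplying args instead (which only replaces the image’s CMD and leaves the entrypoint alone) would likely have been the smaller fix:
        args:
        - "-config"
        - "/data/linx-server.conf"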
I decided to test locally by making the dirs and conf file
$ mkdir -p /home/builder/Workspaces/linx/meta
$ mkdir -p /home/builder/Workspaces/linx/files
$ cat /home/builder/Workspaces/linx/linx-server.conf
bind = 127.0.0.1:8899
sitename = myLinx
maxsize = 4294967296
maxexpiry = 86400
Then running the container locally
builder@DESKTOP-QADGF36:~/Workspaces/linx$ docker run -p 8899:8899 -v /home/builder/Workspaces/linx/files:/data/files -v /home/builder/Workspaces/linx/meta:/data/meta -v /home/builder/Workspaces/linx/linx-server.conf:/data/linx-server.conf andreimarcu/linx-server -config /data/linx-server.conf
2024/04/26 11:06:11 Serving over http, bound on 0.0.0.0:8080
Using Docker Desktop, I looked up the actual running command, which was:
~ $ ps
PID USER TIME COMMAND
1 nobody 0:00 /usr/local/bin/linx-server -bind=0.0.0.0:8080 -filespath=/data/files/ -metapath=/data/meta/ -config /data/linx-server.conf
21 nobody 0:00 /bin/sh
27 nobody 0:00 ps
~ $
~ $
Let’s change the command block to match, and update the ConfigMap to use "=" instead of ":" in the conf file
$ cat manifest.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: linxfile-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: linxmeta-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: linx-server-configmap
data:
linx-server.conf: |
bind = 127.0.0.1:8080
sitename = myLinx
maxsize = 4294967296
maxexpiry = 86400
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: linx-server-deployment
spec:
replicas: 1
selector:
matchLabels:
app: linx-server
template:
metadata:
labels:
app: linx-server
spec:
containers:
- name: linx-server
image: andreimarcu/linx-server
command:
- "/usr/local/bin/linx-server"
- "-bind=0.0.0.0:8080"
- "-filespath=/data/files/"
- "-metapath=/data/meta/"
- "-config"
- "/data/linx-server.conf"
ports:
- containerPort: 8080
protocol: TCP
volumeMounts:
- name: config-volume
mountPath: /data/linx-server.conf
subPath: linx-server.conf
- name: linx-file-volume
mountPath: /data/files
- name: linx-meta-volume
mountPath: /data/meta
volumes:
- name: config-volume
configMap:
defaultMode: 420
name: linx-server-configmap
- name: linx-file-volume
persistentVolumeClaim:
claimName: linxfile-pvc
- name: linx-meta-volume
persistentVolumeClaim:
claimName: linxmeta-pvc
---
apiVersion: v1
kind: Service
metadata:
name: linx-server-service
spec:
selector:
app: linx-server
ports:
- protocol: TCP
port: 8080
targetPort: 8080
Let’s apply
builder@DESKTOP-QADGF36:~/Workspaces/linx$ kubectl apply -f ./manifest.yaml
persistentvolumeclaim/linxfile-pvc unchanged
persistentvolumeclaim/linxmeta-pvc unchanged
configmap/linx-server-configmap configured
deployment.apps/linx-server-deployment configured
service/linx-server-service unchanged
This time it worked
$ kubectl get pods -l app=linx-server
NAME READY STATUS RESTARTS AGE
linx-server-deployment-5cbf857bb5-897cr 1/1 Running 0 38s
I port-forwarded to the pod
$ kubectl port-forward linx-server-deployment-5cbf857bb5-897cr 8899:8080
Forwarding from 127.0.0.1:8899 -> 8080
Forwarding from [::1]:8899 -> 8080
Handling connection for 8899
Handling connection for 8899
Handling connection for 8899
Handling connection for 8899
I can now see the site
I clicked the box and uploaded an image
And I can verify it comes up
If I enter a password when uploading, it will upload and create a URL
I would then need to enter that password to view the file
Let’s expose this tool publicly as well, but use Azure DNS this time
$ az account set --subscription "Pay-As-You-Go" && az network dns record-set a add-record -g idjdnsrg -z tpk.pw -a 75.73.224.240 -n linx
{
"ARecords": [
{
"ipv4Address": "75.73.224.240"
}
],
"TTL": 3600,
"etag": "f49fe289-7963-4ad7-a330-0e82b10b70ae",
"fqdn": "linx.tpk.pw.",
"id": "/subscriptions/d955c0ba-13dc-44cf-a29a-8fed74cbb22d/resourceGroups/idjdnsrg/providers/Microsoft.Network/dnszones/tpk.pw/A/linx",
"name": "linx",
"provisioningState": "Succeeded",
"resourceGroup": "idjdnsrg",
"targetResource": {},
"type": "Microsoft.Network/dnszones/A"
}
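A quick lookup with the az CLI (or a dig against the new name once it propagates) confirms the record before I point an Ingress at it:
$ az network dns record-set a show -g idjdnsrg -z tpk.pw -n linx
$ dig +short linx.tpk.pw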
I’ll now create an ingress manifest
$ cat linx.ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: azuredns-tpkpw
ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
name: linx-ingress
spec:
rules:
- host: linx.tpk.pw
http:
paths:
- backend:
service:
name: linx-server-service
port:
number: 8080
path: /
pathType: Prefix
tls:
- hosts:
- linx.tpk.pw
secretName: linx-tls
And apply it
$ kubectl apply -f ./linx.ingress.yaml
ingress.networking.k8s.io/linx-ingress created
The certificate was soon sorted
builder@DESKTOP-QADGF36:~$ kubectl get cert linx-tls
NAME READY SECRET AGE
linx-tls False linx-tls 56s
builder@DESKTOP-QADGF36:~$ kubectl get cert linx-tls
NAME READY SECRET AGE
linx-tls True linx-tls 85s
Now that it’s exposed, I can check that it’s working
Passwords
The author does support passwords by way of the scrypt library.
I’m still a bit stumped on how to generate them.
For instance, looking at the test code
package apikeys
import (
"testing"
)
func TestCheckAuth(t *testing.T) {
authKeys := []string{
"vhvZ/PT1jeTbTAJ8JdoxddqFtebSxdVb0vwPlYO+4HM=",
"vFpNprT9wbHgwAubpvRxYCCpA2FQMAK6hFqPvAGrdZo=",
}
if r, err := CheckAuth(authKeys, ""); err != nil && r {
t.Fatal("Authorization passed for empty key")
}
if r, err := CheckAuth(authKeys, "thisisnotvalid"); err != nil && r {
t.Fatal("Authorization passed for invalid key")
}
if r, err := CheckAuth(authKeys, "haPVipRnGJ0QovA9nyqK"); err != nil && !r {
t.Fatal("Authorization failed for valid key")
}
}
We see that the plaintext key
haPVipRnGJ0QovA9nyqK
must, once run through scrypt, match one of these two stored values
"vhvZ/PT1jeTbTAJ8JdoxddqFtebSxdVb0vwPlYO+4HM=",
"vFpNprT9wbHgwAubpvRxYCCpA2FQMAK6hFqPvAGrdZo=",
Looking at the scrypt constants
const (
scryptSalt = "linx-server"
scryptN = 16384
scryptr = 8
scryptp = 1
scryptKeyLen = 32
)
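Those are just the parameters to Go’s golang.org/x/crypto/scrypt. So a tiny Go program along these lines ought to produce a value suitable for the authfile (I believe the repo’s linx-genkey utility does essentially this; treat the following as a sketch rather than the project’s official tooling):
package main

import (
	"encoding/base64"
	"fmt"
	"os"

	"golang.org/x/crypto/scrypt"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: genkey <password>")
		os.Exit(1)
	}
	// same salt and cost parameters as the constants above
	checksum, err := scrypt.Key([]byte(os.Args[1]), []byte("linx-server"), 16384, 8, 1, 32)
	if err != nil {
		panic(err)
	}
	// the base64 of the derived key is what goes into the authfile
	fmt.Println(base64.StdEncoding.EncodeToString(checksum))
}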
I also found a web-based scrypt generator that worked at https://8gwifi.org/scrypt.jsp.
I’ll try to create a test password to use
Which I can use in the manifest
$ cat ./manifest.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: linxfile-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: linxmeta-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: linx-server-auth
data:
authfile.conf: |
9Pd2bt4D7a0iB0YlM6NzLoeUHbDqqPablS2zoRGH8ew=
---
apiVersion: v1
kind: ConfigMap
metadata:
name: linx-server-configmap
data:
linx-server.conf: |
bind = 127.0.0.1:8080
sitename = myLinx
maxsize = 4294967296
maxexpiry = 86400
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: linx-server-deployment
spec:
replicas: 1
selector:
matchLabels:
app: linx-server
template:
metadata:
labels:
app: linx-server
spec:
containers:
- name: linx-server
image: andreimarcu/linx-server
command:
- "/usr/local/bin/linx-server"
- "-bind=0.0.0.0:8080"
- "-filespath=/data/files/"
- "-metapath=/data/meta/"
- "-authfile=/data/authfile.conf"
- "-basicauth=true"
- "-config"
- "/data/linx-server.conf"
ports:
- containerPort: 8080
protocol: TCP
volumeMounts:
- name: config-volume
mountPath: /data/linx-server.conf
subPath: linx-server.conf
- name: auth-volume
mountPath: /data/authfile.conf
subPath: authfile.conf
- name: linx-file-volume
mountPath: /data/files
- name: linx-meta-volume
mountPath: /data/meta
volumes:
- name: config-volume
configMap:
defaultMode: 420
name: linx-server-configmap
- name: auth-volume
configMap:
defaultMode: 420
name: linx-server-auth
- name: linx-file-volume
persistentVolumeClaim:
claimName: linxfile-pvc
- name: linx-meta-volume
persistentVolumeClaim:
claimName: linxmeta-pvc
---
apiVersion: v1
kind: Service
metadata:
name: linx-server-service
spec:
selector:
app: linx-server
ports:
- protocol: TCP
port: 8080
targetPort: 8080
I’ll apply
$ kubectl apply -f ./manifest.yaml
persistentvolumeclaim/linxfile-pvc unchanged
persistentvolumeclaim/linxmeta-pvc unchanged
configmap/linx-server-auth configured
configmap/linx-server-configmap unchanged
deployment.apps/linx-server-deployment unchanged
service/linx-server-service unchanged
Then rotate the pod to make the ConfigMap take effect
$ kubectl delete pod -l app=linx-server
pod "linx-server-deployment-688974f8c5-j24gr" deleted
Here you can see the password in action
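For scripted uploads, linx-server also has an HTTP API; my understanding is that a PUT to /upload/ returns the file’s URL, and with -basicauth=true the auth key is passed as the basic-auth password (the username appears to be ignored). A rough sketch, untested here:
$ curl -u "linx:MyTestPassword" -T notes.txt https://linx.tpk.pw/upload/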
To change passwords, generate a new key in the same fashion, update the manifest, and rotate the pod
builder@DESKTOP-QADGF36:~/Workspaces/linx$ !v
vi manifest.yaml
builder@DESKTOP-QADGF36:~/Workspaces/linx$ kubectl delete pod -l app=linx-server
pod "linx-server-deployment-688974f8c5-ffxdp" deleted
Summary
Today we set up Opengist, an easy-to-use code-sharing tool. Once it was exposed via TLS in Kubernetes, we explored several of its features, including Git cloning, passwords, and accounts. We did have to punt on SSH, as my Ingress makes that a bit of a challenge.
Second, we looked at Linx Server, a nice open-source media-sharing tool. As before, we set it up in Kubernetes and exposed it via TLS. Lastly, we looked into how passwords work and how one can generate and rotate them.