Published: Jan 9, 2025 by Isaac Johnson
I had a couple of Open-Source apps on my list to check out. The first is a quick link upload/sharing app, Pingvin. It’s similar to Filegator but makes it really easy to create time-limited and password-protected links.
The other tool is Subscription Manager. It’s easy to fire up in Docker, but in all honesty, I beat my head against the wall trying to get it to work in Kubernetes. You’ll see a lot of examples of tagging in Gemini Code Assist and, at one point, Copilot (the new “Free” tier), but ultimately I only really got it to work locally.
Let’s start with Pingvin…
Pingvin
A couple of months back, Marius posted about Pingvin. Pingvin is a self-hosted file sharing platform similar to WeTransfer or Filegator.
In his post, he launched Pingvin with Docker:
docker run -d --name=Pingvin-Share \
-p 6090:3000 \
-v /volume1/docker/pingvin:/opt/app/backend/data \
-v /volume1/docker/pingvin/public:/opt/app/frontend/public/img \
--restart always \
stonith404/pingvin-share
If we look at the Docker Compose file:
services:
pingvin-share:
image: stonith404/pingvin-share
restart: unless-stopped
ports:
- 3000:3000
environment:
- TRUST_PROXY=false # Set to true if a reverse proxy is in front of the container
volumes:
- "./data:/opt/app/backend/data"
- "./data/images:/opt/app/frontend/public/img"
# To add ClamAV, to scan your shares for malicious files,
# see https://stonith404.github.io/pingvin-share/setup/integrations/#clamav-docker-only
We can see it’s pretty much the same (less the reverse proxy). Since I’ll eventually front this with an NGINX ingress, TRUST_PROXY should end up true, which is what Gemini sets below.
I’ll ask Gemini to convert this to a Kubernetes YAML manifest
which rendered out
apiVersion: v1
kind: Service
metadata:
name: pingvin-share
spec:
selector:
app: pingvin-share
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: pingvin-share
spec:
replicas: 1
selector:
matchLabels:
app: pingvin-share
template:
metadata:
labels:
app: pingvin-share
spec:
containers:
- name: pingvin-share
image: stonith404/pingvin-share
ports:
- containerPort: 3000
env:
- name: TRUST_PROXY
value: "true"
volumeMounts:
- name: backend-data
mountPath: /opt/app/backend/data
- name: public-img
mountPath: /opt/app/frontend/public/img
volumes:
- name: backend-data
persistentVolumeClaim:
claimName: backend-data
- name: public-img
persistentVolumeClaim:
claimName: public-img
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: backend-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi # Adjust as needed
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: public-img
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi # Adjust as needed
We can now apply (note that Gemini split the nested ./data and ./data/images mounts into two separate PVCs)
$ kubectl apply -f ./pingvin.yaml
service/pingvin-share created
persistentvolumeclaim/backend-data created
persistentvolumeclaim/public-img created
deployment.apps/pingvin-share created
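Since the manifest doesn’t name a storageClass, both claims bind to whatever the cluster default is. It’s worth a quick check that they actually bound before moving on:

$ kubectl get pvc backend-data public-img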
I can see it now running
$ kubectl get po | grep ping
pingvin-share-856d9fc498-kwqbz 1/1 Running 0 28s
I can now port-forward to test
$ kubectl port-forward svc/pingvin-share 8088:80
Forwarding from 127.0.0.1:8088 -> 3000
Forwarding from [::1]:8088 -> 3000
Handling connection for 8088
Handling connection for 8088
Handling connection for 8088
I’ll Sign Up for an account
I can now upload a file
I uploaded a small 2MB AnyCubic slice file and clicked share
I can set an expiry as well as a password
Clicking share gives me a URL
I can now just use an incognito window to test that indeed I can download
Exposing with TLS
To expose with a proper LE-created cert, I’ll first need an A record in Cloud DNS
$ gcloud dns --project=myanthosproject2 record-sets create pingvin.steeped.space --zone="steepedspace" --type="A" --ttl="300" --rrdatas="75.73.224.240"
NAME TYPE TTL DATA
pingvin.steeped.space. A 300 75.73.224.240
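Before requesting a cert, a quick dig against a public resolver confirms the record resolves (it should return 75.73.224.240):

$ dig +short pingvin.steeped.space @8.8.8.8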
I’ll then want to create an Ingress YAML
builder@DESKTOP-QADGF36:~$ cat pingvin.ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: gcpleprod2
ingress.kubernetes.io/proxy-body-size: "0"
ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.org/client-max-body-size: "0"
nginx.org/proxy-connect-timeout: "3600"
nginx.org/proxy-read-timeout: "3600"
nginx.org/websocket-services: pingvin-share
name: pingvin-share
namespace: default
spec:
rules:
- host: pingvin.steeped.space
http:
paths:
- backend:
service:
name: pingvin-share
port:
number: 80
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- pingvin.steeped.space
secretName: pingvin-sharegcp-tls
and apply it
builder@DESKTOP-QADGF36:~$ kubectl apply -f ./pingvin.ingress.yaml
ingress.networking.k8s.io/pingvin-share created
When I see the cert is ready
builder@DESKTOP-QADGF36:~$ kubectl get cert pingvin-sharegcp-tls
NAME READY SECRET AGE
pingvin-sharegcp-tls True pingvin-sharegcp-tls 76s
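As a quick check before the browser (assuming DNS has propagated), curl can confirm the host is serving a valid cert:

$ curl -sSI https://pingvin.steeped.space | head -n 5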
I was able to test logging in (and indeed it kept my prior password and user)
While I don’t see the prior share listed, I did try manually creating the same URL and it worked
I can change language and colour schemes in ‘My Account’
I can also set a TOTP MFA
Now on sign-in I need to add my Authenticator code (which worked)
Subscription Manager
In another Marius post, he launched this very cool little open-source app using Portainer on a NAS
If we go to GitHub, we can find the project page for Subscription Manager
From that page we can see we can just invoke it with
docker run -p 3000:3000 dh1011/subscription-manager:latest
We can also see the Docker Compose invocation (note that it declares a subscriptions_data named volume but never actually mounts it into the service)
version: '3.8'
services:
app:
image: dh1011/subscription-manager:latest
ports:
- "3000:3000"
environment:
- NODE_ENV=production
- HOST=0.0.0.0
volumes:
subscriptions_data:
NOTE: Before we go on, a bit of a spoiler: my attempts to get this to forward via K8s, run in K8s, and then add passwords really do not pan out. You are welcome to follow along for a bit of a journey, and I welcome feedback as always, but the only success I had with Subscription Manager was running it in Docker.
Let’s convert that over to a K8s manifest to deploy in k3s
apiVersion: v1
kind: Service
metadata:
name: subscription-manager
spec:
selector:
app: subscription-manager
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: subscription-manager
spec:
replicas: 1
selector:
matchLabels:
app: subscription-manager
template:
metadata:
labels:
app: subscription-manager
spec:
containers:
- name: subscription-manager
image: dh1011/subscription-manager:latest
ports:
- containerPort: 3000
env:
- name: NODE_ENV
value: production
- name: HOST
value: 0.0.0.0
volumeMounts:
- name: subscriptionsdata
mountPath: /subscriptions_data
volumes:
- name: subscriptionsdata
persistentVolumeClaim:
          claimName: subscriptionsdata
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: subscriptionsdata
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi # Adjust storage size as needed
And we can apply
builder@LuiGi:~$ kubectl apply -f ./submgr.yaml
service/subscription-manager created
deployment.apps/subscription-manager created
persistentvolumeclaim/subscriptionsdata created
I didn’t need the PVC so I removed it
builder@LuiGi:~$ cat submgr.yaml
apiVersion: v1
kind: Service
metadata:
name: subscription-manager
spec:
selector:
app: subscription-manager
ports:
- protocol: TCP
port: 80
targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: subscription-manager
spec:
replicas: 1
selector:
matchLabels:
app: subscription-manager
template:
metadata:
labels:
app: subscription-manager
spec:
containers:
- name: subscription-manager
image: dh1011/subscription-manager:latest
ports:
- containerPort: 3000
env:
- name: NODE_ENV
value: production
- name: HOST
value: 0.0.0.0
After re-applying, I can now port-forward
$ kubectl port-forward svc/subscription-manager 8088:80
Forwarding from 127.0.0.1:8088 -> 3000
Forwarding from [::1]:8088 -> 3000
It kept dropping connections. I checked the logs, and it seems the server is listening on port 5000, not 3000
builder@LuiGi:~$ kubectl logs subscription-manager-6bfd4689d9-gk2mn
> subscription-manager@0.1.0 dev
> concurrently "npm run server" "npm run client"
[1]
[1] > subscription-manager@0.1.0 client
[1] > react-scripts start
[1]
[0]
[0] > subscription-manager@0.1.0 server
[0] > node server/server.js
[0]
[0] Using subscriptions database located at: /app/server/subscriptions.db due to baseDir: /app/server
[0] Server running on port 5000
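For reference, flipping the Service’s targetPort over to 5000 is a one-liner patch (a sketch; a JSON merge patch replaces the whole ports list, which is fine since there’s only one entry):

$ kubectl patch svc subscription-manager --type merge \
  -p '{"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":5000}]}}'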
I switched to “5000” and tried again, but now the pod itself keeps crashing
builder@LuiGi:~$ kubectl logs subscription-manager-58d48f5847-xslbk
> subscription-manager@0.1.0 dev
> concurrently "npm run server" "npm run client"
[0]
[0] > subscription-manager@0.1.0 server
[0] > node server/server.js
[0]
[1]
[1] > subscription-manager@0.1.0 client
[1] > react-scripts start
[1]
[0] Using subscriptions database located at: /app/server/subscriptions.db due to baseDir: /app/server
[0] Server running on port 5000
[0] Database initialized successfully with new table for user configuration.
[1] Attempting to bind to HOST environment variable: 0.0.0.0
[1] If this was unintentional, check that you haven't mistakenly set it in your shell.
[1] Learn more here: https://cra.link/advanced-config
[1]
[1] (node:67) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
[1] (Use `node --trace-deprecation ...` to show where the warning was created)
[1] (node:67) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
[1] Starting the development server...
[1]
[1] /app/node_modules/react-scripts/scripts/start.js:19
[1] throw err;
[1] ^
[1]
[1] Error: EMFILE: too many open files, watch '/app/public'
[1] at FSWatcher.<computed> (node:internal/fs/watchers:247:19)
[1] at Object.watch (node:fs:2418:34)
[1] at createFsWatchInstance (/app/node_modules/chokidar/lib/nodefs-handler.js:119:15)
[1] at setFsWatchListener (/app/node_modules/chokidar/lib/nodefs-handler.js:166:15)
[1] at NodeFsHandler._watchWithNodeFs (/app/node_modules/chokidar/lib/nodefs-handler.js:331:14)
[1] at NodeFsHandler._handleDir (/app/node_modules/chokidar/lib/nodefs-handler.js:567:19)
[1] at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
[1] at async NodeFsHandler._addToNodeFs (/app/node_modules/chokidar/lib/nodefs-handler.js:617:16)
[1] at async /app/node_modules/chokidar/index.js:451:21
[1] at async Promise.all (index 0)
[1] Emitted 'error' event on FSWatcher instance at:
[1] at FSWatcher._handleError (/app/node_modules/chokidar/index.js:647:10)
[1] at NodeFsHandler._addToNodeFs (/app/node_modules/chokidar/lib/nodefs-handler.js:645:18)
[1] at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
[1] at async /app/node_modules/chokidar/index.js:451:21
[1] at async Promise.all (index 0) {
[1] errno: -24,
[1] syscall: 'watch',
[1] code: 'EMFILE',
[1] path: '/app/public',
[1] filename: '/app/public'
[1] }
[1]
[1] Node.js v18.20.4
[1] npm run client exited with code 1
Worth noting from those logs: the published image runs the CRA dev server (react-scripts start under concurrently) rather than a production build, which is why file watching is happening at all.
Meanwhile, on my Dockerhost, docker-compose fails
builder@builder-T100:~/subscription-manager$ docker-compose up --build
2025-01-03 19:06:21,330 DEBUG [ddtrace.appsec._remoteconfiguration] [_remoteconfiguration.py:60] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - [2394021][P: 2387459] Register ASM Remote Config Callback
2025-01-03 19:06:21,330 DEBUG [ddtrace._trace.tracer] [tracer.py:868] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - finishing span name='requests.request' id=15291861347693581631 trace_id=137536349050157129029812018520136067947 parent_id=None service='requests' resource='GET /version' type='http' start=1735952781.3289847 end=1735952781.3299909 duration=0.001006056 error=1 tags={'_dd.base_service': 'bin', '_dd.injection.mode': 'host', '_dd.p.dm': '-0', '_dd.p.tid': '6778898d00000000', 'component': 'requests', 'error.message': "HTTPConnection.request() got an unexpected keyword argument 'chunked'", 'error.stack': 'Traceback (most recent call last):\n File "/opt/datadog-packages/datadog-apm-library-python/2.11.3/ddtrace_pkgs/site-packages-ddtrace-py3.10-manylinux2014/ddtrace/contrib/requests/connection.py", line 110, in _wrap_send\n response = func(*args, **kwargs)\n File "/home/builder/.local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send\n r = adapter.send(request, **kwargs)\n File "/home/builder/.local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send\n resp = conn.urlopen(\n File "/home/builder/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 793, in urlopen\n response = self._make_request(\n File "/home/builder/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 496, in _make_request\n conn.request(\nTypeError: HTTPConnection.request() got an unexpected keyword argument \'chunked\'\n', 'error.type': 'builtins.TypeError', 'http.method': 'GET', 'http.url': 'http+docker://localhost/version', 'http.useragent': 'docker-compose/1.29.2 docker-py/5.0.3 Linux/6.8.0-48-generic', 'language': 'python', 'out.host': 'localhost', 'runtime-id': '2bf576e309054c36bed0bb592427ffeb', 'span.kind': 'client'} metrics={'_dd.measured': 1, '_dd.top_level': 1, '_dd.tracer_kr': 1.0, '_sampling_priority_v1': 1, 'process_id': 2394021} links='' events='' (enabled:True)
2025-01-03 19:06:22,023 DEBUG [ddtrace.internal.telemetry.writer] [writer.py:172] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - sent 7289 in 0.11311s to telemetry/proxy/api/v2/apmtelemetry. response: 202
2025-01-03 19:06:22,207 DEBUG [ddtrace.internal.telemetry.writer] [writer.py:172] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - sent 1063 in 0.18310s to telemetry/proxy/api/v2/apmtelemetry. response: 202
2025-01-03 19:06:22,281 DEBUG [ddtrace.internal.telemetry.writer] [writer.py:172] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - sent 1060 in 0.07318s to telemetry/proxy/api/v2/apmtelemetry. response: 202
2025-01-03 19:06:22,330 DEBUG [ddtrace.internal.writer.writer] [writer.py:250] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - creating new intake connection to unix:///var/run/datadog/apm.socket with timeout 2
2025-01-03 19:06:22,330 DEBUG [ddtrace.internal.writer.writer] [writer.py:254] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Sending request: PUT v0.5/traces {'Datadog-Meta-Lang': 'python', 'Datadog-Meta-Lang-Version': '3.10.12', 'Datadog-Meta-Lang-Interpreter': 'CPython', 'Datadog-Meta-Tracer-Version': '2.11.3', 'Datadog-Client-Computed-Top-Level': 'yes', 'Datadog-Entity-ID': 'in-176992358', 'Content-Type': 'application/msgpack', 'X-Datadog-Trace-Count': '1'}
2025-01-03 19:06:22,332 DEBUG [ddtrace.internal.writer.writer] [writer.py:262] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Got response: 200 OK
2025-01-03 19:06:22,332 DEBUG [ddtrace.internal.writer.writer] [writer.py:268] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - sent 1.56KB in 0.00209s to unix:///var/run/datadog/apm.socket/v0.5/traces
2025-01-03 19:06:22,332 DEBUG [ddtrace.sampler] [sampler.py:94] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - initialized RateSampler, sample 100% of traces
2025-01-03 19:06:22,333 DEBUG [ddtrace.sampler] [sampler.py:94] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - initialized RateSampler, sample 100% of traces
2025-01-03 19:06:22,333 DEBUG [ddtrace.sampler] [sampler.py:94] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - initialized RateSampler, sample 100% of traces
2025-01-03 19:06:22,333 DEBUG [ddtrace.sampler] [sampler.py:94] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - initialized RateSampler, sample 100% of traces
2025-01-03 19:06:22,333 DEBUG [ddtrace.sampler] [sampler.py:94] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - initialized RateSampler, sample 100% of traces
2025-01-03 19:06:22,346 DEBUG [ddtrace.internal.telemetry.writer] [writer.py:172] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - sent 694 in 0.06503s to telemetry/proxy/api/v2/apmtelemetry. response: 202
2025-01-03 19:06:22,418 DEBUG [ddtrace.internal.telemetry.writer] [writer.py:172] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - sent 696 in 0.07117s to telemetry/proxy/api/v2/apmtelemetry. response: 202
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 214, in _retrieve_server_version
return self.version(api_version=False)["ApiVersion"]
File "/usr/lib/python3/dist-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 46, in inner
return f(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 237, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/home/builder/.local/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/opt/datadog-packages/datadog-apm-library-python/2.11.3/ddtrace_pkgs/site-packages-ddtrace-py3.10-manylinux2014/ddtrace/appsec/_common_module_patches.py", line 163, in wrapped_request_D8CB81E472AF98A2
return original_request_callable(*args, **kwargs)
File "/home/builder/.local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/opt/datadog-packages/datadog-apm-library-python/2.11.3/ddtrace_pkgs/site-packages-ddtrace-py3.10-manylinux2014/ddtrace/contrib/requests/connection.py", line 110, in _wrap_send
response = func(*args, **kwargs)
File "/home/builder/.local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/builder/.local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/builder/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
File "/home/builder/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
TypeError: HTTPConnection.request() got an unexpected keyword argument 'chunked'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/docker-compose", line 33, in <module>
sys.exit(load_entry_point('docker-compose==1.29.2', 'console_scripts', 'docker-compose')())
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 81, in main
command_func()
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 200, in perform_command
project = project_from_options('.', options)
File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 60, in project_from_options
return get_project(
File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 152, in get_project
client = get_client(
File "/usr/lib/python3/dist-packages/compose/cli/docker_client.py", line 41, in get_client
client = docker_client(
File "/usr/lib/python3/dist-packages/compose/cli/docker_client.py", line 170, in docker_client
client = APIClient(use_ssh_client=not use_paramiko_ssh, **kwargs)
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 197, in __init__
self._version = self._retrieve_server_version()
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 221, in _retrieve_server_version
raise DockerException(
docker.errors.DockerException: Error while fetching server API version: HTTPConnection.request() got an unexpected keyword argument 'chunked'
2025-01-03 19:06:22,423 DEBUG [ddtrace.internal.remoteconfig.worker] [worker.py:113] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - [2394021][P: 2394021] Remote Config Poller fork. Stopping Pubsub services
2025-01-03 19:06:22,423 DEBUG [ddtrace.internal.remoteconfig.worker] [worker.py:113] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - [2394021][P: 2394021] Remote Config Poller fork. Stopping Pubsub services
2025-01-03 19:06:22,424 DEBUG [ddtrace.internal.runtime.runtime_metrics] [runtime_metrics.py:146] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Writing metric runtime.python.gc.count.gen0:43
2025-01-03 19:06:22,424 DEBUG [ddtrace.internal.runtime.runtime_metrics] [runtime_metrics.py:146] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Writing metric runtime.python.gc.count.gen1:4
2025-01-03 19:06:22,424 DEBUG [ddtrace.internal.runtime.runtime_metrics] [runtime_metrics.py:146] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Writing metric runtime.python.gc.count.gen2:7
2025-01-03 19:06:22,425 DEBUG [ddtrace.internal.runtime.runtime_metrics] [runtime_metrics.py:146] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Writing metric runtime.python.cpu.time.sys:0.13
2025-01-03 19:06:22,425 DEBUG [ddtrace.internal.runtime.runtime_metrics] [runtime_metrics.py:146] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Writing metric runtime.python.cpu.time.user:1.05
2025-01-03 19:06:22,425 DEBUG [ddtrace.internal.runtime.runtime_metrics] [runtime_metrics.py:146] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Writing metric runtime.python.cpu.ctx_switch.voluntary:23
2025-01-03 19:06:22,425 DEBUG [ddtrace.internal.runtime.runtime_metrics] [runtime_metrics.py:146] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Writing metric runtime.python.cpu.ctx_switch.involuntary:45
2025-01-03 19:06:22,425 DEBUG [ddtrace.internal.runtime.runtime_metrics] [runtime_metrics.py:146] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Writing metric runtime.python.thread_count:3
2025-01-03 19:06:22,425 DEBUG [ddtrace.internal.runtime.runtime_metrics] [runtime_metrics.py:146] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Writing metric runtime.python.mem.rss:60596224
2025-01-03 19:06:22,425 DEBUG [ddtrace.internal.runtime.runtime_metrics] [runtime_metrics.py:146] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Writing metric runtime.python.cpu.percent:0.0
2025-01-03 19:06:22,426 DEBUG [ddtrace._trace.tracer] [tracer.py:313] [dd.service=bin dd.env= dd.version= dd.trace_id=0 dd.span_id=0] - Waiting 5 seconds for tracer to finish. Hit ctrl-c to quit.
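Buried in all that Datadog tracer noise, the actual failure is the TypeError: HTTPConnection.request() got an unexpected keyword argument 'chunked'. That’s a known incompatibility between the legacy Python docker-compose 1.29 (docker-py) and newer requests/urllib3 on the host, nothing to do with Subscription Manager itself. The usual fix is the Compose v2 plugin, which is a Go binary and skips docker-py entirely:

# Compose v2 is invoked as a docker subcommand (no hyphen)
$ docker compose up --build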
Rather than untangling that, I’ll try to just launch it with docker
builder@builder-T100:~/subscription-manager$ docker run -p 3123:3000 dh1011/subscription-manager:latest
Unable to find image 'dh1011/subscription-manager:latest' locally
latest: Pulling from dh1011/subscription-manager
7d98d813d54f: Pull complete
da802df85c96: Pull complete
7aadc5092c3b: Pull complete
ad1c7cfc347f: Pull complete
59ee42d02ee5: Pull complete
eb173c1dbe92: Pull complete
f110c757afc5: Pull complete
e3d8693bad2f: Pull complete
9479facfe0ae: Pull complete
41af84c30d62: Pull complete
58b423443908: Pull complete
35a555b1248f: Pull complete
0a3f2a36948b: Pull complete
Digest: sha256:3517b960983162504b304d0c70d849a7093744ce76e4c0a144e8164fdd0b5087
Status: Downloaded newer image for dh1011/subscription-manager:latest
> subscription-manager@0.1.0 dev
> concurrently "npm run server" "npm run client"
[1]
[1] > subscription-manager@0.1.0 client
[1] > react-scripts start
[1]
[0]
[0] > subscription-manager@0.1.0 server
[0] > node server/server.js
[0]
[0] Using subscriptions database located at: /app/server/subscriptions.db due to baseDir: /app/server
[0] Server running on port 5000
[0] Database initialized successfully with new table for user configuration.
[1] (node:66) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
[1] (Use `node --trace-deprecation ...` to show where the warning was created)
[1] (node:66) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
[1] Starting the development server...
[1]
[1] One of your dependencies, babel-preset-react-app, is importing the
[1] "@babel/plugin-proposal-private-property-in-object" package without
[1] declaring it in its dependencies. This is currently working because
[1] "@babel/plugin-proposal-private-property-in-object" is already in your
[1] node_modules folder for unrelated reasons, but it may break at any time.
[1]
[1] babel-preset-react-app is part of the create-react-app project, which
[1] is not maintianed anymore. It is thus unlikely that this bug will
[1] ever be fixed. Add "@babel/plugin-proposal-private-property-in-object" to
[1] your devDependencies to work around this error. This will make this message
[1] go away.
[1]
[1] Compiled with warnings.
[1]
[1] [eslint]
[1] src/components/SubscriptionList.js
[1] Line 39:9: 'currencySymbol' is assigned a value but never used no-unused-vars
[1]
[1] src/components/SubscriptionModal.js
[1] Line 60:9: 'convertToMonthlyAmount' is assigned a value but never used no-unused-vars
[1]
[1] Search for the keywords to learn more about each warning.
[1] To ignore, add // eslint-disable-next-line to the line before.
[1]
[1] WARNING in [eslint]
[1] src/components/SubscriptionList.js
[1] Line 39:9: 'currencySymbol' is assigned a value but never used no-unused-vars
[1]
[1] src/components/SubscriptionModal.js
[1] Line 60:9: 'convertToMonthlyAmount' is assigned a value but never used no-unused-vars
[1]
[1] webpack compiled with 1 warning
[1] Compiling...
[1] Compiled with warnings.
[1]
[1] [eslint]
[1] src/components/SubscriptionList.js
[1] Line 39:9: 'currencySymbol' is assigned a value but never used no-unused-vars
[1]
[1] src/components/SubscriptionModal.js
[1] Line 60:9: 'convertToMonthlyAmount' is assigned a value but never used no-unused-vars
[1]
[1] Search for the keywords to learn more about each warning.
[1] To ignore, add // eslint-disable-next-line to the line before.
[1]
[1] WARNING in [eslint]
[1] src/components/SubscriptionList.js
[1] Line 39:9: 'currencySymbol' is assigned a value but never used no-unused-vars
[1]
[1] src/components/SubscriptionModal.js
[1] Line 60:9: 'convertToMonthlyAmount' is assigned a value but never used no-unused-vars
[1]
[1] webpack compiled with 1 warning
This worked!
I killed it and relaunched with -d
builder@builder-T100:~/subscription-manager$ docker run -d -p 3123:3000 dh1011/subscription-manager:latest
2ec71ffbc3149ce8c432b4951ea050f7bb9b2e61a29ca5173ec1a54cc628bea3
However, the problem is that it has no auth. That said, I did find it easy to add some of my streaming services.
Updating
I wanted to use Gemini Code Assist to add a username and password.
I had it propose some changes
then built with Docker
I had to make some fixes to the code, but Gemini did help with that.
I then pushed to my local CR
builder@DESKTOP-QADGF36:~/Workspaces/subscription-manager$ docker push harbor.freshbrewed.science/freshbrewedprivate/subscriptionmanager:0.1
The push refers to repository [harbor.freshbrewed.science/freshbrewedprivate/subscriptionmanager]
b4a680e8a6b2: Pushed
2a029db1caa0: Pushed
2792ea95a5eb: Pushed
ac0256dd5b72: Pushed
9adc247b9ed4: Pushed
40689c24a66a: Pushed
fef83a3a8b40: Pushed
57edb89ea088: Pushed
6edb22a949de: Pushed
6d58389117c3: Pushed
85c6f0cfb532: Pushed
a4fd1e7df47e: Pushed
2f7b6d216a37: Pushed
0.1: digest: sha256:93f95c6458cf78c1ac9887704c96ceec5d293c5ab1a8fc01c9ca20eb43b72837 size: 3054
I then built and pushed to Dockerhub
builder@DESKTOP-QADGF36:~/Workspaces/subscription-manager$ docker build -t idjohnson/subscription-manager:0.1 .
[+] Building 0.5s (11/11) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 492B 0.0s
=> [internal] load metadata for docker.io/library/node:18 0.4s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 461B 0.0s
=> [1/6] FROM docker.io/library/node:18@sha256:7f31a1eb14c61719b8bb0eaa029310cc33851f71d3578cc422b390f8096977c5 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 2.26kB 0.0s
=> CACHED [2/6] WORKDIR /app 0.0s
=> CACHED [3/6] COPY package*.json ./ 0.0s
=> CACHED [4/6] RUN npm install 0.0s
=> CACHED [5/6] COPY . . 0.0s
=> CACHED [6/6] RUN npm run build 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:e13608c3070282babf2d9f25fd4a4ae82d4f14d58e6d8b3b67f509998c342a85 0.0s
=> => naming to docker.io/idjohnson/subscription-manager:0.1 0.0s
3 warnings found (use docker --debug to expand):
- LegacyKeyValueFormat: "ENV key=value" should be used instead of legacy "ENV key value" format (line 19)
- LegacyKeyValueFormat: "ENV key=value" should be used instead of legacy "ENV key value" format (line 20)
- SecretsUsedInArgOrEnv: Do not use ARG or ENV instructions for sensitive data (ENV "REACT_APP_PASSWORD") (line 20)
builder@DESKTOP-QADGF36:~/Workspaces/subscription-manager$ docker push idjohnson/subscription-manager:0.1
The push refers to repository [docker.io/idjohnson/subscription-manager]
b4a680e8a6b2: Layer already exists
2a029db1caa0: Layer already exists
2792ea95a5eb: Layer already exists
ac0256dd5b72: Pushed
9adc247b9ed4: Layer already exists
40689c24a66a: Layer already exists
fef83a3a8b40: Layer already exists
57edb89ea088: Layer already exists
6edb22a949de: Layer already exists
6d58389117c3: Layer already exists
85c6f0cfb532: Layer already exists
a4fd1e7df47e: Layer already exists
2f7b6d216a37: Layer already exists
0.1: digest: sha256:4932517c74b845a64331038bec24ccd2bf78857bf34b30c0a97b07e4e2910b32 size: 3054
I then edited the deployment to use the new image
builder@DESKTOP-QADGF36:~$ kubectl edit deployment subscription-manager
deployment.apps/subscription-manager edited
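Equivalently, kubectl set image is a one-liner (assuming the container name subscription-manager from the manifest above):

$ kubectl set image deployment/subscription-manager subscription-manager=idjohnson/subscription-manager:0.1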
A local build didn’t work (as before), but I can now try the pushed image locally
builder@builder-T100:~$ docker run -d -p 3213:3000 idjohnson/subscription-manager:0.1
Unable to find image 'idjohnson/subscription-manager:0.1' locally
0.1: Pulling from idjohnson/subscription-manager
0a96bdb82805: Pull complete
54c7be425079: Pull complete
7aa8176e6d89: Pull complete
1523f4b3f560: Pull complete
4bc452607135: Pull complete
d71f59450bae: Pull complete
9e4ed576e08d: Pull complete
b22066964b68: Pull complete
c80ed289cf08: Pull complete
876f717b8f34: Pull complete
1f8bbc9faf53: Pull complete
0ed5f79e5d32: Pull complete
5721b86a0674: Pull complete
Digest: sha256:4932517c74b845a64331038bec24ccd2bf78857bf34b30c0a97b07e4e2910b32
Status: Downloaded newer image for idjohnson/subscription-manager:0.1
0aa6a38c367f2ff687a10d5dc4714a87b0e2dd35faf6fb2c954292449980f080
While the user and password fields were very small, it worked
I can override the user and password
builder@builder-T100:~$ docker run -d -e REACT_APP_USERNAME=myuser -e REACT_APP_PASSWORD=mypassword -p 3213:3000 idjohnson/subscription-manager:0.2
13f359f82d2ede5310b644a42e128c6fe7fb2b537d498458c265cdbbc73439ff
Ingress
Let me try and make an ingress.
I’ll create an A Record in Azure DNS
builder@DESKTOP-QADGF36:~$ az account set --subscription "Pay-As-You-Go" && az network dns record-set a add-record -g idjdnsrg -z tpk.pw -a 75.73.224.240 -n submgr
{
"ARecords": [
{
"ipv4Address": "75.73.224.240"
}
],
"TTL": 3600,
"etag": "c5d7f2c9-693c-435b-87e9-c9c18a26d568",
"fqdn": "submgr.tpk.pw.",
"id": "/subscriptions/d955c0ba-13dc-44cf-a29a-8fed74cbb22d/resourceGroups/idjdnsrg/providers/Microsoft.Network/dnszones/tpk.pw/A/submgr",
"name": "submgr",
"provisioningState": "Succeeded",
"resourceGroup": "idjdnsrg",
"targetResource": {},
"trafficManagementProfile": {},
"type": "Microsoft.Network/dnszones/A"
}
I can then create an Endpoints object, Service, and Ingress: essentially a headless Service backed by manual Endpoints that point at my Docker host (192.168.1.100:3213)
---
apiVersion: v1
kind: Endpoints
metadata:
name: submanagersvc
subsets:
- addresses:
- ip: 192.168.1.100
ports:
- name: submgrint
port: 3213
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: submanagersvc
spec:
clusterIP: None
clusterIPs:
- None
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
- IPv6
ipFamilyPolicy: RequireDualStack
ports:
- name: submgrint
port: 80
protocol: TCP
targetPort: 3213
sessionAffinity: None
type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: azuredns-tpkpw
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.org/websocket-services: submanagersvc
generation: 1
name: submgringress
spec:
rules:
- host: submgr.tpk.pw
http:
paths:
- backend:
service:
name: submanagersvc
port:
number: 80
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- submgr.tpk.pw
secretName: submgr-tls
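I saved that as submgr.app.yaml (the same file I delete in the cleanup at the end) and applied it:

$ kubectl apply -f ./submgr.app.yaml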
When I see the cert is ready
$ kubectl get cert submgr-tls
NAME READY SECRET AGE
submgr-tls True submgr-tls 62s
However, it still does not seem to work
I even switched the ports back to 3123 and saw the Invalid Host header error on the original container
It was really bugging me, so I worked at it for an hour only to figure out the issue was the hardcoded proxy line in package.json (Doh!)
I invoked with
builder@builder-T100:~$ docker run -d -e NODE_ENV=production -e HOST=0.0.0.0 -e REACT_APP_USERNAME=myusername -e REACT_APP_PASSWORD=mypassword -p 3213:3000 idjohnson/subscription-manager:0.4
Unable to find image 'idjohnson/subscription-manager:0.4' locally
0.4: Pulling from idjohnson/subscription-manager
0a96bdb82805: Already exists
54c7be425079: Already exists
7aa8176e6d89: Already exists
1523f4b3f560: Already exists
4bc452607135: Already exists
d71f59450bae: Already exists
9e4ed576e08d: Already exists
b22066964b68: Already exists
c80ed289cf08: Already exists
646565a11c5b: Pull complete
3e9ddee8fcf3: Pull complete
0234446d870b: Pull complete
f8c9a5b48f8b: Pull complete
Digest: sha256:0d7a1d51828711990c5103b1879aaecd752433cfe4adfa3b302a1f5817e074e6
Status: Downloaded newer image for idjohnson/subscription-manager:0.4
6d95b3c72dfa567085e5f378a669d391ac2c76412f717cdf93170e30f8ef270b
Now I can see a basic login prompt
However, I think it’s failing to persist, so I cannot save anything in the forms.
I tried to get Gemini to help fix the saving, but it was paranoid and really wanted me to create brand new token and cookie logic (though it didn’t say how)
Copilot Free
I’m a bit uneasy about this, but I’m going to try Copilot “For Free”
I’ll prompt
My React App.js prompts for a username and login which works. However, once the user logs in, the username and password fail to persist. Can you suggest updates to App.js to solve this problem?
It feels a bit weird, but I have Gemini Code Assist on my left and Copilot on my right. I saved the current state (which is borked) before applying the suggested edits from Copilot
One thing I had yet to experience was “Apply in Editor”: it applies changes the way PR feedback might, where I need to “Accept” each change
I continued to iterate on errors. For instance, while the login now persists, the forms fail to update, and I think it’s due to Axios authentication issues on static resources.
And it fails locally too
I’m going to pause here for now.. I welcome some review (I might circle back), but here is my fork: https://github.com/idjohnson/subscription-manager
Trying old container
I cloned down the original repo and just removed the proxy line (in a CRA app, that field tells the webpack dev server where to forward unrecognized requests, here the Express backend on port 5000)
builder@DESKTOP-QADGF36:~/Workspaces/subscription-manager2$ git diff package.json
diff --git a/package.json b/package.json
index 07d2fba..531ef00 100644
--- a/package.json
+++ b/package.json
@@ -56,7 +56,6 @@
"last 1 safari version"
]
},
- "proxy": "http://localhost:5000",
"devDependencies": {
"@iconify-icon/react": "^2.1.0"
I decided to just build and push as 0.0
builder@DESKTOP-QADGF36:~/Workspaces/subscription-manager2$ docker build -t idjohnson/subscription-manager:0.0 . && docker push idjohnson/subscription-manager:0.0
[+] Building 26.8s (12/12) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 427B 0.0s
=> [internal] load metadata for docker.io/library/node:18 0.6s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 461B 0.0s
=> [1/6] FROM docker.io/library/node:18@sha256:7f31a1eb14c61719b8bb0eaa029310cc33851f71d3578cc422b390f8096977c5 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 1.64MB 0.0s
=> CACHED [2/6] WORKDIR /app 0.0s
=> CACHED [3/6] COPY package*.json ./ 0.0s
=> CACHED [4/6] RUN npm install 0.0s
=> [5/6] COPY . . 0.1s
=> [6/6] RUN npm run build 25.8s
=> exporting to image 0.2s
=> => exporting layers 0.2s
=> => writing image sha256:27245602043d7433421ecf17b1095eb7fa0e99393ac45f6f20f0e2d7145d9758 0.0s
=> => naming to docker.io/idjohnson/subscription-manager:0.0 0.0s
The push refers to repository [docker.io/idjohnson/subscription-manager]
f7a3b43b4f55: Pushed
6c330af6dcc7: Pushed
701d44e1ab46: Layer already exists
9e9d41de6ffd: Layer already exists
9adc247b9ed4: Layer already exists
40689c24a66a: Layer already exists
fef83a3a8b40: Layer already exists
57edb89ea088: Layer already exists
6edb22a949de: Layer already exists
6d58389117c3: Layer already exists
85c6f0cfb532: Layer already exists
a4fd1e7df47e: Layer already exists
2f7b6d216a37: Layer already exists
0.0: digest: sha256:b71ebda87662d69ad12962a0f138835070f5f54ad28fe9b7684f0aeb521cbac6 size: 3054
I’ll fire this off without the login and password bits
builder@builder-T100:~$ docker run -d -e NODE_ENV=production -e HOST=0.0.0.0 -p 3213:3000 idjohnson/subscription-manager:0.0
Unable to find image 'idjohnson/subscription-manager:0.0' locally
0.0: Pulling from idjohnson/subscription-manager
0a96bdb82805: Already exists
54c7be425079: Already exists
7aa8176e6d89: Already exists
1523f4b3f560: Already exists
4bc452607135: Already exists
d71f59450bae: Already exists
9e4ed576e08d: Already exists
b22066964b68: Already exists
c80ed289cf08: Already exists
646565a11c5b: Already exists
3e9ddee8fcf3: Already exists
c963bb3dc160: Pull complete
9de6cb2dc919: Pull complete
Digest: sha256:b71ebda87662d69ad12962a0f138835070f5f54ad28fe9b7684f0aeb521cbac6
Status: Downloaded newer image for idjohnson/subscription-manager:0.0
3475af07e349cf74a2a48594dce4865b8af4121444bcf5023308100b484f33bd
However, even this fails, again with the Axios stuff. This is really confusing, as the 3123 port does work, and the only difference is that proxy setting.
I need to at least do like for like; there should really be no difference between these two (save for the proxy)
I rebuilt exactly what is on his main branch (proxy set) and tested locally
builder@builder-T100:~$ docker run -d -e NODE_ENV=production -e HOST=0.0.0.0 -p 3213:3000 idjohnson/subscription-manager:0.0a
Unable to find image 'idjohnson/subscription-manager:0.0a' locally
0.0a: Pulling from idjohnson/subscription-manager
0a96bdb82805: Already exists
54c7be425079: Already exists
7aa8176e6d89: Already exists
1523f4b3f560: Already exists
4bc452607135: Already exists
d71f59450bae: Already exists
9e4ed576e08d: Already exists
b22066964b68: Already exists
c80ed289cf08: Already exists
876f717b8f34: Already exists
1f8bbc9faf53: Already exists
414325c510a4: Pull complete
2d0d12ff0468: Pull complete
Digest: sha256:612f2137fc5e6de495ffad03ac75b1faaf8e773ba81bd654b58cd39580817212
Status: Downloaded newer image for idjohnson/subscription-manager:0.0a
67184ff3eae0cb95881aff7ce2ac9660bbbf977b9c8bd50d456851b76d2f916d
Clearly that proxy is needed for the client and server to communicate. But somehow I need to fix the webpack dev server to properly allow external IPs.
As we can see, it now works (as well as the original), but I’m back to the Invalid Host header issue when accessing externally
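One dev-only workaround I haven’t tried here: the CRA dev server rejects unfamiliar Host headers by design, and react-scripts honors a DANGEROUSLY_DISABLE_HOST_CHECK environment variable that turns the check off (the name makes the security tradeoff clear):

$ docker run -d -e NODE_ENV=production -e HOST=0.0.0.0 \
  -e DANGEROUSLY_DISABLE_HOST_CHECK=true \
  -p 3213:3000 idjohnson/subscription-manager:0.0a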
I would love to keep banging away on this, and I might. But for now, I’ll put a pin in it
Cleanup
I’ll clean up so it’s not lingering
builder@DESKTOP-QADGF36:~$ kubectl delete -f ./submgr.app.yaml
endpoints "submanagersvc" deleted
service "submanagersvc" deleted
ingress.networking.k8s.io "submgringress" deleted
Summary
Today we explored two apps, Pingvin and Subscription Manager. They both work; however, I really struggled to externalize Subscription Manager.
I went through a lot of iterations with Gemini Code Assist (and, at the end, a bit of Copilot Free tier) to try to fix it, but I’m honestly still a bit stuck. That damnable Invalid Host header error when the proxy is set just stumps me (as it works with internal IPs).
I left my fork here in case someone else has some ideas. I might just try again later with Claude or some other AI tools. It would seem we just need some way to expose it without making it fully public.
And just as a reminder, when we tried to fire it up in a cluster, the pod craps out
builder@DESKTOP-QADGF36:~$ kubectl logs subscription-manager-67c664457d-br58k
> subscription-manager@0.1.0 dev
> concurrently "npm run server" "npm run client"
[0]
[0] > subscription-manager@0.1.0 server
[0] > node server/server.js
[0]
[1]
[1] > subscription-manager@0.1.0 client
[1] > react-scripts start
[1]
[0] Using subscriptions database located at: /app/subscriptions.db
[0] Server running on port 5000
[0] Database initialized successfully with new table for user configuration.
[1] Attempting to bind to HOST environment variable: 0.0.0.0
[1] If this was unintentional, check that you haven't mistakenly set it in your shell.
[1] Learn more here: https://cra.link/advanced-config
[1]
[1] (node:65) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
[1] (Use `node --trace-deprecation ...` to show where the warning was created)
[1] (node:65) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
[1] Starting the development server...
[1]
[1] /app/node_modules/react-scripts/scripts/start.js:19
[1] throw err;
[1] ^
[1]
[1] Error: EMFILE: too many open files, watch '/app/public'
[1] at FSWatcher.<computed> (node:internal/fs/watchers:247:19)
[1] at Object.watch (node:fs:2418:34)
[1] at createFsWatchInstance (/app/node_modules/chokidar/lib/nodefs-handler.js:119:15)
[1] at setFsWatchListener (/app/node_modules/chokidar/lib/nodefs-handler.js:166:15)
[1] at NodeFsHandler._watchWithNodeFs (/app/node_modules/chokidar/lib/nodefs-handler.js:331:14)
[1] at NodeFsHandler._handleDir (/app/node_modules/chokidar/lib/nodefs-handler.js:567:19)
[1] at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
[1] at async NodeFsHandler._addToNodeFs (/app/node_modules/chokidar/lib/nodefs-handler.js:617:16)
[1] at async /app/node_modules/chokidar/index.js:451:21
[1] at async Promise.all (index 0)
[1] Emitted 'error' event on FSWatcher instance at:
[1] at FSWatcher._handleError (/app/node_modules/chokidar/index.js:647:10)
[1] at NodeFsHandler._addToNodeFs (/app/node_modules/chokidar/lib/nodefs-handler.js:645:18)
[1] at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
[1] at async /app/node_modules/chokidar/index.js:451:21
[1] at async Promise.all (index 0) {
[1] errno: -24,
[1] syscall: 'watch',
[1] code: 'EMFILE',
[1] path: '/app/public',
[1] filename: '/app/public'
[1] }
[1]
[1] Node.js v18.20.5
[1] npm run client exited with code 1
So perhaps the Kubernetes hosting just needs some kind of fs limit raised for the container.
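If I circle back, the first thing I’d try is raising the inotify limits on the k3s node, since it’s chokidar’s file watcher that is dying (an assumption on my part; errno -24 EMFILE can also just be the plain nofile ulimit):

# On the k3s node; values are my guess at sane defaults, not tested here
$ sudo sysctl -w fs.inotify.max_user_instances=512
$ sudo sysctl -w fs.inotify.max_user_watches=524288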
Regardless, I found myself hitting a point where I just said enough and decided to move on.