K8s and Redis: a tale of Layer 4 Ingress

Published: Mar 4, 2020 by Isaac Johnson

One thing that Kubernetes does particularly well is Layer 7 routing with Ingress.  If you have an HTTP/HTTPS service, there are a slew of tutorials out there as well as best practices for routing traffic from outside your cluster to web services on pods - most of them leveraging Nginx.

But what about non-http traffic?  Not everything in the world is REST based. Not everything is a “web service”.  How can we expose non-http services running in a cluster to the outside world?

We’re going to try two paths, the first being Nginx, and we’ll use Redis, the classic key-value store, as our test service.

Nginx Approach

Setting up your cluster

This assumes a cluster has been set up for you.  In my case, I used an existing AKS setup with RBAC.  I wanted to start with Helm 2.x, as I’m generally more comfortable with tiller-based Helm.

Helm 2

First, let’s install a basic Redis with Helm (2.x).

Getting Helm set up:

$ cat ~/Documents/helm_init.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "200"
        image: gcr.io/kubernetes-helm/tiller:v2.16.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}
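
Note that the Deployment above runs as a tiller service account that this file does not create.  On an RBAC cluster you would typically create the account and binding first; a minimal sketch (cluster-admin is broad, so scope it down where you can):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system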

Next, apply the tiller manifest:

$ kubectl apply -f ~/Documents/helm_init.yaml 
deployment.apps/tiller-deploy created
service/tiller-deploy configured

$ helm version
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}

Installing Redis

Next, let’s install a passwordless Redis.  In our case, “helm” is Helm 2.x.

$ helm install stable/redis --set usePassword=false
NAME: mothy-warthog
LAST DEPLOYED: Sun Mar 1 15:30:22 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME AGE
mothy-warthog-redis 1s
mothy-warthog-redis-health 1s

==> v1/Pod(related)
NAME AGE
mothy-warthog-redis-master-0 0s
mothy-warthog-redis-slave-0 0s

==> v1/Service
NAME AGE
mothy-warthog-redis-headless 1s
mothy-warthog-redis-master 0s
mothy-warthog-redis-slave 1s

==> v1/StatefulSet
NAME AGE
mothy-warthog-redis-master 0s
mothy-warthog-redis-slave 0s


NOTES:
**Please be patient while the chart is being deployed**
Redis can be accessed via port 6379 on the following DNS names from within your cluster:

mothy-warthog-redis-master.default.svc.cluster.local for read/write operations
mothy-warthog-redis-slave.default.svc.cluster.local for read-only operations



To connect to your Redis server:

1. Run a Redis pod that you can use as a client:

   kubectl run --namespace default mothy-warthog-redis-client --rm --tty -i --restart='Never' \
   --image docker.io/bitnami/redis:5.0.7-debian-10-r0 -- bash

2. Connect using the Redis CLI:
   redis-cli -h mothy-warthog-redis-master
   redis-cli -h mothy-warthog-redis-slave

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/mothy-warthog-redis-master 6379:6379 &
    redis-cli -h 127.0.0.1 -p 6379

We can use helm list to see our deployment:

$ helm list
NAME           REVISION  UPDATED                   STATUS    CHART                  APP VERSION  NAMESPACE
giddy-bronco   1         Thu Jan 9 22:39:19 2020   DEPLOYED  sonatype-nexus-1.21.3  3.20.1-01    ingress-basic
hardy-turkey   1         Thu Jan 9 22:29:49 2020   DEPLOYED  nginx-ingress-1.28.2   0.26.2       ingress-basic
mothy-warthog  1         Sun Mar 1 15:30:22 2020   DEPLOYED  redis-10.4.1           5.0.7        default

We can see the services that were created:

$ kubectl get svc
NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes                     ClusterIP   10.0.0.1       <none>        443/TCP    102d
mothy-warthog-redis-headless   ClusterIP   None           <none>        6379/TCP   96m
mothy-warthog-redis-master     ClusterIP   10.0.111.136   <none>        6379/TCP   96m
mothy-warthog-redis-slave      ClusterIP   10.0.241.205   <none>        6379/TCP   96m

We should be able to access the master from a peered VNet at 10.0.111.136.

Testing Redis

On my host, let’s quickly download and install redis-cli (per https://codewithhugo.com/install-just-redis-cli-on-ubuntu-debian-jessie/):

cd /tmp
wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make
cp src/redis-cli /usr/local/bin/
chmod 755 /usr/local/bin/redis-cli

Access the master directly using a local shell. If you don’t have one, you can fire up an Ubuntu shell in k8s with: kubectl run myshell --rm -i --tty --image ubuntu -n yournamespace -- bash
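
A stock Ubuntu image won’t have redis-cli, so inside that shell you would install it first (redis-tools is the Debian/Ubuntu package that provides it):

apt-get update && apt-get install -y redis-tools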

root@my-shell-75b487f578-5vljv:/# redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected> open 10.0.111.136 6379
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected> connect 10.0.111.136 6379
10.0.111.136:6379>

Adding Ingress

Now let’s stand up a second, internal ingress controller.  First, a values file to request an internal load balancer:

$ cat internal-ingress.yaml 
controller:
  service:
    loadBalancerIP: 10.240.64.65
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"

I first tried 10.244.0.50 to stay in the same IP space, but AKS would not satisfy that request (likely because 10.244.0.0/16 is the pod CIDR, and an internal load balancer IP has to come from the VNet subnet), so I moved to 10.240.64.65.

As I already have an ingress, let’s make another namespace for this one.  Note that I also tried to set a TCP route rule for 6379 explicitly here with --set tcp.6379="default/mothy-warthog-redis-master-0:6379"
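
For the record, the same TCP mapping can live in the values file instead of a --set flag.  One thing I’d flag in hindsight: the mapping expects namespace/service:port, and mothy-warthog-redis-master-0 is a pod name, not a Service name, so a sketch like this (pointing at the Service) might have fared better:

controller:
  service:
    loadBalancerIP: 10.240.64.65
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
tcp:
  # controller port -> namespace/service:port (a Service, not a pod)
  "6379": "default/mothy-warthog-redis-master:6379"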

$ kubectl create namespace ingress-basic2
namespace/ingress-basic2 created
$ helm install stable/nginx-ingress --namespace ingress-basic2 -f internal-ingress.yaml --set controller.replicaCount=2 --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux --set tcp.6379="default/mothy-warthog-redis-master-0:6379"
NAME: garish-starfish
LAST DEPLOYED: Sun Mar 1 18:03:11 2020
NAMESPACE: ingress-basic2
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME AGE
garish-starfish-nginx-ingress 1s

==> v1/ClusterRoleBinding
NAME AGE
garish-starfish-nginx-ingress 1s

==> v1/Deployment
NAME AGE
garish-starfish-nginx-ingress-controller 1s
garish-starfish-nginx-ingress-default-backend 1s

==> v1/Pod(related)
NAME AGE
garish-starfish-nginx-ingress-controller-64cbccc867-4kv7l 0s
garish-starfish-nginx-ingress-controller-64cbccc867-xx7cz 0s
garish-starfish-nginx-ingress-default-backend-75b8688db4-d2zzb 0s

==> v1/Role
NAME AGE
garish-starfish-nginx-ingress 1s

==> v1/RoleBinding
NAME AGE
garish-starfish-nginx-ingress 1s

==> v1/Service
NAME AGE
garish-starfish-nginx-ingress-controller 1s
garish-starfish-nginx-ingress-default-backend 1s

==> v1/ServiceAccount
NAME AGE
garish-starfish-nginx-ingress 1s
garish-starfish-nginx-ingress-backend 1s

==> v1beta1/PodDisruptionBudget
NAME AGE
garish-starfish-nginx-ingress-controller 1s


NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-basic2 get services -o wide -w garish-starfish-nginx-ingress-controller'

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

Getting our IP:

$ kubectl get service -l app=nginx-ingress --namespace ingress-basic2
NAME                                            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
garish-starfish-nginx-ingress-controller        LoadBalancer   10.0.201.91    <pending>     80:31252/TCP,443:30635/TCP   2m37s
garish-starfish-nginx-ingress-default-backend   ClusterIP      10.0.102.118   <none>        80/TCP                       2m37s

A little later (and after a reinstall, hence the new release name), the internal IP is assigned and 6379 shows up in the controller’s port list:

$ kubectl get service -l app=nginx-ingress --namespace ingress-basic2
NAME                                         TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                                     AGE
measly-tapir-nginx-ingress-controller        LoadBalancer   10.0.120.5    10.240.64.65   80:30138/TCP,443:32745/TCP,6379:30445/TCP   95s
measly-tapir-nginx-ingress-default-backend   ClusterIP      10.0.119.52   <none>         80/TCP                                      95s

You can see from the service events that the load balancer takes a minute to come up (with a retryable failure along the way):

Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Warning CreatingLoadBalancerFailed 53s service-controller Error creating load balancer (will retry): failed to ensure load balancer for service ingress-basic2/gaudy-alpaca-nginx-ingress-controller: EnsureHostInPool(ingress-basic2/gaudy-alpaca-nginx-ingress-controller): backendPoolID(/subscriptions/47953ca1-1c1a-4283-a2f7-537a0d2c7660/resourceGroups/mc_uscd-aksdev6-rg_uscd-aks-dev6_centralus/providers/Microsoft.Network/loadBalancers/kubernetes-internal/backendAddressPools/kubernetes) - failed to ensure host in pool: "compute.VirtualMachineScaleSetVMsClient#Update: Failure sending request: StatusCode=0 -- Original Error: Code=\"RetryableError\" Message=\"A retryable error occurred.\" Details=[{\"code\":\"RetryableErrorDueToAnotherOperation\",\"message\":\"Operation ValidateVMScaleSetOperation (413967b3-9036-488a-be6b-2e1a00035937) is updating resource /subscriptions/47953ca1-1c1a-4283-a2f7-537a0d2c7660/resourceGroups/MC_USCD-AKSDEV6-RG_USCD-AKS-DEV6_CENTRALUS/providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-80928115-vmss/validate. The call can be retried in 14 seconds.\"}]"
  Normal EnsuringLoadBalancer 48s (x2 over 89s) service-controller Ensuring load balancer
  Normal EnsuredLoadBalancer 11s service-controller Ensured load balancer

The problem ended up being that it just would not connect.

Starting over

Let’s create a fresh AKS cluster.  In my last run I used a long-running cluster on an older Kubernetes version that already had HTTP ingress running.

C:\Users\isaac>az group create --name idjaksdemo --location centralus
{
  "id": "/subscriptions/d955c0ba-13dc-44cf-a29a-8fed74cbb22d/resourceGroups/idjaksdemo",
  "location": "centralus",
  "managedBy": null,
  "name": "idjaksdemo",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}


C:\Users\isaac>az aks create --resource-group idjaksdemo --name idjaks01 --enable-rbac --node-count 3
Argument 'enable_rbac' has been deprecated and will be removed in a future release. Use '--disable-rbac' instead.
{
  "aadProfile": null,
  "addonProfiles": null,
  "agentPoolProfiles": [
    {
      "availabilityZones": null,
      "count": 3,
      "enableAutoScaling": null,
      "enableNodePublicIp": null,
      "maxCount": null,
      "maxPods": 110,
      "minCount": null,
      "name": "nodepool1",
      "nodeLabels": null,
      "nodeTaints": null,
      "orchestratorVersion": "1.14.8",
      "osDiskSizeGb": 100,
      "osType": "Linux",
      "provisioningState": "Succeeded",
      "scaleSetEvictionPolicy": null,
      "scaleSetPriority": null,
      "tags": null,
      "type": "VirtualMachineScaleSets",
      "vmSize": "Standard_DS2_v2",
      "vnetSubnetId": null
    }
  ],
  "apiServerAccessProfile": null,
  "dnsPrefix": "idjaks01-idjaksdemo-d955c0",
  "enablePodSecurityPolicy": null,
  "enableRbac": true,
  "fqdn": "idjaks01-idjaksdemo-d955c0-9ac53587.hcp.centralus.azmk8s.io",
  "id": "/subscriptions/d955c0ba-13dc-44cf-a29a-8fed74cbb22d/resourcegroups/idjaksdemo/providers/Microsoft.ContainerService/managedClusters/idjaks01",
  "identity": null,
  "identityProfile": null,
  "kubernetesVersion": "1.14.8",
  "linuxProfile": {
    "adminUsername": "azureuser",
    "ssh": {
      "publicKeys": [
        {
          "keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDihWpPWuhb/J/hB3D3jPVoUCG4gjaWknbnZOVgOKszV5UTgpAB2BnvOfMZMjWr43M/bb9tmBxCCw6Mw0m8phuaskHP6rrsCHKBQw7CXAmDkKgH8+6ul3MYhyiEC4tjuWQXd17oWo7RGbA/EE5hIJzx5fDBbe2qjPdVJKFUq8TEwzdHCT4LRDGjCUaeu1qhxmJszCsQaAJqH7T1ah8HvnM+x++pux0MXIMu3p7Ay098lYuO9RxHvcXW1IH5RrUV+cgWZcW2JZSnIiRn1KXfyUvZf/fpLK9nUvnSYW+Q2WPdJVhLUXQ6OWk2D6Z3M0Mv1r839g4V62gryBj3hKtHGkwj isaac@DESKTOP-2SQ9NQM\n"
        }
      ]
    }
  },
  "location": "centralus",
  "maxAgentPools": 10,
  "name": "idjaks01",
  "networkProfile": {
    "dnsServiceIp": "10.0.0.10",
    "dockerBridgeCidr": "172.17.0.1/16",
    "loadBalancerProfile": {
      "allocatedOutboundPorts": null,
      "effectiveOutboundIps": [
        {
          "id": "/subscriptions/d955c0ba-13dc-44cf-a29a-8fed74cbb22d/resourceGroups/MC_idjaksdemo_idjaks01_centralus/providers/Microsoft.Network/publicIPAddresses/fc4f8916-cdc8-4d88-a958-51112069140f",
          "resourceGroup": "MC_idjaksdemo_idjaks01_centralus"
        }
      ],
      "idleTimeoutInMinutes": null,
      "managedOutboundIps": {
        "count": 1
      },
      "outboundIpPrefixes": null,
      "outboundIps": null
    },
    "loadBalancerSku": "Standard",
    "networkPlugin": "kubenet",
    "networkPolicy": null,
    "outboundType": "loadBalancer",
    "podCidr": "10.244.0.0/16",
    "serviceCidr": "10.0.0.0/16"
  },
  "nodeResourceGroup": "MC_idjaksdemo_idjaks01_centralus",
  "privateFqdn": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "idjaksdemo",
  "servicePrincipalProfile": {
    "clientId": "0d919178-0fd5-4c5e-ac4f-bc32f825c762",
    "secret": null
  },
  "tags": null,
  "type": "Microsoft.ContainerService/ManagedClusters",
  "windowsProfile": null
}
Creating an AKS cluster can take some time.

Once created, we can get details and login:

C:\Users\isaac>az aks list -o table
Name      Location   ResourceGroup   KubernetesVersion   ProvisioningState   Fqdn
--------  ---------  --------------  ------------------  ------------------  -----------------------------------------------------------
idjaks01  centralus  idjaksdemo      1.14.8              Succeeded           idjaks01-idjaksdemo-d955c0-9ac53587.hcp.centralus.azmk8s.io

C:\Users\isaac>az aks get-credentials -n idjaks01 -g idjaksdemo --admin
Merged "idjaks01-admin" as current context in C:\Users\isaac\.kube\config

C:\Users\isaac>kubectl get nodes
NAME                                STATUS   ROLES   AGE     VERSION
aks-nodepool1-17454337-vmss000000   Ready    agent   5m13s   v1.14.8
aks-nodepool1-17454337-vmss000001   Ready    agent   4m13s   v1.14.8
aks-nodepool1-17454337-vmss000002   Ready    agent   4m25s   v1.14.8

Install Redis

I will note that here I used the latest Helm 3.  For my Windows box, I felt that if we are going to do this fresh, let’s just use the latest Azure CLI and the latest Helm.

C:\Users\isaac>"C:\Program Files\helm.exe" repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories

C:\Users\isaac>"C:\Program Files\helm.exe" repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

C:\Users\isaac>"C:\Program Files\helm.exe" install stable/redis --set usePassword=false --generate-name
NAME: redis-1583293103
LAST DEPLOYED: Tue Mar 3 21:38:26 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
**Please be patient while the chart is being deployed**
Redis can be accessed via port 6379 on the following DNS names from within your cluster:

redis-1583293103-master.default.svc.cluster.local for read/write operations
redis-1583293103-slave.default.svc.cluster.local for read-only operations



To connect to your Redis server:

1. Run a Redis pod that you can use as a client:

   kubectl run --namespace default redis-1583293103-client --rm --tty -i --restart='Never' \
   --image docker.io/bitnami/redis:5.0.7-debian-10-r32 -- bash

2. Connect using the Redis CLI:
   redis-cli -h redis-1583293103-master
   redis-cli -h redis-1583293103-slave

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/redis-1583293103-master 6379:6379 &
    redis-cli -h 127.0.0.1 -p 6379

Testing

Getting Redis CLI on Windows

C:\Users\isaac> kubectl port-forward --namespace default svc/redis-1583293103-master 6379:6379
Forwarding from 127.0.0.1:6379 -> 6379
Forwarding from [::1]:6379 -> 6379

# in another command prompt....

C:\Users\isaac>npm install redis-cli -g
C:\Program Files\nodejs\rdcli -> C:\Program Files\nodejs\node_modules\redis-cli\bin\rdcli

> core-js@3.6.4 postinstall C:\Program Files\nodejs\node_modules\redis-cli\node_modules\core-js
> node -e "try{require('./postinstall')}catch(e){}"

Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library!

The project needs your help! Please consider supporting of core-js on Open Collective or Patreon:
> https://opencollective.com/core-js
> https://www.patreon.com/zloirock

Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -)

+ redis-cli@1.4.0
added 10 packages from 10 contributors in 12.06s

C:\Users\isaac>redis-cli
'redis-cli' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\isaac>rdcli -h 127.0.0.1 -p 6379
127.0.0.1:6379>

Nginx, Part Deux!

First let’s set up a stable Nginx in our default namespace.  A lot of guides have you put Nginx in its own namespace, but I wanted to minimize variability.

C:\Users\isaac>"C:\Program Files\helm.exe" install nginx-ingress stable/nginx-ingress --set controller.replicaCount=2 --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux

NAME: nginx-ingress
LAST DEPLOYED: Tue Mar 3 21:52:35 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller'

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

Verify the Nginx service is running and has picked up a fresh external IP:

C:\Users\isaac>kubectl get service -l app=nginx-ingress
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-controller        LoadBalancer   10.0.173.211   13.89.108.187   80:31112/TCP,443:30441/TCP   61s
nginx-ingress-default-backend   ClusterIP      10.0.254.99    <none>          80/TCP                       61s

My first shot was to try a basic Ingress YAML - this often works for pods that have Java apps running via Tomcat on 8080.  Maybe it will work with Redis.

C:\Users\isaac>type ingress-basic.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ingress-static
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: redis-1583293103-master
          servicePort: 6379
        path: /(.*)

C:\Users\isaac>kubectl apply -f ingress-basic.yaml
ingress.extensions/redis-ingress-static created

Testing

C:\Users\isaac>rdcli -h 13.89.108.187 -p 6379
13.89.108.187:6379> set key value
(error) Redis connection to 13.89.108.187:6379 failed - connect ETIMEDOUT 13.89.108.187:6379

C:\Users\isaac>rdcli -h 127.0.0.1 -p 6379
127.0.0.1:6379> set key value
OK

So we can see that failed, vomiting the moment we actually try to set a value.  I think some of the other guides out there just telnet’ed or connected to Redis (which works) but never tried to actually set a value.

With regards to the 127.0.0.1 step: I tested with a kubectl port-forward to port 6379 (kubectl port-forward --namespace default svc/redis-1583293103-master 6379:6379) just to ensure, at every step, that the pod was still good and working.

Let’s try patching the Nginx controller:

C:\Users\isaac>type k8s-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        ports:
        - containerPort: 6379
          hostPort: 6379

builder@DESKTOP-2SQ9NQM:~$ kubectl patch deployment nginx-ingress-controller --patch "$(cat k8s-patch.yaml)"
deployment.extensions/nginx-ingress-controller patched

Yeah - I did jump over to WSL.  I couldn’t get type to work for the patch, and rather than try to jerry-rig cygwin/git bash, I just popped a shell in Windows Subsystem for Linux.  I’m sure there is a clever way, or a JSON equivalent... but moving on.
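
Speaking of a JSON equivalent: something like this should do the same append from a plain command prompt, though I’ll caveat that this form is an untested sketch:

kubectl patch deployment nginx-ingress-controller --type=json -p "[{\"op\":\"add\",\"path\":\"/spec/template/spec/containers/0/ports/-\",\"value\":{\"containerPort\":6379,\"hostPort\":6379}}]"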

C:\Users\isaac>rdcli -h 13.89.108.187 -p 6379
13.89.108.187:6379> set key2 value2
(error) Redis connection to 13.89.108.187:6379 failed - connect ETIMEDOUT 13.89.108.187:6379

I thought perhaps Nginx needed some nudging, so I murdered the pods and tried again:

builder@DESKTOP-2SQ9NQM:~$ kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-654d4964d9-bv5wp        1/1     Running   0          2m12s
nginx-ingress-controller-654d4964d9-mkq9x        1/1     Running   0          2m
nginx-ingress-default-backend-55cd446957-sf9pp   1/1     Running   0          25m
redis-1583293103-master-0                        1/1     Running   0          39m
redis-1583293103-slave-0                         1/1     Running   0          39m
redis-1583293103-slave-1                         1/1     Running   0          37m
builder@DESKTOP-2SQ9NQM:~$ kubectl delete pod nginx-ingress-controller-654d4964d9-bv5wp && kubectl delete pod nginx-ingress-controller-654d4964d9-mkq9x && kubectl delete pod nginx-ingress-default-backend-55cd446957-sf9pp
pod "nginx-ingress-controller-654d4964d9-bv5wp" deleted
pod "nginx-ingress-controller-654d4964d9-mkq9x" deleted
pod "nginx-ingress-default-backend-55cd446957-sf9pp" deleted


C:\Users\isaac>rdcli -h 13.89.108.187 -p 6379
13.89.108.187:6379> set key4 value4
(error) Redis connection to 13.89.108.187:6379 failed - connect ETIMEDOUT 13.89.108.187:6379

Still no luck!

Let’s try forcing a data patch on the nginx controller leader.  I’ve seen similar docs about patching a TCP services ConfigMap for minikube.  Maybe our Nginx just needs to know 6379 is something to listen on and forward?

builder@DESKTOP-2SQ9NQM:~$ kubectl patch configmap ingress-controller-leader-nginx --patch '{"data":{"6379":"default/redis-service:6379"}}'
configmap/ingress-controller-leader-nginx patched
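
In hindsight, this patch likely went nowhere: ingress-controller-leader-nginx is the controller’s leader-election ConfigMap, and the controller only reads TCP mappings from the ConfigMap named by its --tcp-services-configmap flag, which this install never set (the Helm chart only wires one up when you pass tcp values, as in the earlier attempt).  Also, default/redis-service doesn’t exist in this cluster.  For reference, a hypothetical entry in a properly wired ConfigMap would look like:

apiVersion: v1
kind: ConfigMap
metadata:
  # the name must match the controller's --tcp-services-configmap argument
  name: tcp-services
  namespace: default
data:
  "6379": "default/redis-1583293103-master:6379"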

C:\Users\isaac>rdcli -h 13.89.108.187 -p 6379
13.89.108.187:6379> set key value
(error) Redis connection to 13.89.108.187:6379 failed - connect ETIMEDOUT 13.89.108.187:6379

Maybe our Ingress is stepping on things?  Let’s try deleting it:

C:\Users\isaac>kubectl delete -f ingress-basic.yaml
ingress.extensions "redis-ingress-static" deleted

not connected> connect 13.89.108.187 6379
Could not connect to Redis at 13.89.108.187:6379: Connection timed out
C:\Users\isaac>rdcli -h 13.89.108.187 -p 6379
13.89.108.187:6379> set key4 value4
(error) Redis connection to 13.89.108.187:6379 failed - connect ETIMEDOUT 13.89.108.187:6379

I tried again, this time removing some unnecessary settings from the Ingress YAML.

C:\Users\isaac>type ingress-basic.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ingress-static
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: redis-1583293103-master
          servicePort: 6379


builder@DESKTOP-2SQ9NQM:~$ kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-654d4964d9-59pjb        1/1     Running   0          22m
nginx-ingress-controller-654d4964d9-l758t        1/1     Running   0          22m
nginx-ingress-default-backend-55cd446957-hsgzc   1/1     Running   0          21m
redis-1583293103-master-0                        1/1     Running   0          63m
redis-1583293103-slave-0                         1/1     Running   0          63m
redis-1583293103-slave-1                         1/1     Running   0          61m
builder@DESKTOP-2SQ9NQM:~$ kubectl delete pod nginx-ingress-controller-654d4964d9-59pjb && kubectl delete pod nginx-ingress-controller-654d4964d9-l758t && kubectl delete pod nginx-ingress-default-backend-55cd446957-hsgzc
pod "nginx-ingress-controller-654d4964d9-59pjb" deleted
pod "nginx-ingress-controller-654d4964d9-l758t" deleted
pod "nginx-ingress-default-backend-55cd446957-hsgzc" deleted
builder@DESKTOP-2SQ9NQM:~$ redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected> connect 13.89.108.187 6379
Could not connect to Redis at 13.89.108.187:6379: Connection timed out
not connected>

That failed entirely.

Okay, enough with fighting Nginx, a Layer 7 routing service we are trying to force to do something else.

Expose a Service Directly

As I researched this topic, I found some documentation on Google about exposing a deployment object (which was HTTPS) with the “kubectl expose” command.  I had yet to try this.  Would it work with a service?

Let’s refresh our memory and look at our services again:

builder@DESKTOP-2SQ9NQM:~$ kubectl get svc
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
kubernetes                      ClusterIP      10.0.0.1       <none>          443/TCP                      95m
nginx-ingress-controller        LoadBalancer   10.0.173.211   13.89.108.187   80:31112/TCP,443:30441/TCP   56m
nginx-ingress-default-backend   ClusterIP      10.0.254.99    <none>          80/TCP                       56m
redis-1583293103-headless       ClusterIP      None           <none>          6379/TCP                     70m
redis-1583293103-master         ClusterIP      10.0.180.4     <none>          6379/TCP                     70m
redis-1583293103-slave          ClusterIP      10.0.85.187    <none>          6379/TCP                     70m

Expose the service for which you want ingress, using type LoadBalancer:

builder@DESKTOP-2SQ9NQM:~$ kubectl expose service redis-1583293103-master --type=LoadBalancer --name=myredissvc
service/myredissvc exposed
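
Under the hood, expose just clones the selector from the existing Service into a new Service of type LoadBalancer.  The declarative equivalent would be roughly the following; the selector labels here are illustrative, so copy the real ones from kubectl get svc redis-1583293103-master -o yaml:

apiVersion: v1
kind: Service
metadata:
  name: myredissvc
spec:
  type: LoadBalancer
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    release: redis-1583293103
    role: master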

We can see that this time Kubernetes sets up a cloud load balancer and fetches a new public IP.

builder@DESKTOP-2SQ9NQM:~$ kubectl get svc
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
kubernetes                      ClusterIP      10.0.0.1       <none>          443/TCP                      96m
myredissvc                      LoadBalancer   10.0.50.176    <pending>       6379:32334/TCP               8s
nginx-ingress-controller        LoadBalancer   10.0.173.211   13.89.108.187   80:31112/TCP,443:30441/TCP   57m
nginx-ingress-default-backend   ClusterIP      10.0.254.99    <none>          80/TCP                       57m
redis-1583293103-headless       ClusterIP      None           <none>          6379/TCP                     71m
redis-1583293103-master         ClusterIP      10.0.180.4     <none>          6379/TCP                     71m
redis-1583293103-slave          ClusterIP      10.0.85.187    <none>          6379/TCP                     71m

builder@DESKTOP-2SQ9NQM:~$ kubectl get svc
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
kubernetes                      ClusterIP      10.0.0.1       <none>          443/TCP                      97m
myredissvc                      LoadBalancer   10.0.50.176    13.86.34.219    6379:32334/TCP               19s
nginx-ingress-controller        LoadBalancer   10.0.173.211   13.89.108.187   80:31112/TCP,443:30441/TCP   57m
nginx-ingress-default-backend   ClusterIP      10.0.254.99    <none>          80/TCP                       57m
redis-1583293103-headless       ClusterIP      None           <none>          6379/TCP                     71m
redis-1583293103-master         ClusterIP      10.0.180.4     <none>          6379/TCP                     71m
redis-1583293103-slave          ClusterIP      10.0.85.187    <none>          6379/TCP                     71m

Let’s try now:

builder@DESKTOP-2SQ9NQM:~$ redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected> connect 13.86.34.219
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected> connect 13.86.34.219 6379
13.86.34.219:6379> set key value
OK
13.86.34.219:6379>

Works!!!
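
One caveat: this has just put a passwordless Redis on a public IP.  At a minimum you would want to restrict which source IPs the cloud load balancer accepts; the CIDR below is a placeholder for your own network:

kubectl patch svc myredissvc -p '{"spec":{"loadBalancerSourceRanges":["203.0.113.0/24"]}}'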

Summary

Nginx is a great load balancer and for Layer 7, it’s the one to beat.  However, trying to make it handle Layer 4 was just not happening.  I don’t doubt that people have solved it. But it’s also like going bear hunting with a really nice fly casting rod.  You might succeed, but more than likely you’ll get mauled to death for using the wrong tool for the job.

In our case, the simple path of just using the Kubernetes-native expose command to ask our cloud provider to front a known service was all that was needed.



Isaac Johnson

Cloud Solutions Architect

Isaac is a CSA and DevOps engineer who focuses on cloud migrations and devops processes. He also is a dad to three wonderful daughters (hence the references to Princess King sprinkled throughout the blog).
