When good helm templates go bad

When testing helm templates that use Capabilities logic, you might be in for some surprises

Imagine creating your perfect helm chart: all the logic is ready to go, and you're giving it one final look with helm template. One last thought goes through your head:

Ya know, I'd like to verify that logic I have around my ingress resource...

Your code looks something like the following. To support a custom ingress controller (in this case, Emissary), you want to create a more specific Mapping resource if the user has Emissary on their cluster, and a plain old Ingress resource if they don't. Easy. So you create the following two templates and put the logic on line one of each file.
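
To make the scenario concrete, here is a minimal sketch of what those two templates could look like. The mychart.fullname and mychart.labels helpers and the Mapping spec values are illustrative placeholders, not the chart's actual contents - the important part is the .Capabilities.APIVersions.Has check guarding each file.

# templates/mapping.yaml
{{- if and .Values.ingress.enabled (.Capabilities.APIVersions.Has "getambassador.io/v3alpha1") }}
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  hostname: "*"
  prefix: /
  service: {{ include "mychart.fullname" . }}:3000
{{- end }}

# templates/ingress.yaml
{{- if and .Values.ingress.enabled (not (.Capabilities.APIVersions.Has "getambassador.io/v3alpha1")) }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: {{ include "mychart.fullname" . }}
                port:
                  number: 3000
{{- end }}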

So, you run helm template --debug --generate-name --set ingress.enabled=true -s templates/ingress.yaml .

$ helm template --debug --generate-name --set ingress.enabled=true -s templates/ingress.yaml .

install.go:194: [debug] Original chart version: ""
install.go:211: [debug] CHART PATH: /Users/ian.martin/Documents/code/mychart

---
# Source: mychart/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-mychart
  labels:
    helm.sh/chart: mychart-0.1.2
    app.kubernetes.io/name: mychart
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "2.1.0"
    app.kubernetes.io/managed-by: Helm
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-mychart
                port:
                  number: 3000

Ok, that seems good - your Ingress resource got created. Now, you switch your kubeconfig context to a cluster that has the getambassador.io/v3alpha1 API installed and rerun to make sure a Mapping gets created (you may notice the flaw already!)

helm template --debug --generate-name --set ingress.enabled=true -s templates/mapping.yaml .
install.go:194: [debug] Original chart version: ""
install.go:211: [debug] CHART PATH: /Users/ian.martin/Documents/code/mychart

Error: could not find template templates/mapping.yaml in chart
helm.go:84: [debug] could not find template templates/mapping.yaml in chart

...and. WHAT!?

No, you're not crazy. That logic isn't working. It happened to me, and about an hour of troubleshooting later - after furiously rerunning helm template because I couldn't believe I got such simple logic wrong - I realized my mistake. I had naively expected helm template to run against my current context. It doesn't. Further, I expected it to account for the CRDs available in that context's cluster. It doesn't.

If you want to be able to test this situation, there's an important flag on helm template: --api-versions. So, the proper way to test this purely locally would be the following command:

helm template --debug --generate-name --set ingress.enabled=true -s templates/mapping.yaml . --api-versions "getambassador.io/v3alpha1"
install.go:194: [debug] Original chart version: ""
install.go:211: [debug] CHART PATH: /Users/ian.martin/Documents/code/mychart

---
# Source: mychart/templates/mapping.yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: release-name-mychart
  labels:
...

Success!

This tells the helm command that the getambassador.io/v3alpha1 API should be considered available for this execution. Note that helm template doesn't query a cluster at all: it works from a built-in default set of API versions plus whatever you pass via --api-versions. To see which APIs a real cluster actually serves, run kubectl api-versions against it.
Additionally, if you want more than one API version added, helm's --api-versions option takes comma-separated names - for example

helm template ... --api-versions "coolapp.com/v2,k8s.something/v1beta2"

Now, the other way I could have solved this is to test the templating logic against the real cluster. For this, I would use helm install --dry-run. This incantation prints what helm would produce against the current kube context but, as --dry-run implies, doesn't actually apply anything to the cluster; the output only goes to stdout on the terminal.
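
Assuming the chart lives in the current directory, that would look something like this (the release name here is just a placeholder):

helm install mychart-test . --dry-run --debug --set ingress.enabled=true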

The helm template approach, in particular, can be integrated into an automated test suite, because it doesn't rely on a live cluster.
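
As a rough illustration, such a test could live in a small shell script. The chart path, release name, and the strings being checked are assumptions based on the example chart above:

#!/bin/sh
set -e

# With the Emissary API advertised, the chart should render a Mapping.
helm template test . --set ingress.enabled=true \
  --api-versions "getambassador.io/v3alpha1" \
  | grep -q "kind: Mapping" || { echo "expected a Mapping"; exit 1; }

# Without it, the chart should fall back to a plain Ingress.
helm template test . --set ingress.enabled=true \
  | grep -q "kind: Ingress" || { echo "expected an Ingress"; exit 1; }

echo "Capabilities logic looks good"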