Env Variables in K8s — Part 1: Deploying ConfigMaps with Apps to Kubernetes.
One of the biggest pain points between developers and DevOps (or, in the old days, operations) is environment variables.
Prior to the Kubernetes revolution, environment variables were typically stored in .env files on the servers the app was deployed to. For example, a web service that ran on six servers would have a .env file on each individual server, which the operations group would typically manage. Those .env files were not committed to the repos because they often contained secrets such as database credentials, and operations groups did not want to have to go through a software engineer just to change the name of a database server or another infrastructure-related env variable.
There was a time not very long ago when software developers didn’t touch infrastructure and operations didn’t touch source code.
As long as communication between software engineers and operations was perfect, this arrangement worked fine. But communication is rarely perfect, so deploys sometimes failed and bugs were often introduced because the .env files that operations managed were not what the software developers expected.
Configuration management platforms like Ansible mitigated this issue to a degree, but in the end, developers and operations still had to agree on exactly which env variables an app expected.
Today, local development is simplified with Dockerized apps, and env variables for local deployments are typically stored in a docker-compose file. The problem is that DevOps groups are often still creating ConfigMaps by hand for K8s deployments, so bugs can still be introduced when an app is deployed because of miscommunication about env variables.
The best way to mitigate miscommunication between software developers and DevOps about what an app expects is to keep everything about the app in its repo, alongside the app's source code. We accomplish this by storing secrets such as database credentials and app keys as GitHub secrets, and other env variables as ConfigMap files in the repo.
Typically, we do this by creating a k8s folder at the root of the repo and storing a ConfigMap YAML file for each namespace the app can be deployed to, usually one for staging and one for production.
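Under that convention, the resulting repo layout might look like this (the filenames are assumptions; any consistent naming works):

```
myapp/
  k8s/
    staging-configmap.yaml
    production-configmap.yaml
  src/
  ...
```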
For example, if an app expected the env variables BASE_URL, ALLOWED_ORIGIN, and RAILS_ENV, I would create a file in the k8s folder that looked something like the following:
ALLOWED_ORIGIN: 'https://localhost, otherhostnamesasneeded'
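The data entry above is only a fragment; a complete ConfigMap file for the staging namespace might look like the following sketch (the ConfigMap name myapp-config, the filename, and the BASE_URL and RAILS_ENV values are assumptions, not from the original app):

```yaml
# k8s/staging-configmap.yaml (filename is an assumption)
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config       # must match the name referenced by the app's deployment
  namespace: staging
data:
  BASE_URL: 'https://staging.example.com'   # assumed value
  ALLOWED_ORIGIN: 'https://localhost, otherhostnamesasneeded'
  RAILS_ENV: 'staging'                      # assumed value
```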
The name of the ConfigMap should match what is specified in the deployment for the app. (Usually this will be in the values.yaml for the app's Helm files.)
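For reference, this is how a deployment spec typically consumes a ConfigMap by name through envFrom, which injects every key in the ConfigMap's data section as an env variable in the container (the names myapp and myapp-config are assumptions; a Helm chart would usually template the name from values.yaml):

```yaml
# Fragment of the app's Deployment pod spec
spec:
  containers:
    - name: myapp
      envFrom:
        - configMapRef:
            name: myapp-config   # must match metadata.name in the ConfigMap file
```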
Now you just need to modify the GitHub deployment workflows to create or replace the ConfigMap in K8s when the app is deployed. I do this with two steps in the deployment workflow. The first switches the kubectl context to the namespace you want to create the app's ConfigMap in, in this case staging:
- name: Switch to Staging Namespace
  run: kubectl config set-context --current --namespace=staging
kubectl comes pre-installed on GitHub-hosted runners.
The second step applies the ConfigMap stored in the k8s directory:
- name: Apply ConfigMap
The above step uses this GitHub action: https://github.com/swdotcom/update-and-apply-kubernetes-configs
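If you would rather not depend on a third-party action, an equivalent step can apply the file directly with kubectl; a minimal sketch, assuming the ConfigMap lives at k8s/staging-configmap.yaml (the path is an assumption):

```yaml
- name: Apply ConfigMap
  run: kubectl apply -f k8s/staging-configmap.yaml
```

The trade-off is that the linked action can also substitute env variables and secrets into the file before applying it, which plain kubectl apply does not do.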
Once the application deploys, you can verify that the ConfigMap was created by running:
kubectl describe configmap configmapname
You can also verify that the pods see the correct env variables by logging into one of the app's pods and running:
printenv | grep variable
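As a sketch of what that check looks like end to end: from outside the cluster you would run the same pipeline through kubectl exec (the pod name is a placeholder), and the grep itself behaves the same anywhere a variable is set. The variable name and value below are assumptions for illustration:

```shell
# From outside the cluster (podname is a placeholder for a real pod):
#   kubectl exec -it podname -- printenv | grep BASE_URL

# The pipeline itself, demonstrated locally with an assumed value:
export BASE_URL="https://staging.example.com"
printenv | grep BASE_URL   # prints BASE_URL=https://staging.example.com
```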
It is a good idea to do that upon the initial deployment of an app to ensure that a developer has not hardcoded env variables in their app by mistake.