Identity infrastructure, simplified for you.
Learn more about Zitadel by checking out the source repository on GitHub.
By default, this chart installs a highly available Zitadel deployment.
The chart deploys a Zitadel init job, a Zitadel setup job and a Zitadel deployment. By default, the execution order is orchestrated using Helm hooks on installations and upgrades.
The easiest way to deploy a Helm release for Zitadel is by following the Insecure Postgres Example. For more sophisticated, production-ready configurations, follow one of these examples:
All the configurations from the examples above are guaranteed to work, because they are directly used in automated acceptance tests.
The v9 chart's default Zitadel and login versions reference the second Zitadel v4 release candidate. Therefore, chart version 9 is also marked as a release candidate.
Use `login.enabled: false` to omit deploying the new login.
By default, a new deployment for the login v2 is configured and created.
For new installations, the setup job automatically creates a user of type machine with the role `IAM_LOGIN_CLIENT`. It writes the user's personal access token into a Kubernetes secret, which is then mounted into the login pods.
For existing installations, the setup job doesn't create this login client user. Therefore, create a machine user with the role `IAM_LOGIN_CLIENT` and download a personal access token for it, then create the Kubernetes secret manually before upgrading to v9:
```bash
kubectl --namespace <my-namespace> create secret generic login-client --from-file=pat=<my-local-path-to-the-downloaded-pat-file>
```
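Optionally, as a quick sanity check (not part of the chart docs; the secret name and `pat` key match the command above), confirm the secret content decodes to your token:

```bash
kubectl --namespace <my-namespace> get secret login-client \
  --output jsonpath='{.data.pat}' | base64 --decode
```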
To make the login externally accessible, you need to route traffic with the path prefix `/ui/v2/login` to the login service. If you use an ingress controller, you can enable the login ingress with `login.ingress.enabled: true`.
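For example, a minimal `values.yaml` snippet using the keys named above:

```yaml
login:
  enabled: true     # deploy the new login v2 (the default)
  ingress:
    enabled: true   # route /ui/v2/login through your ingress controller
```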
> [!CAUTION]
> **Don't Lock Yourself Out of Your Instance**
>
> Before you change your Zitadel configuration, we highly recommend creating a service user with a personal access token (PAT) and the `IAM_OWNER` role. In case something breaks, you can use this PAT to revert your changes or fix the configuration so you can use a login UI again.
To actually use the new login, enable the `loginV2` feature on the instance. Leave the base URI empty to use the default, or explicitly configure it to `/ui/v2/login`. If you enable this feature, the login will be used for every application configured in your Zitadel instance.
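As a sketch only: the feature can be set through Zitadel's instance feature API. The endpoint path and payload shape below are assumptions based on Zitadel's v2 feature service, so verify them against the Zitadel API reference before use:

```bash
# Sketch: enable the loginV2 instance feature via the API.
# Endpoint and payload are assumptions; check Zitadel's API reference.
curl --request PUT "https://<my-zitadel-domain>/v2/features/instance" \
  --header "Authorization: Bearer <pat-of-a-user-with-IAM_OWNER>" \
  --header "Content-Type: application/json" \
  --data '{"loginV2": {"required": true, "baseUri": "/ui/v2/login"}}'
```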
`localhost` is removed from the Zitadel ingresses' `host` field. Instead, the `host` fields for the Zitadel and login ingresses default to `zitadel.configmapConfig.ExternalDomain`.

> [!WARNING]
> The chart version 8 doesn't get updates to the default Zitadel version anymore, as this might break environments that use CockroachDB. Please set the version explicitly using the `appVersion` variable if you need a newer Zitadel version. The upcoming version 9 will include the latest Zitadel version by default (Zitadel v3).
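A minimal sketch of the new host defaults in `values.yaml`, assuming you serve Zitadel under your external domain (the `zitadel.configmapConfig.ExternalDomain` path is the one named above; the top-level `ingress` block follows the chart's usual layout):

```yaml
zitadel:
  configmapConfig:
    ExternalDomain: zitadel.example.com  # becomes the default host of both ingresses
ingress:
  enabled: true  # Zitadel ingress; the login ingress is enabled via login.ingress.enabled
```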
The default Zitadel version is now >= v2.55. This requires CockroachDB to be at >= v23.2. If you are using an older version of CockroachDB, please upgrade it before upgrading Zitadel.
Note that in order to upgrade CockroachDB, you should not skip minor versions. For example:
```bash
# install CockroachDB v23.1.14
helm upgrade db cockroachdb/cockroachdb --version 11.2.4 --reuse-values
# install CockroachDB v23.2.5
helm upgrade db cockroachdb/cockroachdb --version 12.0.5 --reuse-values
# install CockroachDB v24.1.1
helm upgrade db cockroachdb/cockroachdb --version 13.0.1 --reuse-values
# install Zitadel v2.55.0
helm upgrade my-zitadel zitadel/zitadel --version 8.0.0 --reuse-values
```
Please refer to the docs by Cockroach Labs. The Zitadel tests run against the official CockroachDB chart.
(Credits to @panapol-p and @kleberbaum :pray:)
Now, you have the flexibility to define resource requests and limits separately for the machineKeyWriter, distinct from the setupJob. If you don’t specify resource requests and limits for the machineKeyWriter, it will automatically inherit the values used by the setupJob.
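A sketch of what this could look like in `values.yaml`, assuming the `resources` block sits next to the renamed `setupJob.machinekeyWriter` keys listed below (verify against the chart's values file):

```yaml
setupJob:
  resources:            # used by the setup job itself
    requests:
      cpu: 100m
      memory: 128Mi
  machinekeyWriter:
    resources:          # if omitted, inherits the setupJob's resources
      requests:
        cpu: 50m
        memory: 64Mi
```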
To maintain consistency in the structure of the values.yaml file, certain properties have been renamed. If you are using any of the following properties, kindly review the updated names and adjust the values accordingly:
| Old Value | New Value |
|---|---|
| `setupJob.machinekeyWriterImage.repository` | `setupJob.machinekeyWriter.image.repository` |
| `setupJob.machinekeyWriterImage.tag` | `setupJob.machinekeyWriter.image.tag` |
CockroachDB is no longer part of the default configuration. If you use CockroachDB, please check the host and SSL mode in the Database section of your Zitadel configuration.
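For orientation, a sketch of the relevant section, assuming Zitadel's standard `Database.cockroach` configuration keys (check Zitadel's configuration reference and your cluster for the actual values):

```yaml
zitadel:
  configmapConfig:
    Database:
      cockroach:
        Host: crdb-public      # your CockroachDB service host
        Port: 26257
        User:
          SSL:
            Mode: verify-full  # match your cluster's SSL setup, e.g. disable or require
```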
The properties for database certificates are renamed and the defaults are removed. If you use one of the following properties, please check the new names and set the values accordingly:
| Old Value | New Value |
|---|---|
| `zitadel.dbSslRootCrt` | `zitadel.dbSslCaCrt` |
| `zitadel.dbSslRootCrtSecret` | `zitadel.dbSslCaCrtSecret` |
| `zitadel.dbSslClientCrtSecret` | `zitadel.dbSslAdminCrtSecret` |
| - | `zitadel.dbSslUserCrtSecret` |
The Zitadel chart uses Helm hooks, which are not yet garbage collected by `helm uninstall`. Therefore, to also remove the hook resources installed by the Zitadel Helm chart, delete them manually:
```bash
helm uninstall my-zitadel
# Delete the hook-managed resources that helm uninstall leaves behind
for k8sresourcetype in job configmap secret rolebinding role serviceaccount; do
  kubectl delete $k8sresourcetype --selector app.kubernetes.io/name=zitadel,app.kubernetes.io/managed-by=Helm
done
```
For troubleshooting, you can deploy a debug pod by setting the `zitadel.debug.enabled` property to `true`.
You can then use this pod to inspect the Zitadel configuration and run `zitadel` commands using the `zitadel` binary.
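For example (the `zitadel.debug.enabled` path is the one named in the text; everything else stays at the chart's defaults):

```yaml
zitadel:
  debug:
    enabled: true
```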
For more information, print the debug pod's logs using something like the following command:

```bash
kubectl logs rs/my-zitadel-debug
```
If you see this error message in the logs of the setup job, you need to reset the last migration step once you have resolved the issue. To do so, start a debug pod and run something like the following command:

```bash
kubectl exec -it my-zitadel-debug -- zitadel setup cleanup --config /config/zitadel-config-yaml
```
Lint the chart:
```bash
docker run -it --network host --workdir=/data --rm --volume $(pwd):/data quay.io/helmpack/chart-testing:v3.5.0 ct lint --charts charts/zitadel --target-branch main
```
Test the chart:
```bash
# Create KinD cluster
kind create cluster --config ./charts/zitadel/acceptance_test/kindConfig.yaml

# Test the chart
go test ./...
```
Watch the Kubernetes pods if you want to see progress:

```bash
kubectl get pods --all-namespaces --watch

# Or if you have the watch binary installed
watch -n .1 "kubectl get pods --all-namespaces"
```