Why is Tiller Missing in Helm 3?
How Helm manages without it

Helm has recently announced its much-awaited version 3, and the server component added in Helm 2, Tiller, is gone. So now we have a Tillerless Helm. Helm 2 was heavily dependent on Tiller for managing the life cycle of a chart in Kubernetes.
Why Was Tiller Removed?
Tiller is the in-cluster component Helm 2 uses to deploy almost any Kubernetes resource. To do this, Tiller is typically granted very broad permissions to make changes in Kubernetes. As a result, anyone who can talk to Tiller can deploy or modify any resource on the Kubernetes cluster, just like a cluster admin. This can cause security issues if Helm has not been deployed with the proper security measures, and Tiller is also an extra dependency for Helm to maintain. In addition, Tiller does not enable authentication by default, so if any pod is compromised and can reach Tiller, the whole cluster in which Tiller is running is effectively compromised. For more information about these security issues and how to address them in Helm 2, read this blog by Bitnami. Helm 2 itself warns about this when Tiller is installed:
$ helm init
$HELM_HOME has been configured at /home/andres/.helm.
Tiller (the Helm server-side component) has been installed in your Kubernetes cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation, see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
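For context, a common way Tiller ended up with these broad rights was by binding its service account to the cluster-admin role before running helm init. A minimal sketch of that (intentionally permissive) setup, assuming Tiller runs in the usual kube-system namespace:
# Create a service account for Tiller and bind it to cluster-admin
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller
# Install Tiller using that service account
$ helm init --service-account tiller
With this setup, anything that can reach the Tiller pod effectively inherits cluster-admin, which is exactly the risk described above.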
Why Was Tiller Needed?
Tiller acted as an in-cluster operator that maintained the state of every Helm release. It also stored the release information for all the releases it managed, using ConfigMaps in the same namespace in which Tiller was deployed. Helm needed this release information whenever a release was upgraded or its state changed.
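You can see these records on a Helm 2 cluster; a sketch, assuming Tiller runs in kube-system and applies its usual OWNER=TILLER label to the ConfigMaps it creates:
# Release records stored by Tiller, one ConfigMap per release revision
$ kubectl get configmaps --namespace kube-system -l OWNER=TILLER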
So whenever helm upgrade was run, Tiller compared the new manifest with the previous manifest of the release and applied the difference. Helm was therefore dependent on Tiller to provide the previous state of the release.
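This is the workflow the stored revisions enable; a sketch using Helm 2 commands, where the release name myapp and chart ./mychart are hypothetical:
# Install a release; Tiller records revision 1
$ helm install --name myapp ./mychart
# Upgrade it; Tiller diffs the new manifest against the stored one and applies the changes
$ helm upgrade myapp ./mychart
# The stored revisions also make history and rollback possible
$ helm history myapp
$ helm rollback myapp 1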
How Does Tillerless Helm Work?
Tiller's main job was to store release information, and Helm 3 now does this with Kubernetes Secrets, saved in the same namespace as the release. Whenever Helm needs the release information, it reads it from that namespace. To make a change, Helm now fetches the current state from the Kubernetes API server, computes the changes on the client side, applies them, and stores a record of the installation in Kubernetes. The benefit of Tillerless Helm is that, because Helm now changes the cluster from the client side, it can only make the changes that the client's own credentials permit.
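A quick way to see this in Helm 3 is to install a release and then list the Secrets Helm creates next to it; the release name, chart, and namespace below are hypothetical, and the owner=helm label is the one Helm 3 applies to its release records:
# Helm 3: install a release into its own namespace
$ kubectl create namespace demo
$ helm install myapp ./mychart --namespace demo
# The release record lives as a Secret in the release's namespace, not in kube-system
$ kubectl get secrets --namespace demo -l owner=helm
Because the helm client uses the same kubeconfig credentials as kubectl, Kubernetes RBAC decides what each user can install or upgrade, with no privileged in-cluster component in between.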
Conclusion
Tiller was a good addition in Helm 2, but running it in production required securing it properly, which added an extra learning and operations burden for DevOps and SRE teams. With Helm 3 that burden is reduced: security is left to Kubernetes and its RBAC model, and Helm can focus on package management.
Happy Helming!