Deploying Backstage on Kubernetes with Argo CD: A Real-World Helm Chart
Backstage is often presented as a “plug and play” developer portal. In reality, deploying it cleanly in a production-grade Kubernetes cluster requires making a lot of architectural decisions upfront.
After recently installing Backstage on my own Kubernetes cluster, I decided to extract and open-source the work as a Helm chart designed for GitOps deployments with Argo CD. This post explains the assumptions behind the chart, the integrations I chose, and a few lessons learned along the way.
Why another Backstage Helm chart?
The official Backstage Helm chart is a good starting point, but real clusters are rarely generic. In my case, I already had:
- A GitOps workflow based on Argo CD
- Centralized secrets managed outside the cluster
- A managed PostgreSQL operator
- A specific ingress strategy (Gateway API)
- Strong opinions about image building
Rather than bending the cluster to fit Backstage, I adapted Backstage to fit the cluster.
The result is a Helm chart that:
- Works well in a GitOps-first environment
- Integrates cleanly with my existing platform components
- Avoids unnecessary coupling or hidden assumptions
Cluster assumptions and design choices
This chart is opinionated by design. It assumes the following components are already in place.
🔐 Secrets management: Kubernetes SecretStore + AWS Secrets Manager
All sensitive configuration (database credentials, GitLab tokens, OAuth secrets) is stored in AWS Secrets Manager and synced into Kubernetes using External Secrets / SecretStore.
Why?
- No secrets in Git
- No duplication between environments
- Easy rotation without redeploying everything
The Helm chart expects secrets to already exist in Kubernetes, rather than creating them itself.
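As a rough sketch of what that looks like with the External Secrets Operator, an ExternalSecret resource can sync the required values into the cluster before the chart is installed. The store name, remote secret paths, and key names below are assumptions; adapt them to your own setup.

```yaml
# Syncs Backstage credentials from AWS Secrets Manager into a Kubernetes
# Secret. All names and paths here are illustrative, not the chart's actual
# expectations.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: backstage-secrets
  namespace: backstage
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager   # your SecretStore / ClusterSecretStore
    kind: SecretStore
  target:
    name: backstage-secrets     # the Secret the chart will reference
  data:
    - secretKey: GITLAB_TOKEN
      remoteRef:
        key: backstage/gitlab
        property: token
    - secretKey: AUTH_GITLAB_CLIENT_SECRET
      remoteRef:
        key: backstage/oauth
        property: clientSecret
```

Because rotation happens in AWS Secrets Manager and the operator re-syncs on its refresh interval, no redeploy is needed when credentials change.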
🗄 Database: PostgreSQL via Crunchy PGO
Backstage relies heavily on its database (catalog, auth, search, etc.). Instead of managing PostgreSQL manually, I use Crunchy PGO.
Benefits:
- Automated backups
- Proper HA patterns
- Clean separation between application and data layer
The chart only consumes a PostgreSQL service and credentials—it does not manage the database lifecycle.
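Concretely, Backstage's app-config can point at whatever connection details PGO exposes (PGO typically creates a Secret named `<cluster>-pguser-<user>`). A minimal sketch, assuming the chart maps that Secret into environment variables with these illustrative names:

```yaml
# app-config snippet: the database is consumed, never managed, by the chart.
# Env var names are assumptions for illustration.
backend:
  database:
    client: pg
    connection:
      host: ${POSTGRES_HOST}
      port: ${POSTGRES_PORT}
      user: ${POSTGRES_USER}
      password: ${POSTGRES_PASSWORD}
```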
💾 Storage: Longhorn
Persistent volumes (e.g. for plugins or future extensions) use Longhorn as the default storage class.
This keeps storage:
- Distributed
- Easy to snapshot
- Consistent across clusters
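For reference, a volume claim in this setup is just a standard PVC pinned to the Longhorn storage class; the name and size below are placeholders:

```yaml
# Illustrative PVC for plugin or extension data, backed by Longhorn.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backstage-data
  namespace: backstage
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
```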
🌐 Ingress: Gateway API (not Ingress)
Instead of the classic Ingress resource, the chart uses the Gateway API.
Reasons:
- More expressive routing
- Clear separation between platform and application concerns
- Better long-term alignment with Kubernetes networking evolution
If you’re still on Ingress, this might be the main adaptation you’ll need.
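The Gateway API split shows up nicely here: the platform team owns the Gateway, and the chart only ships an HTTPRoute that attaches to it. A sketch, with the Gateway name, namespace, and hostname as placeholders:

```yaml
# HTTPRoute attaching Backstage to a pre-existing, platform-managed Gateway.
# parentRef and hostname are assumptions — use your own.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backstage
  namespace: backstage
spec:
  parentRefs:
    - name: platform-gateway
      namespace: gateway-system
  hostnames:
    - backstage.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: backstage
          port: 7007
```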
🔑 Authentication: GitLab
I configured Backstage to use GitLab as the authentication provider.
This setup works well for:
- Centralized identity
- Organization and group mapping
- Developer onboarding
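For orientation, the GitLab auth provider in app-config looks roughly like this, with the client credentials coming from the synced Kubernetes Secret. The env var names and the self-hosted hostname are illustrative:

```yaml
# GitLab OAuth provider sketch. For gitlab.com, omit the audience field;
# for self-hosted instances, point it at your GitLab base URL.
auth:
  environment: production
  providers:
    gitlab:
      production:
        clientId: ${AUTH_GITLAB_CLIENT_ID}
        clientSecret: ${AUTH_GITLAB_CLIENT_SECRET}
        audience: https://gitlab.example.com
```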
However, I ran into an important limitation 👇
A GitLab discovery caveat
While configuring user and group ingestion, I noticed an issue:
- ❌ Public GitLab (gitlab.com): user ingestion does not work as expected
- ✅ Self-hosted GitLab: works perfectly with the same configuration
This appears to be a Backstage limitation or bug rather than a configuration issue. Authentication itself works, but catalog ingestion of users from public GitLab fails.
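For context, this is roughly the org-discovery configuration in question; pointing `host` at a self-hosted instance ingested users, while the same shape against gitlab.com did not. The provider key, group, and schedule values are illustrative:

```yaml
# GitLab org discovery sketch (catalog ingestion of users and groups).
catalog:
  providers:
    gitlab:
      myInstance:                 # arbitrary provider key
        host: gitlab.example.com  # self-hosted works; gitlab.com did not
        orgEnabled: true
        group: my-org             # root group to scan
        schedule:
          frequency: { minutes: 30 }
          timeout: { minutes: 3 }
```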
If you rely on public GitLab and automatic user discovery, this is something to be aware of. If you’ve encountered the same behavior—or found a workaround—I’d love to hear about it.
🐳 Bring Your Own Image
The Helm chart does not build a Backstage image.
Instead, you:
- Build your own image (CI, Kaniko, GitHub Actions, etc.)
- Reference it in values.yaml
- Deploy via Argo CD
This keeps responsibilities clear and avoids mixing build and runtime concerns.
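In values.yaml, that boils down to a plain image reference. The key names below are a generic sketch and may not match this chart's exact schema:

```yaml
# Bring-your-own-image: the chart never builds, only references.
# Registry, repository, and tag are placeholders.
image:
  repository: registry.example.com/platform/backstage
  tag: "1.0.0"            # prefer an immutable tag or digest in production
  pullPolicy: IfNotPresent
imagePullSecrets:
  - name: registry-credentials
```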
GitOps with Argo CD
The chart is designed to be deployed via Argo CD rather than with a manual helm install.
That means:
- Declarative values
- Environment-specific overrides
- Easy promotion between dev / staging / prod
- Drift detection out of the box
Backstage fits very naturally into a GitOps workflow once the initial complexity is handled properly.
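As a sketch, the Argo CD side is a single Application pointing at the chart, with per-environment values files for promotion. The project name, chart path, and values file names are assumptions:

```yaml
# Argo CD Application sketch: declarative values, environment overrides,
# and drift detection via automated sync.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: backstage
  namespace: argocd
spec:
  project: platform
  source:
    repoURL: https://github.com/taxrakoto/backstage-chart
    targetRevision: main
    path: .
    helm:
      valueFiles:
        - values.yaml
        - values-prod.yaml   # swap per environment for promotion
  destination:
    server: https://kubernetes.default.svc
    namespace: backstage
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```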
Open source and next steps
I’ve published the Helm chart on GitHub so others can reuse, adapt, or improve it.
👉 Repository: https://github.com/taxrakoto/backstage-chart
Planned or possible next steps:
- Better handling of GitLab public user ingestion
- Optional support for classic Ingress
- More examples for multi-environment setups
- Hardening defaults for production use
Final thoughts
Backstage is powerful, but it shines most when treated as a platform component, not just another app.
By integrating it properly with:
- your secrets strategy
- your database operator
- your networking model
- your GitOps workflow
you end up with something that’s actually sustainable in production.
If you’re running Backstage on Kubernetes—or planning to—I hope this project saves you some time.
Feedback, issues, and contributions are very welcome.