Automating Multi-Cluster Progressive Deployment with Argo CD ApplicationSet Matrix Generator
Once you have more than two Kubernetes clusters, managing deployments starts to become a real headache. Maintaining separate Application YAMLs for dev, staging, and prod inevitably leads to copy-paste hell, and every time a new cluster is added, someone has to manually add YAML files. I put up with this setup for a while myself, until the day I discovered, after the fact, that a resource change had been applied to staging but had never made it to production. That's when I finally went looking for a proper solution.
This post walks through the complete setup for automatically deploying a resource optimization PR — one that adjusts CPU/memory requests and limits — to dev → staging → prod in sequence, using a Matrix Generator and RollingSync strategy, with concrete examples. A single ApplicationSet automatically generates one Application per cluster, and the pipeline is declared so that staging only proceeds once dev is healthy.
If you're new to ApplicationSet, start from the basics. If you've already used it, feel free to skip straight to the Matrix Generator + Progressive Sync combination.
Core Concepts
ApplicationSet — The Controller That Eliminates Repetitive YAML
When you first start using Argo CD, you create Applications one by one. But as clusters and services multiply, you end up with dozens or hundreds of Application YAMLs. ApplicationSet eliminates that repetition. A Generator produces a list of parameters, and the controller injects those parameters into a template to automatically create and manage Applications.
ApplicationSet: A controller that automatically creates and manages multiple Argo CD Application resources from a single YAML definition. It was integrated into the Argo CD core in v2.3, and Progressive Sync has been available since v2.6 (initially as an alpha feature).
Cluster Generator — Cluster List as Parameters
The Cluster Generator scans the cluster secrets registered in Argo CD and extracts each cluster's metadata as parameters. The key feature is label-based filtering. Simply add a label like environment: dev to a cluster secret, and the moment a new cluster is provisioned, Argo CD automatically detects it and creates the corresponding Application.
```yaml
generators:
  - clusters:
      selector:
        matchLabels:
          environment: production  # Only target clusters with this label
```

Variables like `{{name}}`, `{{server}}`, and `{{metadata.labels.environment}}` inject cluster information directly into the template, so no YAML changes are needed as clusters grow.
Matrix Generator — Combining Two Generators
The Matrix Generator produces the Cartesian product of the parameters generated by two sub-generators. Three clusters × four services = twelve Applications, created automatically.
```yaml
generators:
  - matrix:
      generators:
        - clusters:          # ← First generator: list of clusters
            selector:
              matchExpressions:
                - key: environment
                  operator: In
                  values: [dev, staging, prod]
        - git:               # ← Second generator: list of app directories
            repoURL: https://github.com/org/k8s-manifests
            revision: HEAD
            directories:
              - path: services/*
```

One thing to watch out for: if you're deploying the same service to multiple namespaces on the same cluster, Application names (using the `{{name}}-{{path.basename}}` pattern) can collide. In that scenario, include the namespace or another unique identifier in the name.
Note: The official spec limits Matrix's direct sub-generators to two. If you need three or more combinations, you'll need to nest Matrix generators.
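If you do need a third dimension (say, per-region rollouts), one Matrix can be nested inside another. A minimal sketch of the nesting pattern, where the `region` list generator and its values are hypothetical:

```yaml
generators:
  - matrix:
      generators:
        - matrix:
            generators:
              - clusters: {}              # all registered clusters
              - list:
                  elements:               # hypothetical region list
                    - region: us-east-1
                    - region: eu-west-1
        - git:
            repoURL: https://github.com/org/k8s-manifests
            revision: HEAD
            directories:
              - path: services/*
```

The inner Matrix produces cluster × region pairs, and the outer Matrix combines those pairs with the Git directories; check the official docs for the restrictions that apply to nested Matrix generators before relying on this.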
Progressive Sync (RollingSync) — Staged Deployment Control
Honestly, without this feature, ApplicationSet wouldn't be nearly as compelling. RollingSync lets you group the generated Applications into stages and deploy them sequentially.
```yaml
strategy:
  type: RollingSync
  rollingSync:
    steps:
      - matchExpressions:
          - key: environment
            operator: In
            values: [dev]      # Stage 1: all of dev
      - matchExpressions:
          - key: environment
            operator: In
            values: [staging]  # Stage 2: after dev is confirmed healthy
      - matchExpressions:
          - key: environment
            operator: In
            values: [prod]     # Stage 3: after staging is confirmed healthy
```

The next stage is blocked until every Application in the current stage is Healthy and free of OutOfSync status. It's tempting to think you only need to check for Healthy, but in practice, the next stage won't proceed if any Application remains OutOfSync. If an OOM occurs or a pod enters CrashLoopBackOff during the dev deployment, staging and prod are automatically blocked.
Enabling Progressive Sync: It is disabled by default. You must pass the `--enable-progressive-syncs` flag to the ApplicationSet controller (`argocd-applicationset-controller`) when it starts. How you set it depends on your installation method: the controller's `extraArgs` values for Helm installs, or the corresponding ApplicationSet fields on the `ArgoCD` CR for Operator-based installs.
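As a concrete example, with the community `argo-cd` Helm chart the flag can be passed roughly like this (a minimal sketch; the exact value keys vary by chart version, so verify against your chart's values reference):

```yaml
# values.yaml (argo-cd Helm chart; key names are an assumption, check your chart version)
applicationSet:
  extraArgs:
    - --enable-progressive-syncs
```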
Practical Application
Example 1: Complete ApplicationSet YAML
The full flow for deploying a PR that adjusts CPU/memory requests and limits to dev → staging → prod in order.
Setting Cluster Labels
First, add environment labels to the cluster secrets registered in Argo CD. If the cluster is already registered, the argocd.argoproj.io/secret-type=cluster label will already be present — you just need to add the environment label.
```shell
# Add environment labels to existing registered clusters
kubectl label secret dev-cluster \
  environment=dev \
  -n argocd

kubectl label secret stg-cluster \
  environment=staging \
  -n argocd

kubectl label secret prod-cluster \
  environment=prod \
  -n argocd
```

For clusters being newly registered, you must also set the `argocd.argoproj.io/secret-type=cluster` label so that Argo CD recognizes them as cluster secrets.
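For reference, a newly registered cluster secret might look like the sketch below; the cluster name, server URL, and credentials are placeholders, so substitute your own values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dev-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster  # required for Argo CD to treat this as a cluster secret
    environment: dev                         # consumed by the Cluster Generator selector
type: Opaque
stringData:
  name: dev-cluster
  server: https://1.2.3.4:6443
  config: |
    {
      "bearerToken": "<placeholder>",
      "tlsClientConfig": { "insecure": false }
    }
```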
Writing the ApplicationSet YAML
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: resource-optimization-rollout
  namespace: argocd
spec:
  generators:
    - matrix:
        generators:
          - clusters:
              selector:
                matchExpressions:
                  - key: environment
                    operator: In
                    values: [dev, staging, prod]
          - git:
              repoURL: https://github.com/org/k8s-manifests
              # In production, using a specific tag or commit SHA is recommended (e.g., v1.2.3)
              revision: HEAD
              directories:
                - path: services/*
  template:
    metadata:
      # {{name}} is the cluster name, {{path.basename}} is the service directory name (api, worker, etc.)
      name: '{{name}}-{{path.basename}}'
      labels:
        environment: '{{metadata.labels.environment}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/org/k8s-manifests
        targetRevision: HEAD
        path: '{{path}}/overlays/{{metadata.labels.environment}}'
      destination:
        server: '{{server}}'
        # Assumes the service name (api, worker, etc.) is used as the namespace
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          # selfHeal: true automatically reverts manual changes.
          # This can conflict with hotfix scenarios (pod crash → direct patch attempt),
          # so it's recommended to define your team's hotfix process before enabling this in production.
          selfHeal: true
  # When using automated syncPolicy with RollingSync, the stage order defined in
  # RollingSync is preserved even when auto-sync is triggered.
  strategy:
    type: RollingSync
    rollingSync:
      steps:
        - matchExpressions:
            - key: environment
              operator: In
              values: [dev]
        - matchExpressions:
            - key: environment
              operator: In
              values: [staging]
        - matchExpressions:
            - key: environment
              operator: In
              values: [prod]
```

| Field | Role | Example Value |
|---|---|---|
| `{{name}}` | Cluster name | `dev-cluster`, `prod-cluster` |
| `{{path.basename}}` | Git directory name (service name) | `api`, `worker` |
| `{{metadata.labels.environment}}` | Environment value extracted from cluster label | `dev`, `staging`, `prod` |
| `{{server}}` | Cluster API server URL | `https://1.2.3.4:6443` |
Example 2: Kustomize Overlay Directory Structure
If you're new to Kustomize, the pattern is to place shared resources in the base directory and override per-environment values in overlays. For more details, refer to the Kustomize official documentation.
Structuring your repository as shown below lets you manage different resource values per environment cleanly.
```
services/
  api/
    base/
      deployment.yaml        # Shared template
      kustomization.yaml
    overlays/
      dev/
        kustomization.yaml   # cpu: 100m, memory: 128Mi
      staging/
        kustomization.yaml   # cpu: 500m, memory: 512Mi
      prod/
        kustomization.yaml   # cpu: 2000m, memory: 2Gi
  worker/
    base/
      ...
    overlays/
      dev/  staging/  prod/
```

```yaml
# services/api/overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - patch: |-
      # JSON Patch format: replaces the value at the specified path with the given value
      - op: replace
        path: /spec/template/spec/containers/0/resources
        value:
          requests:
            cpu: "2000m"
            memory: "2Gi"
          limits:
            cpu: "4000m"
            memory: "4Gi"
    target:
      kind: Deployment
```

The Matrix Generator produces all combinations of the `services/*` directories (`api`, `worker`, ...) and the clusters (dev, staging, prod), and automatically references the appropriate overlay for each environment via the `{{path}}/overlays/{{metadata.labels.environment}}` path. The ApplicationSet YAML stays unchanged even as services grow.
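For completeness, the shared base that the overlay patches might look like this; it is a minimal sketch, and the image name and baseline resource values are assumptions:

```yaml
# services/api/base/deployment.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: org/api:latest   # assumption: pinned per release in practice
          resources:              # baseline values; replaced wholesale by the overlay patch
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "200m"
              memory: "256Mi"
```

Because the overlay patch replaces the entire `resources` block (`op: replace` on the container's `resources` path), the base values here only matter for environments without an overriding patch.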
Pros and Cons
The two things to be most careful about are `selfHeal: true` and forgetting the Progressive Sync flag. The rest can be weighed against your operational scale; the full breakdown is in the tables below.
Advantages
| Item | Details |
|---|---|
| Declarative management | Define hundreds of Applications in a single YAML, fully aligned with GitOps principles |
| Automatic cluster detection | Label-based: new clusters are automatically included as deployment targets |
| Safe staged deployment | RollingSync prevents failures from propagating to higher environments |
| Dry-run support | Preview the list of Applications that would be generated (e.g., with `argocd appset create --dry-run`) before actual deployment |
| Per-environment configuration | Apply different resource values per environment using Kustomize/Helm overlays |
Disadvantages and Caveats
| Item | Details | Mitigation |
|---|---|---|
| Matrix 2-generator limit | Direct sub-generators are limited to two | Nest Matrix generators if you need more than two |
| Progressive Sync requires separate activation | Disabled by default; flag must be added | Add `--enable-progressive-syncs` to the ApplicationSet controller (via Helm values or the Operator's `ArgoCD` CR) |
| Verify automated + RollingSync behavior | Confirm that RollingSync order is preserved when auto-sync triggers | Recommend thorough staging validation before production rollout |
| Performance at scale | Controller memory usage can spike sharply with hundreds of clusters | Consider migrating to the argocd-agent architecture (see footnote below) |
| Label pollution risk | Missing or mistyped labels can cause deployment omissions | Add label validation to your CI pipeline |
| `selfHeal` caution | Manual patches are automatically reverted when `selfHeal: true` is set | Pre-define hotfix processes (pod crash → direct patch scenario) |
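One way to implement the label-validation mitigation from the table above is a small CI check. A sketch as a CI step, assuming the pipeline has read access to the `argocd` namespace and that `kubectl` and `jq` are available on the runner (the workflow file name and step wiring are hypothetical):

```yaml
# .github/workflows/validate-cluster-labels.yaml (hypothetical)
- name: Fail if any cluster secret is missing an environment label
  run: |
    missing=$(kubectl get secret -n argocd \
      -l argocd.argoproj.io/secret-type=cluster -o json \
      | jq -r '.items[] | select(.metadata.labels.environment == null) | .metadata.name')
    if [ -n "$missing" ]; then
      echo "Cluster secrets missing environment label: $missing"
      exit 1
    fi
```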
argocd-agent: An architecture that deploys a lightweight agent on remote clusters with a reverse connection to a central hub. This model is being introduced by Red Hat and the Argo community to address performance and security bottlenecks of the traditional centralized approach at the scale of hundreds of clusters.
The Most Common Mistakes in Practice
1. **Forgetting the Progressive Sync flag**: If the ApplicationSet is applied correctly but RollingSync isn't working, nine times out of ten the `--enable-progressive-syncs` flag is missing. I've seen new team members say "something's wrong with ApplicationSet" because they didn't know about this setting.

2. **Using the Cluster Generator without cluster labels**: If you've specified a label selector but no Applications are being created, first check whether the labels are actually attached to the cluster secrets.

   ```shell
   kubectl get secret -n argocd \
     -l argocd.argoproj.io/secret-type=cluster --show-labels
   ```

3. **A typo in the Kustomize path causing only some environments to fail sync**: A typo in the `{{path}}/overlays/{{metadata.labels.environment}}` path will cause Sync failures for just that environment's Application. Using a dry-run as described below lets you verify the Application list and paths ahead of time, catching these mistakes early.
Closing Thoughts
The combination of ApplicationSet's Matrix Generator + Cluster Generator + RollingSync is the most practical approach for declaratively solving the problems of repetitive YAML management and unsafe manual deployment sequencing in multi-cluster environments. The YAML structure can feel unfamiliar at first, but once it's set up, a single ApplicationSet covers everything no matter how many services or clusters you add — it's quite convenient.
Three steps you can start with right now:
1. **Check and add cluster labels**: Use `kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster --show-labels` to inspect the currently registered cluster secrets. If the `environment=dev/staging/prod` labels are missing, add them with the `kubectl label` command shown above.

2. **Preview the Application list with a dry-run before applying**: Before actually applying the ApplicationSet, validate the manifest with `kubectl apply --dry-run=client -f applicationset.yaml`, and preview the Applications that would be generated with `argocd appset create applicationset.yaml --dry-run`. This is the stage where you can catch path typos or missing labels.

3. **Add the Progressive Sync flag and apply RollingSync**: Add `--enable-progressive-syncs` to the ApplicationSet controller and apply `strategy.type: RollingSync` to the ApplicationSet. You'll immediately see the staged deployment flow, where staging only proceeds after dev is Healthy, and you can monitor the step-by-step progress of each Application in real time from the ApplicationSet > Applications tab in the Argo CD UI.
The next post will cover building a PR Preview environment using a combination of Pull Request Generator and Matrix Generator — automatically creating a preview environment when a PR is opened and deleting it automatically on merge.
References
- Matrix Generator Official Docs | Argo CD
- Cluster Generator Official Docs | Argo CD
- Progressive Syncs Official Docs | Argo CD
- ArgoCD ApplicationSet: Multi-Cluster Deployment Made Easy | Codefresh
- Set It and Forget It: Auto-Rolling Dev, Staging, and Prod with Argo CD | Medium
- Multi-cluster, multi-apps, multi-value deployments using Argo CD Application Sets | GitOpsCon NA 2025
- Enhance Kubernetes deployment efficiency with Argo CD and ApplicationSet | Red Hat Developer
- ApplicationSet with Matrix Generator Ep.13 | blog.stderr.at
- Set Up ArgoCD ApplicationSet Matrix Generator for Cross-Product Deployments | OneUptime
- Multi-cluster GitOps with the Argo CD Agent | Red Hat Blog
- Progressive Sync in OpenShift GitOps | Red Hat Documentation
- Best practices for promotion between clusters | GitHub Discussion