Automating PR Preview Environments with ArgoCD ApplicationSet
During code reviews, there are moments when you think "I'd need to run this locally to verify…" A single screenshot attached to a PR has its limits, and asking reviewers to check out the branch and run it themselves is too cumbersome.
A temporary environment that automatically deploys to Kubernetes when a PR is opened, generates a unique URL, and cleans up without a trace when the PR is merged or closed — honestly, when I first set this up, I thought "Does this really work automatically?" But once you understand how ArgoCD ApplicationSet's Generator combinations work, you'll realize it's a surprisingly concise architecture. In this post, I'll walk through how to build a Preview environment fully synchronized with the PR lifecycle by combining the Pull Request Generator and Matrix Generator. By following this guide, you can have your first Preview environment up and running within 30 minutes.
Prerequisites: This post assumes basic Kubernetes knowledge (namespace, Pod, Ingress concepts) and that ArgoCD is already installed on your cluster. If you've used Helm or Kustomize at least once, you should be able to follow along without difficulty.
Core Concepts
What is a PR Preview Environment
A PR Preview environment is an ephemeral deployment environment synchronized 1:1 with a Pull Request's lifecycle. When a developer opens a PR, that code is deployed to an actual Kubernetes cluster and becomes accessible via a unique URL. When the PR is closed or merged, all related resources are automatically deleted.
The value here is that when a PM or designer asks "Where can I check this right now?", you can simply share a single URL. Code reviewers can also review while seeing the actual behavior in action, noticeably improving review quality.
ApplicationSet and Generators
The engine behind this entire structure is ArgoCD's ApplicationSet controller. An ApplicationSet is a resource that declaratively defines "under what conditions should which ArgoCD Applications be created" — and the component that creates those conditions is the Generator.
ApplicationSet: A controller that dynamically creates and deletes multiple ArgoCD Applications from a single template. When a Generator provides parameter sets, it injects those parameters into the template to create Applications.
Pull Request Generator
It periodically polls the API of Git hosting platforms (GitHub, GitLab, Gitea, Bitbucket) to detect open PRs. For each detected PR, it provides the following template parameters:
| Parameter | Description | Example Value |
|---|---|---|
| `{{.number}}` | PR number | `142` |
| `{{.branch}}` | Source branch name | `feature/login-ui` |
| `{{.head_sha}}` | Latest commit SHA | `a1b2c3d4e5f6...` |
| `{{.head_short_sha}}` | Commit SHA (7 characters) | `a1b2c3d` |
| `{{.labels}}` | List of labels on the PR | `["preview","frontend"]` |
When a PR is closed or merged, the corresponding Application is automatically deleted. This is the core mechanism of PR Preview.
Polling vs Webhook: By default, the Pull Request Generator polls the GitHub API at the interval set in `requeueAfterSeconds`. If you need an immediate response, you can also configure a GitHub Webhook in ArgoCD to receive PR events in real time; polling is not the only method.
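If you take the Webhook route, ArgoCD reads the shared secret from its `argocd-secret` Secret. A minimal sketch, assuming GitHub and a placeholder secret value (per ArgoCD's webhook configuration docs; the repository's webhook must point at the ApplicationSet controller's `/api/webhook` endpoint):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
stringData:
  # Shared secret GitHub uses to sign webhook payloads; configure the
  # same value in the repository's webhook settings (placeholder below).
  webhook.github.secret: replace-with-a-random-string
```

With this in place, PR open/close events reach the controller immediately instead of waiting for the next polling cycle.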
Matrix Generator
It creates every combination (Cartesian product) of two child Generators' outputs, a situation frequently encountered in practice. For example, if 3 PRs are open and there are 2 target clusters, a total of 6 Applications are created.
```
Pull Request Generator output: [PR-1, PR-2, PR-3]
Cluster Generator output:      [dev-cluster, staging-cluster]
Matrix result: [PR-1×dev, PR-1×staging, PR-2×dev, PR-2×staging, PR-3×dev, PR-3×staging]
```

Overall Architecture Flow
```
Developer opens PR
        ↓
CI (GitHub Actions) builds & pushes container image
        ↓
Pull Request Generator detects open PR (polling or Webhook)
        ↓
Combined with Matrix Generator → Application created
        ↓
ArgoCD deploys to PR-dedicated namespace (preview-pr-142)
        ↓
Unique URL assigned (pr-142.preview.example.com)
        ↓
On PR close/merge, Application deleted → Finalizer cleans up namespace and resources
```

Finalizer: Cleanup logic that runs before a resource is deleted in Kubernetes. In ArgoCD, adding `resources-finalizer.argocd.argoproj.io` to an Application ensures that when the Application is deleted, all resources it deployed are cleaned up together. If you forget this, Pods and Services will remain orphaned even after the PR is closed.
Practical Implementation
Example 1: Basic PR Preview on a Single Cluster
This is the most basic configuration. I started with this pattern too — it deploys apps to isolated namespaces per PR on a single cluster. The key cost management point is targeting only PRs with the preview label.
Why Kustomize: This example uses Kustomize. With `namePrefix` and `commonLabels`, you can easily isolate resources per PR, making it a great way to reuse existing manifests without a separate Helm chart.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: pr-preview
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - pullRequest:
        github:
          owner: my-org
          repo: my-app
          tokenRef:
            secretName: github-token
            key: token
          labels:
            - preview
        requeueAfterSeconds: 30
  template:
    metadata:
      name: 'preview-{{.number}}'
      annotations:
        notifications.argoproj.io/subscribe.on-deployed.github: ""
      finalizers:
        - resources-finalizer.argocd.argoproj.io
    spec:
      project: previews
      source:
        repoURL: 'https://github.com/my-org/my-app.git'
        targetRevision: '{{.head_sha}}'
        path: k8s/overlays/preview
        kustomize:
          namePrefix: 'pr-{{.number}}-'
          commonLabels:
            app.kubernetes.io/instance: 'pr-{{.number}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: 'preview-pr-{{.number}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
          - PruneLast=true
```

Here's a summary of the key settings:
| Setting | Role |
|---|---|
| `labels: [preview]` | Creates environments only for PRs with the `preview` label. Prevents unnecessary costs |
| `requeueAfterSeconds: 30` | Polls the GitHub API every 30 seconds to check PR status |
| `finalizers` | Cleans up all deployed resources when the Application is deleted |
| `goTemplate: true` | Enables Go template syntax (dot notation like `.number`) |
| `targetRevision: '{{.head_sha}}'` | Tracks the latest commit of the PR. Auto-redeploys on new commits |
| `CreateNamespace=true` | Automatically creates the namespace if it doesn't exist |
| `prune: true` | Removes resources from the cluster that were deleted from Git |
`requeueAfterSeconds` and Rate Limits: Polling every 30 seconds means 120 API calls per hour per ApplicationSet. If you apply ApplicationSets to multiple repos or share tokens across environments, you may hit GitHub's API rate limit (5,000 requests per hour for authenticated users). With many repos, it's safer to increase the interval to 60 seconds or more, or use Webhooks in parallel.
The `k8s/overlays/preview` directory needs at minimum a `kustomization.yaml` like this:

```yaml
# k8s/overlays/preview/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - resource-limits.yaml  # Resource limits for Preview
```

With just this configuration, opening a PR and adding the `preview` label will start spinning up an environment within 30 seconds.
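The `resource-limits.yaml` patch referenced above isn't shown in this post; here is a minimal sketch, assuming the base Deployment and its container are both named `my-app` (a hypothetical name — match it to your base manifests):

```yaml
# k8s/overlays/preview/resource-limits.yaml
# Strategic-merge patch that caps Preview pods; assumes the base
# Deployment and container are both named "my-app" (hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1            # one replica is enough for a Preview
  template:
    spec:
      containers:
        - name: my-app
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```

Kustomize merges this onto the base Deployment by name, so Preview environments stay small without touching the base manifests.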
Example 2: Multi-Cluster Preview with Matrix Generator
As teams grow, requests like "I want to see it on the dev cluster too, and also verify on staging" emerge. I needed this pattern once our team exceeded 10 people — I remember struggling for a while when I first set it up because I missed a cluster label. You combine the Matrix Generator with the Pull Request Generator.
Why Helm: In multi-cluster environments, you need to inject different values per cluster (ingress host, image tag, etc.), and Helm's `parameters` override is better suited for this kind of dynamic value injection.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: pr-preview-multi-cluster
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - matrix:
        generators:
          - pullRequest:
              github:
                owner: my-org
                repo: my-app
                tokenRef:
                  secretName: github-token
                  key: token
                labels:
                  - preview
              requeueAfterSeconds: 30
          - clusters:
              selector:
                matchLabels:
                  env: preview
  template:
    metadata:
      name: 'preview-{{.number}}-{{.name}}'
      finalizers:
        - resources-finalizer.argocd.argoproj.io
    spec:
      project: previews
      source:
        repoURL: 'https://github.com/my-org/my-app.git'
        targetRevision: '{{.head_sha}}'
        path: charts/my-app
        helm:
          parameters:
            - name: image.tag
              value: 'pr-{{.number}}-{{.head_short_sha}}'
            - name: ingress.host
              value: 'pr-{{.number}}.{{.name}}.preview.example.com'
      destination:
        server: '{{.server}}'
        namespace: 'preview-pr-{{.number}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```

Here, the `clusters` Generator selects only clusters registered in ArgoCD that have the `env: preview` label. Since the Matrix Generator multiplies PRs by clusters, when PR-142 is opened, two Applications — `preview-142-dev-cluster` and `preview-142-staging-cluster` — are automatically created.
`{{.name}}` is the cluster name from the Cluster Generator, and `{{.server}}` is that cluster's API server address. The power of the Matrix Generator is that you can freely mix parameters from different Generators in a single template.
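For reference, the `env: preview` label that the selector matches lives on the Secret that registers each cluster with ArgoCD. A minimal declarative sketch (cluster name, server address, and credentials are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dev-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster  # marks this Secret as a cluster registration
    env: preview                             # matched by the clusters generator's selector
stringData:
  name: dev-cluster
  server: https://dev-cluster.example.com:6443
  config: |
    {
      "bearerToken": "<placeholder>",
      "tlsClientConfig": { "insecure": false }
    }
```

If you registered clusters with `argocd cluster add`, you can add the `env: preview` label to the generated Secret afterwards; the Cluster Generator only sees labels on these Secrets.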
Example 3: GitHub Actions CI Pipeline Integration
If ArgoCD handles deployment, CI is responsible for building images and commenting the Preview URL on the PR. Honestly, the most frustrating part here was matching image tag conventions. If the tag built in CI differs by even one character from the tag in the ApplicationSet template, the image won't be found and the Pod won't start.
```yaml
# .github/workflows/pr-preview.yml
name: PR Preview
on:
  pull_request:
    types: [opened, synchronize, labeled]
jobs:
  build:
    if: contains(github.event.pull_request.labels.*.name, 'preview')
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - name: Login to GHCR
        run: echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Build and Push Image
        run: |
          IMAGE=ghcr.io/my-org/my-app:pr-${{ github.event.number }}-$(echo ${{ github.event.pull_request.head.sha }} | cut -c1-7)
          docker build -t $IMAGE .
          docker push $IMAGE
      - name: Comment Preview URL
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: `🚀 The Preview environment is being prepared!\n\nhttps://pr-${context.issue.number}.preview.example.com\n\nDeployment takes about 1–2 minutes.`
            })
```

The critical part is using `head_short_sha` (7 characters) in the image tag. Since the ApplicationSet template's `image.tag` value is `pr-{{.number}}-{{.head_short_sha}}`, CI must also truncate the SHA to 7 characters when creating the tag.
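The tag-assembly logic can be sketched and sanity-checked in isolation; the PR number and SHA below are stand-ins for the GitHub Actions context values:

```shell
# Assemble the image tag exactly as the ApplicationSet template expects:
# pr-<PR number>-<first 7 chars of the head SHA>.
PR_NUMBER=142                                        # stand-in for github.event.number
HEAD_SHA=a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0   # stand-in for head.sha
SHORT_SHA=$(echo "$HEAD_SHA" | cut -c1-7)
IMAGE="ghcr.io/my-org/my-app:pr-${PR_NUMBER}-${SHORT_SHA}"
echo "$IMAGE"   # → ghcr.io/my-org/my-app:pr-142-a1b2c3d
```

Keeping this one-liner identical in CI and in your head when writing the ApplicationSet template is what prevents the "image not found" class of failures.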
Pros and Cons Analysis
Honestly, after introducing PR Preview, our team's review culture changed noticeably. But there's no free lunch. Here are the pros and cons I've experienced in production.
Pros
| Item | Details |
|---|---|
| Fast feedback loop | Code reviewers can directly verify features with a single URL. Review quality improves noticeably |
| Fully automated | Zero manual intervention from PR opening to environment deletion |
| Isolated testing | Each PR runs in an independent namespace with no cross-environment interference |
| Non-developer collaboration | PMs, QA, and designers can verify features directly without setting up a development environment |
| GitOps compliance | All infrastructure state is declaratively defined in Git, ensuring traceability and reproducibility |
Cons and Considerations
| Item | Details | Mitigation |
|---|---|---|
| Increased infrastructure costs | Resource consumption spikes with full stack deployment per PR | preview label filtering, ResourceQuota settings, TTL-based auto-cleanup |
| Initial setup complexity | Many things to configure: ApplicationSet, CI, DNS, Ingress, secrets, etc. | Start with the basic pattern from Example 1 and expand incrementally |
| DB state management | Seeding/migration/isolation of stateful services is tricky | Choose between temporary DB instances per PR or shared DB + schema isolation strategy |
| Network complexity | Wildcard DNS, TLS certificates, inter-service communication setup | Automate with ExternalDNS + cert-manager |
| Security exposure | Unfinished code may be exposed externally | Route through OAuth2 Proxy, restrict to internal network |
ResourceQuota: A Kubernetes resource that sets upper limits on CPU, memory, Pod count, etc. per namespace. Applying it to Preview environments prevents a single PR from monopolizing cluster resources.
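As a sketch, a quota like the following can be added to the Preview overlay so every PR namespace inherits it (the specific limits are illustrative assumptions, not recommendations):

```yaml
# ResourceQuota for Preview namespaces; caps what a single PR can consume.
# The numbers below are illustrative — size them to your workload.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: preview-quota
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "10"
```

Because the ApplicationSet deploys each PR into its own namespace, including this in the overlay means the quota is created alongside the app and deleted with it.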
Most Common Mistakes in Practice
1. Forgetting the Finalizer — If you don't add `resources-finalizer.argocd.argoproj.io` to the Application, the Application gets deleted but the deployed resources (Pods, Services, Ingress) remain orphaned. A few days later, when you run `kubectl get pods --all-namespaces`, you'll inevitably find a parade of mysterious Pods.
2. Creating environments for all PRs without a label filter — At first you might think "Wouldn't it be convenient to create them all?", but in an active repository with 20 simultaneously open PRs, the cluster will scream. The `labels` filter is not optional — it's mandatory.
3. Image tag convention mismatch — I once got a Slack alert at 2 AM and found all Preview environments in CrashLoopBackOff. The cause was simple: CI built with `pr-142-a1b2c3d` (7-character SHA) but the template referenced `{{.head_sha}}` (full 40-character SHA). Don't confuse `head_sha` with `head_short_sha`.
Tip about selfHeal: `selfHeal: true` automatically reverts drift back to the Git state, but when debugging in a Preview environment by temporarily modifying resources, ArgoCD will keep reverting your changes. If your team debugs frequently, consider setting `selfHeal: false` for Preview environments.
Conclusion
A PR Preview environment is the most effective DevOps investment that gifts your team the experience of "code review = verify with a single URL click."
I remember when I first built this and our PM said "So I don't have to keep asking 'where can I check this?' anymore, right?" It's true that the initial setup isn't trivial (as I honestly mentioned in the cons section), but once it's built, it runs automatically for every PR from then on.
Three steps to get started right now:
1. Create the GitHub Token Secret — Register the token with `kubectl create secret generic github-token --from-literal=token=ghp_xxxx -n argocd`. Only repo read permission is needed. (In production environments, using GitHub App authentication instead of Personal Access Tokens is recommended for security.)
2. Deploy the basic ApplicationSet — Copy Example 1's YAML as-is, modify only the org/repo information, and apply it with `kubectl apply -f pr-preview-appset.yaml`. A Kustomize overlay must be prepared at the `k8s/overlays/preview` path.
3. Add the `preview` label to a PR — Open a test PR, add the `preview` label, and verify in the ArgoCD UI that a `preview-{PR number}` Application is created and synced. If there's no response within 30 seconds, check the token permissions and the ApplicationSet controller logs (`kubectl logs -n argocd -l app.kubernetes.io/name=argocd-applicationset-controller`).
Next post: How to safely connect shared databases to Preview environments — PR-level schema isolation and seed data automation strategies
References
Essential Documentation (Read First)
- ArgoCD Pull Request Generator Official Docs
- ArgoCD Matrix Generator Official Docs
- ArgoCD Application Pruning & Resource Deletion
Practical Guides
- Create Temporary Argo CD Preview Environments Based On Pull Requests — Codefresh
- Automate CI/CD on pull requests with Argo CD ApplicationSets — Red Hat Developer
- Setting up Preview Environments for Pull Requests with Argo CD and GitHub Actions — Medium
- From PR → Preview → Production with GitHub Actions + ArgoCD — Medium
- The What and Why of Ephemeral Preview Environments on Kubernetes — Northflank
- Comprehensive Guide to Preview Environment Solutions for Kubernetes — Signadot