Platform Engineering and Internal Developer Platforms: How Backstage and Golden Paths Enable Developer Self-Service
Honestly, when I first heard about platform engineering, my reaction was "isn't this just DevOps with a new name?" As someone who had memorized 100 lines of Kubernetes YAML for deployments and nearly taken production down with a mistake in a Terraform (a tool for provisioning cloud infrastructure by declaring it as code) script, I had lived the reality of the DevOps model where "developers own the infrastructure too." Platform engineering goes one step further. The idea: let a dedicated team absorb that pain, so developers can focus purely on code.
The essence of platform engineering is treating internal infrastructure and developer tooling as a single 'product,' with developers inside the organization as its customers. The output of that is the IDP (Internal Developer Platform). According to the State of Platform Engineering Report Vol. 4 (surveying 518 engineers globally), over 55% of organizations have already adopted platform engineering, and Gartner predicts that by 2026, 80% of large organizations will operate a platform team. This is not a passing trend.
This article is written primarily for full-stack and backend developers with limited infrastructure experience. We'll look at how the three layers of an IDP actually fit together—along with code you can start using on your team right away.
Core Concepts
DevOps vs. Platform Engineering: What's the Difference?
This is a confusion that comes up often in practice. The two methodologies share the same goal but differ in approach.
| Category | DevOps | Platform Engineering |
|---|---|---|
| Philosophy | Developers directly own infrastructure | Platform team absorbs cognitive load |
| Developer role | Owns deployment and operations | Focuses on code and business logic |
| Infrastructure access | Direct manipulation (kubectl, AWS Console) | Abstracted via a self-service portal |
| Key risks | Cognitive overload, security mistakes | Platform team bottlenecks, upfront investment cost |
The key is the abstraction layer. Instead of developers writing Kubernetes YAML by hand every time or configuring security group rules, the platform provides validated configurations.
The Three Layers of an IDP
An IDP is not a single tool—it is a system of three interlocking layers.
```mermaid
graph TD
    A["Self-Service Portal (developer touchpoint) — Backstage / Port / Cortex*"]
    B["Orchestration Layer — Humanitec / Crossplane"]
    C["Infrastructure Automation Layer — Terraform / Pulumi / ArgoCD"]
    A --> B
    B --> C
```

*Cortex is a tool specialized in software catalog management and service quality scoring. It is more accurately understood as a catalog-centric platform than as a full-stack self-service portal like Backstage or Port.
When a developer requests "give me one staging environment" from the portal, the orchestration layer executes the IaC and provisions the actual cloud resources. Developers don't need to know the internal implementation.
Golden Path: The Core Philosophy of the IDP
A Golden Path is a templated route that encodes security, governance, and operational best practices. Rather than configuring everything from scratch each time, developers choose a validated path—gaining both consistency and speed.
For example, when starting a new microservice, without a golden path, one person might leave security groups too open and another might forget to set resource limits. With a golden path, you start from scaffolding where the organization's established best practices are applied automatically.
Practical Application
Example 1: Building a Software Catalog and Golden Path with Backstage
→ This corresponds to the self-service portal layer among the three IDP layers.
When a new developer joins and it takes weeks to figure out "what services does our company have, who owns them, and where is the API documentation?"—that's a familiar story. I remember scouring Slack channels in my first month just to understand service dependencies. Backstage's software catalog solves this problem.
```yaml
# catalog-info.yaml added to each service repository
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payment-service
  description: B2B payment processing service
  tags:
    - payment
    - critical
  annotations:
    github.com/project-slug: my-org/payment-service
    grafana/dashboard-selector: "payment-service"
    pagerduty.com/service-id: "P1234ABC"
spec:
  type: service
  lifecycle: production
  owner: payments-team
  dependsOn:
    - component:user-service
    - resource:payments-database
  providesApis:
    - payment-api
```

| Field | Description |
|---|---|
| `annotations` | Auto-connects with external systems (GitHub, Grafana, PagerDuty) |
| `dependsOn` | Automatically generates a service dependency graph |
| `providesApis` | Auto-links API documentation to catalog entries |
| `owner` | Clarifies service ownership and routes on-call alerts |
A catalog file alone is not enough. You can also automate new service creation itself—and that's exactly what the Backstage scaffolder does.
```yaml
# Backstage scaffolder template (golden path entry point)
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: nodejs-service-template
  title: Node.js Microservice (Golden Path)
  description: Standard template with security, monitoring, and CI/CD configuration included
spec:
  owner: platform-team
  type: service
  parameters:
    - title: Service Basic Information
      required:
        - name
        - owner
      properties:
        name:
          title: Service Name
          type: string
        owner:
          title: Team Name
          type: string
          ui:field: OwnerPicker
  steps:
    - id: fetch
      name: Fetch Template
      action: fetch:template
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
          owner: ${{ parameters.owner }}
    - id: publish
      name: Push to GitHub
      action: publish:github
      input:
        allowedHosts:
          - github.com
        repoUrl: github.com?owner=my-org&repo=${{ parameters.name }}-service
        defaultBranch: main
    - id: register
      name: Register in Backstage Catalog
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
```

With this single template, every time a new service is started, GitHub repository creation, CI/CD pipeline wiring, monitoring setup, and Backstage catalog registration are all handled automatically. Real-world cases show onboarding that used to take weeks being reduced to hours.
Example 2: Automating Governance with Policy-as-Code
→ Among the three IDP layers, this acts as the security gate within the orchestration and infrastructure automation layers.
If your organization has a security team manually verifying a pre-deployment checklist, you can automate policies using OPA (Open Policy Agent, a tool for declaring policies as code and validating them automatically).
A quick note: attaching OPA to a CI/CD pipeline and attaching it to a Kubernetes admission controller serve different roles. At the CI/CD stage, policies are checked before deployment and fail the build (preventive); at the admission controller stage, resources are blocked in real time the moment they are created in the actual cluster (real-time defense). Applying both creates a double safety net.
```rego
# OPA Rego policy example: production environment deployment rules
# Rego uses Datalog-style syntax. Read a 'deny[msg] { ... }' block as:
# "if all conditions inside are true, reject that request."
package kubernetes.admission

# Resource limits required in production namespace
deny[msg] {
    input.request.kind.kind == "Pod"
    input.request.namespace == "production"
    container := input.request.object.spec.containers[_]
    not container.resources.limits.memory
    msg := sprintf("Container '%v' has no memory limit. Configuration is required for production deployment.", [container.name])
}

# Prohibit use of the latest tag
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    endswith(container.image, ":latest")
    msg := sprintf("Container '%v' is using the ':latest' tag. Please pin to a specific version.", [container.name])
}

# Block root containers
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    container.securityContext.runAsUser == 0
    msg := sprintf("Container '%v' is running as root. Please switch to a non-privileged user.", [container.name])
}
```

All three policies target rules that are frequently violated in practice. Convera (a global B2B payments company) combined this kind of policy automation with a Humanitec-based IDP to achieve a change failure rate below 5% and a 30% reduction in release lead time.
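If you want the CI-stage check before standing up OPA itself, the intent of the three rules can be mirrored in plain Python. This is an illustrative sketch of the logic only: `check_pod` is a hypothetical helper, not an OPA API, and a real pipeline would evaluate the Rego policies with a tool such as `conftest` or `opa eval`.

```python
# Illustrative re-implementation of the three admission rules in plain Python.
# NOTE: check_pod is a hypothetical helper, not part of OPA; a real pipeline
# would evaluate the Rego policies themselves with conftest or `opa eval`.

def check_pod(pod: dict, namespace: str) -> list[str]:
    """Return policy violation messages for a Pod manifest dict."""
    violations = []
    for c in pod.get("spec", {}).get("containers", []):
        limits = c.get("resources", {}).get("limits", {})
        # Rule 1: memory limits are mandatory in production
        if namespace == "production" and "memory" not in limits:
            violations.append(f"Container '{c['name']}' has no memory limit.")
        # Rule 2: no ':latest' image tags
        if c.get("image", "").endswith(":latest"):
            violations.append(f"Container '{c['name']}' uses the ':latest' tag.")
        # Rule 3: no root containers
        if c.get("securityContext", {}).get("runAsUser") == 0:
            violations.append(f"Container '{c['name']}' runs as root.")
    return violations

pod = {"spec": {"containers": [
    {"name": "app", "image": "my-org/app:latest", "resources": {}},
]}}

for msg in check_pod(pod, "production"):
    print(msg)  # two violations: missing memory limit, ':latest' tag
```

Running this in a pipeline and failing the build when the list is non-empty gives you the "preventive" half of the double safety net described above.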
Example 3: Declaring Infrastructure as Kubernetes Resources with Crossplane
→ This corresponds to the orchestration layer among the three IDP layers.
Unlike the days when a developer saying "I need a PostgreSQL database" meant waiting days for the Ops team, Crossplane lets you request infrastructure in the same declarative style you already use with Kubernetes. When I first saw this, I didn't know what a Composition was and read the official docs three times—but splitting it into the developer perspective and the platform team perspective made it click much faster.
```yaml
# DB request written by a developer (Crossplane Claim)
# Only business-perspective parameters need to be specified
apiVersion: database.example.com/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: my-app-db
  namespace: my-team
spec:
  parameters:
    storageGB: 20
    version: "14"
    environment: staging
  writeConnectionSecretToRef:
    name: my-app-db-credentials
```

```yaml
# Composition defined by the platform team (simplified)
# When a developer's Claim comes in, this determines what actual AWS resources to create
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: postgresql-aws
spec:
  compositeTypeRef:
    apiVersion: database.example.com/v1alpha1
    kind: PostgreSQLInstance
  resources:
    - name: rds-instance
      base:
        apiVersion: rds.aws.upbound.io/v1beta1
        kind: Instance
        spec:
          forProvider:
            region: ap-northeast-2
            instanceClass: db.t3.micro
            engine: postgres
      patches:
        - fromFieldPath: spec.parameters.storageGB
          toFieldPath: spec.forProvider.allocatedStorage
        - fromFieldPath: spec.parameters.version
          toFieldPath: spec.forProvider.engineVersion
```

A Crossplane Composition is a template that the platform team defines in advance, specifying "when a PostgreSQL request comes in, which AWS resources to create and how to assemble them." Developers don't need to know these implementation details.
Developers don't need to know the complex AWS RDS settings like VPCs, subnets, or parameter groups—they simply specify business-perspective parameters (capacity, version, environment). The platform team defines the best practices once in a Composition, and they are automatically applied to all subsequent requests.
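The patch mechanism can be illustrated in a few lines of Python. This is not Crossplane code: `get_path` and `set_path` are hypothetical helpers that only mimic how `fromFieldPath`/`toFieldPath` copy values from the developer's Claim onto the composed RDS resource, which Crossplane itself does inside the cluster.

```python
# Hypothetical helpers mimicking Crossplane's fromFieldPath/toFieldPath
# patching. Crossplane performs this inside the cluster; this sketch
# only illustrates the field mapping between Claim and composed resource.

def get_path(obj: dict, path: str):
    """Read a dotted field path from a nested dict."""
    for key in path.split("."):
        obj = obj[key]
    return obj

def set_path(obj: dict, path: str, value) -> None:
    """Write a value at a dotted field path, creating nesting as needed."""
    keys = path.split(".")
    for key in keys[:-1]:
        obj = obj.setdefault(key, {})
    obj[keys[-1]] = value

# The developer's Claim (business-perspective parameters only)
claim = {"spec": {"parameters": {"storageGB": 20, "version": "14"}}}
# The base RDS resource from the platform team's Composition
rds = {"spec": {"forProvider": {"instanceClass": "db.t3.micro", "engine": "postgres"}}}

patches = [
    ("spec.parameters.storageGB", "spec.forProvider.allocatedStorage"),
    ("spec.parameters.version", "spec.forProvider.engineVersion"),
]
for src, dst in patches:
    set_path(rds, dst, get_path(claim, src))

print(rds["spec"]["forProvider"]["allocatedStorage"])  # 20
print(rds["spec"]["forProvider"]["engineVersion"])     # 14
```

The developer's `storageGB: 20` ends up as `allocatedStorage: 20` on the AWS resource, while platform-chosen defaults like `instanceClass` stay untouched.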
Pros and Cons Analysis
Advantages
| Item | Details |
|---|---|
| Reduced cognitive load | Developers can work via self-service without deep infrastructure knowledge, focusing on core development |
| Faster deployments | Multiple cases of transitioning from weekly releases to multiple daily deployments (State of Platform Engineering Vol. 4) |
| Security automation | Policy-as-Code can automate a significant portion of manual security checklists |
| Standardization | Golden paths consistently apply best practices across the entire team |
| Faster onboarding | Cases reported of new developer onboarding reduced from weeks to hours |
Disadvantages and Caveats
| Item | Details | Mitigation |
|---|---|---|
| High initial investment | Approximately 47% of platform initiatives operate on an annual budget under $1M | Start small and expand incrementally (begin with 1–2 golden paths) |
| Dedicated team required | The platform itself is a product, requiring continuous maintenance personnel | Secure at least 2–3 dedicated engineers before starting |
| Cultural resistance | Developer pushback against new workflows | Involve developers from the design stage through change management |
| Difficulty measuring outcomes | About 30% of platform teams measure no outcomes at all (State of Platform Engineering Vol. 4) | Establish a DORA metrics-based measurement framework |
| Vendor lock-in | Heavy dependence on specific tools increases switching costs | Use plugin architectures and standard interfaces |
DORA Metrics are software delivery performance measurement criteria defined by Google's DevOps Research and Assessment team. They consist of four measures: Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Time to Restore Service.
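As a starting point for such a baseline, two of the four measures can be computed from a simple deployment log. The record format below is hypothetical; adapt the field names to whatever your CI system actually exports.

```python
# Sketch of a DORA baseline calculation from a (hypothetical) deployment log.
# Each record: commit timestamp, deploy timestamp, and whether it caused a failure.
from datetime import datetime

deployments = [
    {"committed": "2025-06-02T09:00", "deployed": "2025-06-03T09:00", "failed": False},
    {"committed": "2025-06-04T10:00", "deployed": "2025-06-04T16:00", "failed": True},
    {"committed": "2025-06-05T08:00", "deployed": "2025-06-05T20:00", "failed": False},
]

def lead_time_hours(d: dict) -> float:
    """Lead time for changes: commit-to-deploy duration in hours."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(d["deployed"], fmt) - datetime.strptime(d["committed"], fmt)
    return delta.total_seconds() / 3600

avg_lead_time = sum(lead_time_hours(d) for d in deployments) / len(deployments)
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"Average lead time: {avg_lead_time:.1f} h")        # 14.0 h
print(f"Change failure rate: {change_failure_rate:.0%}")  # 33%
```

Even a spreadsheet export fed through a script like this gives you the before/after numbers the platform team will need when making its case.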
The Most Common Mistakes in Practice
- "Choose the tools first" — Many people mistakenly believe that installing Backstage is where platform engineering begins. In reality, the "Platform as a Product" mindset—treating developers as customers—comes first. Tools come after.
- Starting too big — I've frequently seen teams burn out trying to build a perfect IDP from day one. The approach that works best in practice is to turn the single most painful workflow into a golden path, then expand incrementally based on developer feedback.
- Not measuring ROI — According to State of Platform Engineering Vol. 4, about 30% of platform teams measure no outcomes at all, which causes them to fail at securing budget. It is strongly recommended to measure baselines for deployment frequency, onboarding time, and incident counts before adopting the platform.
Closing Thoughts
Platform engineering is not a collection of tools—it is a mindset shift that treats developers as customers and internal infrastructure as a product. Building a catalog with Backstage, automating policies with OPA, and managing infrastructure declaratively with Crossplane may look like a lot of configuration at first, but once a single golden path is in place, the entire team's workflow changes. It was only after experiencing this that I finally admitted my first impression of "just DevOps with a new name" was wrong.
Three steps you can start with right now:
- The starting point is finding the single workflow that consumes the most time on your team. Whether it's new service onboarding, Kubernetes deployments, or environment provisioning—that task is your first golden path candidate. You can run a local Backstage instance in under 30 minutes with the `npx @backstage/create-app@latest` command.
- Ask 5 developers: "What's the most frustrating part of your infrastructure-related work?" Features that solve real pain get adopted far faster than features a platform team built on assumptions. When I scoped my first golden path based on these interview results, the adoption rate was noticeably different.
- It is recommended to measure your DORA metrics baseline now. Recording deployment frequency, lead time, and change failure rate—even in a spreadsheet—lets you demonstrate the platform's impact with numbers after adoption. This data will be your strongest argument when requesting additional budget six months down the line.
References
- Platform Engineering in 2026: What It Actually Is and How to Build an IDP | Java Code Geeks
- What is an Internal Developer Platform (IDP)? | internaldeveloperplatform.org
- Platform Engineering in 2026: 5 Shifts Driving the Rise of IDPs | Growin
- Announcing the State of Platform Engineering Report Vol. 4 | platformengineering.org
- Platform Engineering in 2026: The Numbers Behind the Boom | DEV Community
- Internal Developer Platforms: Top 5 Use Cases & 5 Key Components | Octopus Deploy
- Platform Engineering Face-Off: To IDP or Not To IDP? | The New Stack
- Platform Engineering Tools Compared: Backstage, Port, Cortex, Humanitec | Encore Cloud
- Internal Developer Platform IDP 2026 Complete Guide | Calmops
- Use Case: Build Internal Developer Platforms | Humanitec
- Navigating Internal Developer Platforms in 2025 | Infisical