Complete Backstage Setup Guide for AWS: Step-by-Step
Deploy a production-ready Internal Developer Platform on AWS EKS in 7 steps — with RDS, Cognito, S3 TechDocs, Helm, and golden path templates
Published February 12, 2026 · 25 min read
Quick Answer
How do you set up Backstage on AWS?
Step 1: Create a Backstage app with npx @backstage/create-app@latest. Step 2: Provision RDS PostgreSQL via Terraform for the catalog database. Step 3: Configure AWS Cognito as the OAuth2 identity provider. Step 4: Build a Docker image, push to ECR, and deploy to EKS using the official Helm chart. Step 5: Integrate AWS services — S3 for TechDocs, ECS/EKS for catalog entities. Step 6: Add golden path software templates. Step 7: Configure TechDocs publishing to S3. Total setup: 4-6 hours basic, 2-3 weeks production-ready with templates and monitoring.
Executive Summary
Backstage, created by Spotify and now a CNCF incubating project, has become the de facto standard for building Internal Developer Platforms. But while Backstage's plugin architecture is powerful, deploying it on AWS with production-grade infrastructure — managed database, enterprise authentication, scalable documentation hosting — requires careful orchestration of multiple services.
This Backstage AWS setup guide walks you through every step: from scaffolding your Backstage app to deploying it on EKS with RDS PostgreSQL, Cognito authentication, S3-backed TechDocs, and golden path software templates. Each step includes production-ready code you can copy directly into your infrastructure.
By the end, you will have a fully operational Internal Developer Platform serving your engineering team — with HTTPS, monitoring, automated backups, and self-service templates that reduce onboarding from weeks to hours.
Target Architecture
Before diving into the steps, here is the architecture we are building. Every AWS service maps to a specific Backstage concern — compute, storage, identity, and networking:
                          Route 53 (DNS)
                   backstage.yourcompany.com
                              │
                              ▼
              Application Load Balancer (ALB)
                 ACM TLS Certificate (HTTPS)
                              │
                              ▼
┌──────────────────────────────────────────────────────┐
│                  Amazon EKS Cluster                  │
│  ┌────────────────────────────────────────────────┐  │
│  │            Backstage Pod (Node.js)             │  │
│  │    Catalog       Scaffolder       TechDocs     │  │
│  │    Backend       Backend          Backend      │  │
│  └───────┬─────────────┬────────────────┬────────┘   │
└──────────┼─────────────┼────────────────┼────────────┘
           ▼             ▼                ▼
   RDS PostgreSQL    GitHub/GitLab    S3 Bucket
   (Catalog DB)      (Templates &     (TechDocs HTML)
   db.t3.medium       Repos)
           │
           ▼
     AWS Cognito
    (OAuth2 / SSO)
Monthly cost for this stack: approximately $320-$485 depending on node size and traffic. That is under $5 per developer per month for a team of 100 engineers.
Prerequisites
Before starting this Backstage AWS setup guide, confirm you have the following in place:
- AWS Account with admin or PowerUser IAM permissions. You will provision EKS, RDS, S3, Cognito, ALB, ACM, and Route 53 resources.
- EKS Cluster (v1.27+) running with at least 2 t3.medium worker nodes. If you do not have one, use eksctl create cluster or Terraform.
- PostgreSQL 14+ — we provision this via RDS in Step 2, but if you already have a managed instance you can skip that step.
- Node.js 18+ and Yarn 1.x installed locally for Backstage development. Backstage uses Yarn classic workspaces.
- Docker installed for building the production container image.
- Helm 3 installed for deploying to EKS.
- Terraform 1.5+ installed for provisioning AWS infrastructure.
- kubectl configured to talk to your EKS cluster (aws eks update-kubeconfig --name your-cluster).
- A domain name with a Route 53 hosted zone (e.g., backstage.yourcompany.com).
- GitHub or GitLab account for source control integration and software template repositories.
Tip:
If you are starting from scratch, our AWS Landing Zone with Terraform guide covers VPC, EKS, and IAM setup. Complete that first, then return here for Backstage.
Step 1: Install Backstage CLI and Create Your App
Backstage ships a scaffolding CLI that generates a monorepo with a React frontend and Node.js backend. This is your starting point:
# Create a new Backstage app (interactive wizard)
npx @backstage/create-app@latest

# When prompted:
#   App name: backstage-internal
#   Select database: PostgreSQL

# Navigate into the project
cd backstage-internal

# Verify the project structure
ls -la packages/
#   app/     → React frontend (port 3000)
#   backend/ → Node.js API (port 7007)
The generated project uses Yarn workspaces. Start the development server to verify everything works locally:
# Install dependencies
yarn install

# Start in development mode (uses SQLite by default)
yarn dev

# Open http://localhost:3000 — you should see the Backstage UI
# Backend API runs at http://localhost:7007
At this point Backstage is running locally with an in-memory SQLite database. Next, we replace SQLite with a production-grade RDS PostgreSQL instance.
Key Files to Understand
- app-config.yaml — Primary configuration. Database connections, auth providers, integrations, and plugin settings all live here.
- app-config.production.yaml — Production overrides merged on top of the base config when NODE_ENV=production.
- packages/backend/src/index.ts — Backend entry point. Register plugins and middleware here.
- packages/app/src/App.tsx — Frontend entry point. Add plugin pages and routes here.
- catalog-info.yaml — Backstage's own service catalog entry (self-referencing).
Step 2: Configure PostgreSQL on RDS
Backstage stores its catalog, scaffolding state, and search index in PostgreSQL. AWS RDS gives us automated backups, Multi-AZ failover, and managed patching. Here is the Terraform module:
# modules/rds-backstage/main.tf
resource "aws_db_subnet_group" "backstage" {
name = "backstage-db-subnet"
subnet_ids = var.private_subnet_ids
tags = { Name = "backstage-db-subnet" }
}
resource "aws_security_group" "backstage_rds" {
name_prefix = "backstage-rds-"
vpc_id = var.vpc_id
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
security_groups = [var.eks_node_sg_id]
description = "Allow EKS nodes to reach PostgreSQL"
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = { Name = "backstage-rds-sg" }
}
resource "aws_db_instance" "backstage" {
identifier = "backstage-catalog"
engine = "postgres"
engine_version = "14.10"
instance_class = "db.t3.medium"
allocated_storage = 20
max_allocated_storage = 100
storage_encrypted = true
db_name = "backstage"
username = "backstage_admin"
password = var.db_password
db_subnet_group_name = aws_db_subnet_group.backstage.name
vpc_security_group_ids = [aws_security_group.backstage_rds.id]
multi_az = true
backup_retention_period = 7
deletion_protection = true
skip_final_snapshot = false
tags = { Name = "backstage-catalog-db" }
}
output "rds_endpoint" {
value = aws_db_instance.backstage.endpoint
}
output "rds_port" {
value = aws_db_instance.backstage.port
}

Apply the Terraform configuration:
# Set the password securely (never commit to Git)
export TF_VAR_db_password=$(aws secretsmanager get-secret-value \
  --secret-id backstage/db-password \
  --query SecretString --output text)

terraform init
terraform plan -out=tfplan
terraform apply tfplan
Now update app-config.production.yaml to point Backstage at the RDS instance:
# app-config.production.yaml
backend:
database:
client: pg
connection:
host: ${POSTGRES_HOST}
port: ${POSTGRES_PORT}
user: ${POSTGRES_USER}
password: ${POSTGRES_PASSWORD}
database: backstage
ssl:
require: true
rejectUnauthorized: true

Security Note:
Store the database password in AWS Secrets Manager or SSM Parameter Store. Inject it as an environment variable in your Kubernetes deployment — never hardcode credentials in config files. We cover this in the Kubernetes manifests in Step 4.
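The Terraform apply above reads the password from the backstage/db-password secret, so that secret must exist first. One way to create it is sketched below, assuming a configured AWS CLI; the openssl-generated value is just an example password policy, and you should adapt the secret name to your own conventions:

```shell
# Sketch: create the DB master password in Secrets Manager before running Terraform.
aws secretsmanager create-secret \
  --name backstage/db-password \
  --description "Backstage RDS master password" \
  --secret-string "$(openssl rand -base64 24)"
```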
Step 3: Set Up Authentication with AWS Cognito
By default, Backstage has no authentication — anyone who can reach the URL gets full access. For production, we configure AWS Cognito as an OAuth2/OIDC identity provider. This enables Single Sign-On with your corporate directory (Active Directory, Okta, Google Workspace) through Cognito federation.
Create a Cognito User Pool
# modules/cognito-backstage/main.tf
resource "aws_cognito_user_pool" "backstage" {
name = "backstage-users"
auto_verified_attributes = ["email"]
username_attributes = ["email"]
password_policy {
minimum_length = 12
require_uppercase = true
require_numbers = true
require_symbols = true
}
account_recovery_setting {
recovery_mechanism {
name = "verified_email"
priority = 1
}
}
tags = { Name = "backstage-cognito" }
}
resource "aws_cognito_user_pool_client" "backstage" {
name = "backstage-app"
user_pool_id = aws_cognito_user_pool.backstage.id
generate_secret = true
allowed_oauth_flows_user_pool_client = true
allowed_oauth_flows = ["code"]
allowed_oauth_scopes = ["openid", "email", "profile"]
callback_urls = ["https://backstage.yourcompany.com/api/auth/cognito/handler/frame"]
logout_urls = ["https://backstage.yourcompany.com"]
supported_identity_providers = ["COGNITO"]
explicit_auth_flows = [
"ALLOW_REFRESH_TOKEN_AUTH",
"ALLOW_USER_SRP_AUTH"
]
}
resource "aws_cognito_user_pool_domain" "backstage" {
domain = "backstage-yourcompany"
user_pool_id = aws_cognito_user_pool.backstage.id
}
output "cognito_client_id" {
value = aws_cognito_user_pool_client.backstage.id
}
output "cognito_client_secret" {
value = aws_cognito_user_pool_client.backstage.client_secret
sensitive = true
}
output "cognito_issuer_url" {
value = "https://cognito-idp.${var.aws_region}.amazonaws.com/${aws_cognito_user_pool.backstage.id}"
}

Configure Backstage Auth Provider
Add the Cognito provider to your app-config.production.yaml:
# app-config.production.yaml (auth section)
auth:
environment: production
providers:
oidc:
production:
metadataUrl: ${COGNITO_ISSUER_URL}/.well-known/openid-configuration
clientId: ${COGNITO_CLIENT_ID}
clientSecret: ${COGNITO_CLIENT_SECRET}
prompt: auto
scope: "openid email profile"
signIn:
resolvers:
        - resolver: emailMatchingUserEntityProfileEmail

Install the OIDC auth provider backend module in your Backstage project:
# From the root of your Backstage project
yarn --cwd packages/backend add @backstage/plugin-auth-backend-module-oidc-provider
After deploying (Step 4), users visiting backstage.yourcompany.com will be redirected to the Cognito hosted UI login page. Authenticated sessions are stored server-side with a cookie-based session ID.
Step 4: Deploy to EKS
With PostgreSQL and Cognito configured, it is time to containerize Backstage and deploy it to your EKS cluster.
Dockerfile
# Dockerfile (multi-stage build)
FROM node:18-bookworm-slim AS build
WORKDIR /app
COPY package.json yarn.lock ./
COPY packages/backend/package.json packages/backend/
COPY packages/app/package.json packages/app/
RUN yarn install --frozen-lockfile --network-timeout 600000
COPY . .
RUN yarn tsc
RUN yarn build:backend --config ../../app-config.yaml
# --- Production stage ---
FROM node:18-bookworm-slim
RUN apt-get update && \
apt-get install -y --no-install-recommends \
python3 build-essential && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=build /app/yarn.lock /app/package.json /app/packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz
COPY --from=build /app/packages/backend/dist/bundle.tar.gz .
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz
RUN yarn install --frozen-lockfile --production --network-timeout 600000
COPY app-config.yaml app-config.production.yaml ./
ENV NODE_ENV=production
USER node
CMD ["node", "packages/backend", "--config", "app-config.yaml", "--config", "app-config.production.yaml"]

Build and Push to ECR
# Create an ECR repository
aws ecr create-repository --repository-name backstage --region us-east-1

# Authenticate Docker to ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin \
  123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build and push
docker build -t backstage:latest .
docker tag backstage:latest \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/backstage:v1.0.0
docker push \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/backstage:v1.0.0
Helm Deployment
Use the official Backstage Helm chart for deployment. Create a values-production.yaml file with your AWS-specific overrides:
# values-production.yaml
backstage:
image:
registry: 123456789012.dkr.ecr.us-east-1.amazonaws.com
repository: backstage
tag: "v1.0.0"
replicas: 2
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "1Gi"
cpu: "1000m"
extraEnvVars:
- name: POSTGRES_HOST
valueFrom:
secretKeyRef:
name: backstage-db-credentials
key: host
- name: POSTGRES_PORT
value: "5432"
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: backstage-db-credentials
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: backstage-db-credentials
key: password
- name: COGNITO_ISSUER_URL
valueFrom:
secretKeyRef:
name: backstage-auth
key: issuer-url
- name: COGNITO_CLIENT_ID
valueFrom:
secretKeyRef:
name: backstage-auth
key: client-id
- name: COGNITO_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: backstage-auth
key: client-secret
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "7007"
prometheus.io/path: "/metrics"
ingress:
enabled: true
className: alb
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789012:certificate/abc-123
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
alb.ingress.kubernetes.io/ssl-redirect: "443"
hosts:
- host: backstage.yourcompany.com
paths:
- path: /
pathType: Prefix
serviceAccount:
create: true
annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/backstage-irsa

Create the Kubernetes secrets and deploy:
# Create namespace
kubectl create namespace backstage
# Create database credentials secret
kubectl create secret generic backstage-db-credentials \
--namespace backstage \
--from-literal=host=backstage-catalog.abc123.us-east-1.rds.amazonaws.com \
--from-literal=username=backstage_admin \
--from-literal=password=$(aws secretsmanager get-secret-value \
--secret-id backstage/db-password \
--query SecretString --output text)
# Create auth secret
kubectl create secret generic backstage-auth \
--namespace backstage \
--from-literal=issuer-url=https://cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXXX \
--from-literal=client-id=YOUR_COGNITO_CLIENT_ID \
--from-literal=client-secret=YOUR_COGNITO_CLIENT_SECRET
# Add Helm repo and install
helm repo add backstage https://backstage.github.io/charts
helm repo update
helm install backstage backstage/backstage \
--namespace backstage \
--values values-production.yaml \
--wait --timeout 10m
# Verify the deployment
kubectl get pods -n backstage
kubectl get ingress -n backstage

Step 5: Configure AWS Integrations
Backstage becomes truly powerful when it reflects your actual AWS infrastructure. Configure service discovery so your ECS services, EKS workloads, and Lambda functions appear automatically in the service catalog.
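For live EKS workload status in the catalog, the Backstage Kubernetes plugin reads cluster details from app-config. A minimal sketch, assuming IRSA-based authentication; the API endpoint URL and cluster name below are placeholders you must replace with your own:

```yaml
# app-config.production.yaml (kubernetes section)
kubernetes:
  serviceLocatorMethod:
    type: multiTenant
  clusterLocatorMethods:
    - type: config
      clusters:
        # Placeholder endpoint — copy yours from `aws eks describe-cluster`
        - url: https://ABC123.gr7.us-east-1.eks.amazonaws.com
          name: production-eks
          authProvider: aws
          skipTLSVerify: false
```

Components opt in via the backstage.io/kubernetes-id annotation, which the catalog-info.yaml example below already includes.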
IAM Role for Service Account (IRSA)
Backstage needs AWS permissions to read infrastructure state. Use IRSA to grant fine-grained access without static credentials:
# IAM policy for Backstage IRSA
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TechDocsS3Access",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::backstage-techdocs-*",
"arn:aws:s3:::backstage-techdocs-*/*"
]
},
{
"Sid": "ECSReadAccess",
"Effect": "Allow",
"Action": [
"ecs:ListClusters",
"ecs:ListServices",
"ecs:DescribeServices",
"ecs:DescribeTaskDefinition"
],
"Resource": "*"
},
{
"Sid": "EKSReadAccess",
"Effect": "Allow",
"Action": [
"eks:ListClusters",
"eks:DescribeCluster",
"eks:ListNodegroups"
],
"Resource": "*"
},
{
"Sid": "CostExplorerAccess",
"Effect": "Allow",
"Action": [
"ce:GetCostAndUsage"
],
"Resource": "*"
}
]
}

Catalog Entities for AWS Resources
Register your AWS resources in Backstage by creating catalog-info.yaml files. Here is an example for an EKS-hosted microservice:
# catalog-info.yaml for an EKS-deployed service
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: payment-service
description: Stripe payment processing microservice
tags:
- nodejs
- payments
- production
annotations:
backstage.io/techdocs-ref: dir:.
github.com/project-slug: yourcompany/payment-service
backstage.io/kubernetes-id: payment-service
backstage.io/kubernetes-namespace: payments
links:
- url: https://grafana.yourcompany.com/d/payment-svc
title: Grafana Dashboard
- url: https://us-east-1.console.aws.amazon.com/ecs/v2/clusters/prod
title: AWS Console
spec:
type: service
lifecycle: production
owner: team-payments
system: checkout-platform
providesApis:
- payment-api
dependsOn:
- resource:rds-payments-db
- component:notification-service
---
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
name: rds-payments-db
description: PostgreSQL database for payment records
spec:
type: database
owner: team-payments
  system: checkout-platform

Step 6: Add Software Templates (Golden Paths)
Software templates are the heart of Backstage's developer experience. They let developers spin up fully configured services — complete with Git repo, CI/CD pipeline, Kubernetes manifests, Terraform modules, and documentation — in under five minutes. Here is a production-ready golden path template for a Node.js microservice on EKS:
# templates/node-eks-service/template.yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
name: node-eks-microservice
title: Node.js EKS Microservice
description: |
Creates a production-ready Node.js service deployed to EKS with:
CI/CD via GitHub Actions, Terraform RDS, Grafana dashboards, and TechDocs.
tags:
- nodejs
- eks
- recommended
spec:
owner: platform-team
type: service
parameters:
- title: Service Details
required:
- serviceName
- owner
- description
properties:
serviceName:
title: Service Name
type: string
pattern: "^[a-z0-9-]+$"
ui:autofocus: true
ui:help: "Lowercase alphanumeric with dashes (e.g., user-auth-api)"
description:
title: Description
type: string
maxLength: 200
owner:
title: Owner Team
type: string
ui:field: OwnerPicker
ui:options:
catalogFilter:
kind: Group
- title: Infrastructure
required:
- environment
- needsDatabase
properties:
environment:
title: Target Environment
type: string
enum: ["staging", "production"]
default: "staging"
needsDatabase:
title: Requires PostgreSQL Database?
type: boolean
default: true
instanceSize:
title: RDS Instance Size
type: string
enum: ["db.t3.micro", "db.t3.small", "db.t3.medium"]
default: "db.t3.small"
ui:widget: select
steps:
- id: fetch-skeleton
name: Fetch Skeleton
action: fetch:template
input:
url: ./skeleton
values:
serviceName: ${{ parameters.serviceName }}
description: ${{ parameters.description }}
owner: ${{ parameters.owner }}
needsDatabase: ${{ parameters.needsDatabase }}
instanceSize: ${{ parameters.instanceSize }}
- id: publish-github
name: Publish to GitHub
action: publish:github
input:
allowedHosts: ["github.com"]
repoUrl: github.com?owner=yourcompany&repo=${{ parameters.serviceName }}
description: ${{ parameters.description }}
defaultBranch: main
protectDefaultBranch: true
- id: create-argocd-app
name: Create ArgoCD Application
action: argocd:create-resources
input:
appName: ${{ parameters.serviceName }}
argoInstance: production
namespace: ${{ parameters.serviceName }}
repoUrl: https://github.com/yourcompany/${{ parameters.serviceName }}
path: k8s/overlays/${{ parameters.environment }}
- id: register-catalog
name: Register in Backstage Catalog
action: catalog:register
input:
repoContentsUrl: ${{ steps['publish-github'].output.repoContentsUrl }}
catalogInfoPath: /catalog-info.yaml
output:
links:
- title: Repository
url: ${{ steps['publish-github'].output.remoteUrl }}
- title: Open in Backstage
icon: catalog
        entityRef: ${{ steps['register-catalog'].output.entityRef }}

Register the template in your app-config.yaml:
# app-config.yaml (catalog section)
catalog:
locations:
- type: file
target: ./templates/node-eks-service/template.yaml
rules:
- allow: [Template]
- type: url
target: https://github.com/yourcompany/backstage-templates/blob/main/all-templates.yaml
rules:
        - allow: [Template]

When a developer clicks "Create" in the Backstage UI and selects this template, they fill in a form (service name, owner, database requirements) and Backstage orchestrates the entire provisioning chain. A new microservice with CI/CD, Kubernetes manifests, monitoring, and documentation is ready in minutes.
Step 7: Set Up TechDocs with S3
TechDocs is Backstage's documentation-as-code feature. Developers write Markdown in their service repository, and Backstage renders it as a searchable documentation site. For production, store the generated HTML in S3 rather than on the pod's local filesystem, which is ephemeral and not shared between replicas.
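For TechDocs to build a service's docs, the service repository needs an mkdocs.yml at its root alongside a docs/ folder. A minimal sketch (site_name is whatever your service is called):

```yaml
# mkdocs.yml — minimal TechDocs configuration at the repo root
site_name: payment-service
plugins:
  - techdocs-core
nav:
  - Home: index.md
```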
Create the S3 Bucket
# modules/s3-techdocs/main.tf
resource "aws_s3_bucket" "techdocs" {
bucket = "backstage-techdocs-${var.environment}"
tags = { Name = "backstage-techdocs" }
}
resource "aws_s3_bucket_versioning" "techdocs" {
bucket = aws_s3_bucket.techdocs.id
versioning_configuration { status = "Enabled" }
}
resource "aws_s3_bucket_server_side_encryption_configuration" "techdocs" {
bucket = aws_s3_bucket.techdocs.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "aws:kms"
}
}
}
resource "aws_s3_bucket_public_access_block" "techdocs" {
bucket = aws_s3_bucket.techdocs.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
resource "aws_s3_bucket_lifecycle_configuration" "techdocs" {
bucket = aws_s3_bucket.techdocs.id
rule {
id = "cleanup-old-versions"
status = "Enabled"
noncurrent_version_expiration {
noncurrent_days = 30
}
}
}

Configure Backstage TechDocs Publisher
# app-config.production.yaml (techdocs section)
techdocs:
builder: external
generator:
runIn: local
publisher:
type: awsS3
awsS3:
bucketName: backstage-techdocs-production
region: us-east-1
      bucketRootPath: /

With builder: external, TechDocs are generated in your CI/CD pipeline rather than on the Backstage pod. This keeps Backstage lightweight and avoids installing MkDocs dependencies in the production image. Add a CI step to build and publish docs:
# .github/workflows/techdocs.yaml
name: Publish TechDocs
on:
push:
branches: [main]
paths: ["docs/**", "mkdocs.yml"]
jobs:
publish:
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
steps:
- uses: actions/checkout@v4
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: arn:aws:iam::123456789012:role/techdocs-publisher
aws-region: us-east-1
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 18
- name: Install techdocs-cli
run: npm install -g @techdocs/cli
- name: Install MkDocs dependencies
run: pip install mkdocs-techdocs-core
- name: Generate and publish
run: |
techdocs-cli generate --no-docker
techdocs-cli publish \
--publisher-type awsS3 \
--storage-name backstage-techdocs-production \
            --entity default/Component/${{ github.event.repository.name }}

Production Hardening
The steps above give you a functional Backstage deployment. For production readiness, apply these hardening measures:
HTTPS and Network Security
- TLS Termination: The ALB ingress annotation already handles HTTPS via ACM certificates. Verify that alb.ingress.kubernetes.io/ssl-redirect: "443" forces all HTTP traffic to HTTPS.
- Network Policies: Restrict pod-to-pod traffic so only Backstage can reach the RDS security group. Apply a Kubernetes NetworkPolicy limiting egress to port 5432 on the RDS CIDR.
- WAF Integration: Attach an AWS WAF WebACL to the ALB to block common web exploits (SQL injection, XSS). The AWS managed rule group AWSManagedRulesCommonRuleSet covers most threats.
- Private Subnets: Ensure EKS nodes and RDS are in private subnets. Only the ALB should be in public subnets.
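A NetworkPolicy for the database restriction can be sketched as follows. This is a minimal example assuming the Helm chart's default pod labels, the backstage namespace, and a 10.0.0.0/16 private subnet CIDR (all assumptions to adjust); note that Backstage also needs egress for DNS and HTTPS (GitHub, S3, Cognito), included here so the policy does not break the app:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backstage-egress
  namespace: backstage
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: backstage
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/16     # private subnets holding RDS (assumption)
      ports:
        - protocol: TCP
          port: 5432              # PostgreSQL
    - ports:
        - protocol: UDP
          port: 53                # DNS
    - ports:
        - protocol: TCP
          port: 443               # HTTPS to GitHub, S3, Cognito
```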
Monitoring and Alerting
- Prometheus Metrics: Backstage exposes metrics on /metrics. The pod annotations in the Helm values enable Prometheus scraping. Key metrics: backstage_catalog_entities_total, backstage_scaffolder_task_duration, http_request_duration_seconds.
- CloudWatch Container Insights: Enable on EKS for node-level CPU, memory, and network metrics. Set alarms for pod restarts and high memory usage.
- Grafana Dashboards: Import the community Backstage Grafana dashboard (ID 18913) for catalog health, template usage, and API latency visualization.
- PagerDuty/OpsGenie: Route alerts for Backstage downtime through your existing incident management workflow. A 503 on the Backstage URL should trigger an on-call page.
Backups and Disaster Recovery
- RDS Automated Backups: The Terraform module sets backup_retention_period = 7. Test restoring from a snapshot quarterly. Multi-AZ deployment gives automatic failover with <60s downtime.
- S3 Versioning: The TechDocs bucket has versioning enabled. Deleted or overwritten docs can be recovered from previous versions.
- GitOps State: All Backstage configuration lives in Git — app-config.yaml, Helm values, Terraform modules. The cluster itself is recoverable by re-running helm install.
- RTO/RPO: With this setup, Recovery Time Objective is under 30 minutes (re-deploy from Git + restore RDS snapshot). Recovery Point Objective is 5 minutes (RDS continuous backup window).
Scaling
- Horizontal Pod Autoscaler: Configure HPA targeting 70% CPU utilization. Two replicas handle up to 200 concurrent users; scale to 4 for 500+.
- RDS Read Replicas: If catalog queries become slow (>500ms p99), add a read replica and point Backstage's search index to it.
- CDN for TechDocs: Place CloudFront in front of the S3 TechDocs bucket to reduce latency for globally distributed teams. Cache TTL of 1 hour balances freshness with performance.
# HPA manifest for Backstage
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: backstage
namespace: backstage
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: backstage
minReplicas: 2
maxReplicas: 6
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
          averageUtilization: 80

Case Study: Deploying Backstage on AWS for 80 Engineers
B2B SaaS Company — FinTech Sector
Challenge: An 80-engineer FinTech company running 120+ microservices on AWS had severe developer experience problems. Onboarding took 3 weeks. Developers filed an average of 14 ops tickets per week just to provision infrastructure. Service ownership was unclear — when incidents hit, teams spent 20+ minutes figuring out who to page.
Solution: The platform team followed this exact Backstage AWS setup guide over a 4-week rollout:
- Week 1: Deployed Backstage on EKS with RDS and Cognito (federated to their Okta instance). Imported all 120 services into the catalog using automated catalog-info.yaml generation scripts.
- Week 2: Built 5 golden path templates: Node.js API, Python data pipeline, React frontend, Terraform module, and Lambda function. Each template included GitHub Actions CI/CD, Kubernetes manifests, Grafana dashboards, and TechDocs scaffolding.
- Week 3: Enabled TechDocs with S3 publisher. Migrated existing Confluence documentation to Markdown stored alongside service code. Set up the Kubernetes plugin for live pod status in the catalog.
- Week 4: Company-wide rollout with training sessions. Added cost tracking via the AWS Cost Explorer plugin so each team could see their infrastructure spend.
Results (after 3 months):
- Developer onboarding: 3 weeks → 2 days (85% reduction)
- Ops ticket volume: 14/week → 3/week (79% reduction)
- Incident time-to-responder: 20 min → 3 min (service catalog ownership lookup)
- New service provisioning: 2-3 days → 15 minutes (golden path templates)
- Monthly AWS infrastructure cost: $420/month for the entire Backstage platform
- Estimated annual savings in recovered developer productivity: $380,000
The CTO summarized: "Backstage on AWS was the highest-ROI infrastructure investment we made in 2025. The service catalog alone eliminated half our incident response confusion. And developers actually enjoy using the golden path templates — adoption was organic after the first week."
AWS Cost Breakdown
Here is what this Backstage stack costs on AWS per month. These are us-east-1 on-demand prices as of early 2026:
Service             Spec                     Cost/Month
───────────────────────────────────────────────────────
EKS Control Plane   1 cluster                $  73.00
EKS Worker Nodes    2x t3.medium             $ 150.72
RDS PostgreSQL      db.t3.medium, Multi-AZ   $ 130.00
S3 (TechDocs)       ~10 GB stored            $   0.23
ALB                 1 load balancer          $  25.00
Route 53            1 hosted zone            $   0.50
ACM Certificate     1 cert                   $   0.00
CloudWatch Logs     ~5 GB/month              $   3.50
ECR                 ~2 GB images             $   0.20
───────────────────────────────────────────────────────
Total                                        $ 383.15
Per-developer (80 engineers)                 $   4.79
Savings plan: reserve the RDS instance for 1 year to cut database costs by 40%. Use Spot instances for EKS worker nodes in non-production to save an additional 60-70% on compute. With these optimizations, total cost drops below $250/month.
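The table's total, the per-developer figure, and the RDS-reservation part of the savings plan can be sanity-checked with a few lines of arithmetic (prices taken from the table above; the additional Spot savings on non-production nodes are not modeled here):

```python
# Recompute the monthly cost table and the per-developer figure.
costs = {
    "EKS control plane": 73.00,
    "EKS worker nodes (2x t3.medium)": 150.72,
    "RDS PostgreSQL (Multi-AZ)": 130.00,
    "S3 TechDocs": 0.23,
    "ALB": 25.00,
    "Route 53": 0.50,
    "ACM": 0.00,
    "CloudWatch Logs": 3.50,
    "ECR": 0.20,
}
total = sum(costs.values())
per_dev = total / 80  # team of 80 engineers

# A 1-year RDS reservation cuts the database line by roughly 40%
reserved_total = total - costs["RDS PostgreSQL (Multi-AZ)"] * 0.40

print(f"Total: ${total:.2f}/month")             # Total: $383.15/month
print(f"Per developer: ${per_dev:.2f}")         # Per developer: $4.79
print(f"With RDS reservation: ${reserved_total:.2f}/month")
```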
Common Pitfalls and How to Avoid Them
- Pitfall: Using SQLite in production. SQLite is the default for local development, but it does not support concurrent writes. In production with multiple replicas, catalog updates will corrupt. Always use RDS PostgreSQL.
- Pitfall: Skipping authentication. An unauthenticated Backstage instance with scaffolder access means anyone on the network can create Git repos and provision infrastructure. Configure Cognito (or your IdP) before exposing Backstage externally.
- Pitfall: Building TechDocs on the Backstage pod. The local builder installs Python, MkDocs, and plugin dependencies on the Backstage container. This bloats the image to 2+ GB and introduces security surface area. Use the external builder with CI/CD publishing to S3.
- Pitfall: Not setting resource limits. Backstage's Node.js backend can consume significant memory during large catalog syncs. Without resource limits, a single pod can OOM-kill other workloads. Always set memory limits (1Gi is a safe starting point).
- Pitfall: Too many templates at launch. Teams that build 20 templates before anyone uses them waste effort and lose momentum. Start with 3-5 high-impact templates, measure adoption, and iterate based on developer feedback.
Need Help Deploying Backstage on AWS?
Our platform engineering team has deployed Backstage for companies ranging from 20-person startups to 500-engineer enterprises. We handle infrastructure provisioning, template development, plugin customization, and developer training. Average time to production: 2 weeks.
Frequently Asked Questions
How long does it take to deploy Backstage on AWS?
A basic Backstage deployment on AWS EKS takes 4-6 hours including RDS setup, Cognito authentication, and initial configuration. Production hardening with TechDocs, software templates, monitoring, and HTTPS adds another 2-3 days. Full adoption with 10+ golden path templates and team training typically takes 2-4 weeks. Most teams start seeing developer productivity gains within the first week.
What AWS services does Backstage require?
Core requirements: EKS (Kubernetes cluster for hosting Backstage), RDS PostgreSQL (catalog and scaffolder database), S3 (TechDocs storage), Cognito or external IdP (authentication), ALB (load balancing), ACM (TLS certificates), and Route 53 (DNS). Optional services for enhanced functionality: ElastiCache for session caching, CloudWatch for monitoring, ECR for container images, and CloudFront for TechDocs CDN distribution.
How much does running Backstage on AWS cost per month?
Typical monthly costs for a production Backstage deployment: EKS cluster ($73 control plane + $150-300 worker nodes), RDS PostgreSQL db.t3.medium Multi-AZ ($130), S3 TechDocs ($5-20), ALB ($25), Route 53 ($1). Total: approximately $320-485/month. For a team of 80 engineers, this works out to under $5/developer/month — significantly less than the productivity savings from reduced onboarding time and self-service infrastructure.
Can I use Backstage with AWS Cognito for authentication?
Yes, Backstage fully supports AWS Cognito as an OAuth2/OIDC provider. Create a Cognito User Pool with an app client, configure the callback URL to your Backstage domain, and add the OIDC provider configuration to app-config.yaml. Cognito supports federation with SAML providers (Okta, Active Directory, Google Workspace), making it an excellent choice for enterprise SSO integration with Backstage.
Should I use Helm or raw Kubernetes manifests to deploy Backstage?
Helm is recommended for production Backstage deployments on AWS. The official Backstage Helm chart provides sensible defaults for resource limits, health checks, liveness probes, ingress configuration, and PostgreSQL connection management. It simplifies upgrades (helm upgrade), rollbacks (helm rollback), and environment-specific overrides via values files. Raw manifests work for learning or highly customized setups but add maintenance overhead for production operations.
Related Articles
Building Internal Developer Platforms with Backstage
Deep dive into service catalogs, software templates, TechDocs, and measuring developer experience improvements.
Platform Engineering 2.0: AI-Powered Internal Developer Portals
How AI-enhanced IDPs reduce cognitive load by 90% and enable natural language infrastructure provisioning.
Designing an Internal Developer Platform for 50+ Engineers
Architecture guide for building self-service IDPs with Backstage, Argo CD, and Terraform at scale.
About HostingX IL
HostingX IL specializes in Platform Engineering and DevOps services for B2B technology companies. We design, deploy, and manage Internal Developer Platforms using Backstage on AWS, enabling self-service infrastructure and reducing developer cognitive load. Learn more about our Platform Engineering & Automation Services or contact us for a free consultation.
© 2026 HostingX Solutions LLC. All Rights Reserved.
LLC No. 0008072296 | Est. 2026 | New Mexico, USA