Compliance frameworks like SOC2 Type II and ISO 27001 are effectively mandatory for B2B SaaS companies selling to enterprise customers. But traditional compliance approaches treat infrastructure as a black box: auditors review screenshots, spreadsheets, and manually generated reports to verify controls.
Compliance as Code inverts this model. By implementing compliance requirements directly in Infrastructure as Code (Terraform), policy engines (OPA), and automated logging (CloudTrail/Stackdriver), you create provable, auditable, and continuously-enforced controls. This guide shows you how to translate SOC2 and ISO 27001 requirements into executable code.
Traditional compliance workflows rely on human processes: written policies, manual reviews, and periodically collected evidence.
Compliance as Code shifts from trust-based to enforcement-based controls. Instead of hoping engineers don't create public S3 buckets, your infrastructure code makes it impossible.
Compliance frameworks specify controls, but don't prescribe implementation. Here's how common requirements translate to infrastructure:
| Requirement | Framework | Infrastructure Control |
|---|---|---|
| Encryption at rest | SOC2 CC6.1 | Terraform enforces encrypted EBS/RDS/S3 |
| Access logging | ISO 27001 A.12.4.1 | CloudTrail/Stackdriver auto-enabled |
| Network segmentation | SOC2 CC6.6 | VPC/subnets defined in code |
| Patch management | ISO 27001 A.12.6.1 | Kubernetes auto-update policies |
| Access control | GDPR Art. 32 | IAM policies in Terraform, OPA enforcement |
| Audit trails | HIPAA 164.312(b) | Immutable log retention (18-24 months) |
SOC2 Requirement: "Systems must be encrypted at rest and in transit."
Instead of documenting a policy that "engineers should enable encryption," encode it in Terraform modules that make encryption non-optional:
```hcl
# modules/secure-s3-bucket/main.tf
resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name

  # Prevent accidental deletion of the bucket and its data
  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = var.kms_key_arn # Managed encryption key
    }
  }
}

# Compliance: SOC2 CC6.1 - Block all public access.
# These settings cannot be overridden by downstream consumers of the module.
resource "aws_s3_bucket_public_access_block" "this" {
  bucket = aws_s3_bucket.this.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.this.id

  versioning_configuration {
    status = "Enabled" # Compliance: GDPR Art. 32 - Data recovery
  }
}

# Audit evidence: Terraform state = proof of configuration
# Auditor question: "How do you ensure encryption?"
# Answer: "It's impossible to create an unencrypted bucket. Here's the code."
```

Now all S3 buckets created through your infrastructure automatically comply with the encryption and public-access requirements. No training, no trust, no quarterly reviews: the code enforces the policy.
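Consuming the module then looks like this (the module path and variable names follow the sketch above; the KMS key is a hypothetical resource assumed to be defined elsewhere in the configuration):

```hcl
# Hypothetical consumer of the hardened module
module "customer_exports" {
  source      = "../modules/secure-s3-bucket"
  bucket_name = "acme-customer-exports"
  kms_key_arn = aws_kms_key.data.arn # Key assumed to exist elsewhere
}
```

Because encryption and public-access settings live inside the module, a service team cannot forget them or opt out of them.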
ISO 27001 Requirement: "Access to systems must be logged and reviewed."
Use OPA (Open Policy Agent) to enforce policies before changes reach production. Example: Block deployment of containers without resource limits (prevents resource exhaustion attacks):
```rego
# opa-policies/kubernetes-compliance.rego
package kubernetes.compliance

# Compliance: ISO 27001 A.12.1.3 - Capacity management
# All pods must have CPU/memory limits to prevent DoS
deny[msg] {
    input.kind == "Pod"
    container := input.spec.containers[_]
    not container.resources.limits
    msg := sprintf("Pod %s missing resource limits (ISO 27001 A.12.1.3)", [input.metadata.name])
}

# Compliance: SOC2 CC6.6 - Logical access controls
# Privileged containers require security review
deny[msg] {
    input.kind == "Pod"
    container := input.spec.containers[_]
    container.securityContext.privileged == true
    # Only allow if explicitly approved
    not input.metadata.annotations["security-review-approved"]
    msg := sprintf("Privileged pod %s requires security review", [input.metadata.name])
}
```

SOC2 CC7.2: "The entity monitors system components and the operation of those components for anomalies that are indicative of malicious acts."
Implement centralized, immutable logging:
```hcl
# terraform/audit-logging.tf

# AWS CloudTrail - all API calls logged
resource "aws_cloudtrail" "audit" {
  name           = "compliance-audit-trail"
  s3_bucket_name = aws_s3_bucket.audit_logs.id

  include_global_service_events = true
  is_multi_region_trail         = true
  enable_log_file_validation    = true # Cryptographic integrity

  # Compliance: SOC2 CC7.2 - tamper-evident audit logs
  event_selector {
    read_write_type           = "All"
    include_management_events = true

    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3"] # Log data events for all current and future buckets
    }
  }
}

# Kubernetes audit logs (role_arn and vpc_config omitted for brevity)
resource "aws_eks_cluster" "main" {
  name = "production-cluster"

  enabled_cluster_log_types = [
    "api",
    "audit",
    "authenticator",
    "controllerManager",
    "scheduler",
  ]
}

# Log retention: ISO 27001 A.12.4.1 requires logs to be kept per policy; 18 months here
resource "aws_cloudwatch_log_group" "k8s_audit" {
  name              = "/aws/eks/production/audit"
  retention_in_days = 545 # 18 months
}

# Compliance evidence: every kubectl command, API call, and S3 access is logged
# Auditor question: "Show me all database access by user X in Q2 2024"
# Answer: Query CloudWatch/Loki with a correlation ID
```

SOC2 CC7.1: "The entity uses detection and monitoring procedures to identify anomalies."
Integrate security scanning into CI/CD pipelines:
```yaml
# .github/workflows/compliance-checks.yml
name: Compliance Checks

on: [pull_request]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # (Tool installation steps omitted for brevity)

      # 1. Terraform security scan - blocks PRs with security issues
      - name: Run tfsec (Terraform security)
        run: tfsec . --minimum-severity MEDIUM

      # 2. Container image scan
      - name: Run Trivy (container vulnerabilities)
        run: trivy image --severity HIGH,CRITICAL myapp:${{ github.sha }}

      # 3. Infrastructure policy check
      #    CKV_AWS_18: S3 bucket access logging
      #    CKV_AWS_19: S3 bucket encryption at rest
      - name: Run Checkov (IaC compliance)
        run: checkov -d . --framework terraform --check CKV_AWS_18,CKV_AWS_19

      # 4. Secret detection - prevents hardcoded secrets in code
      - name: Run GitGuardian
        run: ggshield secret scan repo .

# Compliance: SOC2 CC7.3 - Vulnerabilities identified and remediated
# Evidence: every commit is scanned, with results in the audit log
```

GDPR (General Data Protection Regulation) adds data residency and privacy requirements beyond traditional security frameworks:
EU customer data must remain in EU regions:
```hcl
# terraform/gdpr-regions.tf
variable "gdpr_compliant_regions" {
  type    = list(string)
  default = ["eu-west-1", "eu-central-1", "eu-west-2"]
}

# Policy: All EU customer data must use GDPR regions.
# The provider for this resource is pinned to an EU region.
resource "aws_db_instance" "eu_customers" {
  count = var.customer_region == "EU" ? 1 : 0

  allocated_storage = 100
  engine            = "postgres"
  instance_class    = "db.t3.large"

  # GDPR compliance: force an EU availability zone
  availability_zone = data.aws_availability_zones.eu.names[0]

  # Prevent accidental cross-region replication
  backup_retention_period = 30
  copy_tags_to_snapshot   = true

  tags = {
    GDPRCompliant = "true"
    DataRegion    = "EU"
  }
}
```

Users can request complete data deletion, and your infrastructure must support this.
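One common way to make deletion tractable is crypto-shredding: encrypt each tenant's data with a dedicated KMS key, so that scheduling the key's deletion renders the data unrecoverable. A minimal sketch (the per-tenant key and the `tenant_id` variable are illustrative assumptions, not part of the configuration above):

```hcl
# Hypothetical per-tenant key; deleting it "crypto-shreds" that tenant's data
resource "aws_kms_key" "tenant" {
  description             = "Data encryption key for tenant ${var.tenant_id}"
  enable_key_rotation     = true
  deletion_window_in_days = 30 # Waiting period before the key (and the data) is unrecoverable
}
```

The deletion window doubles as a safety net: a mistaken erasure request can be cancelled before the key is destroyed.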
Compliance isn't a one-time certification. Auditors require evidence of continuous monitoring and remediation. Implement these automated checks:
```yaml
# Daily compliance check (AWS Config / Cloud Custodian)
policies:
  - name: enforce-s3-encryption
    resource: s3
    filters:
      - type: bucket-encryption
        state: false # No encryption
    actions:
      - type: notify
        violation_desc: "SOC2 CC6.1 Violation: Unencrypted S3 bucket"
        to: ["security@company.com"]
      - type: set-bucket-encryption # Automatically remediate
        crypto: AES256

  - name: detect-public-databases
    resource: rds
    filters:
      - PubliclyAccessible: true
    actions:
      - type: notify
        violation_desc: "ISO 27001 A.13.1.3 Violation: Public RDS instance"
      - type: mark-for-op
        op: modify-db
        days: 1 # Grace period before auto-fix
```

SOC2 CC6.2: "Logical and physical access controls restrict access rights to authorized users."
Instead of quarterly spreadsheet reviews, automate access certification.
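One way to automate the review itself is a scheduled Cloud Custodian policy that flags stale credentials. A sketch using Cloud Custodian's `iam-user` resource and `credential` filter (the 90-day threshold and recipient address are illustrative):

```yaml
policies:
  - name: access-review-stale-keys
    resource: iam-user
    filters:
      - type: credential
        key: access_keys.last_used_date
        value_type: age
        value: 90 # Flag keys unused for 90+ days
        op: greater-than
    actions:
      - type: notify
        violation_desc: "SOC2 CC6.2: access key unused for 90+ days"
        to: ["security@company.com"]
```

Each run produces a timestamped report, which becomes the audit evidence that reviews actually happen.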
When auditors arrive for SOC2 or ISO 27001 certification, they'll request specific evidence. Here's how Compliance as Code makes this trivial:
| Auditor Request | Traditional Response | Compliance as Code |
|---|---|---|
| "Show encryption is enabled" | Screenshots of AWS console | Link to Terraform module (line 15) |
| "Who accessed DB on June 5?" | Manually grep CloudWatch logs | SQL query: `SELECT * FROM audit WHERE date='2024-06-05'` |
| "Prove vulnerability patching" | Email threads, Jira tickets | CI/CD scan results (automated PRs) |
| "Change management process" | Word doc describing approval flow | GitHub PRs with required approvals |
| "Disaster recovery testing" | Annual DR drill notes | Monthly Terraform destroy/rebuild (automated) |
The key difference: Traditional compliance = reactive documentation. Compliance as Code = proactive enforcement with automatic evidence generation.
Symptom: Engineers complain that security reviews block deployments.
Solution: Shift left. Run compliance checks in CI/CD before code reaches production. Engineers get instant feedback, not a security ticket 3 days later.
Success Metric:
Time from commit to production should decrease after implementing Compliance as Code. Automated checks (5 minutes) replace manual security reviews (3-5 days).
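To push feedback even earlier than CI, the same scanners can run as pre-commit hooks. A sketch using the community `pre-commit-terraform` hooks (the repository URL is real; the pinned `rev` is an assumption, so check the project for current tags):

```yaml
# .pre-commit-config.yaml (rev should be pinned to a real release tag)
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.96.1
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
      - id: terraform_tfsec # Same checks as the CI pipeline, at commit time
```

Engineers then see violations before a commit even leaves their machine, which keeps the CI gate from being the first point of feedback.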
Symptom: OPA policies block legitimate deployments.
Solution: Implement exception workflows. Allow temporary policy overrides with audit trail:
```yaml
# Pod requires privileged mode for a legitimate reason (e.g., container runtime)
apiVersion: v1
kind: Pod
metadata:
  name: container-runtime
  annotations:
    security-review-approved: "true" # Matches the annotation the OPA policy checks
    exception-reason: "Required for Docker-in-Docker build agent"
    approved-by: "security-team@company.com"
    approved-date: "2025-01-15"
    review-date: "2025-07-15" # Exceptions expire
spec:
  containers:
    - name: dind
      image: docker:dind
      securityContext:
        privileged: true # Normally blocked by OPA
```

Implementing Compliance as Code requires deep expertise in Terraform, cloud security, OPA policy design, and audit-logging architecture. Most teams spend 4-6 months building this infrastructure, and the ongoing maintenance then becomes a distraction from product development.
HostingX's Managed Compliance Platform provides SOC2/ISO 27001-ready infrastructure out of the box:
Our platform engineering team has helped 40+ B2B SaaS companies achieve SOC2 Type II and ISO 27001 certification. Average time to audit-readiness: 8 weeks (vs 6+ months DIY).
Most companies treat compliance as a painful checkbox exercise—something to survive, not embrace. But when implemented as code, compliance transforms from liability to competitive advantage.
Enterprise buyers increasingly require demonstrable, continuously enforced security controls before they sign.
Companies that can demonstrate continuous, automated compliance—not just annual audits—win deals faster. Security becomes a sales accelerator, not a blocker. And infrastructure teams shift from "gatekeepers who slow things down" to "enablers who make enterprise sales possible."
That's the promise of Compliance as Code: provable security, continuous assurance, and infrastructure that enterprise customers trust.
HostingX IL provides Platform Engineering and Compliance services for B2B SaaS companies. Our managed infrastructure includes SOC2, ISO 27001, and GDPR compliance controls built into every layer—from Terraform modules to Kubernetes policies to audit logging. Learn more about our SecOps & Compliance Services.