Published: January 2, 2025 · Updated: January 2, 2025
If you're a B2B SaaS company, SOC 2 Type I is no longer optional—it's table stakes. Enterprise customers won't sign contracts without it. VCs ask about it in due diligence. Partners require it before integrations.
The problem? Traditional SOC 2 takes 6-12 months and costs $50,000-$150,000. Most of that time is spent on manual evidence collection: screenshotting access logs, exporting user lists, documenting change approvals, proving backups ran successfully.
But here's the secret: 90% of SOC 2 evidence can be automated. Using GitHub Actions, Terraform, and Infrastructure as Code, you can collect audit-ready evidence continuously, reduce audit prep from months to days, and achieve SOC 2 Type I in 90 days.
SOC 2 is organized around the Trust Services Criteria (TSC). For most B2B SaaS audits, the Security criteria, known as the Common Criteria (CC1-CC8), do the heavy lifting. Five control areas matter most, each with specific control objectives:
**Control Environment (CC1):** Governance, policies, background checks, training. Proves you have a security-conscious culture.

Evidence: Policy documents, training completion records, background check confirmations, org chart

**Risk Assessment and Monitoring:** Change management, risk assessments, monitoring systems, code review processes.

Evidence: GitHub PR approvals, CI/CD logs, vulnerability scan reports, monitoring dashboards

**Logical Access Controls (CC6):** User provisioning, MFA, least privilege, access reviews, system access logs.

Evidence: IAM policies, access logs, MFA enforcement reports, quarterly access reviews

**System Operations (CC7):** Backups, encryption, patching, infrastructure management, capacity planning.

Evidence: Backup test logs, encryption config, patch management reports, infrastructure change logs

**Change Management (CC8):** Infrastructure changes require approval, testing, and rollback plans.

Evidence: Terraform plan approvals, deployment logs with approvers, rollback documentation
Goal: Establish baseline security controls and documentation framework.
Goal: Automate evidence collection for CC6, CC7, and CC8 using GitHub Actions and Infrastructure as Code.
```yaml
# .github/workflows/access-audit.yml
name: Quarterly Access Review
on:
  schedule:
    - cron: '0 0 1 */3 *'  # First day of every quarter
  workflow_dispatch:
jobs:
  access-audit:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set Date Variables
        # $(...) is not expanded inside `env:` blocks, so compute the
        # date and quarter once here and export them via GITHUB_ENV
        run: |
          mkdir -p evidence
          echo "DATE=$(date +%Y-%m-%d)" >> $GITHUB_ENV
          echo "QUARTER=$(( ($(date +%-m) - 1) / 3 + 1 ))" >> $GITHUB_ENV
      - name: Export AWS IAM Users
        run: |
          aws iam list-users --output json > evidence/iam-users-${DATE}.json
          aws iam list-roles --output json > evidence/iam-roles-${DATE}.json
      - name: Export GitHub Org Members
        run: |
          gh api orgs/YOUR_ORG/members --paginate > evidence/github-members-${DATE}.json
          gh api orgs/YOUR_ORG/teams --paginate > evidence/github-teams-${DATE}.json
        env:
          GH_TOKEN: ${{ secrets.GH_ORG_TOKEN }}
      - name: Generate Access Review Report
        run: |
          python scripts/generate-access-review.py \
            --iam-users evidence/iam-users-${DATE}.json \
            --github-members evidence/github-members-${DATE}.json \
            --output evidence/access-review-${DATE}.pdf
      - name: Upload to S3 Compliance Bucket
        run: |
          aws s3 cp evidence/ s3://compliance-evidence/access-reviews/${DATE}/ --recursive
      - name: Create GitHub Issue for Review
        run: |
          gh issue create \
            --title "Q${QUARTER} Access Review Required" \
            --body "Access review artifacts generated. Review in 7 days: s3://compliance-evidence/access-reviews/${DATE}/" \
            --assignee @security-team \
            --label "compliance,access-review"
        env:
          GH_TOKEN: ${{ secrets.GH_ORG_TOKEN }}
```
Audit Evidence Generated: Timestamped user lists, role assignments, quarterly review artifacts with approver sign-offs
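The workflow above calls a `scripts/generate-access-review.py` helper that the article doesn't show. A minimal sketch of its core logic, assuming the exports use the JSON shapes produced by `aws iam list-users` and `gh api orgs/…/members`, and assuming (hypothetically) that IAM user names and GitHub logins share a naming convention in your org:

```python
def find_orphaned_accounts(iam_users, github_members):
    """Cross-reference AWS IAM users against GitHub org members.

    Returns IAM user names with no matching GitHub login -- candidates
    for deprovisioning that a human reviewer must sign off on.
    """
    github_logins = {m["login"].lower() for m in github_members}
    return sorted(
        u["UserName"]
        for u in iam_users
        if u["UserName"].lower() not in github_logins
    )

if __name__ == "__main__":
    # Sample data in the shape of `aws iam list-users` / `gh api .../members`
    iam = [{"UserName": "alice"}, {"UserName": "mallory"}]
    gh = [{"login": "Alice"}, {"login": "bob"}]
    print(find_orphaned_accounts(iam, gh))  # ['mallory']
```

The real report would render this list (plus role assignments) into the PDF the reviewer signs off on; the cross-reference is the part auditors care about.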
```rego
# Terraform: Enforce Encryption Everywhere
# terraform/policies/encryption.rego (Open Policy Agent)
package terraform.encryption

# Deny any planned S3 bucket without server-side encryption
deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "aws_s3_bucket"
  not resource.change.after.server_side_encryption_configuration
  msg := sprintf("S3 bucket '%s' must have encryption enabled", [resource.name])
}

# Deny unencrypted RDS instances
deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "aws_db_instance"
  resource.change.after.storage_encrypted == false
  msg := sprintf("RDS instance '%s' must have encryption at rest", [resource.name])
}

# Deny unencrypted EBS volumes
deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "aws_ebs_volume"
  resource.change.after.encrypted == false
  msg := sprintf("EBS volume '%s' must be encrypted", [resource.name])
}

# Evidence: every Terraform plan that passes = proof of encryption enforcement
```
```yaml
# .github/workflows/encryption-validation.yml
name: Monthly Encryption Audit
on:
  schedule:
    - cron: '0 0 1 * *'  # First day of every month
jobs:
  validate-encryption:
    runs-on: ubuntu-latest
    steps:
      - name: Set Date
        run: |
          mkdir -p evidence
          echo "DATE=$(date +%Y-%m-%d)" >> $GITHUB_ENV
      - name: Check S3 Bucket Encryption
        run: |
          # list-buckets emits tab-separated names on one line; split them
          aws s3api list-buckets --query 'Buckets[*].Name' --output text | tr '\t' '\n' | \
          while read bucket; do
            encryption=$(aws s3api get-bucket-encryption --bucket "$bucket" 2>&1)
            if [[ $encryption == *"ServerSideEncryptionConfigurationNotFoundError"* ]]; then
              echo "FAIL: $bucket - No encryption configured"
              exit 1
            else
              echo "PASS: $bucket - Encryption enabled"
            fi
          done > evidence/s3-encryption-${DATE}.log
      - name: Check RDS Encryption
        run: |
          aws rds describe-db-instances \
            --query 'DBInstances[*].[DBInstanceIdentifier,StorageEncrypted]' \
            --output text > evidence/rds-encryption-${DATE}.log
      - name: Upload Evidence
        if: always()  # upload the log even when a bucket fails the check
        run: aws s3 cp evidence/ s3://compliance-evidence/encryption/${DATE}/ --recursive
```
```yaml
# .github/workflows/terraform-approval.yml
name: Terraform Change Approval
on:
  pull_request:
    paths:
      - 'terraform/**'
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Terraform Plan
        run: |
          cd terraform
          terraform init
          terraform plan -out=tfplan.binary
          terraform show -json tfplan.binary > tfplan.json
      - name: Run OPA Policy Check
        # Each step starts at the workspace root, so reference the plan
        # via its terraform/ path
        run: |
          conftest test terraform/tfplan.json --policy terraform/policies/
      - name: Post Plan to PR
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const plan = fs.readFileSync('terraform/tfplan.json', 'utf8');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## Terraform Plan\n\n${plan}\n\n**Required Approvals:** 2 from @platform-team`
            });
      # The two-reviewer requirement itself is enforced with branch
      # protection rules on terraform/**, not inside this workflow
# Evidence: PR with Terraform plan + 2 approvals + merged commit = audit trail
```
```yaml
# .github/workflows/backup-test.yml
name: Weekly Backup Restoration Test
on:
  schedule:
    - cron: '0 2 * * 0'  # Every Sunday at 2am
jobs:
  test-backup:
    runs-on: ubuntu-latest
    steps:
      - name: Set Date
        run: |
          mkdir -p evidence
          echo "DATE=$(date +%Y-%m-%d)" >> $GITHUB_ENV
      - name: Get Latest RDS Snapshot
        # max_by sorts by creation time; DBSnapshots[0] is not guaranteed
        # to be the newest snapshot
        run: |
          SNAPSHOT=$(aws rds describe-db-snapshots \
            --db-instance-identifier prod-db \
            --query 'max_by(DBSnapshots, &SnapshotCreateTime).DBSnapshotIdentifier' \
            --output text)
          echo "SNAPSHOT_ID=$SNAPSHOT" >> $GITHUB_ENV
      - name: Restore Snapshot to Test Instance
        run: |
          aws rds restore-db-instance-from-db-snapshot \
            --db-instance-identifier backup-test-${DATE} \
            --db-snapshot-identifier $SNAPSHOT_ID \
            --db-instance-class db.t3.small
      - name: Wait for Instance Ready
        run: |
          aws rds wait db-instance-available \
            --db-instance-identifier backup-test-${DATE}
      - name: Run Data Integrity Check
        run: |
          python scripts/verify-backup-integrity.py \
            --instance backup-test-${DATE} \
            --output evidence/backup-test-${DATE}.json
      - name: Cleanup Test Instance
        if: always()  # never leave the billed test instance running
        run: |
          aws rds delete-db-instance \
            --db-instance-identifier backup-test-${DATE} \
            --skip-final-snapshot
      - name: Upload Evidence
        if: always()
        run: aws s3 cp evidence/backup-test-${DATE}.json s3://compliance-evidence/backups/
# Evidence: weekly backup restoration logs proving RPO/RTO capability
```
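The `scripts/verify-backup-integrity.py` helper referenced above isn't shown in the article. One hedged sketch of its decision logic, assuming the script compares per-table row counts from the restored snapshot against a production baseline (the table names and tolerance are illustrative, not from the original):

```python
def check_backup_integrity(baseline_counts, restored_counts, tolerance=0.01):
    """Compare per-table row counts from the restored snapshot against a
    production baseline. A restored table may lag slightly behind prod
    (writes since the snapshot), so allow a small relative tolerance.

    Returns (passed, failures) where failures lists out-of-tolerance tables.
    """
    failures = []
    for table, expected in baseline_counts.items():
        actual = restored_counts.get(table)
        if actual is None:
            failures.append((table, "missing from restored instance"))
        elif expected and abs(actual - expected) / expected > tolerance:
            failures.append((table, f"expected ~{expected}, got {actual}"))
    return (not failures, failures)

if __name__ == "__main__":
    baseline = {"users": 10000, "orders": 52000}
    restored = {"users": 9990, "orders": 51950}
    ok, fails = check_backup_integrity(baseline, restored)
    print(ok)  # True: both tables within 1% of the baseline
```

The JSON result this produces is exactly the kind of artifact an auditor accepts as restoration evidence: a timestamped pass/fail with the numbers behind it.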
Goal: Package all evidence, select auditor, complete audit fieldwork.
This matrix maps every SOC 2 control to automated evidence sources. Use it as your checklist:
```
┌─────────┬──────────────────────────────────┬────────────────────────────────────┬──────────────────┐
│ Control │ Requirement                      │ Evidence Source                    │ Frequency        │
├─────────┼──────────────────────────────────┼────────────────────────────────────┼──────────────────┤
│ CC6.1   │ Access provisioning              │ GitHub: access-audit.yml           │ Quarterly        │
│ CC6.1   │ MFA enforcement                  │ Okta admin logs                    │ Weekly snapshot  │
│ CC6.2   │ User onboarding/offboarding      │ GitHub: user-lifecycle.yml         │ On event         │
│ CC6.6   │ Access reviews                   │ GitHub: access-review.yml + Issues │ Quarterly        │
│ CC7.2   │ Encryption at rest               │ GitHub: encryption-validation.yml  │ Monthly          │
│ CC7.2   │ Encryption in transit            │ Terraform: tls-policy.rego         │ On every deploy  │
│ CC7.3   │ Backup execution                 │ AWS Backup job logs                │ Daily            │
│ CC7.3   │ Backup restoration testing       │ GitHub: backup-test.yml            │ Weekly           │
│ CC7.4   │ Patch management                 │ GitHub: patch-audit.yml            │ Monthly          │
│ CC8.1   │ Infrastructure changes           │ Terraform plan + PR approvals      │ On every change  │
│ CC8.1   │ Application code changes         │ GitHub PR approvals (2+ reviewers) │ On every deploy  │
│ CC8.1   │ Rollback capability              │ ArgoCD deployment history          │ Continuous       │
└─────────┴──────────────────────────────────┴────────────────────────────────────┴──────────────────┘
```
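One failure mode the matrix can't catch on its own is silently stale evidence: a scheduled workflow stops running and nobody notices until audit fieldwork. A small sketch that flags controls whose newest artifact is older than its collection window allows; the prefix-to-age map is hypothetical, mirroring the `s3://compliance-evidence/` prefixes and frequencies used in this article:

```python
from datetime import date, timedelta

# Hypothetical map: evidence prefix -> maximum allowed age in days,
# mirroring the frequencies in the matrix above
FRESHNESS = {
    "access-reviews": 92,  # quarterly
    "encryption": 31,      # monthly
    "backups": 7,          # weekly
    "vuln-scans": 7,       # weekly
}

def stale_controls(latest_artifact_dates, today, freshness=FRESHNESS):
    """Given the newest artifact date per evidence prefix, return the
    prefixes whose evidence is missing or older than its window."""
    stale = []
    for prefix, max_age in freshness.items():
        newest = latest_artifact_dates.get(prefix)
        if newest is None or (today - newest) > timedelta(days=max_age):
            stale.append(prefix)
    return sorted(stale)

if __name__ == "__main__":
    latest = {
        "access-reviews": date(2025, 1, 1),
        "encryption": date(2025, 1, 1),
        "backups": date(2024, 12, 1),  # backup test stopped running
    }
    print(stale_controls(latest, today=date(2025, 1, 15)))
```

Run this daily (the artifact dates can come from an `aws s3api list-objects-v2` listing) and open an issue for anything it returns, and a broken workflow surfaces in hours instead of at audit time.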
```yaml
# .github/workflows/vulnerability-scan.yml
name: Weekly Vulnerability Scan
on:
  schedule:
    - cron: '0 3 * * 1'  # Every Monday at 3am
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - name: Set Date
        run: |
          mkdir -p evidence
          echo "DATE=$(date +%Y-%m-%d)" >> $GITHUB_ENV
      - name: Run Trivy Container Scan
        run: |
          for image in $(kubectl get pods -A -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u); do
            # Sanitize the image reference so it is a safe filename
            safe=$(echo "$image" | tr '/:' '--')
            trivy image --format json --output evidence/trivy-${safe}.json "$image"
          done
      - name: Run Dependency Scan
        run: |
          # npm audit / pip-audit exit non-zero when findings exist;
          # capture the report either way and triage via issues below
          npm audit --json > evidence/npm-audit-${DATE}.json || true
          pip-audit --format json > evidence/pip-audit-${DATE}.json || true
      - name: Upload to Compliance Bucket
        run: aws s3 cp evidence/ s3://compliance-evidence/vuln-scans/${DATE}/ --recursive
      - name: Create Issues for Critical Findings
        run: python scripts/create-vuln-issues.py --input evidence/ --severity CRITICAL,HIGH
```
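The `scripts/create-vuln-issues.py` helper isn't shown either. Its core is a severity filter plus deterministic issue titles so re-running the scan doesn't file duplicates. A hedged sketch, assuming findings are flattened out of Trivy's JSON report (`Results[].Vulnerabilities[]`); the field handling is illustrative:

```python
def filter_findings(findings, severities=("CRITICAL", "HIGH")):
    """Keep only findings at the configured severities. Expects dicts
    with Trivy-style keys: VulnerabilityID, Severity, PkgName."""
    wanted = {s.upper() for s in severities}
    return [f for f in findings if f.get("Severity", "").upper() in wanted]

def issue_title(finding):
    """Deterministic title, so a search for an existing open issue with
    the same title can skip filing a duplicate."""
    return f"[{finding['Severity']}] {finding['VulnerabilityID']} in {finding['PkgName']}"

if __name__ == "__main__":
    findings = [
        {"VulnerabilityID": "CVE-2024-0001", "Severity": "CRITICAL", "PkgName": "openssl"},
        {"VulnerabilityID": "CVE-2024-0002", "Severity": "LOW", "PkgName": "bash"},
    ]
    for f in filter_findings(findings):
        print(issue_title(f))  # [CRITICAL] CVE-2024-0001 in openssl
```

The filed issues themselves double as CC7 evidence: they show findings were triaged, not just collected.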
```hcl
# terraform/monitoring.tf
resource "aws_cloudtrail" "audit_trail" {
  name           = "soc2-audit-trail"
  s3_bucket_name = aws_s3_bucket.compliance_logs.bucket

  event_selector {
    read_write_type           = "All"
    include_management_events = true
  }

  insight_selector {
    insight_type = "ApiCallRateInsight"
  }

  # Evidence: CloudTrail logs prove who did what, when (required for CC6.1, CC8.1)
}

resource "aws_cloudwatch_log_metric_filter" "unauthorized_api_calls" {
  name           = "UnauthorizedAPICalls"
  log_group_name = aws_cloudwatch_log_group.cloudtrail.name
  # Inner quotes must be escaped inside an HCL string
  pattern        = "{ ($.errorCode = \"*UnauthorizedOperation\") || ($.errorCode = \"AccessDenied*\") }"

  metric_transformation {
    name      = "UnauthorizedAPICalls"
    namespace = "SOC2/Security"
    value     = "1"
  }
}

resource "aws_cloudwatch_metric_alarm" "unauthorized_api_alarm" {
  alarm_name          = "soc2-unauthorized-api-calls"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "UnauthorizedAPICalls"
  namespace           = "SOC2/Security"
  period              = 300
  statistic           = "Sum"
  threshold           = 5
  alarm_actions       = [aws_sns_topic.security_alerts.arn]

  # Evidence: alerts prove monitoring is active (CC7.2)
}
```
```
┌────────────────────────────────────┬──────────────┬──────────────────────────────┐
│ Item                               │ Cost         │ Notes                        │
├────────────────────────────────────┼──────────────┼──────────────────────────────┤
│ SOC 2 Type I Audit                 │ $15k-$30k    │ Depends on auditor, scope    │
│ Compliance Platform (Vanta/Drata)  │ $2k-$4k/yr   │ Optional but helpful         │
│ GitHub Actions (automation)        │ ~$0          │ Free tier usually sufficient │
│ AWS resources (evidence storage)   │ $50-$100/mo  │ S3 + CloudTrail logs         │
│ Security tooling (MFA, endpoint)   │ $2k-$5k/yr   │ Okta, CrowdStrike, etc.      │
│ **Total (90 days):**               │ **$20k-$40k**│ vs $75k-$150k traditional    │
└────────────────────────────────────┴──────────────┴──────────────────────────────┘
```
Key Savings: By automating evidence collection, you eliminate the need for dedicated compliance personnel during the audit period. Traditional SOC 2 audits often require 1-2 full-time employees for 3-6 months just for evidence gathering.
❌ Mistake: Starting too late. Many companies begin SOC 2 prep only when a customer demands it. By then, you're six months away from a signed contract.
✅ Solution: Start SOC 2 prep when you have your first 10 paying customers or raise a Series A—whichever comes first.
❌ Mistake: Manual evidence collection. Screenshotting the AWS console, exporting CSV files by hand, emailing evidence to auditors: this is how SOC 2 takes nine months.
✅ Solution: Automate from Day 1. Use the GitHub Actions templates above. Store everything in S3 with timestamps.
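"Store everything in S3 with timestamps" works best when the key layout is enforced, not ad hoc. A tiny hypothetical helper (the category names and layout are assumptions matching the `s3://compliance-evidence/` prefixes used in this article) that every workflow script could share:

```python
from datetime import date, datetime, timezone

def evidence_key(category, artifact, on=None):
    """Build a deterministic, timestamped S3 key such as
    'access-reviews/2025-01-02/iam-users.json', so every artifact sorts
    chronologically and maps back to its control category."""
    on = on or datetime.now(timezone.utc).date()
    return f"{category}/{on.isoformat()}/{artifact}"

if __name__ == "__main__":
    print(evidence_key("access-reviews", "iam-users.json", date(2025, 1, 2)))
    # access-reviews/2025-01-02/iam-users.json
```

A consistent layout like this is what lets an auditor (or the freshness check above) find twelve months of artifacts without asking you where anything lives.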
❌ Mistake: Over-engineering. You don't need a custom compliance platform, and you don't need to hire a CISO. SOC 2 Type I is achievable with GitHub Actions, Terraform, and a compliance checklist.
✅ Solution: Use the 80/20 rule. Focus on the 20% of automation that covers 80% of evidence requirements (CC6, CC7, CC8).
SOC 2 Type I is a point-in-time audit. It proves your controls exist. But enterprise customers will eventually require SOC 2 Type II, which proves your controls work over time (typically 6-12 months).
The good news: If you've automated evidence collection using the workflows above, SOC 2 Type II is just a matter of time. The auditor will review 6-12 months of continuously collected evidence (access reviews, backup tests, vulnerability scans, change approvals). Because it's all automated, Type II requires almost no additional work.
The old way: Hire a compliance consultant, spend 9 months collecting evidence manually, pay $100k+.
The new way: Treat SOC 2 as Infrastructure as Code. Write GitHub Actions workflows, enforce policies with Terraform and OPA, collect evidence continuously. Achieve SOC 2 Type I in 90 days for $20k-$40k.
The ultimate unlock: Once evidence collection is automated, SOC 2 maintenance becomes a background process. Your engineers deploy infrastructure, GitHub Actions collect evidence, auditors review it annually. Compliance becomes a non-blocking constraint on growth.
This is the future of compliance: Automated, developer-centric, and built into your deployment pipeline from Day 1.
We implement the entire 90-day SOC 2 roadmap for B2B SaaS companies. GitHub Actions templates, Terraform policies, evidence collection automation, and audit prep—all included.