· 3 min read ·
Terraform · AWS · Salesforce · Infrastructure

Terraform for Salesforce Infrastructure: Managing the Ecosystem Around Your Org

Your Salesforce org isn't an island. Here's how to use Terraform to manage the AWS infrastructure, Connected Apps, and integration layer that surrounds it.

[Image: server room with infrastructure racks]

Everyone talks about managing Salesforce metadata with version control. Fewer people talk about managing the infrastructure around the org - the AWS Lambda functions that process Platform Events, the API Gateway endpoints that Salesforce calls out to, the RDS instances that hold integration state, the ACM certificates that secure the whole thing.

That infrastructure is code too. It should live in git, be reviewed like code, and be deployed automatically.

What Lives Outside the Org

A mature Salesforce ecosystem typically has:

  • Event consumers - AWS Lambda or Azure Functions processing Platform Events
  • Integration middleware - MuleSoft or custom APIs that bridge Salesforce with ERPs, PIMs, and external services
  • Authentication infrastructure - OAuth 2.0 flows, JWT signing keys, Named Credential backing resources
  • Monitoring - CloudWatch dashboards, PagerDuty integrations, Datadog forwarders
  • Data pipelines - S3 buckets receiving Salesforce data exports, Glue jobs processing them

All of this should be Terraformed.
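A sensible starting point is shared remote state and provider pinning, so every environment is built from the same, reviewed configuration. A minimal sketch - the bucket, lock table, and region names are illustrative, not prescriptive:

```hcl
terraform {
  required_version = ">= 1.5"

  # Remote state so the whole team works from one source of truth.
  # Bucket and table names are placeholders - substitute your own.
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "sf-integration/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks" # state locking
    encrypt        = true
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```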

The Connected App Problem

Connected Apps in Salesforce - which back every OAuth integration - are metadata that can be version controlled. But their OAuth secrets live outside Salesforce in your integration layer. Managing the rotation of those secrets manually across environments is how you end up with a production incident on a Saturday.

The pattern I use:

# AWS Secrets Manager holds the Connected App consumer secret
resource "aws_secretsmanager_secret" "sf_connected_app" {
  name = "salesforce/${var.environment}/connected-app-secret"

  lifecycle {
    prevent_destroy = true
  }
}

# Lambda reads the secret at runtime - never hardcoded
resource "aws_lambda_function" "sf_event_consumer" {
  function_name = "sf-event-consumer-${var.environment}"
  role          = aws_iam_role.sf_event_consumer.arn # execution role defined elsewhere
  runtime       = "nodejs18.x"
  handler       = "index.handler"
  filename      = "build/sf_event_consumer.zip"

  environment {
    variables = {
      SF_SECRET_ARN = aws_secretsmanager_secret.sf_connected_app.arn
      SF_ORG_URL    = var.salesforce_org_url
    }
  }
}

The Lambda reads the secret at runtime. Rotation is a Secrets Manager operation - no deployment required.
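Rotation itself can be Terraformed too. A sketch using aws_secretsmanager_secret_rotation - the rotation Lambda (aws_lambda_function.rotate_sf_secret) is hypothetical here, standing in for whatever function mints the new consumer secret against Salesforce:

```hcl
resource "aws_secretsmanager_secret_rotation" "sf_connected_app" {
  secret_id           = aws_secretsmanager_secret.sf_connected_app.id
  rotation_lambda_arn = aws_lambda_function.rotate_sf_secret.arn # hypothetical rotation Lambda

  rotation_rules {
    automatically_after_days = 30
  }
}
```

With this in place, the rotation schedule is itself code-reviewed rather than a calendar reminder someone forgets.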

Platform Events → Lambda Architecture

This is the integration pattern I use most. Salesforce fires a Platform Event; a subscriber relays it onto an SQS queue, and a Lambda drains the queue and updates downstream systems.

resource "aws_sqs_queue" "platform_events_dlq" {
  name                      = "sf-platform-events-dlq-${var.environment}"
  message_retention_seconds = 1209600 # 14 days - maximum SQS retention
}

resource "aws_sqs_queue" "platform_events" {
  name                       = "sf-platform-events-${var.environment}"
  visibility_timeout_seconds = 300
  message_retention_seconds  = 86400

  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.platform_events_dlq.arn
    maxReceiveCount     = 3
  })
}

resource "aws_lambda_event_source_mapping" "platform_events" {
  event_source_arn = aws_sqs_queue.platform_events.arn
  function_name    = aws_lambda_function.event_processor.arn
  batch_size       = 10
}

The dead-letter queue is non-negotiable. Platform Events can be replayed within their retention window, but your downstream processing still has to handle failures gracefully rather than silently dropping messages.
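A DLQ nobody watches is just a slower way to lose messages, so alarm on its depth. A sketch - the SNS topic (aws_sns_topic.alerts) is an assumption standing in for your alerting destination:

```hcl
resource "aws_cloudwatch_metric_alarm" "platform_events_dlq_depth" {
  alarm_name          = "sf-platform-events-dlq-${var.environment}"
  namespace           = "AWS/SQS"
  metric_name         = "ApproximateNumberOfMessagesVisible"
  statistic           = "Maximum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 0
  comparison_operator = "GreaterThanThreshold" # any message in the DLQ pages someone

  dimensions = {
    QueueName = aws_sqs_queue.platform_events_dlq.name
  }

  alarm_actions = [aws_sns_topic.alerts.arn] # hypothetical alerting topic
}
```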

Environment Parity

The classic mistake is having a Terraformed production environment and a manually-configured sandbox. By the time you need to reproduce a production issue in sandbox, the infrastructure has drifted.

Use Terraform workspaces or separate state files per environment, but use the same modules:

module "sf_integration" {
  source      = "./modules/sf-integration"
  environment = terraform.workspace
  org_url     = var.org_urls[terraform.workspace]
}
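The per-workspace lookup assumes a map variable along these lines - the URLs are illustrative:

```hcl
variable "org_urls" {
  description = "Salesforce org URL per Terraform workspace"
  type        = map(string)
  default = {
    dev  = "https://example--dev.sandbox.my.salesforce.com" # illustrative
    uat  = "https://example--uat.sandbox.my.salesforce.com" # illustrative
    prod = "https://example.my.salesforce.com"              # illustrative
  }
}
```

Keying on terraform.workspace means the only thing that differs between environments is data, not configuration.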

Locking Down Org Access

Terraform can manage the AWS side of Salesforce’s IP allowlisting. If your org uses IP restrictions, manage the allowed ranges in Terraform so they’re consistent across NAT Gateways, VPN endpoints, and CI runners:

locals {
  sf_allowed_ips = concat(
    [aws_nat_gateway.main.public_ip],
    var.vpn_ip_ranges,
    var.ci_runner_ips
  )
}

When you add a new CI runner or rotate a NAT Gateway, the IP range update goes through code review and applies automatically.
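Terraform can't write Salesforce Network Access settings itself, but it can publish the canonical list for whoever - or whatever automation - does. A minimal sketch:

```hcl
output "sf_allowed_ips" {
  description = "IP ranges to allowlist in Salesforce Network Access"
  value       = local.sf_allowed_ips
}
```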

The Payoff

The first time you need to rebuild an environment from scratch - a compliance audit requires an isolated replica, a client wants a proof-of-concept environment, a region failover is needed - you’ll understand why this matters. terraform apply and you have a complete, correctly-configured integration layer in 15 minutes.

That’s the promise of infrastructure as code. In the Salesforce ecosystem, it’s still underutilised - which means it’s a genuine competitive advantage for teams that invest in it.
