· 6 min read ·
Salesforce · Flow · Architecture · Automation · Performance

Salesforce Flow Architecture at Scale: Patterns That Don't Break

Practical patterns for building maintainable, performant Salesforce Flows in large orgs - covering bulkification, error handling, sub-flows, and governor limit avoidance.

[Image: Circuit connections representing flow architecture]

Flow has become the dominant automation tool on the Salesforce platform. Process Builder is being retired, Workflow Rules are legacy, and Apex is overkill for most business logic. But Flow at scale - dozens of flows, hundreds of elements, high-volume orgs - requires deliberate architecture. These are the patterns I use in production.

The Core Problem: Flow Is Deceptively Easy

Building a Flow that works in a developer sandbox is easy. Building a Flow that:

  • Handles 200 records in a single transaction
  • Doesn’t fire 47 SOQL queries on every Opportunity update
  • Survives a production data import of 50,000 records
  • Is still understandable 6 months later when the original builder has left

…is a different skill set.

Pattern 1: Bulk-Safe Record-Triggered Flows

Non-bulk-safe flows are the single most common source of governor limit exceptions in Flow-heavy orgs.

The mistake: Building record-triggered flows as if they’ll always fire on one record. They won’t. Bulk data loads, mass updates, and integration upserts all fire triggers in batches of up to 200 records.

The rule: Every record-triggered flow must be bulk-safe.

What this means in practice:

  • Never use Get Records inside a loop. One Get Records per loop iteration = up to 200 SOQL queries. Use a single Get Records before the loop, store in a collection, and reference the collection inside the loop.
  • Avoid nested loops. A loop within a loop on a 200-record batch = 40,000 iterations. Use collection filtering and assignments instead.
  • Use bulk DML. Use Update Records / Create Records with collections, not individual records inside loops.
// Anti-pattern (200 SOQL queries on bulk load):
Loop over trigger records
  └── Get Records: fetch related Contact

// Correct pattern (1 SOQL query):
Get Records: fetch all related Contacts where AccountId IN {triggerIds}
Loop over trigger records
  └── Assignment: filter collection for this record's contacts

Flow doesn’t show you SOQL query counts during testing. Use the Flow Analyzer and test with Data Loader on 200+ records before deploying.

Pattern 2: Entry Criteria and Flow Optimisation Mode

Every record-triggered flow fires on every matching record unless you configure it otherwise.

Always set entry criteria on the flow start element. Even if the logic seems simple: Status = 'Closed' as an entry condition means the flow only runs when Status is Closed, not on every Opportunity save.

Use “Only when a record is updated to meet the condition requirements” instead of “Every time a record is saved and meets the condition requirements” when the trigger logic only needs to fire once (e.g. on transition to a status, not on every subsequent save).

Optimise Mode = Fast Field Updates - if your flow only updates fields on the triggering record (no related record DML, no callouts), use this mode. It runs before-save and consumes fewer resources.
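
Putting Pattern 2 together, the start-element configuration for a closed-won flow might look like this (labels are illustrative and follow the Flow Builder UI wording):

// Start element configuration (illustrative)
Object: Opportunity
Trigger the Flow When: A record is updated
Entry Conditions: StageName Equals Closed Won
When to Run for Updated Records: Only when a record is updated to meet the condition requirements
Optimise the Flow for: Fast Field Updates   // only if the flow edits nothing but the triggering record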

Pattern 3: Sub-Flow Architecture

Once you have more than ~10 flows, organise them into a composition model.

Orchestrator flows - thin, contain only routing logic and sub-flow calls. No DML. They read conditions and delegate.

Worker flows - do one thing. Create a related record, send an email, update a collection of records. Reusable across orchestrators.

Utility flows - reusable helpers. Format a phone number, calculate a date offset, validate an email format. Called from workers.

Opportunity After-Save (Orchestrator)
├── [If Stage = Closed Won] → Sub-flow: Create Onboarding Task Set
├── [If Stage = Closed Lost] → Sub-flow: Log Loss Reason to CRM Analytics
└── [If Amount > 50000] → Sub-flow: Notify Enterprise Sales Ops

Benefits: Each worker flow is independently testable. Changes to one business rule don’t require touching the orchestrator. New paths add a worker and a routing condition.
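
A worker’s contract stays explicit through its input and output variables. A hypothetical sketch of one worker from the tree above:

Sub-flow: Create Onboarding Task Set (Worker)
  Input:  opportunityIds (Text collection)
  Logic:  Get Records (task templates) → Loop: build Task collection → Create Records (one bulk DML)
  Output: createdTaskIds (Text collection)

The orchestrator only needs to know the inputs and outputs - the worker’s internals can change freely.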

Pattern 4: Error Handling That Actually Works

Default Flow error handling is a red screen for users and a vague “An error occurred” in the log. Neither helps.

Fault paths on every DML element. Every Create, Update, Delete records element should have a fault connector. At minimum, the fault path should:

  1. Log the error (custom object, Platform Event, or {!$Flow.FaultMessage} to a field)
  2. Show the user a meaningful message
  3. Not silently succeed
[Create Records: Generate Invoice]
    │ (success)          │ (fault)
    ▼                    ▼
Continue flow      [Assignment: Set ErrorMessage = {!$Flow.FaultMessage}]

                    [Screen: "Invoice creation failed. Reference: {!ErrorMessage}"]

Use Custom Metadata for error thresholds. If you’re looping over records and want to fail gracefully after N errors rather than stopping on the first, store the threshold in Custom Metadata. Increment an error counter in a loop variable and branch to a fault handler when it exceeds the threshold.
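
A sketch of that threshold pattern (variable and metadata names are hypothetical):

Get Records: Flow_Error_Setting__mdt → store {!MaxErrors}
Loop over records
  ├── [DML fault path] → Assignment: {!ErrorCount} = {!ErrorCount} + 1
  │       └── Decision: {!ErrorCount} > {!MaxErrors}?
  │             └── Yes → exit loop to fault handler
  └── [success] → next iteration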

Log to a Platform Event for async failures. In auto-launched flows triggered by integrations, use a Platform Event to write error details asynchronously. This avoids the error record locking that occurs when multiple flows try to write to the same error log object simultaneously.
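
Sketched, with hypothetical event and field names:

[Update Records] ──(fault)──► [Create Records: Flow_Error__e platform event]
                                FlowName__c, RecordId__c,
                                FaultMessage__c = {!$Flow.FaultMessage}

A subscriber (a platform-event-triggered flow or an Apex trigger) then writes the events to the error log object in its own transaction, outside the failing flow’s record locks.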

Pattern 5: Flow Packaging and Deployment

Flows in unmanaged source have one critical gotcha: deployments don’t overwrite the active version in place - they add a new flow version, and by default that version lands in production as Inactive (Draft). The “Deploy processes and flows as active” setting changes this, subject to flow test coverage requirements.

The standard pattern:

  1. Deploy flow in Draft status
  2. Activate via post-deploy script (e.g. sf data update record --use-tooling-api --sobject FlowDefinition --where "DeveloperName='MyFlow'" --values "ActiveVersion=X" - FlowDefinition is a Tooling API object) or a post-deploy Apex class
  3. Confirm the old version is inactive - activating a new version automatically deactivates the previous one

For CI/CD, keep flows in version control as source (.flow-meta.xml), deploy as Draft, activate in pipeline. Never manually activate flows in production - it breaks traceability.

Use the Flow version number as part of your deployment smoke test: after deploy, query SELECT VersionNumber, Status FROM Flow WHERE DeveloperName = 'MyFlow' via the Tooling API and assert the expected version is Active.

Pattern 6: When to Use Apex Instead

Flows have governor limits different from (and sometimes lower than) Apex. Know when to stop:

| Situation | Use Flow | Use Apex |
| --- | --- | --- |
| Simple field updates on trigger | Yes | Overkill |
| Related record creation | Yes | Overkill |
| Complex conditional routing | Yes | Consider |
| Looping over 200+ records with SOQL each | No | Yes |
| External callout with retry logic | No | Yes |
| Bulk processing in background jobs | No | Yes (Batchable) |
| Platform-specific transactions (savepoints) | No | Yes |

The test: if building it in Flow requires workarounds that make it significantly more complex than Apex would be, use Apex.
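
For the “looping over 200+ records with SOQL each” case, the Apex version collapses to one query plus a map lookup. A minimal sketch for an Opportunity trigger context, with the object and field choices purely illustrative:

// Apex trigger handler sketch: one SOQL query for the whole batch
Set<Id> accountIds = new Set<Id>();
for (Opportunity opp : (List<Opportunity>) Trigger.new) {
    accountIds.add(opp.AccountId);
}
Map<Id, Contact> contactByAccount = new Map<Id, Contact>();
for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :accountIds]) {
    contactByAccount.put(c.AccountId, c);
}
for (Opportunity opp : (List<Opportunity>) Trigger.new) {
    Contact related = contactByAccount.get(opp.AccountId);
    // apply business logic with the pre-fetched contact
}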

Monitoring and Maintenance

Flow Error Emails go to the user who last modified the flow by default. In Process Automation Settings, set “Send Process or Flow Error Email” to Apex Exception Email Recipients so errors reach a monitored address, not a personal inbox that nobody checks.

Paused Flow Interviews accumulate in orgs with Scheduled Paths or Wait elements. Query SELECT Id, FlowVersionViewId, Status FROM FlowInterview WHERE Status = 'WAITING' regularly. Paused interviews from deleted/inactive flows block org cleanup.

Flow Code Coverage doesn’t show in the standard test run - Flows are tested via Apex tests that exercise the trigger, or via Flow Builder tests (available for record-triggered flows). Aim for explicit test classes that fire the flow’s trigger conditions in bulk.

The investment in architecture upfront pays back fast. The worst Flow orgs I’ve inherited are the ones where every developer built their own flows in isolation, with no shared patterns, no error handling, and no bulk testing. The best ones read like a well-structured codebase - consistent, predictable, and easy to extend.
