API-led connectivity is MuleSoft’s architectural pattern for structuring integrations into three layers. In theory it’s elegant. In practice it’s implemented brilliantly on some projects and completely backwards on others. Here’s what the pattern actually looks like when applied to a Salesforce-centred stack.
The Three Layers
System APIs expose underlying systems as reusable assets. One Salesforce System API, one SAP System API, one Dynamics System API. They translate the system’s native protocol into a clean REST interface. They know nothing about business processes.
Process APIs orchestrate System APIs to execute business logic. A create-order Process API might call the Salesforce OMS System API to create an order, the ERP System API to reserve stock, and the payments System API to authorise payment - in that order, with rollback if any step fails.
Experience APIs adapt Process APIs for specific consumers. The same create-order Process API might be called by a B2C Experience API (Commerce Cloud storefront), a B2B Experience API (EDI integration), and a mobile Experience API (with a slimmer payload). Each Experience API transforms and filters for its consumer.
Mobile App ──► Mobile Experience API ──────┐
                                           │
B2C Store ──► Commerce Experience API ─────┼──► Create Order Process API ──┬──► Salesforce System API
                                           │                               ├──► ERP System API
EDI ────────► B2B Experience API ──────────┘                               └──► Payments System API
What Goes in a Salesforce System API
The Salesforce System API wraps Salesforce REST/SOAP/Composite API calls. Its responsibilities:
- Authentication (OAuth 2.0 client credentials flow, token caching)
- SOQL translation from generic query parameters
- Error mapping (INVALID_FIELD, DUPLICATE_VALUE, etc. → standardised error codes)
- Rate limit handling (429 → retry with exponential backoff)
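The error-mapping responsibility can live in a single DataWeave script shared across flows. A sketch, assuming the Salesforce error payload exposes errorCode and message fields - the mapping table itself is illustrative, not exhaustive:

```dataweave
%dw 2.0
output application/json
// Illustrative mapping from Salesforce error codes to standardised API codes
var errorMap = {
    INVALID_FIELD: "BAD_REQUEST",
    DUPLICATE_VALUE: "CONFLICT",
    REQUEST_LIMIT_EXCEEDED: "RATE_LIMITED"
}
---
{
    // fall back to a generic code for anything unmapped
    code: errorMap[payload.errorCode] default "INTERNAL_ERROR",
    detail: payload.message
}
```

Keeping this as one shared module means every flow in the System API returns the same error vocabulary to its Process API consumers.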
# RAML fragment - Salesforce System API
/accounts:
  get:
    queryParameters:
      email:
        type: string
      externalId:
        type: string
    responses:
      200:
        body:
          application/json:
            type: Account[]
The Mule flow behind GET /accounts:
<flow name="get-accounts">
    <!-- Parameterised query: interpolating the raw query param into the
         SOQL string would open a SOQL-injection hole -->
    <salesforce:query config-ref="Salesforce_Config">
        <salesforce:salesforce-query>
            SELECT Id, Name, BillingCity, Email__c
            FROM Account
            WHERE Email__c = ':email'
        </salesforce:salesforce-query>
        <salesforce:parameters><![CDATA[#[{'email': attributes.queryParams.email}]]]></salesforce:parameters>
    </salesforce:query>
    <ee:transform>
        <ee:message>
            <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
payload map (acc) -> {
    id: acc.Id,
    name: acc.Name,
    city: acc.BillingCity,
    email: acc.Email__c
}
]]></ee:set-payload>
        </ee:message>
    </ee:transform>
</flow>
Error Handling Strategy
The most common anti-pattern: every flow has its own error handler written differently by whoever built it. Define a global error handling strategy up front.
A three-tier approach works well:
Tier 1 - Retriable errors (network timeout, 429, 503): caught in the flow with exponential backoff up to 3 retries. If retries exhausted, escalate to tier 2.
Tier 2 - Dead Letter Queue: failed messages published to a persistent queue (Anypoint MQ or an external broker). An ops flow processes the DLQ on a schedule and can replay or alert.
Tier 3 - Circuit breaker: if a downstream system is failing consistently, stop calling it. Mule 4 doesn’t have a native circuit breaker but you can implement one with Object Store to count consecutive failures.
<!-- Tier 1: in the flow itself, wrap the outbound call in until-successful.
     An error handler cannot re-run the operation that failed, so the
     retry must live around the operation, not inside the handler. -->
<until-successful maxRetries="3" millisBetweenRetries="2000">
    <!-- the retriable operation, e.g. the outbound HTTP request -->
</until-successful>

<!-- Tier 2: anything unhandled (including exhausted retries) goes to the DLQ -->
<error-handler name="global-error-handler">
    <on-error-continue type="ANY">
        <anypoint-mq:publish config-ref="MQ_Config" destination="dead-letter-queue"/>
        <set-payload value="#[output application/json --- {error: error.description, timestamp: now()}]"/>
    </on-error-continue>
</error-handler>
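The tier 3 circuit breaker can be sketched with the Object Store connector. This is a hypothetical fragment, not a drop-in policy: the store name (breaker-store), key (failure-count), threshold of 5, and the APP:CIRCUIT_OPEN error type are all illustrative.

```xml
<!-- Before calling the downstream system, check the failure counter -->
<os:retrieve key="failure-count" objectStore="breaker-store" target="failures">
    <os:default-value>0</os:default-value>
</os:retrieve>
<choice>
    <when expression="#[vars.failures as Number >= 5]">
        <!-- Circuit open: fail fast instead of hammering a dead system -->
        <raise-error type="APP:CIRCUIT_OPEN" description="Downstream circuit open"/>
    </when>
    <otherwise>
        <!-- call the downstream system here; on success, reset the counter -->
        <os:store key="failure-count" objectStore="breaker-store">
            <os:value>0</os:value>
        </os:store>
    </otherwise>
</choice>
```

The failure side - incrementing the counter in the error handler - follows the same retrieve/store pattern. A production version also needs a half-open state (e.g. a timestamped reset) so the circuit eventually closes again.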
Avoiding the God Process API
The most damaging anti-pattern is the God Process API - one massive Mule flow that calls 8 systems, does data transformation, applies business rules, sends emails, and updates 3 Salesforce objects. It starts as “just one integration” and becomes unmaintainable within six months.
Signs you’re building a God Process API:
- The flow diagram needs to be zoomed out to see both ends
- You’re passing a “context object” with 40+ fields through the whole flow
- Testing requires mocking 6 external systems
The fix: decompose. If a Process API needs to do more than 3–4 distinct system interactions, it should be split into sub-flows or separate Process APIs with a higher-level orchestrator.
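One way the decomposition can look in Mule XML - flow names here are hypothetical, the point is that the orchestrator only sequences, while each sub-flow owns exactly one system interaction:

```xml
<!-- Orchestrator: sequencing and error strategy only, no business logic -->
<flow name="create-order-orchestrator">
    <flow-ref name="create-salesforce-order"/>
    <flow-ref name="reserve-erp-stock"/>
    <flow-ref name="authorise-payment"/>
</flow>

<!-- Each sub-flow does one thing and can be tested with one mock -->
<sub-flow name="create-salesforce-order">
    <!-- single responsibility: call the Salesforce System API -->
</sub-flow>
```

Each sub-flow now needs only one mocked system in MUnit, and a sub-flow that outgrows its scope can be promoted to its own Process API without touching the orchestrator's contract.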
Salesforce Platform Events vs MuleSoft
A frequent architectural question: should Salesforce-to-Salesforce (or Salesforce-to-external) communication go through MuleSoft, or directly via Platform Events?
Use Platform Events directly when:
- The integration is Salesforce-to-Salesforce within the same trust boundary
- Latency requirements are strict (sub-second triggers)
- The event pattern is well-defined and unlikely to change
Use MuleSoft when:
- The integration crosses systems with different authentication models
- You need transformation, enrichment, or orchestration logic
- You need a durable retry mechanism
- The same event needs to go to multiple consumers with different formats
API Manager and Security
Every published API in an API-led architecture should be managed through Anypoint API Manager with:
- Client ID enforcement - every consumer registers and gets a client ID/secret
- SLA tiers - rate limits per client (free tier: 10 req/min, premium: 1000 req/min)
- JWT validation for Experience APIs consumed by browser/mobile clients
- IP allowlisting for System APIs (only Process APIs should call them)
The System API layer especially should never be directly callable from external clients. Enforce this via network policy (VPC/private link) and API Manager policy in parallel - never rely on just one.
Monitoring and Observability
Anypoint Monitoring gives you dashboards per API and per flow, but you need to add custom business metrics. I instrument Process APIs with:
<monitoring:custom-event config-ref="Monitoring_Config"
    eventName="order-created"
    message="#['Order created: ' ++ vars.orderId ++ ' value: ' ++ vars.orderTotal]"/>
Feed these events to your observability platform (Datadog, Splunk, Elastic) alongside the Anypoint platform metrics for a complete picture.
When Not to Use MuleSoft
MuleSoft is expensive and operationally heavy. For simple integrations, it’s overkill. Use direct callouts or Platform Events if:
- You have fewer than 3 integration points
- All integrations are Salesforce-native
- Your team doesn’t have MuleSoft expertise to maintain it
The API-led pattern is valuable independent of MuleSoft - you can implement it with AWS API Gateway + Lambda, Azure API Management, or even plain Node.js. The architecture matters more than the tool.