“It’s slow” is one of the most common complaints I inherit on Salesforce projects. It’s also one of the most fixable once you know where to look. The culprits are usually the same: unindexed SOQL, N+1 queries in Apex, LWC re-renders on every keystroke, and org-wide settings that nobody has touched since the org was set up in 2015.
Start with the Developer Console and Debug Logs
Before optimising anything, measure. Enable a debug log for the user experiencing slowness:
Setup → Debug Logs → New → User → Enable for 30 minutes
In the Developer Console, use the Performance Tree (Execution Overview) to identify where time is spent. The tree shows:
- Time in SOQL execution vs Apex execution vs callouts
- Which queries ran and how long each took
- DML operations and their row counts
A query taking 800ms on a table of 2 million records is a different problem from one taking 800ms on 10,000 records. Both are problems, but the solutions are different.
SOQL Optimisation
Index Usage
Salesforce auto-indexes: Id, Name, OwnerId, CreatedDate, SystemModstamp, RecordTypeId, lookup and master-detail fields, and any field marked Unique or External ID (or given a custom index). Note that LastModifiedDate is not indexed by default; SystemModstamp is the indexed audit timestamp. Filtering on anything else is a full table scan.
Query patterns that prevent index usage:
- `WHERE Name LIKE '%smith'` - a leading wildcard forces a full scan
- `WHERE CALENDAR_YEAR(CreatedDate) = 2024` - a function applied to an indexed field prevents index use
- `WHERE Status__c != 'Closed'` - negative filters are non-selective; the optimiser cannot use an index to find “everything except”
- `WHERE LastModifiedDate > :dt AND Status__c = 'Active'` - a compound filter where neither field alone is selective
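One common fix is rewriting a negative filter as a positive IN list, which the optimiser can serve from an index. A sketch, assuming a custom Status__c picklist whose open values are illustrative:

```apex
// Instead of the non-selective negative filter
//   SELECT Id FROM Case WHERE Status__c != 'Closed'
// enumerate the open statuses positively and bound the date range:
List<Case> openCases = [
    SELECT Id, Status__c
    FROM Case
    WHERE Status__c IN ('New', 'Working', 'Escalated')  // positive, index-friendly
    AND CreatedDate = LAST_N_DAYS:90                    // bounded range on an indexed field
    LIMIT 10000
];
```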
Add a custom index to fields used in large-volume queries:
Setup → Object Manager → [Object] → Fields & Relationships → [Field] → Edit → Check "Index this field"
For compound filter queries on very large tables, consider a deterministic external ID that encodes the filter criteria.
Selective Queries
Salesforce uses selectivity thresholds to decide whether to use an index:
- Standard indexed field: the filter must match fewer than 30% of the first million records (15% of records beyond that), capped at one million rows
- Custom indexed field: fewer than 10% of the first million records (5% beyond that), capped at 333,333 rows
If your query returns more than this, Salesforce falls back to a full table scan regardless of the index.
Design implications:
- Filter on `CreatedDate` ranges rather than open-ended date conditions
- Use `LIMIT` clauses in batch queries
- For large objects (> 1M records), consider archiving old records to a custom archive object
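Those rules combine into a single bounded query. A sketch; the Order__c object is illustrative, not from the original:

```apex
// Bounded, selective query: a range on an indexed field plus an explicit LIMIT.
Datetime cutoff = Datetime.now().addDays(-30);
List<Order__c> recentOrders = [
    SELECT Id, Name
    FROM Order__c
    WHERE CreatedDate >= :cutoff  // range on the auto-indexed CreatedDate
    LIMIT 200                     // never leave batch-style queries open-ended
];
```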
Relationship Queries
// Slow: separate SOQL inside a loop
for (Account acc : accounts) {
    List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :acc.Id]; // N+1!
}

// Fast: one query with a relationship subquery
Map<Id, Account> accountsWithContacts = new Map<Id, Account>(
    [SELECT Id, (SELECT Id, Name FROM Contacts) FROM Account WHERE Id IN :accountIds]
);
The relationship query (inner SELECT) is almost always better than a loop with a separate query. The exception is a very large parent set (> 1,000 parents with deep child sets), where you may need to query children separately with WHERE ParentId IN :parentIds.
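For that large-parent-set case, a common pattern is to query the children once and group them in memory by parent Id. A sketch; variable names are illustrative:

```apex
// Query children once, then group them by their parent Id.
Map<Id, List<Contact>> contactsByAccount = new Map<Id, List<Contact>>();
for (Contact c : [SELECT Id, Name, AccountId
                  FROM Contact
                  WHERE AccountId IN :accountIds]) {
    if (!contactsByAccount.containsKey(c.AccountId)) {
        contactsByAccount.put(c.AccountId, new List<Contact>());
    }
    contactsByAccount.get(c.AccountId).add(c);
}
// contactsByAccount.get(someAccountId) now returns that account's contacts.
```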
LWC Performance
Wire vs Imperative
Wire adapters re-execute whenever one of their reactive ('$'-prefixed) parameters changes. This is convenient but can cause cascading refreshes:
// Re-fires whenever $accountId or $status changes
@wire(getOpportunities, { accountId: '$accountId', status: '$status' })
opportunities;
If accountId and status are both bound to input fields, every keystroke triggers a wire refresh. Debounce input values before binding them as wire parameters.
// Debounced input handler
handleStatusChange(event) {
    const value = event.target.value; // capture now - event retargeting makes async access unreliable
    clearTimeout(this._debounceTimer);
    this._debounceTimer = setTimeout(() => {
        this.status = value;
    }, 300);
}
Rendering Optimisation
Every tracked property change triggers a re-render check. Avoid storing large arrays or complex objects as tracked properties if only a small portion changes:
// Slow: entire array is deeply reactive
@track records = []; // mutating any element triggers a re-render check

// Better: derive from a plain property with a getter
get visibleRecords() {
    return this.allRecords.filter(r => r.isVisible);
}
Use @api for properties passed from a parent - they’re reactive but changes are predictable. Since the Spring ’20 release all class fields are reactive by default, so reserve @track for the rare case where you mutate the internals of an object or array and need deep change detection.
Lazy Loading and Pagination
For lists that could grow large, never load everything at once. Use server-side pagination:
// Component with offset-based pagination
@track records = [];
@track totalRecords = 0;
currentOffset = 0;
pageSize = 50;

loadMore() {
    getRecords({ offset: this.currentOffset, pageSize: this.pageSize })
        .then(result => {
            this.records = [...this.records, ...result.records];
            this.totalRecords = result.total;
            this.currentOffset += this.pageSize;
        });
}
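The server side of this pattern might look like the following. A sketch: the controller class, wrapper shape, and Account fields are assumptions, not from the original.

```apex
// Hypothetical Apex controller backing the loadMore() call above.
public with sharing class RecordPaginationController {
    public class PageResult {
        @AuraEnabled public List<Account> records;
        @AuraEnabled public Integer total;
    }

    @AuraEnabled(cacheable=true)
    public static PageResult getRecords(Integer recordOffset, Integer pageSize) {
        PageResult result = new PageResult();
        result.records = [
            SELECT Id, Name
            FROM Account
            ORDER BY Name
            LIMIT :pageSize
            OFFSET :recordOffset  // note: SOQL OFFSET is capped at 2,000 rows
        ];
        result.total = [SELECT COUNT() FROM Account];
        return result;
    }
}
```

For lists that can exceed the 2,000-row OFFSET cap, keyset pagination (filtering on the last-seen sort value) is the usual alternative.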
Batch Architecture for Large-Scale Processing
Database.Batchable is the right tool for processing > 50,000 records. But poorly designed batches are slow and governor-limit-prone.
Optimal batch size: the default batch size is 200. For SOQL-heavy batches, reduce to 50–100. For simple DML-only batches, increase to 2,000. Profile with debug logs to find the sweet spot.
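The scope size is the optional second argument to Database.executeBatch. A minimal sketch; the batch class name is hypothetical:

```apex
// Default scope is 200 records per execute(); pass an explicit size to tune it.
Database.executeBatch(new OpportunityFlagBatch(), 100); // smaller chunks for SOQL-heavy work
```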
Avoid per-record SOQL in execute(): query everything you need in start() via the QueryLocator (including parent fields and child subqueries), or pre-compute lookup Maps in the batch constructor. Each execute() call is its own transaction with fresh governor limits, but queries issued inside execute() still add round-trip time to every chunk, and a query that slips inside a loop will burn through the per-transaction SOQL limit.
// Good: query in start(), process efficiently in execute()
public Database.QueryLocator start(Database.BatchableContext ctx) {
    return Database.getQueryLocator([
        SELECT Id, Status__c, Account.Industry, (SELECT Id FROM Cases__r LIMIT 5)
        FROM Opportunity
        WHERE StageName = 'Closed Won'
        AND CloseDate = LAST_N_DAYS:30
    ]);
}

public void execute(Database.BatchableContext ctx, List<Opportunity> scope) {
    List<Opportunity> toUpdate = new List<Opportunity>();
    for (Opportunity opp : scope) {
        // Process in-memory - no SOQL here
        if (!opp.Cases__r.isEmpty()) {
            opp.Has_Cases__c = true;
            toUpdate.add(opp);
        }
    }
    update toUpdate;
}
Org-Wide Settings That Kill Performance
Sharing rules - complex sharing rule trees add join overhead to every query. Review your sharing model: if you have > 50 sharing rules on a single object, consider refactoring to a territory or role-based model.
Workflow rules and Process Builder - every saved record runs through all active workflow rules and PB processes sequentially. If you have 40 active workflow rules on Opportunity, each Opportunity save evaluates all 40. Audit and consolidate into Flows or trigger handlers.
Triggers calling triggers - a before-update trigger that updates a parent, which fires another trigger, which fires another. Map your trigger execution paths. An OpportunityTrigger → AccountTrigger → ContactTrigger chain is invisible until you look at the debug log and see 800ms of Apex execution for a simple field change.
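A common mitigation is a static guard that breaks re-entrant trigger chains. A sketch; the handler class and method names are illustrative:

```apex
// Hypothetical recursion guard: static variables persist for the transaction,
// so a trigger re-fired by its own updates can detect and skip the second pass.
public class OpportunityTriggerHandler {
    private static Boolean hasRun = false;

    public static void handleBeforeUpdate(List<Opportunity> newRecords) {
        if (hasRun) {
            return; // already processed in this transaction - stop the chain
        }
        hasRun = true;
        // ... actual processing ...
    }
}
```

Per-record or per-operation guards (e.g. a Set of processed Ids) are finer-grained, but the transaction-scoped static is the core mechanism.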
Large data volumes (LDV) without the right data strategy - if your Account, Order, or Case object has > 1M records, standard SOQL and UI patterns stop working as expected. LDV requires: indexed lookups only, pagination everywhere, no roll-up summary fields on the LDV object, and careful use of skinny tables (which must be provisioned by Salesforce Support).
Performance Budgets
Set performance budgets for your team and measure against them:
| Operation | Target | Alert Threshold |
|---|---|---|
| Page load (LWC record page) | < 2s | > 4s |
| SOQL query (production) | < 200ms | > 500ms |
| Apex trigger execution | < 500ms | > 1s |
| Batch execute() per chunk | < 10s | > 20s |
Use Event Monitoring or custom logging to alert when production operations exceed thresholds. Performance degrades incrementally; catching regressions early is far cheaper than investigating a “the system is slow” complaint six months post-go-live.
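A minimal custom-logging sketch using the Limits class. The Performance_Log__c object and its fields are assumptions; call checkAndLog() at the end of a trigger handler or service method:

```apex
// Hypothetical threshold logger: record transactions that exceed a CPU budget.
public class PerfLogger {
    private static final Integer CPU_ALERT_MS = 1000;

    public static void checkAndLog(String operation) {
        Integer cpuUsed = Limits.getCpuTime(); // CPU ms consumed so far this transaction
        if (cpuUsed > CPU_ALERT_MS) {
            insert new Performance_Log__c(
                Operation__c = operation,
                Cpu_Time_Ms__c = cpuUsed
            );
        }
    }
}
```

In high-volume paths, publishing a platform event instead of a synchronous insert keeps the logging itself from adding DML overhead to the transaction being measured.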