In orgs with significant transaction volume, the SOQL governor limit (100 queries per synchronous transaction) starts feeling tight fast. The usual culprits: reference data queries that run on every transaction - picklist config records, custom settings lookups, feature flags, permission matrices. This data changes rarely but gets queried constantly. Platform Cache solves exactly this problem.
The Two Cache Scopes
Salesforce Platform Cache has two distinct scopes, each suited to different use cases:
Org Cache - shared across all users and all transactions in the org. Data placed here is visible to any user. Best for reference data, configuration, and computed aggregates that don't vary by user.
Session Cache - scoped to a single user session. Each user has their own cache space. Best for user-specific preferences, computed personalization data, or anything that varies per user but shouldn't be recomputed every request.
For transaction-level scope, you don't need Platform Cache at all: an Apex static variable lives for the duration of a single transaction. Use a static to deduplicate queries within a single complex transaction - if five different methods all query the same config record, memoize the result in a static after the first query.
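The static-variable memoization just described can be sketched like this (`Config_Record__c` is a hypothetical custom object standing in for your config data):

```apex
public class ConfigRepo {
    // Static variables persist for the duration of one Apex transaction,
    // so repeated calls within the same transaction query the database
    // only once. The cache resets automatically on the next transaction.
    private static List<Config_Record__c> cachedConfig;

    public static List<Config_Record__c> getConfig() {
        if (cachedConfig == null) {
            cachedConfig = [SELECT Name, Value__c FROM Config_Record__c];
        }
        return cachedConfig;
    }
}
```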
Partitions and Capacity
Before you can use Org Cache or Session Cache, you need to configure a partition in Setup under Platform Cache. Each partition has:
- An API name (used in cache keys)
- A capacity allocation in MB, split between Org and Session cache
- Expiration set per key: each put accepts a TTL in seconds, and the platform applies a default (24 hours for Org Cache) when you don't specify one
Cache keys follow the format: partition_name.key_name
```apex
// Referencing a partition named 'main'
Cache.OrgPartition orgCache = Cache.Org.getPartition('local.main');
Cache.SessionPartition sessionCache = Cache.Session.getPartition('local.main');
```
The local prefix refers to your own org's namespace (as opposed to a partition published by a managed package's namespace). For most implementations, local is all you'll use.
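You can also skip the explicit partition handle and use fully qualified keys directly on the static Cache.Org methods (the partition and key names here are examples):

```apex
// Put and get using fully qualified 'namespace.partition.key' names.
// Cache key names themselves must be alphanumeric.
Cache.Org.put('local.main.countryCodes', new List<String>{ 'US', 'DE', 'JP' });
List<String> codes = (List<String>) Cache.Org.get('local.main.countryCodes');
```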
The Basic Put/Get Pattern
Wrap your cache access in a utility class that handles the null check and fallback query:
```apex
public class ConfigCache {
    private static final String PARTITION = 'local.main';
    private static final Integer DEFAULT_TTL = 3600; // 1 hour in seconds

    public static List<Feature_Flag__c> getFeatureFlags() {
        Cache.OrgPartition partition = Cache.Org.getPartition(PARTITION);
        String cacheKey = 'featureFlags';
        List<Feature_Flag__c> flags = (List<Feature_Flag__c>) partition.get(cacheKey);
        if (flags == null) {
            flags = [
                SELECT Name, Is_Enabled__c, Rollout_Percentage__c
                FROM Feature_Flag__c
                WHERE Is_Active__c = true
                ORDER BY Name
            ];
            partition.put(cacheKey, flags, DEFAULT_TTL);
        }
        return flags;
    }

    public static void invalidateFeatureFlags() {
        Cache.OrgPartition partition = Cache.Org.getPartition(PARTITION);
        partition.remove('featureFlags');
    }
}
```
This is the cache-aside pattern: check the cache, miss → query → populate. Simple, predictable, and easy to reason about.
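The platform can also run this pattern for you via the Cache.CacheBuilder interface: you implement the load logic, and the cache invokes it only on a miss. A sketch, reusing the Feature_Flag__c object from above:

```apex
public class FeatureFlagLoader implements Cache.CacheBuilder {
    // doLoad runs only when the requested key is absent; the platform
    // stores the returned value under that key automatically.
    public Object doLoad(String key) {
        return [
            SELECT Name, Is_Enabled__c, Rollout_Percentage__c
            FROM Feature_Flag__c
            WHERE Is_Active__c = true
            ORDER BY Name
        ];
    }
}

// Usage: a miss invokes doLoad and caches the result; a hit skips it.
List<Feature_Flag__c> flags = (List<Feature_Flag__c>)
    Cache.Org.getPartition('local.main').get(FeatureFlagLoader.class, 'featureFlags');
```

The main advantage over a hand-rolled helper is that the miss-then-load sequence can't be accidentally skipped by a caller.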
TTL Strategy
TTL (time-to-live) is the most consequential configuration decision. Get it wrong and you’re either serving stale data or negating the benefit of caching.
General guidance:
| Data Type | Suggested TTL |
|---|---|
| Rarely-changing config (feature flags, custom settings) | 1–4 hours |
| Reference data (country codes, product catalogs) | 6–24 hours |
| Computed aggregates (report summaries) | 15–60 minutes |
| User preferences | Session duration |
| Anything that must be immediately consistent | Do not cache |
Always pair a TTL with an explicit invalidation path. When the underlying data changes, clear the cache immediately rather than waiting for the TTL to expire:
```apex
public class FeatureFlagService {
    public static void updateFlag(Id flagId, Boolean isEnabled) {
        Feature_Flag__c flag = new Feature_Flag__c(
            Id = flagId,
            Is_Enabled__c = isEnabled
        );
        update flag;
        // Invalidate cache immediately so next request gets fresh data
        ConfigCache.invalidateFeatureFlags();
    }
}
```
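Updates that bypass the service class (data loads, admin edits, other code paths) won't hit that invalidation, so a trigger makes a useful backstop. A minimal sketch, assuming the ConfigCache class shown earlier:

```apex
trigger FeatureFlagTrigger on Feature_Flag__c (
    after insert, after update, after delete, after undelete
) {
    // Any change to flag records clears the cached list, so the next
    // read repopulates from the database.
    ConfigCache.invalidateFeatureFlags();
}
```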
What to Cache vs What Not to Cache
Good candidates:
- Custom Settings and Custom Metadata (these can also use getInstance(), which has its own caching, but for large datasets Platform Cache gives more control)
- Computed aggregates that are expensive to calculate (e.g., territory hierarchies, complex pricing rules)
- Results of callouts to external systems for reference data
- Permission matrices derived from Profile/Role combinations
- Picklist label-to-value mappings
Do not cache:
- Records that must always reflect the latest database state (opportunities in active negotiation, inventory counts)
- Records containing PII if you’re using Org Cache (it’s shared - user A might see user B’s cached data if keys aren’t isolated properly)
- Large record collections that approach partition capacity limits
- Data that changes faster than your TTL
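For the capacity concern, one defensive pattern is to approximate the payload size before caching and skip the put when it's too large. A sketch; the 100 KB threshold is an arbitrary example, and serialized JSON length is only a rough proxy for stored size:

```apex
public class SafeCachePut {
    // Guard: skip caching payloads above a size threshold so a single
    // key can't consume a large share of the partition's capacity.
    private static final Integer MAX_CHARS = 100 * 1024;

    public static Boolean putIfSmall(
        Cache.OrgPartition partition, String key, Object value, Integer ttlSecs
    ) {
        if (JSON.serialize(value).length() > MAX_CHARS) {
            return false; // too large; caller falls back to querying directly
        }
        partition.put(key, value, ttlSecs);
        return true;
    }
}
```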
Session Cache for User-Specific Data
```apex
public class UserPreferenceCache {
    private static final String PARTITION = 'local.main';

    public static Map<String, Object> getUserPreferences() {
        Cache.SessionPartition partition = Cache.Session.getPartition(PARTITION);
        Map<String, Object> prefs = (Map<String, Object>) partition.get('userPrefs');
        if (prefs == null) {
            // Query into a list so a user with no preference record
            // doesn't throw a QueryException
            List<User_Preference__c> stored = [
                SELECT Default_View__c, Items_Per_Page__c, Theme__c
                FROM User_Preference__c
                WHERE OwnerId = :UserInfo.getUserId()
                LIMIT 1
            ];
            prefs = stored.isEmpty()
                ? new Map<String, Object>()
                : new Map<String, Object>{
                    'defaultView' => stored[0].Default_View__c,
                    'itemsPerPage' => stored[0].Items_Per_Page__c,
                    'theme' => stored[0].Theme__c
                };
            partition.put('userPrefs', prefs, 1800); // 30 minutes
        }
        return prefs;
    }
}
```
Each user gets their own Session Cache namespace - no risk of data leaking between users.
Monitoring Cache Effectiveness
Cache usage data is available per partition in Setup: enable the Diagnostics option on a Platform Cache partition to see capacity consumption and key-level statistics. Review this regularly to verify your cache is performing as expected.
A simple proxy metric without Event Monitoring: compare SOQL usage before and after implementing caching. In Apex debug logs, the CUMULATIVE_LIMIT_USAGE section reports total SOQL queries issued. If cache is working, this number should drop significantly for transactions that used to hit the same query repeatedly.
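You can also spot-check within a single transaction using the Limits class (assuming the ConfigCache class from earlier):

```apex
// Measure SOQL consumption around a cached read. On a cache hit the
// delta is 0; on a miss it is 1 (the fallback query).
Integer queriesBefore = Limits.getQueries();
List<Feature_Flag__c> flags = ConfigCache.getFeatureFlags();
System.debug('SOQL queries used by this call: ' +
    (Limits.getQueries() - queriesBefore));
```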
Platform Cache is one of those features where the setup cost is low and the return is immediate. For any org running into SOQL limits on high-traffic processes, it should be the first tool you reach for.