Configuration speed is often viewed as a purely technical metric. We'd argue otherwise – it carries a real business cost.
Ask an architect how fast the Advanced Configurator is and you get a number. Ask a sales rep and you also get a number – of dropped deals: external users rarely have the patience that internal ones do.
Both are valid, and both describe the same underlying issue that shows up in revenue.
This article walks through the five modeling patterns that account for the majority of configurator slowdowns the Veloce team observes in CML health checks: why each one taxes the solver, how to recognize it in your own code, and what the fix looks like.
Agentforce Revenue Management (formerly Salesforce Revenue Cloud Advanced) includes Product Configurator – the application sales teams use to configure complex products and bundles on a quote. It runs on two engines: the standard configuration engine, which covers straightforward rule-based products, and the Advanced Configurator, a constraint-solver engine whose models are written in CML.
Because Advanced is the path forward, most performance conversations in Revenue Cloud now revolve around it – and most of those, in turn, come back to CML.
Slowdowns in the Advanced Configurator can have several causes – Product Catalog Management setup, pricing procedures, integrations, the size of the transaction. Among them, CML is the only one the architect writes directly. Everything else is configured; CML is authored, line by line. That makes it the layer where a single modeling decision can translate into a measurable difference in response time.
Before looking at the specific patterns, it helps to have a mental model of what happens between the click and the result.
When a user configures a product, the constraint engine does not run your CML top-to-bottom like a flow. Instead, it takes every variable in the model – every attribute, every relation, every selection – and works out which combinations are valid. It does this by trying values, checking whether the active constraints still hold, and either moving forward or backtracking (undoing a choice and trying a different value).
Two numbers from the debug log tell you almost everything you need to know about how hard that work is: Total Execution Time – how long the solver ran – and Number of Backtracks – how many times it had to undo a choice and try another value.
Every issue covered below comes back to the same goal: keeping both numbers low. When your model is fast, the solver works with narrow domains and rarely needs to backtrack. When it is slow, domains are wide and backtracks climb into the tens of thousands.
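To make propagation concrete, here is a minimal, hypothetical CML fragment – the type and attribute names are invented for illustration, and C-style comments are assumed:

type PortDemo : LineItem {
    int size = [1..10];
    int ports = [4..40];
    // Once size narrows, this constraint immediately narrows ports as well –
    // propagation does the work, and no backtracking is needed.
    constraint(ports == size * 4);
}

With domains this narrow and the arithmetic inside a constraint, the solver rarely has to guess: fixing either number pins down the other.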
The patterns below are the modeling choices a competent engineer writes on a first pass – each one makes the CML shorter, more readable, or faster to build, and none of them looks wrong in code review.
They just cost the solver more work than they should, and at a small catalog size that cost is invisible. Once the model grows, the cost compounds, and a configurator that used to be fast starts to drag.
The most expensive pattern of the five shows up when free-standing expression variables are used outside of constraints. The engine has no way to propagate restrictions between them, so it falls back on tree search – essentially, trying combinations until one works.
A classic illustration is the cryptarithmetic puzzle SEND + MORE = MONEY, where each letter represents a unique digit. The straightforward CML for this puzzle is:
int S = [0..9];
int E = [0..9];
int N = [0..9];
int D = [0..9];
int M = [0..9];
int O = [0..9];
int R = [0..9];
int Y = [0..9];
int SEND = 1000 * S + 100 * E + 10 * N + D;
int MORE = 1000 * M + 100 * O + 10 * R + E;
int MONEY = 10000 * M + 1000 * O + 100 * N + 10 * E + Y;
Because the expressions sit outside any constraint, the solver has no way to prune impossible digit combinations early. It walks a tree with roughly 1.4 million paths before arriving at a valid answer – and in a real configurator that means a timeout.
Wrap the same expressions into constraints, and everything changes:
constraint(SEND == (1000 * S + 100 * E + 10 * N + D));
constraint(MORE == (1000 * M + 100 * O + 10 * R + E));
constraint(MONEY == (10000 * M + 1000 * O + 100 * N + 10 * E + Y));
Now the solver can use arc consistency: every time one variable narrows, it immediately propagates that restriction to the others. The search space collapses from 1.4 million paths to under 100.
The puzzle is illustrative, not representative – real CML rarely contains cryptarithmetic. But the underlying pattern is extremely common: a calculated value used as a driver for selection logic, sitting outside any constraint. Moving the expression into a constraint is often a one-line change with a dramatic performance effect.
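As a sketch of that realistic case – the attribute names here are invented, not taken from any real model – compare the free-standing version with the constrained one:

// Free-standing: the solver can only evaluate this once
// serverCount and switchCount are already set.
int totalPowerDraw = serverCount * 350 + switchCount * 150;

// Constrained: narrowing any one of the three variables
// immediately propagates to the other two.
constraint(totalPowerDraw == serverCount * 350 + switchCount * 150);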
Trade-off to be aware of: Constraints are stricter than expression variables, so switching mid-flight can expose logic that was previously silently tolerated. Validate the outputs against a known-good set of configurations before deploying.
Every variable and relation in CML has a domain – the set of values it can take. If you do not specify it, the engine assumes the widest possible range. That default is rarely what you want.
This is what an unbounded model fragment looks like:
relation rackSystem42U: RackSystem42U;
...
constraint(rackSystem42U[RackSystem42U] <= 1);
At first glance this is fine: the constraint caps the relation cardinality at 1. But until the solver actually evaluates that constraint, it treats rackSystem42U as if it could contain any number of children. The search space the engine considers is much larger than what the constraint eventually allows.
The bounded version is a one-line change to the declaration:
relation rackSystem42U: RackSystem42U[0..1];
Now the cardinality is declared upfront, and the engine never considers impossible values in the first place. The same logic applies to numeric attributes: int quantity has an effective domain of roughly 4.3 billion values, while int quantity = [1..10] has a domain of ten.
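Put together, a bounded fragment might look like this (the names are illustrative):

relation rackSystem42U : RackSystem42U[0..1];  // cardinality declared upfront
int quantity = [1..10];                        // ten values, not ~4.3 billion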
As Salesforce's own CML best practices put it, narrowing a domain without reducing the valid solution space is the cheapest performance win available in CML.
Trade-off to be aware of: Future-proofing becomes a conscious decision. If a domain expands later (a new size, a new tier), the model has to be updated – which is a small cost for the performance you get in return.
A constraint on the bundle (root) level that references a grandchild's attribute looks convenient from the outside – one central rule, all logic in one place. From the solver's point of view, it is the opposite: every cross-level reference introduces internal "virtual" variables and constraints that multiply the work the engine has to do.
Here is a typical example that shows up in CML health checks:
type RackBundle : LineItem {
    ...
    constraint(
        isSystem42uSelected ->
            rackSystem42U[RackSystem42U].fireSuspensionModule[FireSuspensionModule] == 1
    );
}
A boolean at the bundle level drives a selection three levels deep. The solver cannot evaluate this as a single check – it effectively materializes intermediate attributes and constraints to reach down into the child type.
The recommended pattern is to push the rule down to the level where the data lives, using a parent or root reference to read the trigger attribute:
type RackSystem42U {
    ...
    boolean isRootSystem42uSelected = parent(isSystem42uSelected);
    relation fireSuspensionModule : FireSuspensionModule;
    ...
    constraint(
        isRootSystem42uSelected -> fireSuspensionModule[FireSuspensionModule] == 1
    );
}
The constraint now sits next to the relation it controls. The solver evaluates it in place, without generating virtual attributes or traversing levels. And as a bonus, the rule is easier to read and maintain: engineers looking at RackSystem42U see everything that applies to it in one file.
Trade-off to be aware of: In a model that was designed entirely around bundle-level rules, moving them to child types can touch many files at once. For live production models, refactor incrementally – start with the hottest paths identified in the debug log.
It is natural to write an intermediate boolean to make a complex condition readable:
boolean isPremium = (tier == "Premium" && region == "EU");
constraint(isPremium -> supportPlan == "24x7");
This works logically. But the solver treats isPremium and a named constraint differently. Because tier and region are used outside a constraint, the solver cannot pull them into its search algorithm – it can only evaluate the expression once their values are set, then fire the implication.
A named constraint works in both directions. The solver can use it to narrow tier and region the moment it knows the result has to be true:
constraint premiumEuSupport =
(tier == "Premium" && region == "EU") -> supportPlan == "24x7";
On large models, that two-way propagation removes the backtracks the boolean version leaves behind.
Trade-off to be aware of: This pattern is not always a problem. Keeping tier and region outside a constraint guarantees the solver will never overwrite their values to find a solution – which is sometimes exactly what you want, even at the cost of some performance.
The choice depends on whether stability of those attributes matters more than search efficiency in a given scenario.
The Visual Builder is useful for getting started, but its output is not optimized for performance. It generates a unique type and a dedicated relation for every product, even when those products share the same set of attributes and behavior.
In a small catalog this is invisible. In a catalog with hundreds of similar SKUs, it multiplies the work the engine has to do on every configuration – the solver instantiates distinct types and constraints where a single generic type would have sufficed.
The fix is architectural: replace repetitive generated types with a generic type and use inheritance or shared attributes to capture the common behavior. This reduces the number of variables and constraints the solver has to instantiate, and the performance improvement usually scales linearly with how much duplication was removed.
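A sketch of the consolidation, with hypothetical type names and the usual C-style operators assumed: instead of one generated type per rack size, a single generic type carries the shared attributes and restricts the size to the values that actually exist.

// Generated: one near-identical type per SKU
type Rack12U : LineItem { int units = [12..12]; }
type Rack24U : LineItem { int units = [24..24]; }
type Rack42U : LineItem { int units = [42..42]; }

// Consolidated: one generic type with a constrained attribute
type Rack : LineItem {
    int units = [12..42];
    constraint(units == 12 || units == 24 || units == 42);
}

Three types (and their relations) collapse into one, which is exactly the reduction in variables and constraints that the solver sees.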
Trade-off to be aware of: Consolidating types is a deeper refactor than tweaking a single constraint. Existing product associations, PCM bundle definitions, and downstream references may all need updates – which is the main reason teams put this off, even when they know the duplication is hurting them. Plan for it as part of a catalog cleanup, not as a hotfix.
You do not need special tools to check whether your model has any of these issues. Open your CML in the editor and walk through the list below.
If you can answer "no" to a check, you have a likely performance win waiting. If you are unsure, that is what the Apex debug log is for – pull the RLM_CONFIGURATOR_STATS section after a configuration session and look at Number of Backtracks and Total Execution Time.
1. Does every numeric attribute have an explicit domain (int x = [1..100], not int x)?
2. Does every relation have an explicit cardinality (relation r : T[0..1], not relation r : T)?
3. For decimal attributes, does the declared precision match the precision actually used downstream? A double weight = [0..100]; defaults to four decimals, but if the model only ever uses two, declaring it as double(2) weight = [0..100]; saves the solver work it never needed to do.
4. Are calculated values that drive selection logic wrapped in constraints, rather than left as free-standing expression variables (int total = a + b;)? If not, consider wrapping them so the solver can prune the search space.
5. Do constraints sit at the level where the data lives, rather than reaching across levels via parent/root/relation access? Deep cross-level references are a red flag for performance.
6. Have you measured Number of Backtracks and Total Execution Time for a representative configuration? As the official guidance states, healthy models run under 100 ms with fewer than 1,000 backtracks.

If half of these are "no" or "not sure," the model has substantial room for improvement. If most of them are "yes" and the configurator is still slow, the problem may be structural – the bundle hierarchy, the PCM layout, or integration overhead – and a deeper review is the right next step.
What makes CML performance particularly tricky is that none of these issues are "wrong code." That is why these problems survive UAT and only surface when the catalog has grown past the threshold where unoptimized logic starts to matter. And by then, the cost of fixing it is higher than the cost of getting it right the first time.
If you are in that place now, the diagnostic checklist above is a starting point. If you want a second unbiased opinion on a model that is already in production, that is exactly what the Veloce CML health check engagement is designed for.
My configuration used to be fast and now it is slow. What changed?
Almost always: the catalog grew, or new constraints were added on top of a model that was fine at the original scale. CML performance degrades non-linearly – a model may run well up to a certain size and then cross a threshold where domain sizes and backtracks explode.
How do I know if my CML is actually the bottleneck?
Look at the RLM_CONFIGURATOR_STATS section of the Apex debug log. If Number of Backtracks is in the thousands and Total Execution Time is over a second, the solver is working too hard – and CML is likely the primary cause. If backtracks are low but execution is still slow, the issue may be elsewhere (PCM, pricing, integrations).
Can the Visual Builder produce performant CML?
Not reliably at scale. The generated output creates a unique type and a dedicated relation for every product – even when those products share the same attributes. At scale, that duplication costs the solver. For production performance, the CML usually needs to be reviewed and consolidated by hand.
Is there a way to profile CML the same way I would profile Apex?
The closest equivalents are the solver statistics in the debug log – total execution time, backtrack counts, constraint violation counts, and the per-variable choice point breakdown. There is no line-by-line profiler, but those statistics are enough to narrow a problem down to a specific area of the model.
Will my model break if I add domain bounds to attributes that did not have them?
Only if the actual values fall outside the bounds you set. The safe practice is to inspect the data (for numeric attributes, the range of values that ever occur) and set the domain slightly wider. Add domain bounds in a sandbox, run your regression suite, and confirm before deploying.
Is a full CML rewrite the only way to fix performance?
Rarely. Most performance issues come from a handful of targeted changes. A full rewrite is the last resort, not the starting point. The diagnostic checklist above is designed to find the targeted fixes first.