LaunchDarkly renewal calls have become one of the most dreaded calendar events in engineering leadership. The pattern is remarkably consistent: a team signs up at a reasonable price point, grows its flag usage organically over 18-24 months, and then receives a renewal quote that is 2-3x its current spend. Enterprise minimums of $70,000+ per year are standard. Some teams report renewal quotes exceeding $200,000 for what started as a $30,000 contract.
The instinct is to switch providers. But switching feature flag platforms is a 6-12 month migration project that touches every service, every SDK integration, and every developer workflow. It is expensive, risky, and disruptive. For most organizations, the smarter move is to reduce LaunchDarkly costs while staying on the platform.
The core insight is this: many LaunchDarkly customers are paying more than they need to because stale feature flags are still being evaluated in production, inflating Monthly Active User (MAU) counts and driving up costs. Cleaning up those flags is faster, cheaper, and less risky than migrating to a different provider.
This guide covers seven practical strategies for reducing your LaunchDarkly bill without switching platforms.
Understanding what drives LaunchDarkly costs
Before optimizing, you need to understand the billing model. LaunchDarkly prices primarily on two dimensions:
| Billing Dimension | What It Measures | Why It Matters |
|---|---|---|
| Monthly Active Users (MAU) | Unique contexts/users that evaluate at least one flag per month | This is the primary cost driver for most teams |
| Seats | Number of team members with dashboard access | Usually a smaller portion of the bill |
| Experimentation | A/B testing and feature experiments | Add-on, priced separately |
| Data Export | Event streaming for analytics | Add-on, priced separately |
MAU is where the money is. Every unique user (or context, in LaunchDarkly's newer terminology) that evaluates a flag during a billing period counts toward your MAU total. Here is the critical detail that most teams miss: a user who evaluates 1 flag and a user who evaluates 500 flags count the same toward MAU. But a user who evaluates zero flags does not count at all.
This means the optimization target is not "reduce the number of flag evaluations per user" -- it is "reduce the number of users who evaluate any flags at all, and eliminate unnecessary evaluation paths."
Strategy 1: Audit and remove stale flags
This is the single highest-impact action you can take. Stale flags -- flags that have been at 100% (fully rolled out) for weeks or months -- are still being evaluated by every user who hits the code path. Each evaluation confirms what is already known: the flag is on. But that evaluation still counts the user toward your MAU.
How stale flags inflate costs:
Consider a team with 50 active flags and 100,000 MAU. If 30 of those flags are stale (fully rolled out, serving no operational purpose), every user whose only flag evaluations come from those code paths is counted toward MAU purely to evaluate flags that return a constant value. If even 15% of your MAU are users who only interact with stale flag code paths and never touch an active flag, you are paying for 15,000 users who generate zero value from flag evaluation.
The audit process (a scripted sketch of the export-and-filter steps follows this list):
- Export your flag list from LaunchDarkly with creation dates and last-modified timestamps
- Identify flags at 100% rollout in production for more than 30 days
- Cross-reference with code to find flags where both the evaluation and the "off" path still exist in the codebase
- Prioritize by traffic -- high-traffic code paths with stale flags are the costliest
- Remove the flag from code and archive it in LaunchDarkly
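If you want to script the first two steps, LaunchDarkly's REST API exposes the flag list for a project. The sketch below (Node 18+, so the global fetch is available) pulls every flag and prints candidates whose production environment has been on and untouched for more than 30 days. The project key, environment key, and the response field names it reads are assumptions to verify against the current API docs, and the staleness check is a starting heuristic rather than a verdict.

```javascript
// stale-flag-audit.js -- a minimal sketch, not a drop-in tool.
// Assumes a LaunchDarkly API access token in LD_API_TOKEN and that your
// production environment key is "production". Field names (items,
// creationDate, environments[env].on / lastModified) follow the flags
// list response as documented at the time of writing -- verify them.
const PROJECT_KEY = 'my-project';   // adjust to your project key
const ENV_KEY = 'production';       // adjust to your environment key
const STALE_AFTER_DAYS = 30;

async function listStaleCandidates() {
  const res = await fetch(
    `https://app.launchdarkly.com/api/v2/flags/${PROJECT_KEY}`,
    { headers: { Authorization: process.env.LD_API_TOKEN } }
  );
  if (!res.ok) throw new Error(`LaunchDarkly API returned ${res.status}`);
  const { items = [] } = await res.json();

  const cutoff = Date.now() - STALE_AFTER_DAYS * 24 * 60 * 60 * 1000;
  return items.filter((flag) => {
    const env = flag.environments?.[ENV_KEY];
    // Heuristic: the flag is on in production and has not been touched
    // since the cutoff. A true "100% rollout" check would also inspect
    // targeting rules and fallthrough, which requires fetching each flag
    // individually -- omitted here to keep the sketch short.
    return env?.on === true && (env.lastModified ?? flag.creationDate) < cutoff;
  });
}

listStaleCandidates().then((flags) => {
  for (const flag of flags) {
    console.log(`${flag.key}\tcreated ${new Date(flag.creationDate).toISOString()}`);
  }
});
```

Cross-reference the output against Code References (Strategy 5) before removing anything from code.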
Quick wins to look for:
| Flag Type | Typical Age When Stale | Estimated MAU Impact |
|---|---|---|
| Release flags at 100% | 30+ days post-rollout | High (evaluated on every page load) |
| Concluded experiment flags | 14+ days post-conclusion | Medium (evaluated in experiment paths) |
| Migration flags (migration completed) | 60+ days post-migration | Medium to high |
| Flags for features that were removed | Any age | Low to medium |
| Flags with 0 code references | Any age | Zero (safe to archive immediately) |
A realistic cleanup of stale flags reduces MAU by 15-30% for most organizations, which translates directly to a 15-30% reduction in your LaunchDarkly bill.
Automating the stale flag pipeline
Manual flag cleanup is a quarterly chore that nobody enjoys and most teams skip. The organizations that maintain low flag counts automate the detection and removal process.
The pipeline looks like this (a sketch of the alerting stage follows the diagram):
Continuous Detection
├── Monitor flag status in LaunchDarkly (API polling or webhooks)
├── Track flag age and rollout percentage
├── Detect when a flag reaches 100% in production
└── Start a configurable grace period (e.g., 14-30 days)
↓
Automated Alerting
├── Notify flag owner when grace period expires
├── Create a cleanup ticket in your issue tracker
├── Escalate if no action within 7 days
└── Report stale flag counts in weekly engineering digest
↓
Automated Removal
├── Analyze code to find all flag evaluation points
├── Generate cleanup PR removing flag and dead code
├── Run tests to verify removal is safe
└── Submit PR for review
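Here is a hedged sketch of the middle stage, the alerting step, in Node. It assumes you already have a list of stale candidates (for example from the audit script in Strategy 1) annotated with the timestamp at which each flag reached 100%, and it files cleanup tickets through the GitHub issues API; the repository name, token variable, and field names on the stale-flag objects are placeholders.

```javascript
// stale-flag-alerts.js -- a minimal sketch of the alerting stage.
// Assumes each entry in staleFlags looks like { key, fullyRolledOutAt },
// where fullyRolledOutAt is a ms timestamp you recorded when the flag hit
// 100%, and a GitHub token with issue-creation permission in GITHUB_TOKEN.
const GRACE_PERIOD_DAYS = 14;           // configurable grace period
const REPO = 'your-org/your-service';   // placeholder repository

async function openCleanupTicket(flag) {
  const res = await fetch(`https://api.github.com/repos/${REPO}/issues`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      Accept: 'application/vnd.github+json',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      title: `Remove stale feature flag: ${flag.key}`,
      body: `Flag \`${flag.key}\` has been at 100% for more than ${GRACE_PERIOD_DAYS} days. Remove the evaluation and dead code path, then archive the flag in LaunchDarkly.`,
      labels: ['tech-debt', 'feature-flag-cleanup'],
    }),
  });
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
}

async function alertOnExpiredGracePeriods(staleFlags) {
  const cutoff = Date.now() - GRACE_PERIOD_DAYS * 24 * 60 * 60 * 1000;
  for (const flag of staleFlags) {
    // A real pipeline would dedupe against already-open tickets first.
    if (flag.fullyRolledOutAt < cutoff) {
      await openCleanupTicket(flag);
    }
  }
}
```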
FlagShark automates this entire pipeline. It monitors your repositories for feature flag additions and removals via GitHub PR analysis, tracks each flag's lifecycle from creation through rollout, and automatically generates cleanup PRs when flags become stale. The detection uses tree-sitter AST parsing across 11 languages, which means it understands the syntax of your code rather than relying on regex pattern matching -- so it correctly handles flag keys in string variables, destructured imports, and other patterns that simple text search misses.
The key advantage of automation is not just the time savings (though those are significant). It is the consistency. Automated systems do not forget to check, do not deprioritize cleanup during crunch periods, and do not let stale flags accumulate silently for months.
Strategy 2: Optimize client-side flag evaluation
Client-side SDKs are the largest MAU driver for most organizations because every user who loads your web application or mobile app triggers flag evaluations. Server-side evaluations, by contrast, often run against service accounts or system contexts, which contribute far fewer unique contexts to your MAU count.
The client-side MAU problem:
User visits your website
→ LaunchDarkly client SDK initializes
→ SDK evaluates ALL flags in the client configuration
→ User is counted as 1 MAU
→ This happens even if the user only sees a login page
Optimization techniques:
Reduce the client-side flag set
By default, the LaunchDarkly client SDK evaluates every flag associated with the client-side environment. If you have 200 flags and only 15 are relevant to the client, you are evaluating 185 unnecessary flags for every user.
Use LaunchDarkly's client-side flag availability setting to control which flags are available to client SDKs:
- Go to each flag's settings in LaunchDarkly
- Under "Client-side SDK availability," uncheck "SDKs using Client-side ID" for flags that are only used server-side
- This prevents the flag from being included in client-side evaluation payloads
Impact: Reducing the client-side flag set does not directly reduce MAU (a user evaluating 1 flag or 200 flags counts the same), but it reduces payload size and latency. More importantly, it makes it easier to identify which flags are actually driving client-side costs.
Delay evaluation for unauthenticated users
If your application has significant anonymous traffic (marketing pages, documentation, pricing pages), every anonymous visitor who triggers a flag evaluation counts as a unique MAU. For content-heavy sites, this can represent 60-80% of your total MAU.
Options to reduce anonymous MAU:
| Approach | Implementation | MAU Reduction Potential |
|---|---|---|
| Defer SDK initialization until after login | Only initialize LD client when user authenticates | 40-70% for apps with high anonymous traffic |
| Use server-side evaluation for anonymous paths | Evaluate flags server-side for marketing pages | 30-50% depending on architecture |
| Static rendering for flag-free pages | Pre-render pages that do not use flags | Variable, depends on content |
| Bootstrap with default values | Serve defaults without SDK call, lazy-load SDK | 20-40% for apps with gradual feature exposure |
The most effective approach depends on your architecture, but deferring SDK initialization until authentication is consistently the highest-impact change for SaaS applications.
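As a sketch of the first row in that table, here is what deferred initialization can look like with the browser SDK (launchdarkly-js-client-sdk). Until the user authenticates, the app reads flag values from a small hard-coded defaults object; only after login does it create a LaunchDarkly client, so anonymous visitors never register as MAU. The defaults object, client-side ID constant, and login hook are placeholders for your own code.

```javascript
import { initialize } from 'launchdarkly-js-client-sdk';

const LD_CLIENT_SIDE_ID = 'your-client-side-id'; // placeholder

// Hard-coded values used before (and instead of) any SDK call for anonymous
// visitors. Keep this list small: only what the logged-out experience needs.
const ANONYMOUS_DEFAULTS = {
  'new-pricing-page': false,
  'dark-mode': false,
};

let ldClient = null;

// Before login: no SDK, no network call, no MAU.
export function getFlag(key, fallback = false) {
  if (ldClient) return ldClient.variation(key, fallback);
  return key in ANONYMOUS_DEFAULTS ? ANONYMOUS_DEFAULTS[key] : fallback;
}

// Call this from your login success handler. Only authenticated users ever
// create a context, so only they count toward MAU.
export async function initFlagsAfterLogin(user) {
  ldClient = initialize(LD_CLIENT_SIDE_ID, {
    kind: 'user',
    key: user.id,
    email: user.email,
  });
  await ldClient.waitForInitialization(); // flags are ready once this resolves
}
```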
Use the Relay Proxy for server-side consolidation
LaunchDarkly's Relay Proxy acts as a local cache and evaluation engine. For server-side evaluations, the Relay Proxy can reduce the number of distinct connections to LaunchDarkly's servers and improve evaluation performance. While it does not directly reduce MAU, it can reduce costs in high-volume architectures by minimizing streaming connections and enabling more efficient evaluation patterns.
Strategy 3: Consolidate contexts and user keys
LaunchDarkly counts unique contexts (formerly "users") as MAU. If your application creates multiple contexts for the same real user, each context counts separately.
Common sources of context duplication:
| Duplication Source | Example | Impact |
|---|---|---|
| Anonymous + authenticated | Same user counted twice (once before login, once after) | 2x MAU per user |
| Multiple context kinds | User context + device context + session context | 2-3x MAU per user |
| Inconsistent user keys | Email in one service, user ID in another | 2x MAU per user |
| Test/staging environments sharing production keys | Development contexts counted against production MAU | 5-20% MAU inflation |
Fix anonymous-to-authenticated transitions
The most common duplication is anonymous users who later authenticate. If your application initializes the LaunchDarkly SDK with an anonymous context and then re-initializes with an authenticated context, the same user is counted twice.
Before (2 MAU per user):
// Page load: anonymous context
const anonymousContext = {
  kind: 'user',
  anonymous: true,
  key: generateRandomKey(), // New key every time
};
ldClient.identify(anonymousContext);

// After login: authenticated context
const authContext = {
  kind: 'user',
  key: user.id,
  email: user.email,
};
ldClient.identify(authContext);
After (1 MAU per user):
// Page load: use a stable device identifier
const context = {
  kind: 'user',
  key: getOrCreateDeviceId(), // Stable across sessions
  anonymous: true,
};
ldClient.identify(context);

// After login: identify with real user, same SDK instance
const authContext = {
  kind: 'user',
  key: user.id,
  email: user.email,
};
ldClient.identify(authContext);
// The stable device key means repeat anonymous visits reuse one context
// instead of generating a brand-new one every session
Better yet, defer initialization entirely until authentication (see Strategy 2).
Audit multi-context usage
LaunchDarkly's multi-context feature lets you evaluate flags against multiple context kinds simultaneously (user, organization, device). Each context kind with a unique key counts as a separate MAU.
If you are using multi-contexts, audit whether every context kind is necessary for flag targeting. If you are targeting flags based only on user attributes, you do not need device or organization contexts -- and removing them reduces your MAU count.
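For example, if your targeting rules only ever reference user attributes, a multi-context like the one below can be collapsed to a single user context. The user, deviceId, and org variables and the attribute names are illustrative.

```javascript
// Before: three context kinds, each unique key counted toward MAU.
const multiContext = {
  kind: 'multi',
  user: { key: user.id, email: user.email },
  device: { key: deviceId },       // only needed if rules target devices
  organization: { key: org.id },   // only needed if rules target orgs
};

// After: targeting only uses user attributes, so send only the user kind.
// If you still target on an org-level attribute, copy it onto the user context.
const userOnlyContext = {
  kind: 'user',
  key: user.id,
  email: user.email,
  orgPlan: org.plan,
};
ldClient.identify(userOnlyContext);
```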
Strategy 4: Negotiate your contract effectively
LaunchDarkly's published pricing is a starting point, not a final price. There is significant room for negotiation, especially at renewal.
Negotiation leverage points:
| Leverage | How to Use It |
|---|---|
| Competitor quotes | Get quotes from Unleash, Split.io, or Statsig. You do not need to intend to switch; you need the numbers for comparison |
| MAU reduction plan | Present your stale flag cleanup plan with projected MAU reductions. Negotiate pricing based on post-cleanup MAU, not current |
| Multi-year commitment | Offer a 2-3 year commitment in exchange for lower per-MAU pricing |
| Usage data | Show that your MAU is declining (after cleanup) to argue for a lower tier |
| Feature consolidation | Drop add-ons you do not use (Experimentation, Data Export) to reduce total spend |
| Annual prepayment | Offer to pay annually upfront for a discount |
Timing matters: Start the negotiation 90 days before renewal. If you wait until the renewal deadline, you lose leverage because switching costs are high and LaunchDarkly knows it.
Typical negotiation outcomes:
| Starting Position | Tactic | Result |
|---|---|---|
| 2x renewal quote | Competitor quotes + MAU cleanup plan | 1.2-1.4x current price |
| Enterprise minimum ($70K+) | Downgrade to Pro + API access | 40-60% cost reduction |
| Per-seat pricing too high | Consolidate to fewer seats + API automation | 20-30% seat cost reduction |
Strategy 5: Use LaunchDarkly's built-in cleanup features
LaunchDarkly has several built-in features specifically designed to help manage flag lifecycle and reduce waste. Most teams underutilize them.
Code References
Code References links flag keys to their locations in your codebase. This makes it immediately visible which flags have zero code references (safe to archive) and which flags are used in the most files (highest cleanup impact).
Setup is a one-time CI pipeline addition:
# GitHub Actions
- name: LaunchDarkly Code References
  uses: launchdarkly/find-code-references@v2
  with:
    accessToken: ${{ secrets.LD_ACCESS_TOKEN }}
    projKey: my-project
Once running, check the Code References tab weekly and archive any flag with zero references.
Flag health indicators
LaunchDarkly surfaces flag health data in the dashboard:
- Stale flags: Flags that have not been modified recently
- Inactive flags: Flags that are not being evaluated
- Ready for removal: Flags that Code References identifies as having no code usage
Make it a habit: Schedule a 30-minute weekly review of the "Ready for removal" list. Archiving 2-3 flags per week is 100-150 flags per year -- enough to prevent accumulation.
Flag triggers and scheduled changes
Use scheduled changes to automatically disable experiment flags after a set period. Use flag triggers to connect your monitoring to flag state, so flags that cause issues can be automatically disabled.
These features do not reduce costs directly, but they prevent flags from lingering in active states longer than necessary, which reduces the likelihood of stale flag accumulation.
Strategy 6: Shift evaluations server-side
If your architecture allows it, moving flag evaluations from client-side SDKs to server-side SDKs can substantially reduce MAU.
Why this works: Server-side evaluations use service accounts or system contexts, not individual user contexts. If your backend evaluates flags on behalf of users and serves pre-evaluated content, the backend service context counts as a single MAU instead of one MAU per user.
Before (10,000 MAU):
10,000 users → Client SDK → 10,000 unique contexts → 10,000 MAU
After (1 MAU for server-side + reduced client MAU):
10,000 users → Your server → Server SDK (1 service context) → 1 MAU
↓
Pre-evaluated flag values
↓
Rendered response to users → 0 client-side MAU for these flags
Practical implementation patterns:
| Pattern | How It Works | Best For |
|---|---|---|
| Server-side rendering with flag values | Evaluate flags server-side, embed results in HTML | Marketing sites, content pages |
| API-driven feature configuration | Backend endpoint returns feature config based on flags | SPAs, mobile apps |
| Edge evaluation | CDN or edge function evaluates flags before response | High-traffic, low-latency requirements |
The trade-off is latency and flexibility. Client-side evaluation enables instant flag updates without page reloads. Server-side evaluation requires a round trip but dramatically reduces MAU.
Hybrid approach: Use client-side evaluation for flags that require instant updates (experiments, progressive rollouts) and server-side evaluation for stable, fully-rolled-out features. This gives you the best of both worlds while minimizing MAU.
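Below is a minimal sketch of that hybrid pattern using Express and the Node server SDK (launchdarkly-node-server-sdk): stable, fully-rolled-out flags are evaluated against a single long-lived service context and served pre-evaluated, so they generate no client-side MAU, while experiments stay on the client SDK. The flag keys, the service-context key, and the endpoint path are placeholders; deciding which flags are stable enough to move server-side is your call.

```javascript
const express = require('express');
const LaunchDarkly = require('launchdarkly-node-server-sdk');

const app = express();
const ldClient = LaunchDarkly.init(process.env.LD_SDK_KEY);

// One long-lived service context. Because these flags no longer vary per
// user, there is no need to send a per-user context at all.
const SERVICE_CONTEXT = { kind: 'user', key: 'web-frontend-service' };

// Flags you have decided are stable enough to serve pre-evaluated.
const STABLE_FLAGS = ['new-checkout-flow', 'redesigned-nav'];

app.get('/api/feature-config', async (req, res) => {
  await ldClient.waitForInitialization(); // resolves immediately after first init
  const config = {};
  for (const key of STABLE_FLAGS) {
    // variation(flagKey, context, defaultValue)
    config[key] = await ldClient.variation(key, SERVICE_CONTEXT, false);
  }
  // The browser reads this config instead of initializing the client SDK for
  // these flags; experiments and progressive rollouts stay client-side.
  res.json(config);
});

app.listen(3000);
```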
Strategy 7: Right-size your plan and MAU tier
LaunchDarkly's pricing tiers have significant MAU breakpoints. Knowing where you fall -- and where the next breakpoint is -- helps you make informed decisions about which optimizations to prioritize.
General pricing structure (note: LaunchDarkly's pricing changes; verify current rates):
| Tier | Approximate MAU Included | Best For |
|---|---|---|
| Pro | Up to 1,000 MAU (then overage) | Small teams, early-stage |
| Enterprise | Custom MAU commitment | Mid-size to large organizations |
| Custom | Fully negotiated | Very high-volume use cases |
Key optimization: If you are slightly above a MAU tier boundary, the strategies in this guide (stale flag cleanup, context consolidation, server-side shifting) can push you below the boundary and into a lower pricing tier. A 15% MAU reduction might save you 40% on your bill if it crosses a tier boundary.
Tracking MAU over time:
Build a simple dashboard that tracks your MAU week over week. Correlate MAU changes with flag cleanup efforts to demonstrate the impact of your optimization work. This data is invaluable during contract negotiations.
import requests

def get_mau_usage(api_key, project_key, environment_key):
    """Fetch current MAU usage from LaunchDarkly's usage API."""
    # api_key is a LaunchDarkly API access token (not an SDK key)
    response = requests.get(
        "https://app.launchdarkly.com/api/v2/usage/mau/sdks",
        headers={"Authorization": api_key},
        params={"project": project_key, "environment": environment_key},
    )
    response.raise_for_status()
    return response.json()
Putting it all together: A 90-day cost reduction plan
Here is a practical timeline for implementing these strategies:
Days 1-14: Audit and baseline
| Action | Expected Outcome |
|---|---|
| Export complete flag inventory from LaunchDarkly | Full flag list with creation dates, status, and evaluation counts |
| Identify stale flags (100% for 30+ days) | Target list of flags to remove |
| Measure current MAU and identify top MAU drivers | Baseline for measuring improvement |
| Set up Code References if not already running | Visibility into code-level flag usage |
| Audit client-side vs. server-side flag distribution | Understand where MAU is generated |
Days 15-45: Quick wins
| Action | Expected MAU Impact |
|---|---|
| Archive flags with 0 code references | Minimal direct MAU impact, but reduces dashboard clutter |
| Remove stale release flags from code (top 20 by traffic) | 10-20% MAU reduction |
| Fix anonymous-to-authenticated context duplication | 5-15% MAU reduction |
| Disable client-side availability for server-only flags | Reduces payload, clarifies flag scope |
Days 46-75: Structural changes
| Action | Expected MAU Impact |
|---|---|
| Continue stale flag removal (next 30-50 flags) | Additional 5-10% MAU reduction |
| Defer client SDK initialization for unauthenticated paths | 10-30% MAU reduction (varies by traffic mix) |
| Implement server-side evaluation for stable features | 5-15% MAU reduction |
| Consolidate multi-context usage | 5-10% MAU reduction |
Days 76-90: Negotiation and automation
| Action | Expected Cost Impact |
|---|---|
| Compile MAU reduction data for negotiation | Evidence for lower pricing tier |
| Get competitor quotes for leverage | Negotiation leverage |
| Begin contract renegotiation | 20-40% bill reduction on top of MAU savings |
| Set up automated stale flag detection and cleanup | Prevents MAU re-inflation over time |
Realistic total outcome after 90 days:
| Metric | Estimated Improvement |
|---|---|
| MAU reduction from cleanup | 15-30% |
| MAU reduction from evaluation optimization | 10-25% |
| Contract renegotiation savings | 10-30% |
| Total cost reduction | 30-60% |
For a team paying $100,000/year for LaunchDarkly, this plan can realistically save $30,000-60,000 annually -- with the additional benefit of a cleaner codebase, faster builds, and fewer flag-related incidents.
The long game: Preventing cost creep
Cost reduction is a one-time effort. Cost prevention is an ongoing practice. The organizations that maintain low LaunchDarkly bills do three things consistently:
- They enforce flag expiration. Every non-permanent flag gets an expiration date at creation time. When the date passes, automated systems notify the owner and create cleanup tickets (a sketch of this kind of check follows the list).
- They automate cleanup. Tools like FlagShark continuously monitor for stale flags and generate cleanup PRs automatically. This prevents the slow accumulation that leads to 2x renewal quotes.
- They track MAU as a first-class metric. MAU appears on engineering dashboards alongside build times and deployment frequency. When MAU creeps up, the team investigates immediately rather than discovering the increase at renewal time.
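What enforcement can look like in practice: a short CI check that fails the build when a flag passes its declared expiration date. The flags.json manifest format here is entirely hypothetical; the point is that the expiration lives next to the flag definition and a machine, not a person, notices when it lapses.

```javascript
// check-flag-expiry.js -- run in CI. A sketch, assuming a hypothetical
// flags.json manifest your team maintains alongside the code, e.g.:
// [{ "key": "new-checkout-flow", "owner": "payments", "expires": "2025-09-01" }]
const fs = require('fs');

const flags = JSON.parse(fs.readFileSync('flags.json', 'utf8'));
const now = new Date();

const expired = flags.filter(
  (flag) => flag.expires && new Date(flag.expires) < now
);

if (expired.length > 0) {
  console.error('Expired feature flags -- remove or extend before merging:');
  for (const flag of expired) {
    console.error(`  ${flag.key} (owner: ${flag.owner}, expired ${flag.expires})`);
  }
  process.exit(1); // fail the build so cleanup cannot be silently deferred
}
```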
LaunchDarkly is a powerful platform, and for most teams, switching providers is the wrong response to rising costs. The right response is to stop paying for stale flags that generate no value, optimize how and where you evaluate flags, and negotiate from a position of data rather than surprise. The strategies in this guide can reduce your LaunchDarkly bill by 30-60% in 90 days -- without writing a single line of migration code.