LaunchDarkly bills based on Monthly Active Users (now called Monthly Context Instances) -- the number of unique contexts that evaluate at least one flag during a billing period. At scale, every unnecessary evaluation path translates to real cost. If your contract is based on MAU/MCI tiers, cleaning up stale flags can meaningfully reduce your bill.
The math is straightforward: if a significant portion of your MAU is driven by stale flag evaluations that return a constant value, you are paying more than you need to. In our experience working with engineering teams, it is common for a quarter or more of flags to be stale at any given time, and those stale flags often touch high-traffic code paths.
This guide is a technical deep-dive into exactly how MAU accumulates, what inflates it, and how to systematically reduce it through code changes, SDK configuration, and flag lifecycle automation.
How LaunchDarkly MAU actually works
Before optimizing, you need to understand the billing mechanics at a precise level. The details matter because small misunderstandings compound into large cost discrepancies.
What counts as an MAU
A Monthly Active User is a unique context key that appears in at least one flag evaluation during the billing period. The critical details:
| Rule | Implication |
|---|---|
| A context is counted once per month regardless of how many flags it evaluates | 1 evaluation = 1 MAU. 10,000 evaluations = 1 MAU. |
| Anonymous contexts with unique keys each count as separate MAU | Random anonymous IDs generate unbounded MAU |
| Multi-context evaluations count as a single MCI (Monthly Context Instance) | {user: "123", org: "456"} = 1 MCI for billing purposes |
| Server-side and client-side evaluations both count | Moving evaluations server-side only helps if you change the context |
| Evaluations in all environments count toward MAU (unless on a separate project with a separate contract) | Your staging environment may be inflating your bill |
| Contexts that evaluate only archived flags still count | Archiving in the dashboard does not stop code-level evaluations |
That last point is worth emphasizing. Archiving a flag in LaunchDarkly does not remove it from your code. If your application still calls variation() with an archived flag key, the SDK still evaluates it (returning the default value), and the context still counts toward MAU. Archiving without code cleanup is cosmetic -- it cleans the dashboard but not the bill.
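A minimal sketch with the Python server SDK makes the point concrete (the SDK key, flag key, and context key below are placeholders):
import ldclient
from ldclient import Context
from ldclient.config import Config
ldclient.set_config(Config("sdk-key-here"))  # Placeholder SDK key
client = ldclient.get()
context = Context.builder("user-123").build()
# "release-old-feature" was archived in the dashboard months ago, so the SDK
# no longer has it in its flag store. The call below returns the default value
# (False) -- but it still emits an evaluation event, and "user-123" still
# counts toward MAU/MCI for this billing period.
value = client.variation("release-old-feature", context, False)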
The anatomy of a flag evaluation
Here is what happens when your application evaluates a flag:
Application calls ldClient.variation("my-flag", context, defaultValue)
│
├─ SDK checks local flag store (synced from LD servers via streaming/polling)
│ ├─ Flag found → Evaluate targeting rules → Return value
│ └─ Flag not found → Return defaultValue (flag may be archived or deleted)
│
├─ SDK records evaluation event
│ ├─ Context key + flag key + timestamp
│ └─ Events are batched and sent to LD servers periodically
│
└─ LD servers process events
├─ Deduplicate context keys within the billing period
└─ Increment MAU counter for unique contexts
Key insight: The SDK sends evaluation events regardless of whether the flag is active, archived, or even exists in LaunchDarkly. As long as your code calls variation(), the context is counted.
Identifying your MAU composition
Use the LaunchDarkly API to understand where your MAU comes from:
import requests
from datetime import datetime
LD_API_KEY = "api-key-here"
PROJECT_KEY = "default"
ENVIRONMENT_KEY = "production"
def get_flag_evaluation_stats():
"""Fetch evaluation statistics for all flags."""
headers = {"Authorization": LD_API_KEY}
# Get all flags
flags_response = requests.get(
f"https://app.launchdarkly.com/api/v2/flags/{PROJECT_KEY}",
headers=headers,
params={"env": ENVIRONMENT_KEY, "summary": True}
)
flags = flags_response.json()["items"]
flag_stats = []
for flag in flags:
env_data = flag.get("environments", {}).get(ENVIRONMENT_KEY, {})
created = datetime.fromisoformat(
flag["creationDate"].replace("Z", "+00:00")
)
age_days = (datetime.now(created.tzinfo) - created).days
flag_stats.append({
"key": flag["key"],
"age_days": age_days,
"on": env_data.get("on", False),
"last_modified": env_data.get("lastModified"),
"version": env_data.get("version", 0),
"archived": flag.get("archived", False),
"tags": flag.get("tags", []),
"client_side_availability": flag.get(
"clientSideAvailability", {}
),
})
return flag_stats
def identify_stale_flags(flag_stats, stale_threshold_days=30):
"""Identify flags that are stale and inflating MAU."""
stale = []
for flag in flag_stats:
if flag["archived"]:
continue
# Flag is ON and has not been modified recently
if flag["on"] and flag["age_days"] > stale_threshold_days:
last_mod = flag.get("last_modified")
if last_mod:
mod_date = datetime.fromtimestamp(last_mod / 1000)
days_since_mod = (datetime.now() - mod_date).days
if days_since_mod > stale_threshold_days:
stale.append({
**flag,
"days_since_modified": days_since_mod,
"client_exposed": flag["client_side_availability"].get(
"usingEnvironmentId", False
),
})
# Sort by client exposure first (higher MAU impact), then by age
stale.sort(key=lambda f: (
not f["client_exposed"], # Client-exposed flags first
-f["days_since_modified"] # Then oldest first
))
return stale
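Putting the two functions together, a minimal driver for the audit might look like this (the output format is illustrative):
if __name__ == "__main__":
    stats = get_flag_evaluation_stats()
    stale_flags = identify_stale_flags(stats, stale_threshold_days=30)
    print(f"Total flags: {len(stats)}, stale candidates: {len(stale_flags)}")
    for flag in stale_flags[:20]:  # Top 20 highest-priority targets
        exposure = "client" if flag["client_exposed"] else "server"
        print(
            f"  {flag['key']}: unchanged for {flag['days_since_modified']} days "
            f"({exposure}-side)"
        )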
Run this audit and you will typically find that 25-40% of your flags qualify as stale. The flags marked client_exposed: True are the highest-priority targets because they drive MAU from every user who loads your front-end application.
The stale flag MAU problem in detail
Stale flags inflate MAU through a deceptively simple mechanism: code that should have been removed keeps running, and every evaluation it performs counts the user toward MAU.
How a single stale flag costs money
Consider a React application with a feature flag that controls a redesigned checkout flow. The flag was rolled out to 100% three months ago:
// This code evaluates the flag for every user who reaches checkout
function CheckoutPage({ cart }: { cart: Cart }) {
const showNewCheckout = useFlags()['release-new-checkout'];
if (showNewCheckout) {
return <NewCheckoutFlow cart={cart} />;
}
return <LegacyCheckoutFlow cart={cart} />;
}
The flag has been true for every user for 90 days. The LegacyCheckoutFlow component is dead code. But every user who visits the checkout page evaluates this flag, and that evaluation counts them as an MAU.
Now multiply this across your application. If you have 30 stale flags scattered across your front end, and your application gets 80,000 unique monthly visitors, all 80,000 visitors are guaranteed to be MAU -- even if only 50,000 of them interact with active, meaningful flags.
The 30,000 users who only hit stale flag paths are pure waste: you are paying MAU fees so they can evaluate flags that return constant values.
The cascading effect of stale client-side flags
Client-side flags are especially costly because the LaunchDarkly client SDK evaluates the entire client-side flag set when it initializes. This means:
User visits your site
→ Client SDK initializes
→ SDK downloads flag values for ALL client-side enabled flags
→ SDK evaluates each flag for the user's context
→ User counts as MAU
Even if the user never navigates to a page that uses a specific flag, the flag is evaluated during SDK initialization if it is marked as client-side available. This is why client-side flag hygiene is the single highest-leverage MAU optimization.
Audit your client-side flag count:
# Using the LaunchDarkly API, count client-side enabled flags
curl -s -H "Authorization: $LD_API_KEY" \
"https://app.launchdarkly.com/api/v2/flags/$PROJECT_KEY?summary=true" | \
jq '[.items[] | select(.clientSideAvailability.usingEnvironmentId == true)] | length'
If this number is significantly larger than the number of flags actually used in your front-end code, you have flags that are client-side available but only used server-side (or not used at all). Disabling client-side availability for these flags is a zero-risk optimization.
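If you prefer to script that change, client-side availability can be toggled through the REST API with a JSON Patch request. A sketch (the patch path reflects the flag's clientSideAvailability property; verify it against the current API docs before running this against production flags):
import requests
def disable_client_side(project_key, flag_key, api_key):
    """Turn off client-side (environment ID) availability for a flag."""
    response = requests.patch(
        f"https://app.launchdarkly.com/api/v2/flags/{project_key}/{flag_key}",
        headers={
            "Authorization": api_key,
            "Content-Type": "application/json",
        },
        json=[{
            "op": "replace",
            "path": "/clientSideAvailability/usingEnvironmentId",
            "value": False,
        }],
    )
    response.raise_for_status()
    return response.json()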
Technical strategies for reducing MAU
Strategy 1: Eliminate stale flag evaluations from code
The most direct way to reduce MAU is to remove stale flag evaluations from your codebase. Every flag evaluation you remove is one less reason for a user to count as an MAU.
Step 1: Identify stale flags with the LaunchDarkly API
Use the audit script from the previous section to generate a prioritized list of stale flags.
Step 2: Verify the flag is safe to remove
Before removing any flag from code, verify:
def is_flag_safe_to_remove(flag_key, project_key, environment_key, api_key):
"""Check if a flag is safe to remove from code."""
headers = {"Authorization": api_key}
base = "https://app.launchdarkly.com/api/v2"
# 1. Check flag is ON and has been stable
flag = requests.get(
f"{base}/flags/{project_key}/{flag_key}",
headers=headers
).json()
env = flag["environments"][environment_key]
if not env["on"]:
return False, "Flag is OFF -- removing would change behavior"
# 2. Check for active targeting rules (beyond "serve true to all")
rules = env.get("rules", [])
if len(rules) > 0:
return False, f"Flag has {len(rules)} targeting rules -- may still be in rollout"
# 3. Check fallthrough is serving the expected value
fallthrough = env.get("fallthrough", {})
variation_index = fallthrough.get("variation")
if variation_index is None:
return False, "Fallthrough uses percentage rollout -- not fully rolled out"
# 4. Check for prerequisites (other flags depend on this one)
prereqs = flag.get("prerequisites", [])
if prereqs:
return False, f"Flag has prerequisites: {prereqs}"
# 5. Check evaluation count is non-zero (flag is actually being evaluated)
# A flag with zero evaluations may have already been removed from code
insights = requests.get(
f"{base}/flags/{project_key}/{flag_key}/insights",
headers=headers
)
if insights.status_code == 200:
eval_count = insights.json().get("totalEvaluations", 0)
if eval_count == 0:
return True, "Flag has zero evaluations -- safe to archive in LD"
return True, "Flag appears safe to remove from code"
Step 3: Remove the flag from code
The removal itself depends on how the flag is used. The three most common patterns:
Pattern 1: Simple boolean guard (keep the "on" path)
// Before: flag guards a feature
if enabled, _ := ldClient.BoolVariation("release-new-checkout", context, false); enabled {
return handleNewCheckout(req)
}
return handleLegacyCheckout(req)
// After: remove flag, keep the "on" path
return handleNewCheckout(req)
// Also delete handleLegacyCheckout entirely
Pattern 2: Multi-variate flag (replace with the served value)
# Before: flag returns a string variant
checkout_version = ld_client.variation(
"checkout-version", context, "v1"
)
if checkout_version == "v3":
return render_v3_checkout(cart)
elif checkout_version == "v2":
return render_v2_checkout(cart)
else:
return render_v1_checkout(cart)
# After: replace with the value that has been served to 100% of users
return render_v3_checkout(cart)
# Also delete render_v2_checkout and render_v1_checkout
Pattern 3: Flag in a configuration object
// Before: flag mixed into a config object
const config = {
enableNewSearch: ldClient.variation('release-new-search', context, false),
maxResults: ldClient.variation('config-max-results', context, 50),
enableCaching: true,
};
// After: replace stale flags with their constant values
const config = {
enableNewSearch: true, // Was 100% ON for 4 months
maxResults: ldClient.variation('config-max-results', context, 50), // Still active
enableCaching: true,
};
The difficulty scales with codebase size. In a small application, you can do this manually in an afternoon. In a codebase with 200+ stale flags across 12 services in 3 languages, manual removal takes weeks of engineering time and is error-prone.
This is where automated cleanup tooling becomes essential. FlagShark detects stale flags using tree-sitter AST parsing, which means it understands your code's syntax tree rather than doing text search. It identifies the exact code locations where a flag is evaluated, determines the served value, and generates a cleanup PR that removes the flag evaluation, the dead code branch, and any unused imports. The PR is ready for review -- you verify and merge it rather than writing it from scratch.
The AST approach handles cases that regex-based detection misses:
| Pattern | Regex Detection | AST Detection |
|---|---|---|
| Flag key in a variable | Misses const key = "my-flag"; variation(key) | Resolves variable to string literal |
| Destructured SDK imports | May miss const { variation } = useLDClient() | Follows import bindings |
| Flag in a ternary expression | Matches string but cannot determine code branches | Understands expression structure |
| Flag in a switch statement | Cannot determine which case to keep | Parses switch semantics |
| Flag key built from concatenation | Misses "release-" + featureName | Limited (dynamic keys are inherently hard) |
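To see why the syntax tree matters, here is a toy sketch using Python's built-in ast module (FlagShark uses tree-sitter so the same idea works across languages): it resolves a variable back to its string literal before deciding whether a call site references the flag, which a regex on the call site alone would miss.
import ast
SOURCE = '''
FLAG_KEY = "release-new-checkout"
if ld_client.variation(FLAG_KEY, context, False):
    do_new_checkout()
'''
def find_flag_references(source, flag_key):
    """Find .variation() calls referencing flag_key, even through a variable."""
    tree = ast.parse(source)
    assignments = {}  # variable name -> string literal value
    hits = []
    for node in ast.walk(tree):
        # Record simple string assignments like FLAG_KEY = "release-new-checkout"
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Constant):
            if isinstance(node.value.value, str):
                for target in node.targets:
                    if isinstance(target, ast.Name):
                        assignments[target.id] = node.value.value
        # Inspect the first argument of any .variation(...) call
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            if node.func.attr == "variation" and node.args:
                arg = node.args[0]
                if isinstance(arg, ast.Constant) and arg.value == flag_key:
                    hits.append(node.lineno)
                elif isinstance(arg, ast.Name) and assignments.get(arg.id) == flag_key:
                    hits.append(node.lineno)  # a regex on this line sees no flag key
    return hits
print(find_flag_references(SOURCE, "release-new-checkout"))  # [3]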
Strategy 2: Reduce anonymous MAU with context key management
Anonymous users are the largest hidden source of MAU inflation. Every unique anonymous context key is a separate MAU, and the default behavior of most LaunchDarkly SDK integrations generates a new random key for each anonymous user.
The problem in concrete numbers:
| Scenario | Monthly Visitors | Anonymous MAU | Authenticated MAU | Total MAU |
|---|---|---|---|---|
| Default SDK behavior (random anonymous keys) | 100,000 | 100,000 | 20,000 | 120,000 |
| Stable device-based anonymous keys | 100,000 | 60,000 (returning visitors reuse key) | 20,000 | 80,000 |
| Defer SDK until authentication | 100,000 | 0 | 20,000 | 20,000 |
The difference between the first and third scenario is 100,000 MAU -- which at $1.00/MAU represents $100,000 per year.
Implementation: Stable anonymous keys
If you need flag evaluation for anonymous users (for example, an experiment on your landing page), use a stable key rather than a random one:
// Use localStorage to persist anonymous key across sessions
function getStableAnonymousKey(): string {
const STORAGE_KEY = 'ld-anonymous-key';
let key = localStorage.getItem(STORAGE_KEY);
if (!key) {
key = crypto.randomUUID();
localStorage.setItem(STORAGE_KEY, key);
}
return key;
}
// Initialize SDK with stable key
const context: LDContext = {
kind: 'user',
key: getStableAnonymousKey(),
anonymous: true,
};
const ldClient = initialize('client-side-id', context);
This ensures returning visitors reuse their anonymous key, reducing MAU by the percentage of your traffic that consists of repeat visitors (typically 30-60%).
Implementation: Deferred initialization
If anonymous users do not need flag evaluations, defer SDK initialization entirely:
// LaunchDarkly provider that only initializes after authentication
import { LDProvider } from 'launchdarkly-react-client-sdk';
function AuthenticatedLDProvider({ children }: { children: React.ReactNode }) {
const { user, isAuthenticated } = useAuth();
if (!isAuthenticated) {
// No SDK initialization for anonymous users = 0 anonymous MAU
return <>{children}</>;
}
return (
<LDProvider
clientSideID="your-client-side-id"
context={{
kind: 'user',
key: user.id,
email: user.email,
name: user.name,
}}
>
{children}
</LDProvider>
);
}
Trade-off: Anonymous users cannot be targeted by flags. If you run experiments on your marketing site or landing pages, you need anonymous evaluation. If your flags only control features for authenticated users, deferred initialization is the correct choice.
Strategy 3: Simplify context usage for cleaner evaluation
While LaunchDarkly's multi-context feature bills a multi-context evaluation as a single MCI (Monthly Context Instance), simplifying your context structure still has benefits: it reduces SDK payload size, simplifies targeting rules, and makes flag cleanup easier.
When to simplify multi-context to single-context:
If you only target flags based on user attributes, you may not need multi-context at all. Moving auxiliary attributes onto the user context reduces complexity without changing billing:
// Before: 3 context kinds, but flags only target by user
const context: LDMultiKindContext = {
kind: 'multi',
user: { key: user.id, name: user.name },
organization: { key: org.id, plan: org.plan },
device: { key: deviceId, type: deviceType },
};
// After: single context with all attributes
const context: LDSingleKindContext = {
kind: 'user',
key: user.id,
name: user.name,
email: user.email,
orgId: org.id, // Org as an attribute, not a separate context
orgPlan: org.plan,
deviceType: deviceType,
};
// Simpler to reason about and easier to clean up
When multi-context is still the right choice: If you target flags based on organization-level attributes (e.g., rolling out a feature to specific organizations regardless of user), multi-context gives you the granularity you need. The key is to only include context kinds you actually use in targeting rules.
Strategy 4: Separate non-production environments
If your staging, QA, or development environments share a LaunchDarkly project with production, their evaluations count toward the same MAU total.
Common sources of non-production MAU:
| Source | Typical MAU Impact | Fix |
|---|---|---|
| Staging environment with real-ish data | 5,000-20,000 MAU | Separate LD project for staging |
| Automated test suites evaluating flags | 1,000-10,000 MAU | Mock the SDK in tests |
| Local development with live SDK | 50-200 MAU per developer per month | Use offline mode or test data |
| Load testing against staging | 10,000-100,000 MAU per test run | Mock SDK or use separate project |
Mock the SDK in tests:
// Vitest: mock LaunchDarkly to avoid MAU in test runs (for Jest, use jest.mock)
vi.mock('launchdarkly-react-client-sdk', () => ({
useFlags: () => ({
'release-new-checkout': true,
'experiment-pricing-v2': 'control',
// Return the production values for stale flags
// Return controlled values for flags under test
}),
useLDClient: () => ({
variation: (key: string, defaultValue: unknown) => {
const testFlags: Record<string, unknown> = {
'release-new-checkout': true,
'experiment-pricing-v2': 'control',
};
return testFlags[key] ?? defaultValue;
},
}),
}));
Use offline mode for local development:
// Go: Initialize SDK in offline mode for local development
import (
    "os"
    "time"

    ld "github.com/launchdarkly/go-server-sdk/v7"
    "github.com/launchdarkly/go-server-sdk/v7/ldcomponents"
    "github.com/launchdarkly/go-server-sdk/v7/ldfiledata"
)
func initLDClient() *ld.LDClient {
if os.Getenv("USE_LOCALSTACK") == "true" {
// Offline mode: reads flags from a local file, zero MAU
// (auto-reload on file change can be added with the ldfilewatch package)
dataSource := ldfiledata.DataSource().FilePaths("./flags.json")
config := ld.Config{
DataSource: dataSource,
Events: ldcomponents.NoEvents(), // No events = no MAU
}
client, _ := ld.MakeCustomClient("offline", config, 5*time.Second)
return client
}
// Production: normal initialization
client, _ := ld.MakeClient(os.Getenv("LD_SDK_KEY"), 5*time.Second)
return client
}
Strategy 5: Identify and fix high-cost flag patterns
Some code patterns generate disproportionate MAU relative to their value. Identifying these patterns requires combining LaunchDarkly's evaluation data with code analysis.
Pattern: Flag evaluation in a hot loop
# BAD: Evaluating a flag inside a loop that processes each user
# If this runs server-side with per-user contexts, each user = 1 MAU
def process_notifications(users):
for user in users:
context = create_ld_context(user)
if ld_client.variation("enable-new-notifications", context, False):
send_new_notification(user)
else:
send_legacy_notification(user)
# BETTER: Evaluate once with a service context, or evaluate outside the loop
def process_notifications(users):
# Use a single service context for the batch decision
service_context = Context.builder("notification-service").build()
use_new_notifications = ld_client.variation(
"enable-new-notifications", service_context, False
)
for user in users:
if use_new_notifications:
send_new_notification(user)
else:
send_legacy_notification(user)
In the first example, if this function processes 50,000 users, it generates 50,000 MAU. In the second example, it generates 1 MAU (the service context). The flag evaluation result is the same in both cases if the flag is 100% ON or 100% OFF.
Pattern: Flag evaluation on every API request
// BAD: Evaluating a flag in middleware that runs on every request
app.use(async (req, res, next) => {
const context = contextFromRequest(req);
const useNewRateLimiter = await ldClient.variation(
'ops-new-rate-limiter', context, false
);
if (useNewRateLimiter) {
applyNewRateLimiter(req, res, next);
} else {
applyLegacyRateLimiter(req, res, next);
}
});
// BETTER: Cache the flag value and re-evaluate periodically
let cachedRateLimiterFlag = false;
let lastEvaluated = 0;
app.use(async (req, res, next) => {
const now = Date.now();
// Re-evaluate every 60 seconds with a service context
if (now - lastEvaluated > 60_000) {
const serviceContext = { kind: 'service', key: 'api-server' };
cachedRateLimiterFlag = await ldClient.variation(
'ops-new-rate-limiter', serviceContext, false
);
lastEvaluated = now;
}
if (cachedRateLimiterFlag) {
applyNewRateLimiter(req, res, next);
} else {
applyLegacyRateLimiter(req, res, next);
}
});
Pattern: Redundant evaluation across components
// BAD: Same flag evaluated in multiple components during a single render
function App() {
return (
<>
<Header /> {/* Evaluates 'release-new-nav' */}
<Sidebar /> {/* Evaluates 'release-new-nav' again */}
<Content /> {/* Evaluates 'release-new-nav' a third time */}
</>
);
}
// BETTER: Evaluate once at the top level, pass the value down
function App() {
const { 'release-new-nav': useNewNav } = useFlags();
return (
<>
<Header useNewNav={useNewNav} />
<Sidebar useNewNav={useNewNav} />
<Content useNewNav={useNewNav} />
</>
);
}
This does not reduce MAU directly (the user counts as 1 MAU regardless of evaluation count), but it reduces evaluation events, streaming bandwidth, and SDK overhead. More importantly, it centralizes the flag usage to a single location, making future cleanup faster.
Building a MAU monitoring system
Reducing MAU is a one-time effort without monitoring to prevent regression. Build a simple system that tracks MAU over time and alerts when it increases unexpectedly.
Tracking MAU via the LaunchDarkly API
import requests
import json
from datetime import datetime
def track_mau_metrics(api_key, project_key):
"""Collect MAU metrics for monitoring and alerting."""
headers = {"Authorization": api_key}
base = "https://app.launchdarkly.com/api/v2"
# Get MAU usage
usage = requests.get(
f"{base}/usage/mau/sdks",
headers=headers,
params={"project": project_key}
).json()
# Get flag count by status
flags = requests.get(
f"{base}/flags/{project_key}",
headers=headers,
params={"summary": True}
).json()["items"]
active_flags = [f for f in flags if not f.get("archived", False)]
client_flags = [
f for f in active_flags
if f.get("clientSideAvailability", {}).get("usingEnvironmentId")
]
metrics = {
"timestamp": datetime.now().isoformat(),
"mau": {
"total": usage.get("_usage", [{}])[-1].get("total", 0),
},
"flags": {
"total_active": len(active_flags),
"client_side": len(client_flags),
"stale_candidates": len([
f for f in active_flags
if is_likely_stale(f)
]),
},
"estimated_waste_pct": calculate_waste_percentage(active_flags),
}
return metrics
def is_likely_stale(flag):
"""Heuristic to identify likely stale flags."""
env = flag.get("environments", {}).get("production", {})
if not env.get("on"):
return False
# Flag is ON with no targeting rules (serving to everyone)
rules = env.get("rules", [])
if len(rules) > 0:
return False
# Check age
created = datetime.fromisoformat(
flag["creationDate"].replace("Z", "+00:00")
)
age_days = (datetime.now(created.tzinfo) - created).days
return age_days > 30
def calculate_waste_percentage(flags):
"""Estimate percentage of MAU driven by stale flags."""
total = len(flags)
if total == 0:
return 0
stale = len([f for f in flags if is_likely_stale(f)])
# Rough estimate: stale flags as percentage of total
# Actual MAU impact depends on traffic patterns
return round((stale / total) * 100, 1)
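A minimal consumer of these metrics -- run weekly from cron or CI -- can compare usage against your contracted tier and surface regressions. The contracted amount is a placeholder, and the thresholds mirror the alerting table below:
CONTRACTED_MAU = 50_000  # Placeholder: your contracted MAU/MCI tier
def check_mau_health(api_key, project_key):
    metrics = track_mau_metrics(api_key, project_key)
    mau = metrics["mau"]["total"]
    utilization = mau / CONTRACTED_MAU if CONTRACTED_MAU else 0
    if utilization > 0.95:
        print(f"ALERT: MAU {mau} is above 95% of the contracted tier")
    elif utilization > 0.80:
        print(f"WARNING: MAU {mau} is above 80% of the contracted tier")
    if metrics["estimated_waste_pct"] > 30:
        print(f"WARNING: {metrics['estimated_waste_pct']}% of flags look stale")
    return metrics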
Setting up alerts
Track these metrics weekly and alert on regressions:
| Metric | Healthy | Warning Threshold | Alert Threshold |
|---|---|---|---|
| Total MAU | At or below contracted amount | > 80% of contracted MAU | > 95% of contracted MAU |
| MAU week-over-week change | < 5% increase | 5-15% increase | > 15% increase |
| Stale flag count | < 15% of total flags | 15-30% of total flags | > 30% of total flags |
| Client-side stale flags | 0 | 1-5 | > 5 |
| New flags created vs. removed (monthly) | Ratio < 1.5:1 | Ratio 1.5:1 to 3:1 | Ratio > 3:1 |
The "new flags created vs. removed" ratio is the leading indicator. If you are creating flags faster than you are removing them, MAU inflation is inevitable. A healthy ratio is below 1.5:1 -- for every 3 flags created, at least 2 should be removed within 90 days.
A systematic MAU reduction playbook
Here is a step-by-step playbook for reducing MAU, ordered by effort-to-impact ratio:
Week 1: Low-effort, high-impact
| Action | Effort | MAU Impact |
|---|---|---|
| Archive flags with 0 evaluations in LaunchDarkly | 1 hour | Cleans dashboard; no direct MAU impact |
| Disable client-side availability for server-only flags | 2 hours | Reduces client SDK payload; marginal MAU reduction |
| Mock SDK in test suites | 4 hours | Eliminates test-driven MAU (1,000-10,000) |
| Set up offline mode for local development | 2 hours | Eliminates developer-driven MAU (50-200/dev) |
Week 2-3: Medium-effort, high-impact
| Action | Effort | MAU Impact |
|---|---|---|
| Remove top 10 stale flags from code (by traffic) | 8-16 hours | 5-15% MAU reduction |
| Implement stable anonymous keys | 4 hours | 20-40% reduction in anonymous MAU |
| Consolidate multi-context to fewer context kinds | 4-8 hours | No direct MCI change; smaller SDK payloads and simpler targeting and cleanup |
Week 4-6: Higher-effort, structural improvements
| Action | Effort | MAU Impact |
|---|---|---|
| Remove remaining stale flags from code | 20-40 hours | Additional 10-20% MAU reduction |
| Defer SDK initialization for unauthenticated users | 8-16 hours | Eliminates anonymous MAU entirely |
| Move stable feature flags to server-side evaluation | 16-24 hours | Reduces client MAU proportionally |
| Set up automated stale flag detection and cleanup | 4 hours (with tooling) | Prevents MAU re-inflation |
Expected cumulative results
| Phase | Cumulative MAU Reduction | Typical Dollar Impact (at $1/MAU) |
|---|---|---|
| After Week 1 | 5-10% | $2,500-5,000/year |
| After Week 3 | 25-40% | $12,500-20,000/year |
| After Week 6 | 40-60% | $20,000-30,000/year |
These numbers assume a starting point of 50,000 MAU. Scale proportionally for your organization. At 200,000 MAU, the Week 6 savings are $80,000-120,000 per year.
Preventing MAU re-inflation
The most common failure mode is not the initial reduction -- it is the re-inflation that happens over the following 6-12 months as new stale flags accumulate. Preventing this requires two things: process and automation.
Process: Flag lifecycle governance
| Rule | Enforcement |
|---|---|
| Every flag gets an expiration date at creation | PR review checklist, LaunchDarkly tag enforcement |
| Stale flags are reviewed weekly | Automated weekly report to engineering leads |
| Flag removal PRs are created within 14 days of full rollout | Automated ticket creation |
| Client-side flag count is tracked as a team metric | Dashboard metric, reviewed in sprint retros |
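A lightweight way to back these rules with automation is a weekly CI or cron job that reuses the audit functions from earlier in this guide and fails loudly when the stale-flag percentage crosses the agreed threshold -- a sketch:
import sys
MAX_STALE_PCT = 15  # The "healthy" threshold from the alerting table above
def weekly_stale_flag_gate():
    stats = get_flag_evaluation_stats()
    stale = identify_stale_flags(stats)
    stale_pct = round(len(stale) / len(stats) * 100, 1) if stats else 0
    print(f"{len(stale)} of {len(stats)} flags look stale ({stale_pct}%)")
    for flag in stale[:10]:
        print(f"  - {flag['key']} (unchanged for {flag['days_since_modified']} days)")
    if stale_pct > MAX_STALE_PCT:
        sys.exit(1)  # Fail the job so the regression is visible to the team
if __name__ == "__main__":
    weekly_stale_flag_gate()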
Automation: Continuous cleanup
Manual governance works for small teams but breaks down at scale. At 50+ engineers creating flags across multiple services, automation is the only reliable approach.
FlagShark provides this automation layer by monitoring GitHub repositories for flag changes, tracking each flag's lifecycle status, and generating cleanup PRs when flags become stale. Because it operates at the code level using tree-sitter parsing, it catches flags that LaunchDarkly's Code References (which uses regex) may miss -- including flags referenced through variables, aliases, and wrapper functions across 11 programming languages.
The combination of LaunchDarkly for flag management and automated cleanup tooling for code-level flag removal creates a closed loop: flags are created, used, and removed without manual intervention at any stage of the lifecycle.
MAU is not an abstract billing metric. It is a direct measure of how many users are evaluating your feature flags, and every stale flag makes that number larger than it needs to be. The technical strategies in this guide -- stale flag removal, anonymous key management, context consolidation, and evaluation pattern optimization -- can reduce your MAU by 40-60% in six weeks. The monitoring and automation practices prevent the reduction from eroding over time. The result is a LaunchDarkly bill that reflects the actual value you get from feature flags, not the accumulated debt of flags you forgot to clean up.