LaunchDarkly is the industry standard for feature flag management. Over 4,000 companies rely on it to control feature rollouts, run experiments, and manage configurations across their applications. But here is a pattern that repeats across almost every LaunchDarkly customer: the platform is excellent at flag management, yet organizations still accumulate hundreds of stale flags that pollute their codebases.
This is not a failing of LaunchDarkly. It is a gap between flag management and flag cleanup, between the control plane and the code. LaunchDarkly tells you which flags exist and how they are targeted. It does not remove dead conditional branches from your Go services, clean up unused imports in your TypeScript components, or coordinate flag removal across dozens of repositories.
Understanding this gap is the first step toward true flag lifecycle management. This guide covers advanced best practices specifically for LaunchDarkly users, from organizational patterns within the platform to cleanup workflows that close the lifecycle loop.
Organizing flags for long-term maintainability
Most LaunchDarkly accounts start clean and organized. Six months later, the flag list is an unsearchable wall of hundreds of entries with inconsistent naming, missing tags, and no clear ownership. Prevention starts with organizational discipline from day one.
Naming conventions that scale
A flag key is permanent. Once SDKs evaluate it across your services, renaming requires coordinated code changes everywhere the key appears. Get the naming right the first time.
Recommended naming pattern:
{type}_{team}_{feature}_{detail}
Examples:
| Flag Key | Type | Team | Purpose |
|---|---|---|---|
| release_checkout_redesigned-cart | Release | Checkout | New cart UI rollout |
| experiment_growth_onboarding-video | Experiment | Growth | Test onboarding video impact |
| ops_platform_circuit-breaker-payments | Operational | Platform | Payment service kill switch |
| permission_billing_enterprise-sso | Permission | Billing | SSO feature access |
Why this works:
- Sorting: Flags of the same type group together in the LaunchDarkly dashboard
- Filtering: You can quickly find all experiment flags, all flags owned by a team, or all operational flags
- Lifecycle expectations: The type prefix immediately communicates the expected lifespan -- release_ flags should be short-lived, while ops_ flags may be permanent
- Cleanup prioritization: When auditing stale flags, you can focus on the release_ and experiment_ prefixes, which are the most common sources of technical debt
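To keep the convention from eroding over time, a small validator can gate flag creation in CI or a webhook handler. The sketch below is one way to express the pattern; the allowed type prefixes mirror the table above, and the `{detail}` segment is folded into the feature name with hyphens, matching the example keys.

```python
import re

# Allowed type prefixes from the naming convention above
ALLOWED_TYPES = {"release", "experiment", "ops", "permission"}

# {type}_{team}_{feature}: lowercase segments joined by underscores,
# with hyphens allowed inside the feature/detail segment
FLAG_KEY_PATTERN = re.compile(r"^([a-z]+)_([a-z0-9]+)_([a-z0-9]+(?:-[a-z0-9]+)*)$")

def validate_flag_key(key: str) -> bool:
    """Return True if a flag key follows the naming convention."""
    match = FLAG_KEY_PATTERN.match(key)
    return bool(match) and match.group(1) in ALLOWED_TYPES
```

Rejecting a non-conforming key at creation time is far cheaper than renaming it after SDKs start evaluating it.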
Using tags effectively
Tags are LaunchDarkly's primary organizational mechanism, but most teams either ignore them or apply them inconsistently. A disciplined tagging strategy provides powerful filtering and reporting capabilities.
Recommended tag categories:
| Tag Category | Examples | Purpose |
|---|---|---|
| Team ownership | team:checkout, team:platform | Identify who is responsible |
| Service | service:api, service:web, service:mobile | Track where flag is evaluated |
| Lifecycle stage | stage:rolling-out, stage:cleanup-ready, stage:permanent | Track flag maturity |
| Quarter created | created:2025-q3 | Age tracking at a glance |
| Priority | priority:p1, priority:p2 | Cleanup prioritization |
| Jira/Linear ticket | ticket:FLAG-123 | Link to tracking issue |
Automation tip: Use the LaunchDarkly API to enforce tagging at creation time. A webhook-triggered function can check new flags and notify owners if required tags are missing:
```python
import os
import requests

# Assumes LD_API_KEY is provided via the environment, and that
# notify_owner() is your own notification helper (Slack, email, etc.)
LD_API_KEY = os.environ["LD_API_KEY"]

def check_flag_tags(flag_key, project_key):
    """Verify a newly created flag has required tags."""
    response = requests.get(
        f"https://app.launchdarkly.com/api/v2/flags/{project_key}/{flag_key}",
        headers={"Authorization": LD_API_KEY},
    )
    response.raise_for_status()
    tags = response.json().get("tags", [])

    required_prefixes = ["team:", "service:", "stage:"]
    missing = [p for p in required_prefixes
               if not any(t.startswith(p) for t in tags)]
    if missing:
        notify_owner(flag_key, f"Missing required tag categories: {missing}")
```
Project and environment structure
LaunchDarkly projects and environments control flag scope. The right structure prevents flag sprawl and simplifies cleanup.
Project structure recommendations:
| Approach | When to Use | Trade-offs |
|---|---|---|
| One project per product | Small-medium orgs, single product | Simple, all flags visible, can become crowded |
| One project per team | Large orgs, autonomous teams | Clean separation, harder to share flags |
| One project per domain | Domain-driven organizations | Aligned with business logic, moderate complexity |
Environment best practices:
- Mirror your deployment environments exactly (dev, staging, production)
- Use environment-level approvals for production flag changes
- Enable "Require comments" for production environments
- Configure default flag values per environment (flags should be OFF by default in production)
Leveraging Code References for cleanup
LaunchDarkly Code References is one of the platform's most powerful and underutilized features. It scans your repositories and links flag keys to the exact files and lines where they appear. This transforms flag cleanup from guesswork into precision work.
Setting up Code References
Code References requires the ld-find-code-refs CLI tool running in your CI pipeline:
# GitHub Actions example
name: LaunchDarkly Code References
on:
push:
branches: [main]
jobs:
code-refs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 11 # Required for accurate diff detection
- name: Update LaunchDarkly Code References
uses: launchdarkly/find-code-references@v2
with:
accessToken: ${{ secrets.LD_ACCESS_TOKEN }}
projKey: my-project
Using Code References for cleanup decisions
Once Code References is running, the LaunchDarkly dashboard shows exactly where each flag is used in code. This enables a powerful cleanup workflow:
Zero-reference flags: Flags with no code references are immediate cleanup candidates. They exist in LaunchDarkly but are not evaluated anywhere that is scanned. Confirm that every repository that evaluates flags is covered by ld-find-code-refs, then archive them.
Single-reference flags: Flags referenced in only one file are the easiest to remove. The blast radius is minimal and the change is contained.
Multi-reference flags: Flags referenced across many files or repositories require coordinated removal. Code References helps you identify the full scope before starting.
Code References cleanup workflow:
1. Filter flags by "0 code references" → archive immediately
2. Filter flags by tag stage:cleanup-ready → review code refs
3. For each cleanup-ready flag:
   - Check code references for total scope
   - Verify flag is 100% ON in production
   - Create removal ticket with file list from code refs
   - Assign to owning team based on tag
4. Track removal progress until the code ref count reaches 0
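The first two steps of this workflow reduce to a simple bucketing of flags by code-reference count and lifecycle tag. The sketch below assumes you have already fetched reference counts and tags (for example, via the LaunchDarkly API); the input shapes are illustrative.

```python
def bucket_flags_for_cleanup(ref_counts, flag_tags):
    """Split flags into archive-now and review buckets.

    ref_counts: dict of flag_key -> number of code references
    flag_tags:  dict of flag_key -> list of tags
    """
    archive_now = []  # step 1: zero references, archive immediately
    review = []       # step 2: tagged cleanup-ready, review code refs
    for key, count in ref_counts.items():
        if count == 0:
            archive_now.append(key)
        elif "stage:cleanup-ready" in flag_tags.get(key, []):
            review.append(key)
    return archive_now, review
```

Everything in the `review` bucket then flows into step 3, where the code-reference file list scopes the removal ticket.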
Limitations of Code References
Code References is valuable but has important limitations that teams should understand:
| Capability | Code References | What You Still Need |
|---|---|---|
| Finds flag keys in code | Yes (regex-based) | AST-based detection for accuracy |
| Understands code context | No | Syntax-aware analysis of flag usage |
| Generates removal PRs | No | Automated code transformation |
| Tracks cross-repo dependencies | Per-repository only | Cross-repository coordination |
| Detects dynamic flag keys | Limited | Runtime analysis or advanced parsing |
| Removes dead code branches | No | Code transformation tooling |
Code References tells you where a flag is used. It does not tell you how to safely remove it, and it does not perform the removal for you. This is the gap that specialized cleanup tools fill.
Setting up flag maintainers and expiration
LaunchDarkly supports flag maintainers and scheduled flag removal, but most organizations do not use these features effectively.
Flag maintainers
Every flag should have an assigned maintainer. The maintainer is responsible for the flag's lifecycle, from creation through cleanup. In LaunchDarkly, you can assign maintainers via the dashboard or API:
```shell
# Assign a maintainer via the API (JSON Patch on the flag's maintainerId)
curl -X PATCH \
  "https://app.launchdarkly.com/api/v2/flags/my-project/release_checkout_redesigned-cart" \
  -H "Authorization: $LD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '[{
    "op": "replace",
    "path": "/maintainerId",
    "value": "user-id-here"
  }]'
```
Maintainer accountability framework:
| Responsibility | Frequency | Enforcement |
|---|---|---|
| Review flag status | Weekly | Automated report to maintainer |
| Update lifecycle tags | On state change | Webhook validation |
| Initiate cleanup when rolled out | Within 14 days of 100% | Automated reminder |
| Respond to stale flag alerts | Within 5 business days | Escalation to manager |
| Complete flag removal | Within 30 days of cleanup initiation | Dashboard tracking |
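The two time-boxed responsibilities in this table (initiate cleanup within 14 days of full rollout, complete removal within 30 days of initiation) are easy to check mechanically. A sketch, with illustrative inputs that would come from your flag tracking data:

```python
from datetime import date, timedelta

def overdue_actions(full_rollout_on, cleanup_started_on, today):
    """Return overdue maintainer actions per the accountability windows above.

    full_rollout_on:    date the flag reached 100% in production, or None
    cleanup_started_on: date cleanup was initiated, or None
    """
    overdue = []
    if full_rollout_on and cleanup_started_on is None:
        if today > full_rollout_on + timedelta(days=14):
            overdue.append("initiate cleanup")  # 14-day window missed
    if cleanup_started_on and today > cleanup_started_on + timedelta(days=30):
        overdue.append("complete flag removal")  # 30-day window missed
    return overdue
```

Run this weekly against each maintainer's flags and escalate anything it returns.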
Flag expiration and scheduled removal
LaunchDarkly supports flag expiration workflows through its Scheduled Flag Changes feature. You can schedule a flag to be turned off or archived at a future date:
```shell
# Schedule the flag to be turned off via the Scheduled Flag Changes API
curl -X POST \
  "https://app.launchdarkly.com/api/v2/projects/my-project/flags/experiment_growth_onboarding-video/environments/production/scheduled-changes" \
  -H "Authorization: $LD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "comment": "Experiment concluded, scheduling cleanup",
    "executionDate": 1730000000000,
    "instructions": [{ "kind": "turnFlagOff" }]
  }'
```
Best practice: Set an expiration date for every non-permanent flag at creation time. Build this into your flag creation workflow so that expiration is not an afterthought:
| Flag Type | Recommended Expiration | LaunchDarkly Action |
|---|---|---|
| Release flag | 30 days after full rollout | Schedule archive |
| Experiment flag | 14 days after experiment ends | Schedule archive |
| Ops/kill switch | 365-day review cycle | No auto-archive, manual review |
| Permission flag | Quarterly review | Alert maintainer |
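Building expiration into the creation workflow can be as simple as deriving the review date from the flag's type prefix. A sketch of the policy table above (the default for unknown prefixes is an assumption):

```python
from datetime import date, timedelta

# Expiration policy from the table above: days until review/archive,
# keyed by the flag key's type prefix
EXPIRATION_POLICY = {
    "release": 30,     # days after full rollout
    "experiment": 14,  # days after the experiment ends
    "ops": 365,        # annual review cycle, no auto-archive
    "permission": 90,  # quarterly review
}

def expiration_date(flag_key, anchor):
    """Compute the review/expiration date for a flag from its type prefix.

    anchor: the relevant start date (rollout completion, experiment end,
    or creation date for ops/permission flags).
    """
    flag_type = flag_key.split("_", 1)[0]
    days = EXPIRATION_POLICY.get(flag_type, 30)  # assumption: treat unknown as release
    return anchor + timedelta(days=days)
```

The computed date can then be fed into the Scheduled Flag Changes API call shown above.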
Leveraging flag triggers and webhooks
LaunchDarkly triggers and webhooks enable event-driven flag management that automates significant portions of the lifecycle.
Flag triggers for automated responses
Triggers allow external systems to modify flag state. Useful patterns include:
Metric-based rollback:
```
Datadog alert fires (error rate > 5%)
→ Trigger disables the flag in production
→ Team is notified
→ Incident investigation begins
```
Deployment-linked rollout:
```
CI/CD deployment succeeds
→ Trigger enables flag for 10% of users
→ Monitoring confirms stability
→ Gradual increase to 100%
```
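Firing a trigger is a plain POST to the unique trigger URL that LaunchDarkly generates when you create the trigger on the flag. A sketch of the metric-based rollback pattern; the URL and threshold below are placeholders, and the alert payload handling is simplified:

```python
import requests

# Placeholder: the real URL is generated per-trigger in the LaunchDarkly dashboard
KILL_SWITCH_TRIGGER_URL = "https://app.launchdarkly.com/webhook/triggers/example-id/example-secret"

ERROR_RATE_THRESHOLD = 0.05  # 5%, matching the alert condition above

def should_fire_rollback(error_rate):
    """Decide whether the metric breach warrants disabling the flag."""
    return error_rate > ERROR_RATE_THRESHOLD

def handle_datadog_alert(error_rate):
    """POST to the trigger URL when the error rate breaches the threshold."""
    if should_fire_rollback(error_rate):
        # An empty POST to the trigger URL executes the configured flag action
        requests.post(KILL_SWITCH_TRIGGER_URL)
        return True
    return False
```

Keeping the decision logic separate from the POST makes the threshold easy to test and tune without touching the trigger wiring.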
Webhooks for lifecycle tracking
LaunchDarkly webhooks notify your systems when flags change. Use them to maintain an audit trail and trigger cleanup workflows:
```javascript
// Webhook handler for flag lifecycle events.
// Note: the event shape below is simplified; consult the LaunchDarkly
// webhook payload documentation for the exact audit log entry format.
// trackFlagCreation, getCodeReferences, notifyTeam, createCleanupTicket,
// isFullRollout, and scheduleCleanupReminder are your own helpers.
app.post('/webhooks/launchdarkly', (req, res) => {
  const event = req.body;
  switch (event.kind) {
    case 'flag.created': {
      // Log creation, start age tracking
      trackFlagCreation(event.currentVersion.key, {
        creator: event.member.email,
        tags: event.currentVersion.tags,
        created: new Date()
      });
      break;
    }
    case 'flag.archived': {
      // Check if code references still exist
      const codeRefs = getCodeReferences(event.currentVersion.key);
      if (codeRefs.length > 0) {
        notifyTeam(
          `Flag ${event.currentVersion.key} archived but still referenced in ${codeRefs.length} files. ` +
          `Code cleanup required.`
        );
        createCleanupTicket(event.currentVersion.key, codeRefs);
      }
      break;
    }
    case 'flag.updated': {
      // Check if flag just reached 100% rollout
      const prodEnv = event.currentVersion.environments?.production;
      if (prodEnv?.on && isFullRollout(prodEnv)) {
        scheduleCleanupReminder(event.currentVersion.key, 14); // 14-day grace period
      }
      break;
    }
  }
  res.status(200).send('OK');
});
```
Closing the management-to-cleanup gap
LaunchDarkly provides powerful flag management capabilities: targeting, experimentation, approvals, audit logs, and Code References. But there is a fundamental gap between management and cleanup. LaunchDarkly manages the flag. Your code contains the flag. Cleaning up requires transforming the code, and that is a different problem entirely.
What LaunchDarkly manages vs. what it does not
| Lifecycle Stage | LaunchDarkly Coverage | Gap |
|---|---|---|
| Flag creation | Full (dashboard, API, SDKs) | None |
| Targeting and rollout | Full (rules, segments, percentages) | None |
| Experimentation | Full (metrics, statistical analysis) | None |
| Flag discovery in code | Partial (Code References, regex) | No AST-level understanding |
| Stale flag identification | Partial (age, but not code context) | No "is this flag safe to remove?" analysis |
| Code cleanup | None | Requires external tooling |
| Dead branch removal | None | Requires code transformation |
| Cross-repo removal coordination | None | Requires multi-repo orchestration |
| Cleanup PR generation | None | Requires automated code generation |
This gap is not a criticism. LaunchDarkly is a flag management platform, and it excels at that job. Code cleanup is a code transformation problem that requires different capabilities: syntax parsing, dependency analysis, and automated refactoring.
Combining LaunchDarkly with cleanup automation
The most effective flag lifecycle management combines LaunchDarkly's management capabilities with dedicated cleanup tooling. The workflow looks like this:
```
LaunchDarkly (Management Layer)
├── Flag creation and targeting
├── Rollout control and experimentation
├── Code References (discovery)
├── Webhooks (lifecycle events)
└── API (flag status queries)
        ↓
Integration Layer
├── Webhooks trigger cleanup workflows
├── API queries identify cleanup candidates
├── Code References provide initial scope
└── Flag status informs removal safety
        ↓
Cleanup Automation (Code Layer)
├── AST-based flag detection across repos
├── Safe code transformation
├── Dead branch removal
├── Automated PR generation
└── Cross-repo coordination
```
FlagShark integrates with LaunchDarkly to bridge this gap, using LaunchDarkly's API to identify flags that are fully rolled out and then automatically generating cleanup PRs that safely remove the flag from your codebase using tree-sitter-based code analysis. The LaunchDarkly flag status serves as the signal; the code transformation handles the actual cleanup.
SDK best practices for cleaner flag code
How you use the LaunchDarkly SDK in your codebase directly impacts how easy flags are to find, understand, and remove. Disciplined SDK usage makes cleanup dramatically simpler.
Centralize flag definitions
The most common anti-pattern is scattering flag key strings throughout your codebase:
```typescript
// BAD: Flag keys scattered across files

// In checkout.tsx
const showNewCart = ldClient.variation('release_checkout_redesigned-cart', false);

// In cart-summary.tsx
const showNewCart = ldClient.variation('release_checkout_redesigned-cart', false);

// In cart-api.ts
const useNewCart = ldClient.variation('release_checkout_redesigned-cart', false);
```
When it is time to remove this flag, you need to find every occurrence of the string 'release_checkout_redesigned-cart' across your entire codebase. Instead, centralize flag definitions:
```typescript
// GOOD: Centralized flag definitions

// flags.ts
export const FLAGS = {
  REDESIGNED_CART: 'release_checkout_redesigned-cart',
  ONBOARDING_VIDEO: 'experiment_growth_onboarding-video',
  PAYMENT_CIRCUIT_BREAKER: 'ops_platform_circuit-breaker-payments',
} as const;

export type FlagKey = typeof FLAGS[keyof typeof FLAGS];

// Helper with typed defaults
export function useFlag(key: FlagKey, defaultValue: boolean = false): boolean {
  return useLDClient().variation(key, defaultValue);
}

// checkout.tsx
import { FLAGS, useFlag } from './flags';
const showNewCart = useFlag(FLAGS.REDESIGNED_CART);
```
Benefits of centralization:
| Benefit | Impact on Cleanup |
|---|---|
| Single source of truth for flag keys | One file to update when removing a flag |
| TypeScript type safety | Compiler errors if you miss a reference |
| Consistent default values | No split-brain defaults across files |
| Easy flag inventory | flags.ts is your complete flag list |
| Simpler code search | Search for FLAGS.REDESIGNED_CART instead of a string |
Wrap flag evaluations in descriptive functions
Instead of evaluating flags inline, wrap them in functions that describe the behavior:
```go
// BAD: Inline evaluation with unclear intent
// (note: the Go SDK's BoolVariation returns (bool, error))
enabled, _ := ldClient.BoolVariation("release_checkout_redesigned-cart", user, false)
if enabled {
    return renderNewCart(items)
}
return renderLegacyCart(items)
```

```go
// GOOD: Wrapped in a descriptive function
func shouldUseRedesignedCart(user ldcontext.Context) bool {
    enabled, _ := ldClient.BoolVariation("release_checkout_redesigned-cart", user, false)
    return enabled
}

// Usage
if shouldUseRedesignedCart(user) {
    return renderNewCart(items)
}
return renderLegacyCart(items)
```
When the flag is ready for removal, the wrapper function makes the change obvious: replace the function body with return true (or inline the winning path) and then remove the function entirely. The intent is clear and the cleanup is mechanical.
Avoid nested flag evaluations
Nested flags create exponential complexity and make cleanup order-dependent:
```python
# BAD: Nested flags create 4 possible states
if ld_client.variation("new_checkout", user, False):
    if ld_client.variation("express_shipping", user, False):
        return express_new_checkout(cart)
    return standard_new_checkout(cart)
else:
    if ld_client.variation("express_shipping", user, False):
        return express_legacy_checkout(cart)
    return standard_legacy_checkout(cart)
```
Instead, flatten the logic:
```python
# GOOD: Flat structure, independent flags
use_new_checkout = ld_client.variation("new_checkout", user, False)
use_express_shipping = ld_client.variation("express_shipping", user, False)

checkout_handler = select_checkout_handler(
    new_checkout=use_new_checkout,
    express_shipping=use_express_shipping,
)
return checkout_handler(cart)
```
Flat flag evaluation means each flag can be removed independently without understanding the state of other flags.
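One way to implement a `select_checkout_handler`-style helper is a dispatch table keyed on the flag values. The handler functions below are illustrative stubs; the point is that removing either flag later means deleting half the table and one parameter, with no nested conditionals to untangle.

```python
# Illustrative checkout handlers (stubs standing in for real implementations)
def express_new_checkout(cart): return ("new", "express")
def standard_new_checkout(cart): return ("new", "standard")
def express_legacy_checkout(cart): return ("legacy", "express")
def standard_legacy_checkout(cart): return ("legacy", "standard")

def select_checkout_handler(new_checkout, express_shipping):
    """Map two independent flag values to a checkout handler."""
    handlers = {
        (True, True): express_new_checkout,
        (True, False): standard_new_checkout,
        (False, True): express_legacy_checkout,
        (False, False): standard_legacy_checkout,
    }
    return handlers[(new_checkout, express_shipping)]
```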
Flag health monitoring with the LaunchDarkly API
The LaunchDarkly API provides the data needed to build a comprehensive flag health monitoring system. Regular API queries can identify flags that need attention before they become technical debt.
Key API endpoints for flag health
```
# List all flags with metadata
GET /api/v2/flags/{projectKey}?summary=true

# Get flag statuses across an environment
GET /api/v2/flag-statuses/{projectKey}/{environmentKey}

# Get a single flag's full configuration
GET /api/v2/flags/{projectKey}/{flagKey}

# List repositories connected to Code References
GET /api/v2/code-refs/repositories
```
Building a flag health dashboard
Query the LaunchDarkly API weekly to generate a flag health report:
```python
import requests
from datetime import datetime

def generate_flag_health_report(project_key, api_key):
    """Generate a weekly flag health report from LaunchDarkly."""
    headers = {"Authorization": api_key}
    base_url = "https://app.launchdarkly.com/api/v2"

    # Fetch all flags (for large projects, follow the paginated _links)
    flags = requests.get(
        f"{base_url}/flags/{project_key}?summary=true",
        headers=headers,
    ).json()["items"]

    report = {
        "total_flags": len(flags),
        "stale_flags": [],
        "cleanup_ready": [],
        "no_code_refs": [],   # populated from the Code References API in a separate pass
        "no_maintainer": [],
    }
    now = datetime.now()

    for flag in flags:
        # creationDate is a Unix timestamp in milliseconds
        created = datetime.fromtimestamp(flag["creationDate"] / 1000)
        age_days = (now - created).days
        prod_env = flag.get("environments", {}).get("production", {})
        maintainer = flag.get("_maintainer")
        tags = flag.get("tags", [])

        # Check for stale flags (>90 days, still active)
        if age_days > 90 and not flag.get("archived"):
            is_permanent = flag["key"].startswith("ops_") or "permanent" in tags
            if not is_permanent:
                report["stale_flags"].append({
                    "key": flag["key"],
                    "age_days": age_days,
                    "maintainer": maintainer.get("email") if maintainer else "NONE",
                })

        # Check for cleanup-ready flags (100% ON for 14+ days)
        if prod_env.get("on"):
            last_modified = prod_env.get("lastModified")
            if last_modified:
                modified_date = datetime.fromtimestamp(last_modified / 1000)
                days_at_current = (now - modified_date).days
                if days_at_current > 14:
                    report["cleanup_ready"].append({
                        "key": flag["key"],
                        "days_at_100": days_at_current,
                    })

        # Check for missing maintainer
        if not maintainer:
            report["no_maintainer"].append(flag["key"])

    return report
```
Flag health metrics to track:
| Metric | Healthy | Warning | Action Required |
|---|---|---|---|
| Flags without maintainer | 0% | < 10% | > 10% |
| Flags older than 90 days (non-permanent) | < 15% | 15-30% | > 30% |
| Flags 100% ON for 14+ days | < 5% | 5-15% | > 15% |
| Flags with 0 code references | 0% | < 5% | > 5% |
| Average flag age (release type) | < 30 days | 30-60 days | > 60 days |
| Flags created this month vs. removed | Ratio < 2:1 | Ratio 2:1 to 4:1 | Ratio > 4:1 |
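Most rows in this table follow the same healthy/warning/action-required banding, so the report generator can classify each metric with one small helper. A sketch; the thresholds passed in come from the table above:

```python
def metric_status(value, healthy_max, warning_max):
    """Classify a flag-health metric against its thresholds.

    healthy_max: highest value still considered healthy
    warning_max: highest value still considered a warning;
                 anything above requires action
    """
    if value <= healthy_max:
        return "healthy"
    if value <= warning_max:
        return "warning"
    return "action required"

# Example: percentage of non-permanent flags older than 90 days
# (healthy < 15%, warning 15-30%, action required > 30%)
```

Emitting these labels alongside the raw counts makes the weekly report actionable at a glance.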
The complete LaunchDarkly flag lifecycle
Putting it all together, here is the complete flag lifecycle for LaunchDarkly users who want to maintain a clean, debt-free codebase:
```
1. CREATE: Flag created in LaunchDarkly
   → Naming convention enforced
   → Required tags applied (team, service, stage)
   → Maintainer assigned
   → Expiration date set
   → Code uses centralized flag definition
   → Tracking ticket created

2. ROLLOUT: Flag gradually enabled
   → Targeting rules configured
   → Metrics and experiments attached
   → Code References updating on each deploy
   → Monitoring active

3. EVALUATE: Flag reaches 100%
   → Grace period begins (14 days)
   → Automated reminder sent to maintainer
   → Tag updated to stage:cleanup-ready

4. CLEANUP: Code removal initiated
   → Code References identify all usage locations
   → Cleanup PRs generated (manually or automated)
   → Dead code branches removed
   → Tests updated to remove flag branches
   → PRs reviewed and merged

5. ARCHIVE: Flag archived in LaunchDarkly
   → Verify 0 code references
   → Archive flag in dashboard
   → Close tracking ticket
   → Update team flag metrics

6. VERIFY: Post-cleanup confirmation
   → No remaining code references
   → No fallback-to-default errors in logs
   → Application behavior unchanged
   → Flag metrics dashboard updated
```
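The transition from CLEANUP to ARCHIVE is the step teams most often rush, so it is worth gating mechanically. A sketch of such a gate; the inputs are illustrative and would come from the Code References API, the flag status API, and your logging system:

```python
def ready_to_archive(code_ref_count, days_at_full_rollout, fallback_errors_in_logs):
    """Gate for step 5 (ARCHIVE): only archive once cleanup is verified.

    code_ref_count:          remaining Code References for the flag
    days_at_full_rollout:    days the flag has been 100% ON in production
    fallback_errors_in_logs: whether SDK fallback-to-default errors were seen
    """
    return (
        code_ref_count == 0             # step 5: no remaining references
        and days_at_full_rollout >= 14  # grace period from step 3 has elapsed
        and not fallback_errors_in_logs # step 6 signal is already clean
    )
```

Wiring this check into the archival workflow prevents the "archived in LaunchDarkly, still evaluated in code" failure mode the webhook handler above is designed to catch.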
LaunchDarkly provides a best-in-class flag management platform. But management is only half of the flag lifecycle. The other half -- the code cleanup, dead branch removal, and cross-repository coordination -- requires deliberate practices, disciplined SDK usage, and, for most teams, complementary tooling that bridges the gap between the management platform and the codebase. The organizations that master both halves of the lifecycle are the ones that maintain the velocity advantages of feature flags without accumulating the technical debt that flags are notorious for creating.