Meta Ads Campaign Optimization for App Installs: A Weekly Checklist to Protect CPI and Scale Spend
Weekly Meta Ads optimization checklist for mobile game UA. Tactical steps to protect CPI, scale spend, and catch issues before they cost you.
Every UA manager has their Monday ritual. Check the dashboards. React to whatever looks broken. Make a few changes. Hope for the best.
This post replaces hope with a system: a weekly checklist that catches problems early, scales what's working, and keeps your account healthy. It's designed for teams running $50K-$500K/month on Meta app install campaigns.
Print this. Run through it every week. Stop firefighting.
Monday: Diagnose the Week
Step 1: Run the CPI Decomposition
Open your account breakdown and calculate:
CPI = CPM / IPM
Where IPM = (CTR × click-to-install rate) × 1000
Break this out by:
- Platform (iOS vs Android)
- Geo (T1 vs T2/T3)
- Placement (Feed vs Reels vs Stories)
- Campaign type (prospecting vs retargeting)
What you're looking for:
- Which component moved? CPM? CTR? Click-to-install rate?
- Which segment is dragging the average?
- Is the change structural (lasting) or noise (one-time)?
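The decomposition above is plain arithmetic, so it's easy to script against an export. A minimal sketch in Python, assuming you have per-segment impressions, clicks, installs, and spend (all field names here are hypothetical):

```python
# Sketch of the CPI decomposition: CPI = CPM / IPM,
# where IPM = CTR x click-to-install rate x 1000.
# Segment keys and field names are illustrative assumptions.

def decompose(segment):
    """Return CPM, CTR, click-to-install rate, IPM, and CPI for one segment."""
    impressions = segment["impressions"]
    clicks = segment["clicks"]
    installs = segment["installs"]
    spend = segment["spend"]

    cpm = spend / impressions * 1000
    ctr = clicks / impressions
    cvr = installs / clicks          # click-to-install rate
    ipm = ctr * cvr * 1000           # installs per 1,000 impressions
    cpi = cpm / ipm                  # algebraically equal to spend / installs
    return {"cpm": cpm, "ctr": ctr, "cvr": cvr, "ipm": ipm, "cpi": cpi}

segments = {
    "iOS / T1 / Feed": {"impressions": 500_000, "clicks": 6_000,
                        "installs": 900, "spend": 4_500.0},
    "Android / T2 / Reels": {"impressions": 800_000, "clicks": 12_000,
                             "installs": 1_500, "spend": 3_000.0},
}

for name, seg in segments.items():
    d = decompose(seg)
    print(f"{name}: CPM ${d['cpm']:.2f}, IPM {d['ipm']:.2f}, CPI ${d['cpi']:.2f}")
```

Running this per platform, geo, and placement tells you immediately whether a CPI move came from auction prices (CPM) or funnel efficiency (IPM).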
Step 2: Check Learning Phase Status
For each active ad set, note:
- How many are "Active"?
- How many are "Learning"?
- How many are "Learning Limited"?
Red flags:
- More than 30% in Learning Limited = structure is too fragmented
- Ad sets stuck in Learning for 7+ days = consolidate or kill
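Both red flags can be checked automatically from an ad-set export. A sketch, assuming the status strings match Ads Manager's labels and that you track days in learning yourself (both assumptions; field names are hypothetical):

```python
# Flag the two learning-phase red flags above: >30% of ad sets in
# Learning Limited, and ad sets stuck in Learning for 7+ days.
def audit_learning(ad_sets):
    limited_share = sum(a["status"] == "Learning Limited" for a in ad_sets) / len(ad_sets)
    stuck = [a["name"] for a in ad_sets
             if a["status"] == "Learning" and a["days_in_learning"] >= 7]
    flags = []
    if limited_share > 0.30:
        flags.append("over 30% Learning Limited: consolidate structure")
    if stuck:
        flags.append("stuck in Learning 7+ days: " + ", ".join(stuck))
    return flags
```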
Step 3: Review the Quality Signal
Pull your proxy event performance (tutorial_complete, level_5, or equivalent):
| Metric | This Week | Last Week | Change |
|--------|-----------|-----------|--------|
| Install → Proxy Rate | | | |
| Proxy Event CPA | | | |
| D0 ROAS (if available) | | | |
What you're looking for:
- CPI flat but proxy rate declining = buying lower-quality users
- CPI improved but ROAS declined = false efficiency
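Both failure modes can be flagged programmatically. A sketch; the 5% "flat" and 10% "declining" thresholds are illustrative assumptions, not fixed rules:

```python
# Flag weeks where CPI looks fine but user quality is slipping.
# The 5% / 10% thresholds below are assumed, tune to your account's variance.
def quality_check(this_week, last_week):
    flags = []
    cpi_delta = this_week["cpi"] / last_week["cpi"] - 1
    proxy_delta = this_week["proxy_rate"] / last_week["proxy_rate"] - 1
    roas_delta = this_week["d0_roas"] / last_week["d0_roas"] - 1
    if abs(cpi_delta) < 0.05 and proxy_delta < -0.10:
        flags.append("CPI flat but proxy rate declining: buying lower-quality users")
    if cpi_delta < -0.05 and roas_delta < -0.10:
        flags.append("CPI improved but ROAS declined: false efficiency")
    return flags
```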
Tuesday-Wednesday: Creative Health Check
Step 4: Identify Fatigued Creatives
Pull creative-level metrics for anything with $500+ spend in the last 7 days:
| Creative | Frequency | CTR 7d | CTR 28d | CTR Δ | Status |
|----------|-----------|--------|---------|-------|--------|
| | | | | | |
Fatigue signals:
- Frequency > 2.5 (prospecting) or > 5 (retargeting)
- CTR down 20%+ from 28-day average
- CPM up 15%+ with no market explanation
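The three signals translate directly into a screening function. A sketch, assuming you have 7-day and 28-day metrics per creative (column names hypothetical; comparing 7d CPM to 28d CPM is my stand-in for "no market explanation"):

```python
# Return the fatigue signals a creative is showing, per the thresholds above.
def fatigue_signals(creative, prospecting=True):
    freq_cap = 2.5 if prospecting else 5.0   # prospecting vs retargeting caps
    signals = []
    if creative["frequency"] > freq_cap:
        signals.append("frequency")
    if creative["ctr_7d"] < 0.80 * creative["ctr_28d"]:   # CTR down 20%+
        signals.append("ctr_decay")
    if creative["cpm_7d"] > 1.15 * creative["cpm_28d"]:   # CPM up 15%+
        signals.append("cpm_rise")
    return signals
```

Any creative returning two or more signals is a strong candidate for rotation.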
Step 5: Check Creative Concentration
What percentage of spend goes to your top 3 creatives?
- Healthy: Top 3 = 40-60% of spend
- Risky: Top 3 = 70%+ of spend (one fatigue event kills performance)
- Starved: Top 3 = 90%+ (you're not testing enough)
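Concentration is a one-liner to compute from creative-level spend. A sketch; the "review" label for shares that fall between the bands above is my own placeholder:

```python
# Classify top-N spend concentration against the bands above.
def concentration_status(spend_by_creative, top_n=3):
    spends = sorted(spend_by_creative.values(), reverse=True)
    share = sum(spends[:top_n]) / sum(spends)
    if share >= 0.90:
        return share, "starved"          # not testing enough
    if share >= 0.70:
        return share, "risky"            # one fatigue event kills performance
    if 0.40 <= share <= 0.60:
        return share, "healthy"
    return share, "review"               # outside the stated bands (assumption)
```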
Step 6: Review Your Test Pipeline
| Test Status | Count | Notes |
|-------------|-------|-------|
| Concepts in testing | | |
| Ready to launch | | |
| In production | | |
Minimum velocity: 3-5 new concepts per week for accounts spending $100K+/month.
Thursday: Bidding and Budget Review
Step 7: Audit Bid Strategy Performance
For each campaign, check:
| Campaign | Bid Strategy | Actual CPA | Target CPA | Delivery % |
|----------|--------------|------------|------------|------------|
| | | | | |
Decision framework:
- Delivery < 70% + CPA on target = cap too tight, loosen 10-15%
- Delivery 100% + CPA above target = creative issue or audience exhaustion
- Delivery < 50% = something broken (disapprovals, audience too narrow, cap too aggressive)
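The decision framework maps cleanly to a few conditionals. A sketch, checking the broken case first since it overrides the others:

```python
# Encode the bid-strategy decision framework above.
def bid_diagnosis(delivery_pct, actual_cpa, target_cpa):
    on_target = actual_cpa <= target_cpa
    if delivery_pct < 50:
        return "broken: check disapprovals, audience size, bid cap"
    if delivery_pct < 70 and on_target:
        return "cap too tight: loosen 10-15%"
    if delivery_pct >= 100 and not on_target:
        return "creative issue or audience exhaustion"
    return "no action"
```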
Step 8: Check Budget Pacing
Calculate:
- MTD spend vs MTD target
- Projected end-of-month at current pace
- Headroom for scaling if performance allows
Common issue: Teams underspend early month, then panic-scale late month. Better to pace evenly.
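The pacing math takes three numbers. A sketch, using a straight-line projection from month-to-date spend:

```python
# MTD pacing check: target-to-date, straight-line month-end projection,
# and remaining headroom against the monthly target.
def pacing(mtd_spend, monthly_target, day_of_month, days_in_month):
    mtd_target = monthly_target * day_of_month / days_in_month
    projected = mtd_spend / day_of_month * days_in_month
    headroom = monthly_target - projected   # negative means overpacing
    return mtd_target, projected, headroom
```

For example, $40K spent by day 10 of a 30-day month against a $100K target projects to $120K, i.e. $20K of overpacing to correct now rather than at month end.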
Step 9: Review Daily Budget Distribution
| Ad Set | Budget | 7d Spend | 7d CPI | Status |
|--------|--------|----------|--------|--------|
| | | | | |
Sort by CPI. Your top performers should have the highest budgets. If they don't, you're leaving money on the table.
Action: Move 10-20% of budget from bottom performers to top performers. Don't do this daily—weekly is enough.
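A minimal version of that weekly shift, moving a fixed fraction of budget from the worst CPI to the best. A sketch; a real implementation would also respect minimum budgets and avoid resetting the learning phase:

```python
# Move `shift` (10-20% per the guidance above) of the worst performer's
# budget to the best performer. Assumes at least two ad sets.
def rebalance(budgets, cpis, shift=0.15):
    best = min(cpis, key=cpis.get)    # lowest CPI
    worst = max(cpis, key=cpis.get)   # highest CPI
    moved = budgets[worst] * shift
    new = dict(budgets)
    new[worst] -= moved
    new[best] += moved
    return new
```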
Friday: Structural Health
Step 10: Check Audience Overlap
In Ads Manager, run the Audience Overlap tool on your top 5 audiences.
Red flags:
- Overlap > 50% between any two audiences = consolidate
- Same user seeing ads from multiple campaigns = you're bidding against yourself
Step 11: Review Exclusions
Verify your exclusion lists are current:
| Exclusion | Last Updated | Status |
|-----------|--------------|--------|
| Recent installers | | |
| Recent purchasers | | |
| Existing customers | | |
Common mistake: Exclusion windows too short (7 days) or lists not synced properly.
Step 12: Validate Event Tracking
Compare Meta-reported conversions vs MMP-reported (or your backend):
| Event | Meta Count | MMP Count | Delta |
|-------|------------|-----------|-------|
| Install | | | |
| Tutorial Complete | | | |
| Purchase | | | |
- Acceptable delta: 10-20% (attribution differences)
- Red flag: 30%+ delta or a sudden change = tracking issue, fix immediately
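The delta check is simple to script against the two exports. A sketch; the 20% and 30% cut-offs follow the guidance above, and event names are illustrative:

```python
# Compare Meta-reported vs MMP-reported event counts and flag large deltas.
def tracking_delta(meta_counts, mmp_counts):
    flags = {}
    for event in meta_counts:
        delta = abs(meta_counts[event] - mmp_counts[event]) / max(mmp_counts[event], 1)
        if delta > 0.30:
            flags[event] = f"{delta:.0%} delta: tracking issue, fix immediately"
        elif delta > 0.20:
            flags[event] = f"{delta:.0%} delta: above the normal attribution gap"
    return flags
```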
Weekly Summary Template
Fill this out every Friday:
Week of: [Date]
Performance Summary:
- CPI: [This week] vs [Last week] ([+/-]%)
- Install Volume: [This week] vs [Last week]
- D0 ROAS: [This week] vs [Last week]
- Proxy Rate: [This week] vs [Last week]
Diagnosis:
- Main CPI driver: [CPM / CTR / CVR]
- Biggest opportunity: [Segment/creative/structure]
- Biggest risk: [What could hurt us next week]
Actions Taken:
1. [Action 1]
2. [Action 2]
3. [Action 3]
Next Week Plan:
- Creatives to launch: [Count]
- Tests to read: [Count]
- Budget changes: [Description]
The 15-Minute Daily Check (Optional)
If you can't do the full weekly process daily, at minimum check:
- Spend pace: Are you on track for daily target?
- CPI by platform: Any segment spiking?
- Top creative health: Is your #1 performer still performing?
That's it. Save the deep analysis for your weekly session.
FAQ
How long should I wait before making changes based on this checklist?
For CPI and delivery issues: 3-day minimum of consistent data before major changes. For creative fatigue: act within 24-48 hours once signals are clear. For structural changes (consolidation): wait a full week of data.
What if my CPI looks fine but business results are declining?
CPI is a leading indicator, but it can be misleading. Always pair CPI analysis with downstream quality metrics (proxy events, D0 ROAS, payer rate). Cheap installs that don't convert are worse than expensive ones that do.
Should I run this checklist on weekends?
For most teams, no. Run the Friday version as your "weekend prep" so you know what to monitor. Set up automated alerts for major issues (CPI spike > 30%, delivery drop > 50%) and review them Monday.
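Those two alert thresholds are trivial to encode if your alerting runs on raw daily metrics. A sketch, comparing today against a baseline you define (e.g., a trailing 7-day average; that choice is an assumption):

```python
# Weekend alerts per the thresholds above: CPI spike > 30%, delivery
# (spend) drop > 50%, measured against a baseline of your choosing.
def weekend_alerts(today, baseline):
    alerts = []
    if today["cpi"] > 1.30 * baseline["cpi"]:
        alerts.append("CPI spike > 30% vs baseline")
    if today["spend"] < 0.50 * baseline["spend"]:
        alerts.append("delivery drop > 50% vs baseline")
    return alerts
```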
How do I prioritize when everything looks broken?
Priority order: (1) Tracking issues (fix first—bad data = bad decisions), (2) Delivery problems (can't optimize what's not spending), (3) CPI/quality issues, (4) Scaling opportunities.
Make It Automatic
This checklist works. It's also time-consuming if you're pulling data manually from multiple sources.
AdPilot automates most of these checks—CPI decomposition, creative fatigue detection, learning phase monitoring—and delivers the results to Slack every morning. You spend 5 minutes reading instead of 45 minutes pulling reports.
Either way, run the checklist. Weekly optimization beats daily firefighting.