The five views investors and operators actually read

Every deck, every board update, every unit-economics argument comes back to the same metrics. An executive analytics tool only needs to answer five questions well.

  • Investor view. MRR, ARPU, LTV, CAC, LTV:CAC. These are the unit-economics metrics a seed or Series A round is priced on.
  • Funnel view. First Open, Signup, HealthKit Granted, First Activity, First Dashboard View, User Activated. The drop-off between steps tells you where to spend engineering time.
  • Engagement view. DAU/MAU, session length, retention curves, which features carry the product.
  • Revenue view. Monthly recurring revenue, trial starts, paid renewals, SKU mix, refunds.
  • Channels view. Organic search, direct, social, paid, referral. Plus a qualitative "how did you hear about us" survey that captures the word-of-mouth traffic GA4 labels as "direct."
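
The Funnel view's core computation is simple enough to sketch. A minimal TypeScript version, with step names from the funnel above and illustrative counts (not Halcyon's actual implementation or data):

```typescript
// Step-to-step conversion from raw counts. The biggest drop between
// adjacent steps is where engineering time should go.
type FunnelStep = { name: string; users: number };

function conversionRates(steps: FunnelStep[]): number[] {
  // Each step's rate relative to the step before it.
  return steps.slice(1).map((step, i) => step.users / steps[i].users);
}

const funnel: FunnelStep[] = [
  { name: "First Open", users: 1000 },
  { name: "Signup", users: 600 },
  { name: "HealthKit Granted", users: 300 },
];
const rates = conversionRates(funnel); // 60% to Signup, 50% to HealthKit
```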

Most startups I see have parts of this scattered across three tools (GA4 for web, RevenueCat or App Store Connect for revenue, a spreadsheet for CAC). Nothing reconciles. The investor ask during diligence is usually a fire drill.

Why "build" now beats "buy" for this specific layer

I lean toward buying off-the-shelf tools for anything that is not a competitive differentiator. Analytics has historically been that kind of commodity. Three things shifted.

  • Cost. Mixpanel and Amplitude both charge seat and event-volume fees that compound quickly. A solo founder or early-stage team can end up paying four figures a month to see the same ten charts.
  • Taxonomy ownership. The off-the-shelf tools define event schemas and expect your team to conform. When you own the taxonomy, the dashboard vocabulary matches how you already talk about the business internally.
  • Velocity. With agentic code generation (I work mainly in Claude Code and Cursor) an end-to-end dashboard, complete with API integrations, goes from a multi-sprint effort to a multi-day effort. I built and shipped Halcyon's version inside a week, including the iOS event specification.

The build/buy calculus flipped. For a single app with a clear event taxonomy and specific investor questions, a custom dashboard is now cheaper than the off-the-shelf alternative, not more expensive.

When you still want Mixpanel or Amplitude

This dashboard is an executive layer. It answers the ten questions your board asks every month. It does not answer the hundred questions your PM asks every week. Knowing where the line is matters.

A custom executive dashboard like Halcyon does not give you:

  • Ad-hoc behavioral drill-downs. "Show me users who triggered feature X within 48 hours of signup, segmented by country and plan type." Mixpanel and Amplitude let your PM self-serve that query in minutes. A custom dashboard would need a new view built for every question.
  • A/B testing and experiment analysis. Feature flagging, statistical significance calculation, and variant comparison are entire product categories. LaunchDarkly, Statsig, or Amplitude Experiment own this layer for a reason.
  • Cohort exploration. Retention curves across dozens of arbitrary cohort definitions, with the ability to click into a cohort and see individual user timelines. The off-the-shelf tools are purpose-built for this.
  • Cross-session funnel manipulation. Reordering funnel steps, defining conversion windows, and comparing funnels across time periods interactively. Halcyon's funnel is fixed by design because the onboarding flow it tracks is fixed.

The honest assessment: if your PM is asking behavioral questions daily and needs to iterate on hypotheses without filing tickets, Mixpanel or Amplitude is worth the money. They are product analytics tools. Halcyon is an executive reporting tool. Different jobs.

The emerging middle ground is AI-powered querying against your own event data. Instead of building a new dashboard view for every question, you ship a well-structured event taxonomy to BigQuery or a similar warehouse, then let an LLM write the SQL on demand. The executive dashboard handles the standing questions; the AI query layer handles the ad-hoc ones. That pattern is where I think most early-stage teams will land within the next year.

Live demo

The dashboard is embedded below. It is branded for a fictional consumer wellness app called "Halcyon" and populated with representative data. Every tab is interactive. The Setup tab is the most interesting one if you only have a minute; scroll down to it to see the self-diagnosis story.

Interactive demo. The iframe runs the shipped frontend code against a mock payload. Open full-screen →

The two design moves that make it useful

Most self-built dashboards fail the same way. They look impressive on day one, then silently rot as the product changes, the instrumentation drifts, and nobody knows whether the numbers can be trusted. Two design choices keep Halcyon honest.

1. The dashboard self-diagnoses its own data gaps

Every KPI tracks its own provenance. MRR knows whether it came from Apple App Store Connect (trustworthy) or a GA4 event-value estimate (undercounted by 5-10x under Apple ATT, because renewals are processed server-side and never touch the client). The Investor view marks each metric with a warning icon if it is running on an estimate, and with a clean badge if it is running on source-of-truth data.

The Setup tab takes this further. It maintains a live ledger of every required data source and every required iOS event, computes which ones are active, and lists the ones that are still pending. When a founder opens Halcyon on Monday morning, the dashboard either says "here is your MRR" or it says "here is your MRR, plus the specific reason you should not trust it yet." That second sentence is what makes the tool survive contact with reality.
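
The ledger itself can be sketched in a few lines. This is a hypothetical TypeScript shape, not Halcyon's actual schema: each KPI declares the data source or iOS event that feeds it, and the dashboard derives the pending list from whatever is currently active.

```typescript
// Illustrative self-diagnosis ledger: KPIs declare their required
// feed, and the dashboard computes which KPIs cannot be trusted yet.
type Provenance = "source_of_truth" | "estimate";

interface Kpi {
  name: string;
  requires: string;       // data source or iOS event that feeds it
  provenance: Provenance; // quality of the feed when it is active
}

function pendingKpis(kpis: Kpi[], activeSources: Set<string>): string[] {
  return kpis
    .filter((k) => !activeSources.has(k.requires))
    .map((k) => k.name);
}
```

When a new event starts flowing, it simply joins `activeSources` and drops off the pending list on the next load; no one has to remember to update the ledger by hand.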

2. The dashboard writes its own instrumentation specs

The interesting follow-on: every pending metric is paired with the exact Swift code needed to fill it. The Setup tab ships an inline event spec for each missing event, including parameter names and types, iOS code snippets, and the list of dashboard metrics the event unlocks. Here is the actual spec for trial_started, one of the events currently pending in the demo.

// StoreKit 2; assumes the introductory offer period is denominated in days
if transaction.offerType == .introductory {
  Analytics.logEvent("trial_started", parameters: [
    "product_id": product.id,
    "trial_days": product.subscription?.introductoryOffer?.period.value ?? 7
  ])
}

The PM does not have to write a ticket that says "please instrument trial_started with these fields." The dashboard has already done it. The iOS engineer pastes the snippet, the event starts flowing, and the next time the dashboard loads it recomputes gaps and removes trial_started from the pending list. Live documentation, no spec-to-code translation drift.

What investors actually look at, and where this design helps

In diligence calls, investors do not want an eighteen-tab deck. They want five numbers and the story behind them.

  • MRR and its source. The first question I get asked is "how are you computing this." Halcyon's Investor view labels the source on the MRR tile itself (Apple Sales Reports vs. GA4 estimate). That ten-second answer removes most of the follow-up questions.
  • LTV:CAC ratio. The a16z standard is 3:1 or better. The tool computes LTV as ARPU divided by monthly churn, then divides that by blended CAC. Each input carries its own warning state, so the founder cannot inadvertently cite a ratio built on three estimates.
  • Install-to-Trial rate. The Funnel view uses Apple's own download and trial-start counts from the same month to guarantee numerator-denominator parity. Industry benchmark for health apps is 2-5%; the dashboard flags when the app falls below.
  • Activation rate. The composite event user_activated only fires when a user has completed signup, granted HealthKit, and viewed their first populated dashboard. That is the "aha moment" for this product. Retention curves are built off this cohort.
  • Channel mix with attribution survey. GA4 classifies most inbound traffic as "direct" when the user came from a podcast, SMS, or Instagram DM. The attribution survey captures those qualitative sources so the CAC denominator is actually right.
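
The LTV:CAC arithmetic described above is a one-liner, but it is worth writing out because every input is a separate failure point. A sketch in TypeScript with illustrative numbers:

```typescript
// LTV = ARPU / monthly churn; ratio = LTV / blended CAC.
// Each of the three inputs has its own provenance warning upstream.
function ltvToCac(arpu: number, monthlyChurn: number, blendedCac: number): number {
  const ltv = arpu / monthlyChurn; // expected lifetime revenue per user
  return ltv / blendedCac;
}

// e.g. $9 ARPU, 6% monthly churn, $50 blended CAC:
// LTV = $150, ratio = 3.0 — right at the a16z benchmark.
const ratio = ltvToCac(9, 0.06, 50);
```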

What to do Monday morning if this is the shape of your problem

If you are running an early-stage subscription app and you are trying to decide whether to spin up a custom executive dashboard, here is the minimum spec.

  • Pick your five views first. Do not start with events or schema. Start with the five questions your board, your investors, and you want answered weekly. Most of the usual suspects are covered above.
  • Identify the source of truth for each KPI. For subscription revenue on iOS, that is App Store Connect reports plus App Store Server Notifications (Apple's S2S webhooks). Not GA4. Not your in-app analytics. The server-side source is the only one that is not lossy.
  • Treat your event schema as a product. Thirteen high-value events matter more than two hundred low-signal ones. Each event should earn its place by unlocking a specific dashboard metric.
  • Make the tool admit what it does not know. A dashboard that shows a confident number on a metric it is secretly guessing at is worse than no dashboard. Self-diagnosis is cheap to add and prevents the worst class of failure.
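
To make the server-side revenue point concrete: Apple's Server Notifications V2 arrive as a JSON body containing a `signedPayload` JWS, and the decoded claims carry a `notificationType` such as `SUBSCRIBED` or `DID_RENEW`. A minimal sketch of extracting it (production code must verify the JWS signature against Apple's certificate chain; this only decodes the claims to show the shape):

```typescript
// Decode the claims segment of a JWS (header.payload.signature).
// NOTE: no signature verification here; never do this in production.
function decodeJwsPayload(jws: string): any {
  const parts = jws.split(".");
  if (parts.length !== 3) throw new Error("not a JWS");
  const b64 = parts[1].replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}

// Webhook body: { signedPayload: "<JWS>" }. The decoded claims hold
// notificationType plus the signed transaction info used to update
// MRR server-side, where renewals actually happen.
function notificationType(body: { signedPayload: string }): string {
  return decodeJwsPayload(body.signedPayload).notificationType;
}
```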

The short version: for a single-product subscription app with a clear taxonomy and a short list of investor questions, the build/buy calculus has shifted toward build. I am happy to talk through the architecture or the Swift event spec if any of this matches what you are working on.

Let's talk →
Read the full case study →