Ravi Parikh
May 25, 2014

In the early days of Heap, we pushed a change to our dashboard that caused engagement to skyrocket. The day after the change went live, people on average ran over twice as many queries in the Heap dashboard as the day before.

The way I phrased that is a bit misleading. The numbers are accurate, but the increased engagement had nothing to do with the change. It was entirely attributable to our tiny sample size: at the time we had fewer than 20 beta users, so usage fluctuated wildly from day to day.

This illustrates a problem all early-stage products face: there isn't enough data to do traditional analytics. It's impossible to get anything approaching statistical significance when running A/B tests or measuring engagement with only a few beta users. It's especially frustrating because it seems at odds with the widespread "Lean Startup" mentality. How are you supposed to measure and iterate quickly without real data?
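To put numbers on that, a back-of-the-envelope power calculation shows how far out of reach significance is with a handful of users. This is a sketch using the standard normal-approximation sample-size formula for a two-proportion A/B test; the baseline and lift figures (20% → 25% conversion) are illustrative assumptions, not Heap's data.

```python
import math
from statistics import NormalDist

def ab_test_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-proportion z-test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a lift from 20% to 25% conversion at 95% confidence and 80% power:
print(ab_test_sample_size(0.20, 0.25))  # roughly 1,100 users per arm
```

With 20 beta users total, an effect would have to be implausibly enormous before a test could distinguish signal from day-to-day noise.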

A large part of early-stage development is talking to customers, following your product vision, and plain guesswork. But there's still room for data and analytics. The single most important tool at your disposal in an early-stage product is individual user analysis: the process of monitoring user activity at a granular, individualized level. In this article we'll look at how it works, walk through sample use cases, and cover things to look out for.

Individual user analysis

Record what your users are doing at a much more granular level and you'll be able to see patterns that don't require statistical significance. Here are a few examples:

  • A user is spending a lot of time using your product, but isn’t using the features you think are valuable. At Heap, we noticed that sometimes people would only run queries that were much better suited to Google Analytics. This meant that we weren’t doing a good enough job communicating our value proposition or onboarding users. Even mature products have this problem, as this post about Evernote demonstrates.
  • You notice that every time someone tries to use "Feature X" for the first time, they visit the documentation page or send you a support ticket. You only need to see this happen a few times to realize that the UI for Feature X doesn't do a good job communicating its function.
  • A customer decides to cancel their account. You can look back and see exactly what they did prior to cancellation. Maybe they got frustrated trying to figure out a certain feature or never went through the full setup process.
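Patterns like the second one don't need statistics to detect; a linear scan over per-user event streams finds them. Here's a minimal sketch over a made-up, time-ordered event log — the event names and helper are hypothetical, not any particular tool's API:

```python
# Hypothetical event log: (user_id, timestamp, event_name), sorted by timestamp.
events = [
    ("alice", 1, "used_feature_x"),
    ("alice", 2, "viewed_docs"),
    ("bob",   3, "used_feature_x"),
    ("bob",   4, "viewed_docs"),
    ("carol", 5, "used_feature_x"),
    ("carol", 6, "ran_query"),
]

def users_confused_by(events, feature_event, help_event):
    """Users whose first use of a feature was immediately followed by a help event."""
    awaiting_next = {}  # user -> True while we're watching their very next event
    confused = set()
    for user, _, name in events:
        if name == feature_event and user not in awaiting_next:
            awaiting_next[user] = True           # first use: inspect what they do next
        elif awaiting_next.get(user):
            if name == help_event:
                confused.add(user)               # went straight to docs/support
            awaiting_next[user] = False          # only the event right after counts

    return confused

print(users_confused_by(events, "used_feature_x", "viewed_docs"))
```

Seeing the same sequence from even two or three users is often enough to flag a UI problem — no significance test required.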

What this isn't: You might think "this seems strictly worse than speaking with your users." But individual user analysis is more than just a lo-fi version of real user feedback, for several reasons:

  • Things that don't seem significant enough to mention from the user's perspective might be significant from yours. For example, maybe users are ignoring a button in your UI. They don't think to mention this, but it's an important feature whose usage you'd like to measure.
  • A user's memory is incomplete. They may remember the high-level actions they took, but not the exact paths they took or why they did what they did.
  • Also, memory is malleable and flawed. A user may give a neat post-hoc explanation for how they interacted with your product, but the reality may be different.
  • And of course, your users may not always want to have in-depth conversations with you.

The right solution is to get as complete a picture of your users as possible. Combine your conversations, tracking data, and any known user metadata into a holistic picture of how someone is using your product.


Tools like Google Analytics don’t help. GA is built for aggregate reports and doesn't allow individual user analysis at all. Other event-based analytics tools get you closer, but they require manual instrumentation of every event of interest. This is the opposite of the approach you want in the early stages: it's time-consuming, expensive, and makes it difficult to track usage at a high level of granularity.
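To make that instrumentation cost concrete, here's roughly what the manual approach looks like. The `track` helper and event names are hypothetical stand-ins for whatever event-based analytics SDK you might use:

```python
# Hypothetical manual instrumentation: every action you might later care about
# needs an explicit track() call written, deployed, and maintained up front.
def track(user_id, event, properties=None):
    print(f"{user_id}: {event} {properties or {}}")  # stand-in for an analytics API call

def create_dashboard(user_id, name):
    dashboard = {"name": name, "widgets": []}
    track(user_id, "dashboard_created", {"name": name})  # one call per event of interest
    return dashboard

def add_widget(user_id, dashboard, widget_type):
    dashboard["widgets"].append(widget_type)
    track(user_id, "widget_added", {"type": widget_type})  # forget this, and the data is gone
    return dashboard
```

Every question you might someday want to ask requires a call like this written in advance; anything you forgot to instrument before shipping simply isn't in the data.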

There are other solutions better geared towards individual user analysis:

  • Session recording tools like ClickTale allow you to play back what users are doing, getting the next-best thing to looking over a user's shoulder as they navigate your website.
  • Our own product Heap takes a different approach. Heap automatically tracks pageviews, clicks, form submission events, and more without any custom code. This lets you see at a glance the discrete actions a user took on your site. And because these actions are discrete rather than one monolithic session recording, Heap also supports aggregate analytics (conversion funnels, segmentation, etc.) once your product grows.


Individual user analysis is a powerful tool, but there are a few things to watch out for.

  • Be mindful of more than just the raw data. Data can help inform product decisions and where you should invest your time, but it's not gospel. Individual user analysis can't tell you how to pivot to reach product-market fit, or whether there's demand for a feature that doesn't yet exist.
  • You have a non-representative sample. Early users of startups are often quite different from the eventual userbase. They're more likely to be tolerant of bugs. They may even know you personally and be invested in the success of your product. While you can gain useful insights from monitoring these users, don't overgeneralize.
  • It doesn't scale. Your product may reach a stage where you need a different analytics process. Even with just a few hundred active users, you have enough data to start relying on aggregate metrics. High-level engagement numbers and conversion funnels will begin to matter more. Even so, individual user analysis stays useful for massive applications: it's important to keep a qualitative, detailed view of what users are doing.

Any other useful analytics strategies for early-stage startups? Let us know on Twitter. Also, if you want to join us each week for more data-driven insights, enter your email address in the sidebar to subscribe.