There's a lot of advice around standard metrics: don't have more than 2-3% monthly churn, convert at least 2% of your free users to paid under a freemium model, keep response times at 200 milliseconds or less, and so on and so forth. There's a lot less out there about *how* to actually collect and measure these core metrics. I wanted to share exactly what we track, and how, for Nylas Mail, the freemium cross-platform desktop email client I've been working on for the last few months.
We track four main things: user actions, performance, errors, and user happiness. For each of these core areas, we use a different tool.
User Actions: Mixpanel
For core user actions, we use Mixpanel. Since Nylas Mail is an Electron app, it's just as easy for engineers to add Mixpanel events to it as to a standard web-based SaaS app. We want to see what actions users take in Nylas Mail so we know what we're good at and where to improve. Of course, we don't look at what people are sending or who they're corresponding with - it wouldn't be useful to us, it'd be a ton of data, and it's just plain wrong. Funnels were Mixpanel's big feature when they launched, and they're still its best one: we use them to track newly acquired users and hunt down the rough spots, so we can get someone from trial to habit as quickly as possible. We also use Mixpanel's built-in notifications feature to send a drip of onboarding emails after a new user signs up. I have lots of gripes about Mixpanel, which makes sense since it's a ten-year-old tool and things have changed in internet land, but it's still the best tool I know for seeing exactly what users are doing inside your app.
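To make this concrete, here's a minimal sketch of what reporting an event can look like from an Electron app's main process, using the official `mixpanel` npm package. The token, event name, and property names are hypothetical examples, not Nylas Mail's actual schema.

```js
// Minimal sketch: tracking a user action with the `mixpanel` npm package.
// Token, event name, and properties are hypothetical examples.
const Mixpanel = require('mixpanel');

const mixpanel = Mixpanel.init('YOUR_PROJECT_TOKEN');

// Report a key action so it can feed funnels (e.g. trial -> habit).
function trackMessageSent(userId, accountType) {
  mixpanel.track('Message Sent', {
    distinct_id: userId,        // ties events to a user, which funnels need
    account_type: accountType,  // e.g. 'gmail', 'office365', 'imap'
    app_version: '2.0.32',      // lets you compare behavior across releases
  });
}

trackMessageSent('user-123', 'imap');
```

A funnel is then just an ordered list of these event names in the Mixpanel UI - no extra instrumentation needed beyond firing the events.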
Performance: Honeycomb
Honeycomb is a relatively new entrant in the analytics space, and while it can be used for a lot of things, we use it to track performance. In particular, we use Honeycomb to chart 90th percentile performance against event counts on key events - this lets us make sure we aren't introducing performance regressions as we push new versions of the app. There are great filters and ordering options, so we can slice and dice the data pretty easily. A really nice touch: every Honeycomb query is a discrete run with its own URL, so we can paste one into Slack and refer back to that exact data set and those graphs days or weeks later. (This sadly isn't possible in Mixpanel.) Honeycomb is still new, so the ramp-up period is a bit longer and you always have the feeling that you're not using the tool as well as you could, but once you find something that works, it's really addictive.
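The instrumentation side of this can be as simple as wrapping key operations in a timer and shipping one event per run. Here's a minimal sketch using Honeycomb's `libhoney` library for Node; the dataset and field names are hypothetical, not what Nylas Mail actually sends.

```js
// Minimal sketch: timing a key operation and sending one event per run
// to Honeycomb via libhoney. Dataset and field names are hypothetical.
const Libhoney = require('libhoney');

const honey = new Libhoney({
  writeKey: 'YOUR_HONEYCOMB_KEY',
  dataset: 'mail-client-perf',
});

// Wrap an async operation and report how long it took.
async function timed(eventName, fn) {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    honey.sendNow({
      event: eventName,                 // e.g. 'thread-list-render'
      duration_ms: Date.now() - start,  // the field P90 queries aggregate over
      platform: process.platform,       // lets you slice by OS
      version: '2.0.32',                // lets you spot per-release regressions
    });
  }
}
```

With events shaped like this, "P90(duration_ms) and COUNT, grouped by event, over time" is a single query in the Honeycomb UI, and the resulting URL is the thing we paste into Slack.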
Errors: Sentry
We send errors to Sentry automatically. We use it for two things: seeing which errors happen most often so we can prioritize bugs, and catching new errors we've introduced when we release a new version of Nylas Mail. We can tell very quickly whether a small handful of users is hitting lots of errors or whether a large set of users is seeing them more broadly. We also check whether errors are concentrated on a particular platform, release, or type of connected account (Google, Office 365, or IMAP). One thing to look out for: Sentry can get overloaded with errors very, very quickly if you ship something broken or are too generous about what you send, so beware of going over your limits.
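As a rough sketch of what that tagging looks like (using the current `@sentry/node` SDK, which isn't necessarily what Nylas Mail shipped with; the DSN, release string, and tag names are hypothetical):

```js
// Minimal sketch: sending errors to Sentry with tags for slicing by
// platform, release, and account type. All identifiers are hypothetical.
const Sentry = require('@sentry/node');

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0',
  release: 'mail-client@2.0.32', // "new in this release" detection keys off this
  sampleRate: 1.0,               // dial this down if you start blowing your quota
});

function reportError(err, accountType) {
  Sentry.withScope((scope) => {
    scope.setTag('platform', process.platform); // win32 / darwin / linux
    scope.setTag('account_type', accountType);  // 'gmail', 'office365', 'imap'
    Sentry.captureException(err);
  });
}
```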
User Happiness: Zendesk
Yes, Zendesk. Lookit, for product managers or anyone building something where the customer is the user: vox emptor, vox dei. The voice of the customer is the voice of God. You can look at cohort data, you can ask for CSAT and NPS, and you can dive into logs until the cows come home. But the best way to hear what the customer is experiencing is to hear it in their own voice. When customers choose to reach out to you, it's your job to listen to every word they say. If a customer says "the app is slow" but your telemetry shows response times under 200 ms, guess what? Your app feels slow, and it's your job to fix it, period. We treat every support ticket as an opportunity to understand the problem the customer has and to learn how they tried to use our app to solve it. It's customer development, right in your inbox. Zendesk can be slow and its analytics leave something to be desired, but it's a critical part of our product development process.
We've found lots of neat things with these data sources. Some highlights: it turns out that sending emails is the single best predictor of retention, even though people receive far more email than they send. Perceived performance is way different from the measured performance of the app - in both directions. Electron may be cross-platform, but there's something about Windows users, or Windows itself, that throws a disproportionate number of pull-your-hair-out bugs. And even the most non-technical users will jump through hoops to get you logs if you're patient and understanding with them.