Solving Problems For the Right Beneficiary

As a product manager, your job is to identify problems to solve, prioritize them, and ensure that you solve them as fully as possible given the resources and constraints of your organization.  (To be super explicit: the best solutions aren't necessarily delivered as new features.)

In enterprise product management, you often have multiple constituencies that you need to solve for.  Attention must be paid to all of them to keep both customers and the team happy.  In my mind, when you're choosing problems to work on, you should be explicit about whose problem you're solving.  I use the term "beneficiary" to denote whom or what I'm solving for.  Here are the most common beneficiaries:

* The user. 

The user is an individual human.  In consumer, the user is the customer and is the only thing that matters.  In enterprise, there are often many different kinds of users - some users aren't even end users of your product: they're administrators or finance folks.  Even within end users, there can be many different kinds of users.  Some delineations are: novice and expert users of your product, end users with different job roles or responsibilities, or folks in different geographies that have different contexts and capabilities (iOS versus Android, slow connections versus fast connections, etc.)  

If you have different kinds of end users, the quickest way to be specific about who you're solving for is to go through a persona exercise.  Individual (and team) personas can be incredibly helpful as shorthand for a set of motivations, problems, capabilities, and limitations for end users.

* The customer.

Again, in consumer, the user is the customer - the person who pays.  In enterprise, the customer can be one of many folks: an IC putting down a credit card, a manager authorizing a purchase, a procurement department that will "right size" the offering to be bought.

At scale, if the person paying isn't the person using the product, you're going to have to do affirmative work to prove to the purse-strings-holder that you're delivering value commensurate with the price being paid.  Fun note: this work often helps lead to step-function expansions in deal size.

* The system.

Sometimes you're going to solve a problem that makes your system more reliable, available, resilient, or performant.  For example, work that makes it easier to scale (either horizontally or vertically, your choice) or that expands where you can deploy (standing up an EU region is the most common example for US-based startups targeting EU-based customers) has "the system" as the beneficiary.  To draw a line in the sand, this work will have no user-facing changes in the experience of a working application (with the exception of decreasing latency).  Solving for the system is also different from solving for...

* The team.

Sometimes you will solve a problem that is borne by your internal team - perhaps your developers, your sales team, your support team, or any collection of groups of folks inside the organization.  Solutions often end up automating manual processes or making it easier to make changes to the product.  Solutions can include everything from new ticketing systems, string repositories, cron jobs, and documentation to full-on code reorganizations and just hiring a bunch more people.

* The business.

Lastly, problems that are borne by the business are generally problems where the outcome is either "increase revenue" or "reduce costs".  These are where the value you are creating is almost entirely captured by the business that offers the solution.  Being intentional about pricing and packaging helps immensely on the revenue side; sadly, there are often no real alternatives to large infrastructure projects to make reductions on the cost of goods sold (COGS) side.  

You may also find that there are other classes of beneficiaries that your organization needs to consider - that's fine!  Being clear and up front about which beneficiaries you are serving will allow you to make much better prioritization and sizing decisions as a product lead.

Product Management Is Not Just About Creating Value, It's About Capturing Value

Traditionally, when people think about the role of a product manager, they talk about working with a cross-functional team (usually engineering, design, and data science) to deliver value.  I've come to believe that is necessary but not sufficient.

As a product manager, your day-to-day work is some combination of gathering information, synthesizing it, sharing it, and helping others make better decisions collectively.  When this work is only about delivering value to the user, the PM has not done a complete job.  Many times, the right problem to solve or the correct thing to build is different based on the organization's ability to capture the value that's created.

This is why PMs need to pay so much attention to pricing and packaging when thinking about prioritization.  If your current go-to-market strategy or your current pricing structure won't allow you to capture the additional value the proposed project would create, it's absolutely not worth prioritizing that project until you fix your GTM and/or pricing strategy first. 

For consumer/prosumer/SMB products, the value you're creating is often on one axis - access to songs for Spotify, access to the people you care about for social networks, storage space for Dropbox, and "store of information" for Slack, to name a few(1).  In the enterprise, there are often many different axes of value, some of which may pose tradeoffs to either the user or to the customer.  In these cases, the need to have the value capture conversation before building anything is even more acute.

One last note: being clear about the ability to capture value can help avoid the next feature fallacy that can plague organizations of all sizes, not just those that haven't hit product-market fit(2) yet.  If the next feature doesn't give you any additional pricing/stickiness power, then it's not worth doing.

(1) This is why these organizations fight and fight and fight to introduce a second successful product, or to produce new user experiences that extend the core value.  For consumer, Facebook groups is a great example.  For prosumer/SMB, Dropbox's introduction of Carousel and acquisition of Mailbox are prime examples.

(2) One of my favorite quick posts; still holds up great even ten years later!

Product Tools and How To Use Them

There's a lot of advice around standard metrics: don't have more than 2-3% monthly churn, convert at least 2% of your free users to paid under a freemium model, response times should be 200 milliseconds or less, and so on and so forth.  There's a little less out there about *how* to get and measure these core metrics.  I wanted to share exactly what we track and how for Nylas Mail, the freemium cross-platform desktop email client I've been working on for the last few months.

We track four main things: user actions, performance, errors, and user happiness.  For each one of these core areas, we use a different tool.

User Actions: Mixpanel

For core user actions, we use Mixpanel.  Since Nylas Mail is an Electron app, it's just as easy for engineers to add Mixpanel events to Nylas Mail as it is to your standard web-based SaaS app.  We want to see what actions users are doing in Nylas Mail so we know what we're good at and where to improve.  Of course, we don't look at what people are sending or who they're corresponding with - that's not useful to us, it'd be a ton of data, and it's just plain wrong.  Funnels were Mixpanel's big feature when they launched, and they're still its best: we use funnels to track newly acquired users and try to make sure that we're eliminating all the rough spots so that we can get someone from trial to habit as soon as we can.  We also use Mixpanel's built-in notifications feature to do a drip of onboarding emails after a new user signs up.  I have lots of gripes about Mixpanel, which makes sense since it's a ten-year-old tool and things have changed in internet land, but it's still the best tool I know for seeing just what users are doing inside your app.
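To make the funnel idea concrete, here's a minimal sketch of how step-by-step drop-off can be computed from raw events.  The event names are hypothetical stand-ins, not Nylas Mail's actual Mixpanel schema, and Mixpanel does this computation for you; this just illustrates the mechanics:

```javascript
// Hypothetical funnel steps from trial to habit.
const FUNNEL_STEPS = ['App Opened', 'Account Connected', 'Email Sent'];

// events: array of { userId, name }; returns how many users remain at
// each step (a user counts for step N only if they completed every
// step up to and including N).
function funnelCounts(events, steps) {
  const byUser = new Map();
  for (const e of events) {
    if (!byUser.has(e.userId)) byUser.set(e.userId, new Set());
    byUser.get(e.userId).add(e.name);
  }
  return steps.map((_, i) => {
    const prefix = steps.slice(0, i + 1);
    let n = 0;
    for (const names of byUser.values()) {
      if (prefix.every((s) => names.has(s))) n += 1;
    }
    return n;
  });
}
```

The biggest drop between adjacent counts tells you which rough spot to smooth first.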

Performance: Honeycomb

Honeycomb is a relatively new entrant in the analytics space, and while it can be used for a lot of things, we use it to track performance.  In particular, we use Honeycomb to track 90th percentile performance versus counts on key events - this lets us make sure that we aren't introducing performance regressions as we push out new versions of the app.  There are great filters and ordering options, so we can slice and dice data pretty easily.  A really nice feature - Honeycomb runs are discrete events with discrete URLs, so we can paste these into Slack and refer back to that exact data set and those graphs days or weeks later.  (This sadly isn't possible in Mixpanel.)  Honeycomb is still new, so the ramp-up period is a bit longer and you always have this feeling that you're not using the tool as well as you can, but once you find something that works, it's really addictive.
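As a sketch of the underlying math (not Honeycomb's actual implementation), here's a nearest-rank 90th percentile and a simple release-over-release regression check; the 10% tolerance is an illustrative threshold, not a number from our setup:

```javascript
// Nearest-rank percentile: the smallest sample value that covers at
// least p percent of the sorted data.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Flag a release as a regression if its p90 latency exceeds the
// previous release's p90 by more than the tolerance (10% here).
function isRegression(baselineMs, candidateMs, tolerance = 1.1) {
  return percentile(candidateMs, 90) > percentile(baselineMs, 90) * tolerance;
}
```

Tracking the percentile rather than the average matters: a handful of pathological slow events can hide entirely in a mean.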

Errors: Sentry

We send errors to Sentry automatically.  We use this for two things: seeing which errors happen most often so we can prioritize bugs, and learning whether we've introduced new errors when we've released a new version of Nylas Mail.  We can see very quickly if a small handful of users are hitting lots of errors or if errors are spread broadly across a large set of users.  We also look to see if errors are concentrated on a particular platform, release, or type of connected account (Google, Office 365, or IMAP).  One thing to look out for: Sentry can get overloaded with errors very, very quickly if you ship something broken or are too generous with sending errors to Sentry, so beware of going over your limits.
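To stay under those limits, you can gate reporting on the client before anything is sent.  This is a hypothetical sketch, not Sentry's API (Sentry also offers its own server-side rate limits and sampling); the cap and the signature format are illustrative:

```javascript
// Returns a predicate to call before reporting an error: it allows the
// first maxPerKey occurrences of each distinct error signature and
// drops the rest, so one broken release can't flood your quota.
function makeErrorThrottle(maxPerKey = 10) {
  const sent = new Map(); // error signature -> count reported so far
  return function shouldSend(error) {
    const key = `${error.name}:${error.message}`;
    const count = sent.get(key) || 0;
    if (count >= maxPerKey) return false;
    sent.set(key, count + 1);
    return true;
  };
}
```

You'd call the returned `shouldSend(err)` inside your global error handler before forwarding anything to the reporting service.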

User Happiness: Zendesk

Yes, Zendesk.  Lookit, for product managers or anyone building something where the customer is the user: vox emptor, vox dei.  The voice of the customer is the voice of God.  You can look at cohort data, you can ask for CSAT and NPS, and you can dive into logs until the cows come home.  The best way to hear what the customer is experiencing is to hear it in their own voice.  When customers choose to reach out to you, it's your job to listen to every word they say.  If the customer says "the app is slow" but your telemetry shows that your response times are under 200 ms, guess what?  Your app feels slow and it's your job to fix it, period.  We look at every support ticket as an opportunity to understand the problem that the customer has and for us to learn how they tried to use our app to solve it.  It's customer development, right in your inbox.  Zendesk can be slow and the analytics leave something to be desired, but it's a critical part of our product development process.

We've found lots of neat things with our data sources.  Some highlights: it turns out that sending emails is the very best predictor for retention, even though people receive a lot more email than they send.  Perceived performance is way different from measured performance - in both directions.  Electron may be cross-platform, but there's something about Windows users or Windows itself that throws a disproportionate number of pull-your-hair-out bugs.  And even the most non-technical users will jump through hoops to get you logs if you're patient and understanding with them.

The Raw Numbers of a Mediocre Product Hunt Launch

So a month ago, we launched the public beta of Braid on Product Hunt. For a minimum viable product with no external funding, we did pretty well! Over 170 upvotes in the first day, with hundreds of new signups, and small — but real — conversion to paid after the 30 day free trials expired.

In a previous post, I compared Braid’s Product Hunt launch to the 2009 TechCrunch post for the last startup I did.

In this post, I’ll go through all the numbers that we have so that other companies and founders will have a realistic idea of what a Product Hunt launch can — and cannot — do.

The numbers for the first 24 hours:

  • 2,181 total visits to Braid’s homepage (per Product Hunt data)
  • 2,099 unique visits to Braid’s homepage (per Product Hunt data)
  • 1,368 unique visitors to Braid’s homepage (per Google Analytics)

The Product Hunt numbers were generously provided by Ryan Hoover, founder of Product Hunt. Technically, the numbers from Product Hunt’s side are users that clicked on the GET IT button to visit Braid OR visited via Product Hunt’s unique short URL, which shows up as referral traffic on our side.

So the first thing to investigate is the 3:2 discrepancy between the traffic they sent and the traffic we recorded.

After digging, this is probably due to the fact that the traffic Product Hunt sent — even though a couple thousand isn’t a lot — is more than Namecheap’s shared hosting could handle for our marketing site. The Braid app itself runs off a much more robust, and much more expensive, Amazon Web Services instance. Another possible explanation is that the Segment Wordpress plugin we were using was acting up or unable to fire completely, most likely because page loading times were so slow under the additional Product Hunt traffic that people bounced before the Segment Javascript could load. (We weren’t using GA’s code at the time, but we’ve since switched out Segment in favor of Google’s native code as a result.)

But! Even with the slow site, we signed up 130 new users directly attributable to Product Hunt in the first day (about a 10% conversion rate). After a month, Product Hunt continues to send 10–20 uniques to our site per day, of which we continue to get around a 10% conversion rate to free trial customers. (This is defined by people who install the Chrome extension, navigate to Gmail, subsequently choose to create a Braid account, and authorize Braid to access their Google data. That’s a fair number of steps to sign up.)
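The back-of-the-envelope math, for the curious, using the GA visitor count from above:

```javascript
// 130 signups out of 1,368 unique visitors (per Google Analytics)
// works out to roughly 10%.
function conversionRate(signups, visitors) {
  return (signups / visitors) * 100;
}

const phDayOneRate = conversionRate(130, 1368); // ~9.5%, i.e. "about 10%"
```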

Braid has a 30-day free trial period (we don’t do freemium), so it took a month to see what the results really were. Turns out that the majority of people coming from Product Hunt liked Braid, but not enough to pay for it once the free trial ended. (That’s to be expected, of course.) But we were able to convert the 130 users into a license for 10 paid users in one customer account. What’s interesting is that the customer paid for a license for ten users, but only two users inside the organization came from Product Hunt.

We’re going to take the feedback we got from surveying those who didn’t sign up for a paid plan and make our onboarding simpler to get users to the “aha moment” faster. There are also some interesting feature requests we’re investigating to find the true root motivation and create feature hypotheses to hopefully solve those root problems.

So, in sum, Product Hunt is pretty awesome, but it’s not going to radically transform your company’s trajectory in any way. From my perspective, here are the wins and the deltas:

What we did right:

  • The product being good and matching the benefit you choose to hype matters a lot! People understood our core value proposition — that at some point, email doesn’t really scale. But a lot of people don’t really need full-fledged task managers for everything. So something that complements both Gmail as well as, say, Trello is unique and interesting. (In fact, we use Trello to build Braid’s product but use Braid to log and share all important non-technical developments.)
  • Once the Product Hunt listing was live, frantically emailing and Gchatting people about our posting made a big difference in getting us on the front page above the fold, and from there, the Braid listing stayed all day. Meanwhile, posts on other social networks such as Twitter, Facebook, and LinkedIn seemed to do nothing.

What we did wrong:

  • I didn’t queue up folks in advance to tweet about our link, and I didn’t email our list because I thought it was spammy. Maybe these are noble intentions, but we didn’t crack the top five for the day so we didn’t make the daily email that supposedly brings another huge boost. (Algolia for Places in the morning and YouTube Creator in the afternoon took two of the available spots.)
  • I never explicitly asked people to upvote our listing. It’s against the rules, after all. However, I can’t help but think that it could have helped enough to get us into that top five. I’m torn on this one — it was the right thing to do but I also have an obligation to the company to do everything I can to make it successful.
  • I didn’t make sure the site could handle the load. I thought that we’d be able to render just fine with a few thousand visits, but the site slowed to a crawl. It also didn’t help that Braid’s site had lots of opportunity for compressing images and Javascript that we hadn’t taken advantage of beforehand. (Now, it’s probably not worth it, as the site loads reasonably and half our traffic comes directly from the Chrome Store.)

Product Hunt 2016 versus TechCrunch 2009

So it’s been a little over a week since Braid launched our public beta on Product Hunt. (Braid is a Chrome extension that adds simple project “news feeds” inside of Gmail and Google Calendar.) The numbers were good, but not overwhelming. Then I got to wondering how a Product Hunt launch today compared to TechCrunch back when TechCrunch was the place everyone wanted their startup to launch.

As it happens, I have a Google Analytics account that has data going all the way back to 2007. And thankfully, the GA account still has the data from my last startup’s TechCrunch debut.

My first startup was called Dawdle. Dawdle was an online marketplace for video games, systems, and accessories. Mark Hendrickson wrote about Dawdle on President’s Day 2009:

God bless the Wayback Machine

I think we tweeted at Mark to get his attention the week before, but searching for a single tweet is hard, so I can’t be sure how we got that press hit. Anyway, Mark was kind enough to write about Dawdle and post it up on TechCrunch.

Here are the traffic stats from Mark’s TechCrunch post about Dawdle:

Of the 591 total visitors for the week after the post, the vast majority were in the first two days. TechCrunch sent 338 visitors on the Monday, and it had decent staying power for the next day with 188 visitors. The new users number you see in the screenshot is the number of visits from people who hadn’t been to the site before, not new registrations. We actually didn’t even track new registrations — they weren’t a KPI. We only tracked listings and purchases. As you can see, no one bought anything from the TechCrunch hit. (Google Shopping was, by far, our best traffic.) So, again, great backlink, not a ton of traffic, no sales.

Braid did pretty well with our Product Hunt launch, sticking on the homepage for most of the day. We botched it in a couple of ways: we only had one image and didn’t have a super user post it for us but we did a couple of smart things: we had a good offer (free lifetime accounts) and we tagged it with all the relevant categories.

Here are the traffic stats from Product Hunt:

Referral traffic from the domain, plus “Direct” traffic with the slug that let us know it was from Product Hunt

As with Dawdle’s TechCrunch post, the second day’s traffic held up pretty well. The goal conversion rate is wrong: we got about 130 new accounts from Product Hunt, way more than the 24 new accounts implied by the 2.10% conversion percentage Google Analytics displays. (It’s hard to measure exactly because some of the new accounts are from when a Braid user invites someone to a new or existing Braid project.) The number of visitors may also be artificially low: the marketing site began running slower than usual, so GA’s Javascript may not have loaded for everyone before they bounced.

We’re still in the free trial period, so it’s hard to see how many will convert to paying customers at the end of the 30 day free trial. But what’s super interesting is how few people took advantage of the free Braid account for life offer I posted in my introduction: only 25 of the 130 accounts we attribute to Product Hunt wrote in to request their free account. (I’ll still honor it, by the way.)

Final numbers:

TechCrunch 2009: 591 visitors with unknown new users and zero revenue
Product Hunt 2016: 1,368 visitors with 130 new users and unknown revenue

So Product Hunt is 2x as good as TechCrunch was back in its heyday.

So while we didn’t get the tens of thousands of new visitors and thousands of new users that people hope for with a Product Hunt launch, the numbers are still way better than they were when you had to pray that TechCrunch would find your startup worthy enough to write about. So, thanks Ryan, for making launch day a little more democratic and more productive than the days gone by.

Introducing Braid

One of the hard things about doing a startup is finding problem/founder fit. Since successful startups take forever, it’s really important for founders to go after a problem that they want to keep solving. If you’re going to end up working on something for a decade or more, you better know the problem really well and be highly motivated to solve it.

Some people are lucky — they have lots and lots of ideas for lots and lots of problems that they want to solve. They keep notebooks in their bags, by their bedsides, and in their glove compartments so they can write down ideas any time inspiration strikes. (Or, at least they did before iPhones.)

Anyway, I’m not so lucky. I’ve only had one idea I’ve been excited about for the last half-decade.

As I moved up in my career and moved from company to company, I found that I spent more of my time doing administrative work and less of my time doing the job I was hired to do. Building and problem solving gave way to tracking things down and filling out forms. I stopped being excited about my work and started dreading the artificial deadlines and TPS reports required to keep our teams in some semblance of cohesion.

For better or for worse, we define ourselves by the jobs we have. But when we go to work and we can’t even do the jobs we’ve been hired to do, it’s just a total waste. Handcuffed by tools chosen by IT and policies imposed to solve a problem one person had 15 years ago, we become exhausted and disillusioned with our jobs.

For me, at my recent jobs before starting this company, I became disillusioned and exhausted by the constant barrage of unnecessary fire drills and meetings that piled up on the calendar. Instead of being able to talk to customers and mock up wireframes, I spent most of my day making PowerPoints and writing long updates that no one ever read. People were less concerned about helping projects move along and more concerned about making sure they could blame someone else for the inevitable delay. It’s what I call the “cold comfort of soft numbers”.

No one is excited to come home and say “I went to four status update meetings today!”. No amount of money or private offices can replace the joy that comes from doing a job well. What we want as employees at an organization is pride of ownership — to be proud of the thing we built, the team presentation we delivered that impressed the client, or finally solving that problem that's been bedeviling the team for the past three weeks.

I wanted to build a tool that just let me — and my friends — get back to work and work better together.

Braid is designed to let teams work better together—by taking advantage of the work that we’re already doing. We built Braid to be a light layer on top of Gmail and Google Calendar, targeting those organizations that live in their inboxes.

Braid is a collection of simple news feeds that are available where you need them — in your Inbox, and out of your way when you don’t. It’s simple to add emails to a feed with the click of a button — no more copying and pasting from an email into a wiki or a task list. Add events or jot a quick note as well. Upload files so that they’re all in one place. Tying together all the important things we do in one cohesive place is what we’re about.

The motivation behind Braid is to build a tool that respects people and respects their existing workflows. We believe that Braid will help you get better results for your projects. Braid is a quiet enabler instead of a chore. Braid is a tool that enables people to get credit for what they’ve done when they’ve done it. And the bonus is that managers and clients can feel secure seeing the progress of a project in real time.

Braid is built to be a single, simple source of truth that everyone can actually trust. So there’s more time to get things done and everyone is pulling together.

I hope you’ll try Braid and see if it helps you and your team free up more time for the things that truly matter. If you want to read more about Braid the product, check out the launch post. And please reach out with your ideas to make it even better. Thanks for reading.

Please Shut Up About Growth Hacking.

(This post originally appeared on my Svbtle site.)

All of a sudden, growth hacking is the new Carly Rae. I’m so completely bored with this boomlet because growth hacking isn’t anything special.

Point one: growth hacking isn’t new.

Let’s posit that growth hacking is finding channels (sometimes new) that work, then jamming everything you can at them until they stop working. Essentially it’s getting in front of Andrew Chen’s law of shitty clickthroughs. The newest new thing I know here is Gchat spam. You find an inefficiency or underexploited channel, you rush to get there before your competitors do, they flood in, the channel gets bid up, the ROI dries up, and you go on to the next thing. That’s just arbitrage. Just because you’re doing it online doesn’t make it new.

Furthermore, you know who has been doing this for more than a decade? Affiliate marketers. Folks like Jeremy Shoemaker and Zac Johnson have been building landing pages, writing reams of copy, testing ad networks/formats/placements/CTAs/whatever since they were 15. You want to hire a growth hacker? Go to Affiliate Summit and hire a speaker.

Point two: growth hacking doesn’t work if your product sucks.

PayPal paid both sides $5 for signing up, but Yahoo PayDirect paid $10. Everyone else gives away more free space than Dropbox. Friendster started getting really aggressive with email once people were jumping to MySpace and Facebook. Dropbox’s videos and Mint’s blog posts shot up Digg for free, launching a million paid submissions, but no one remembers who did the paying. The original NCAA game on Facebook led to Zynga on Facebook, and then everyone else’s Facebook spam, none of which worked as well as Zynga’s did. Everything autoposts to Twitter. And there are a million untold stories for products that nobody’s ever heard of. Because they sucked.

Plus, there’s no evidence that these dedicated growth hackers - magical unicorns who can market and code (designers who can code, you had your 15 minutes) - can reliably identify and exploit new channels over and over again. A supposed professional growth hacker may be lucky to find one great new channel, but finding three? That’s a James Simons-level of consistency. If someone really can do that, go forth, find her, and pay her the equivalent of 5 and 44.

Point three: growth hacking accelerates traction; growth hacking does not create traction.

Let’s say you find a magical new channel that’s cheap (or free!) to exploit. Then you shove a bunch of users at it. Then they sign up. Then they leave because your product sucks. You know what that is? A best case scenario because you’ve managed to snooker some investors into giving you cash before the bottom falls out. More likely, you won’t find a magical new channel and you’re going to have to rely on people organically sharing your product with the world. It’s called word of mouth. It’s been around for about 5,000 years. (Yeah, even before Delicious bookmarks and Facebook Likes and Tweet buttons and Pinterest pins.) Word of mouth still works pretty well for products and services that people like. And it’s free. Once you see enough word of mouth, or an organic hockey stick, or a 40% “very disappointed” survey result, or a 50%+ DAU/MAU number, or whatever else the metric flavor of the month is, then you have traction, and then you can think about hacking some additional growth to accelerate your organic traction. Doing it before then is dumb. That’s because growth hacking is just about getting people into the beginning of the funnel, not about making them happy customers.
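For reference, the DAU/MAU number mentioned above is just daily actives divided by monthly actives; a 50%+ value means the average monthly user shows up most days:

```javascript
// DAU/MAU "stickiness": what fraction of your monthly active users
// show up on a given day. 0.5+ is the bar cited above.
function dauMauRatio(dailyActives, monthlyActives) {
  return monthlyActives === 0 ? 0 : dailyActives / monthlyActives;
}
```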

Lookit, a product manager should be doing inbound and outbound messaging. If the product owner/manager/CEO/whatever doesn’t know when to market, who to market to, how to market to them, and what to say, then your product probably isn’t ready to be marketed. Dedicating precious engineering time to marketing experiments prior to sustained and repeatable traction is a waste of everyone’s time and money.

Frankly, I think that all this talk about growth hacking is the result of the Valley’s ridiculous inability to recognize and discount survivorship bias. Sure, I’m as happy as anyone to recognize and celebrate the success stories and buy successful people beer to gain some wisdom. But taking their stories as prescriptive is learning the wrong lesson. So stop worrying about growth hacking and get back to work building something useful.

PCI Compliance In The Cloud: This Changes Everything

Amazon announced today that they have PCI compliance certification for the entire AWS stack: EC2, S3, EBS, and VPC.  This is huge.  Almost every startup these days uses some cloud hosting provider, and until now, it's been literally impossible to be PCI compliant in the cloud.

To be PCI compliant, the website owner needs to be able to provide physical access to the servers in the event of a credit card breach.  If you're running in the cloud, you can't provide physical access to an auditor, since you have no way of gaining physical access yourself.

The merchant account providers don't know the first thing about hosting, and they don't really care.  And no startup I know has moved to bare metal and hired a sysadmin just to follow the letter of the standard.  Startups have better things to do than subject themselves to an audit by Trustwave or some other QSA that certifies PCI compliance.  (See, you thought you were PCI compliant, but you're not.)  But if some companies now magically have PCI compliant clouds, then they have every incentive to rat out their non-compliant competitors to Visa/Mastercard, effectively shutting down those competitors' ability to process credit cards.

Given that this is a rather large deal, I've asked Amazon PR to send me a copy of the QSA report.  Just to prove to yourself that this is a big deal, look at the list of PCI compliant providers: (PDF link)  Amazon is like that Sesame Street song: one of these things is not like the others.

One of the things I want to figure out is whether Engine Yard's EC2 instances are de facto included in the PCI compliance report, or if only customers contracting directly with Amazon are covered.  Hopefully, the QSA report answers this, but if not, I'll try to run this down with Amazon and Engine Yard.

Upshot: my best guess is that this raises the table stakes for every other cloud provider out there, and if you're in a competitive market and you're not on AWS, your competitors can attempt to report you for PCI violations.  You should ask your cloud provider when they'll provide PCI compliance, and if they can't give you a roadmap, you should investigate moving to Amazon.  

Ten Steps To Ten Thousand Sign Ups Before We Even Launch Our Startup

Blueleaf is building the world’s first truly personal financial planning platform.  It’s a complex product with a lot of features to address all the different problems that our customer discovery has shown people want solved.  Here are ten concrete things we’re doing to get 10,000 sign ups before we even launch our product.

1. Making Potential Users Ask Publicly To Get In

There are two ways to get into Blueleaf: get an invite from us, or get an invite from a current user.  We let in a limited number of users every week, and we choose who we let in based on how much they want in.  Only people who request an invite and follow us on Twitter, Like us on Facebook, or otherwise make it clear that they want in, get in.  This means they have to sign up with us and tell everyone else about us before they can get in.

2. The World’s Strictest Invite System

The other way to get in is to get referred in by a current user.  We’ve developed our own twist on refer-a-friend: rather than limit the number of invites someone has, there are an unlimited number of invites – but we’ve limited the time frames to invite a friend.  We turn on the ability to invite a friend for just an hour or two so that people who want in have to beg everyone they know who’s on it to let them in before the window closes.  When we announce that invites are on, potential users need to plead for invites publicly on Twitter and Facebook, privately over IM and e-mail, or however else they can reach current users.  Again, this kicks off a viral loop before the app has even launched.
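As a rough sketch, the time-window mechanic boils down to a single check.  (The dates and durations here are hypothetical, not our actual schedule.)

```python
from datetime import datetime, timedelta

# Hypothetical invite windows announced by the team: (start, duration) pairs.
WINDOWS = [
    (datetime(2010, 8, 6, 17, 0), timedelta(hours=2)),
    (datetime(2010, 8, 13, 12, 0), timedelta(hours=1)),
]

def invites_open(now, windows=WINDOWS):
    """Return True if a friend-invite window is currently open."""
    return any(start <= now < start + duration for start, duration in windows)

# During a window, current users can send unlimited invites;
# outside it, the invite button simply isn't there.
print(invites_open(datetime(2010, 8, 6, 18, 30)))  # inside the first window
print(invites_open(datetime(2010, 8, 7, 9, 0)))    # no window open
```

The scarcity comes entirely from the clock, not from rationed invite counts.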

3. Splitting Up Blog And Library Content

We write – a lot.  We publish something new almost every day.  However, most of our new content isn’t on our blog – it’s in Blueleaf's library of articles about financial planning.  It makes no sense to hide an article about bond funds and bond ETFs on a blog, where it’ll be impossible to find in a month.  We’ve segregated our evergreen content so that someone looking for that topic five years after we publish it can find it easily.  Plus, we can easily update and change this content as laws, tax policy, and the investment environment change without confusing new or returning visitors.

4. Having A Strong Point Of View

When we write content, we rarely do six of one, half a dozen of the other.  When we write about Roth versus Traditional IRAs, we don’t do pros and cons.  We say that if you have to ask the question, you should open a Roth IRA.  The vast majority of Americans qualify for a Roth.  They should open a Roth.  For the people who should open a Traditional IRA, we go through the reasons when it would make sense – but we don’t do false equivalence.  On our opinionated articles, Blueleaf’s position is front and center, before the rest of the content, to deliver as much value as we can in as few seconds as possible. (BTW, we do absolutely no SEO analysis or keyword stuffing.  That stuff is lame.  We write answers to the questions people ask us, simple as that.)

5. We Don’t Blog About Our Product

The Blueleaf blog has three components: Blueleaf Stories, Blueleaf Thoughts, and (to come) Blueleaf Tech.  Our Stories are the Blueleaf staff writing about their own personal financial experiences so that we can make it clear that we’re making a product that we’re very emotionally invested in.  Over time, we’ll ask our users to contribute their own.  Blueleaf Thoughts are about things we find interesting that we think will also be interesting to our audience.  We want to provide value for everyone who is saving for their financial futures, whether or not they’re interested in our product right now.  If we have interesting stuff, we’ll get links and traffic.  That’s it.

6. Link Roundups For Users And Relationships

The other thing we do on our blog is post twenty links every Friday afternoon, made up of the four handpicked links we tweet out each weekday.  Not everyone follows @blueleafcom on Twitter, so it’s a nice convenient place to have some leisurely reading over the weekend.  However, the really clever thing about link roundups is that they provide Trackbacks to the financial bloggers we want to establish relationships with.  Bloggers click through on their Trackbacks, see the Blueleaf site, and get interested in the product.  This makes it much easier to develop relationships and approach them for coverage as we get closer to launch.

7. Using Facebook And Twitter Very Differently

Instead of using our blog to announce new features, we use Facebook Notes.  Also, Facebook is the only place to get a feed of every new piece of content we post when we post it (there’s no RSS feed for our Article content).  Facebook is Blueleaf's broadcast medium for people who want to keep up-to-the-minute with what Blueleaf is up to.  We use Twitter to share interesting links and we’ll use it as a cornerstone of our customer service as we add additional users.  We’re so big on sharing links on Twitter that we have two accounts for it – @blueleafcom and @BlueleafLinks.  The only thing we do on both?  We’re diligent about responding to anyone who comments on Facebook or @replies to us on Twitter.  

8. A/B Testing The Hell Out Of Everything

We A/B test a lot – we’ve tested calls to action, button colors, button size, headlines, images, and a litany of other things using Visual Website Optimizer and Performable.  But we make sure we don’t end up in local maxima.  We’ve A/B tested radical redesigns of our homepage, and the current page is the one that beat the pants off our old one.  We do this A/B testing so that we can maximize our conversion, whether the traffic to our site is a trickle or a flood.  We’re ready to catch all the fish in the sea that just happen to be swimming by.
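For the curious, the math behind declaring a test winner is roughly a two-proportion z-test.  This is a sketch with made-up visitor numbers, not our actual data – tools like Visual Website Optimizer and Performable do this calculation for you.

```python
import math

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: old homepage converted 120 of 2,400 visitors;
# a radical redesign converted 186 of 2,400.
z, p = conversion_z_test(120, 2400, 186, 2400)
print("z = %.2f, p = %.4f" % (z, p))
```

A small p-value (conventionally below 0.05) is what lets you ship the redesign with a straight face instead of chasing noise.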

9. Being Smart About Paid User Acquisition

We’ve tested all sorts of paid acquisition channels – everything from Google and Yahoo to reddit, Facebook, and directory sites.   We’ve been able to drive our Customer Acquisition Cost (CAC) down to a very reasonable amount – and that’s not the blended CAC between paid and free: our CAC on paid alone is very, very attractive.  Even though I’m the guy who called AdWords “a gateway drug to unprofitable user acquisition”, being smart with small paid tests that get your CAC down to a reasonable number makes potential investors all hot and bothered.  Proof that you can get affordable paid acquisition makes fundraising much, much easier.
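To be concrete about the paid-only versus blended distinction, here’s the arithmetic with purely illustrative numbers (not our actual spend):

```python
# Illustrative per-channel numbers -- not Blueleaf's actual spend.
channels = {
    "adwords":  {"spend": 500.00, "signups": 250},
    "facebook": {"spend": 300.00, "signups": 100},
    "reddit":   {"spend": 120.00, "signups": 80},
}

# Per-channel CAC tells you which small paid tests are worth scaling.
for name, c in channels.items():
    print("%s CAC: $%.2f" % (name, c["spend"] / c["signups"]))

# Paid-only CAC pools all paid spend over paid signups only -- distinct
# from blended CAC, which would divide by paid AND free signups together.
total_spend = sum(c["spend"] for c in channels.values())
total_signups = sum(c["signups"] for c in channels.values())
print("paid-only CAC: $%.2f" % (total_spend / total_signups))
```

Blended CAC always looks flattering because free signups pad the denominator; paid-only CAC is the number that proves the channel itself works.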

10. Knowing Just Who The Hell Is Signing Up

We run every single e-mail we get through Flowtown so we know a bit about who’s coming in through the open door.  Doing this, we can stagger who we let in when and we can have better content for our e-mail newsletters to those on the waiting list.  We reach out to people we’ve identified as potential influencers and we make sure that we don’t let in people who are potential investors until the time is right.  (Join the club, folks – not even our current investors have access to Blueleaf.)  Letting in the right people at the right time means that we prime the pump to yield water only when we’re thirsty for new sign ups.

Be Careful About Outsourcing Your Customer Acquisition

One of the great things about services like AdWords, Wufoo, Mailchimp, Flowtown, Performable, and so on is that they allow the marketing side of a startup to work independently of the engineering side of the startup.  Whip out your credit card and you can have a source of traffic, lead generation forms, e-mail marketing campaigns, social media insights, A/B testing of your landing page, and all that great stuff.

The problem is that even as these tools have begun to play better together (thanks to the magic of Webhooks), your data remains somewhat trapped inside each of them.  It can be hard to audit your data and even harder to integrate this data into your core application.  You end up going to each service independently to answer basic questions such as "when did this user give us their e-mail address?" and "just where did we get this e-mail address from?"  And if you want to answer more complex queries such as "just what is our overall (blended) customer acquisition cost (CAC)?", you have to export your data into CSVs and manually massage them together, which can be a very painful process indeed.
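To give a taste of the CSV-massaging involved, here’s a sketch assuming hypothetical AdWords and Facebook exports – note that each service hands you a slightly different schema, so you have to normalize column names before you can combine anything:

```python
import csv
import io

# Hypothetical exports -- inlined here as strings; in practice you'd
# open the downloaded files.  Note the mismatched column names.
adwords_csv = "date,cost\n2010-08-01,120.50\n2010-08-02,98.00\n"
facebook_csv = "day,spend_usd\n2010-08-01,45.00\n2010-08-02,60.00\n"
signups_csv = ("signup_date,email\n"
               "2010-08-01,a@example.com\n"
               "2010-08-01,b@example.com\n"
               "2010-08-02,c@example.com\n")

# Normalize each export down to a single number: total spend.
spend = sum(float(r["cost"]) for r in csv.DictReader(io.StringIO(adwords_csv)))
spend += sum(float(r["spend_usd"]) for r in csv.DictReader(io.StringIO(facebook_csv)))
signups = sum(1 for _ in csv.DictReader(io.StringIO(signups_csv)))

# Blended CAC: all spend divided by all signups, paid and free alike.
print("blended CAC: $%.2f" % (spend / signups))  # -> blended CAC: $107.83
```

Two files and one metric is already fiddly; multiply by a half-dozen services and a year of exports and "very painful process indeed" starts to feel like an understatement.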

The bigger issue comes when you try to integrate this information with your main app so you can do cohort analysis.  Cohort analysis is the magic analysis that tells you how you're doing on your key retention and engagement metrics over time: "did our June signups come back to the site more often than our April signups did?"  Importing user acquisition dates into your database of existing users (once you've sent them an invite to your app) is very time-consuming because you need to make sure you don't break anything, and there are a large number of edge cases to handle: users who gave you their address more than once, users who changed their e-mail address inside the app from the one they gave you on your lead gen form, and users who invited other users.
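Here's a minimal sketch of just the de-duplication step, assuming a hypothetical lead-gen export and a hand-maintained map of changed addresses:

```python
# Hypothetical lead-gen export: the same person can appear more than
# once, and some users later changed their e-mail inside the app.
leads = [
    {"email": "ann@example.com", "captured": "2010-04-02"},
    {"email": "ann@example.com", "captured": "2010-06-15"},  # duplicate signup
    {"email": "bob@old.example.com", "captured": "2010-05-01"},
]

# Map old addresses to the current in-app address (assumed maintained by hand).
email_changes = {"bob@old.example.com": "bob@new.example.com"}

acquired = {}
for lead in leads:
    email = email_changes.get(lead["email"], lead["email"])
    # Keep the earliest capture date per user -- that's the true
    # acquisition date for cohort purposes.  ISO dates compare as strings.
    if email not in acquired or lead["captured"] < acquired[email]:
        acquired[email] = lead["captured"]

print(acquired)
# -> {'ann@example.com': '2010-04-02', 'bob@new.example.com': '2010-05-01'}
```

And this handles only two of the edge cases; invited-by-another-user chains need their own pass, which is why the import is never the afternoon job it first looks like.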

When you're a pre-launch or early startup, you don't have a lot of data points to deal with.  This is good because it means that you can, with effort, use Excel or some other tool to tie together all these different repositories of data.  But it's also bad because you may not think to have systems in place to more elegantly handle measuring your AARRR metrics post-launch.  You'll be so busy putting out fires and doing PR that you may end up neglecting instrumentation.  

While you may not have to do everything in-house, it's critical to have an engineer on your team who's involved in business/marketing support from time to time.  Defining and implementing integration and migration paths (a lot of these services *do* have APIs, for example, but those aren't something the mouthbreathing "business guys" can do anything with) before you launch is essential to making sure you can calculate and measure the things you'll need to calculate and measure as you scale your business.
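One shape that integration path can take is a sketch like the following: pull leads from each service's API (stubbed out here, since the real calls depend on which services you use) and land them in a single table you own, so blended metrics and cohort queries become plain SQL.

```python
import sqlite3

def fetch_leads_from_service():
    """Stand-in for a real API call to a lead-gen or e-mail service.
    Returns (email, captured_date, source) tuples."""
    return [("a@example.com", "2010-08-01", "adwords"),
            ("b@example.com", "2010-08-02", "facebook")]

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE leads
              (email TEXT PRIMARY KEY, captured TEXT, source TEXT)""")

# PRIMARY KEY plus INSERT OR IGNORE makes re-running the import
# idempotent: duplicate addresses keep their first-seen row.
db.executemany("INSERT OR IGNORE INTO leads VALUES (?, ?, ?)",
               fetch_leads_from_service())

for row in db.execute("SELECT email, source FROM leads ORDER BY captured"):
    print(row)
```

The point isn't this particular schema – it's that once the data lives in something you control, "when did we get this address, and from where?" stops being a question you have to ask five different dashboards.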