Author Archives: Casey Winters

Three Mistakes Integrating Growth and Design Teams and How to Address Them

January 23rd, 2017

While I encourage growth teams to start with a dedicated designer, that doesn’t usually happen. Usually, growth teams scale with just engineering and product, and once they reach a certain size, they eventually “earn” a dedicated designer from the organization. This news makes growth teams extremely happy for a short period of time, but it pretty quickly starts to create problems and culture clash. I’ll talk about what happens, and some ways I’ve found to solve these issues. Note: you can replace growth with marketing, and just about the same thing happens in these scenarios.


When you hear you’re getting a dedicated designer, you’re happier than Jim Carrey was back when he had a career.

Problem #1: Team Control AKA The Two Scenarios
One of two scenarios usually emerges when designers join a growth team. In the first scenario, the engineers or PM or marketing person starts with, “I’m so glad you’re here. I need this done and this done and this done for an experiment…tomorrow. No, we don’t need research to understand the problem. Don’t worry. I’ll only ship it if the metrics increase.” The designer realizes the growth team didn’t want a designer. They wanted a pixel monkey to just do what they say, not ever use their brain.


I got 99 problems, but a designer using their brain ain’t one.

The other scenario is just as bad. I call this one the designer “moving in”. I liken it to when a significant other moves into your place and brings far more stuff than you own. “Don’t worry, I got better furniture and books, and, oh, we’re going to have to get rid of those curtains.”

The design version of this is something like “I’m going to completely rethink all our strategies. I need to spend three months just doing research and then at least double that for design concepts. That home page really needs to change though.” Engineering thinks “this isn’t what we need. These people don’t even know what they’re talking about.” Engineering gets kicked out of the process of figuring out what to work on, and since the design process has no deadlines on it, engineering begins to have nothing to work on.


I tried to find a picture of people dressed in all black moving in, but the best I could do was chambray.

In both scenarios, instead of designers, engineers, product managers, and marketers trying to unify to form one team that leverages all of their strengths, one team tries to dominate the direction. What we did at Pinterest to try to solve this problem was design a process where the teams, specifically design and engineering, are jointly responsible for problem definition and solutions. Here’s what it looks like:

Project Kickoff:
Attendees: PM (leader), engineering lead, design lead, engineer and designer that will be working on the project

Goals:

  • Define the problem we’re trying to solve
  • Surface any existing ideas on how we might solve it
  • Review any previous attempts to solve the problem
  • Determine what metrics we need to inform potential solutions
  • Agree on how we’ll tell if we’re successful

Output:

  • Notes emailed to broader sub-team
  • Slack channel created just for the project, which includes notes and all future communication for the project

Project Brainstorming:
Attendees: engineer and designer on the project, PM (optional so as to prevent their calendar from being a bottleneck)

Goals:

  • Produce multiple potential solutions to the defined problem
  • Prepare those for feedback from attendees of kickoff
  • Cover any additional measurement needed for the directions that have been chosen, e.g. how many people click the top-right login button

Brainstorm Review:
Attendees: PM, engineering lead, design lead, engineer and designer working on the project (leaders)

Goals:

  • Feedback on concepts from brainstorm
  • Choose one or two directions for experiment

Experiment Launch:
Attendees: engineer and designer working on the project (leaders), PM

Output:

  • Experiment doc that team agrees reflects what we’re testing and why
  • QA and approval over Slack from every team member that the experiment is ready to launch to 1%
  • Note: ramp-ups of the experiment are communicated in the Slack channel

Experiment Review:
Attendees: PM (facilitator), engineering lead, design lead, engineer and designer that worked on the project (leaders)

Goals:

  • Determine if you gathered data on key questions you needed to answer for the experiment
  • If yes, what does the data say?
  • If not, how do we get the data?

Output:

  • Ship/kill/iterate decision
  • Emailed notes to the entire team of what happened and why
  • Updated experiment doc on what happened
  • Updated Slack channel
  • Designer brings ship experiment candidates to design manager/lead for any feedback before shipping

Problem #2: Design Best Practices
Designers bring with them a history of best practices from working on other design teams and from their schooling and training. It can be a culture shock for a designer to join a growth team and see few of these being followed. Designers will usually respond to this environment by trying to educate the team on the many things the growth team manages that are designed “wrong” and how they need to be changed. These recommendations based on best practices are usually rejected by engineers on the team because they contradict what the engineers have seen in the data.


A growth engineer after he hears about a marketing or design best practice.

Design best practices were created with good purpose, but they quickly become inferior to AB testing in an environment where a direction can be put in front of users and its value quantitatively determined in days or weeks. This doesn’t mean you should AB test every design decision, but it does mean a designer can’t put their foot down by saying something is a best practice. In growth, the users being tested on determine what’s best.

Another issue with applying design best practices to growth is that they are usually suited entirely to building user value. The growth team in many cases needs to trade off user value and business value, or short term user value vs. long term user value. For example, someone can come to my site, and I can offer a great experience. The user then leaves and never comes back. Growth might decide to let someone get a preview of that experience, then ask the user to sign up so the site can personalize and deliver more value. The result is that some people will not sign up (creating a worse user experience) and some will sign up, allowing the site to create a better user experience that day and over the long term. This accrues value for the business as well.

Designers usually feel threatened by this environment initially. They see testing as a threat to their expertise. What you have to do is teach them to use it as a tool to get closer to user feedback at scale and be more efficient with their time. Most design work is wasted because it is spent on a direction that later proves to be flawed. With AB testing, a designer can get quick feedback on an idea, validate the direction, then spend the time making it amazing once they know it’s worth that effort.

That said, while this user feedback at scale is good at showing what users do, it’s not great at explaining why. So, along with spending time getting designers comfortable with testing, growth teams need to start doing more regular qualitative research as well. Designers will usually volunteer to do it themselves if there isn’t the right person to staff for it. Some PMs are comfortable doing it as well. Engineers and engineering managers can be resistant to spending time watching the sessions, so the first couple you schedule have to be on critical areas where you feel the team is missing some understanding by only looking at the metrics.

Once designers get a little more seasoned on growth, you should also work with them to create new best practices for growth that make the design and engineering process move faster. These will look very different from what designers recommended at the beginning, and they will be backed by data. Along with this process, I think it’s important to start creating user experience goals for your growth team at this time. These will be components that may not affect the metrics, but ensure a quality and consistent user experience. At Pinterest, we made compliance with the design spec a requirement to ship after an experiment is validated, and said our top five flows had to be completely audited for quality user experience. This is a worthy compromise with design to show you actually care about the users and not just the business, a common complaint.

Problem #3: Growth Onboarding
Sometimes, you want to make sure you just nail the basics. Most growth designers are joining a new team in a discipline they’ve never worked in before, yet they don’t receive any onboarding on what growth is and why it’s different. This is also an issue with other new growth team members. It’s like being dropped into a foreign country, not knowing the language, having no one help you, and being expected to contribute to the culture. Like France (okay, I’ve actually been to France, and the people were surprisingly welcoming even though they’re not known for being so). What happens is designers get frustrated and churn from the team.

It’s critical that every new team member, especially designers, go through a growth team onboarding process. The first thing you do is state the purpose for your growth team, why it’s different from other product teams, and why it’s important. At Pinterest, I would say that while other product teams create value or improve the value of the product, growth teams focus on connecting people to the current value of the product and reducing the friction that prevents that connection. This is important because it’s not enough to build a good product. If no one knows about it, it will die. If it’s too hard to uncover its value, it will also die.

What we did at Pinterest was create a history of major growth projects that were successful, walk through them, and explain why they were successful. That led into some of our team principles. We also spent a lot of time educating new people on the metrics we use and why. People can’t be expected to design winning experiences if they don’t understand the major criteria for success the team will be using.

After onboarding, instead of starting designers on large, complicated projects, it’s important to start them on a well scoped, smaller project, preferably with an experienced PM and engineer, hopefully patient ones. These should still be projects that are worth doing, but not large projects. We had a designer start on growth at Pinterest, and we immediately put her on one of the most strategic, long term investments for the team. While she did great work there, after a few months, she did a smaller side project redesigning our mobile web home page. The conversion rate increased, and we shipped it. She said, “Guys, this is the first time my design increased the metrics!” She was beaming. You want to get new growth designers to that moment as soon as possible.

Lastly, once onboarded, you want designers contributing growth ideas as soon as possible. I like the idea of forcing people to bring one new idea to the team per week. I believe this is a muscle that needs to be developed via practice. New entrants to the team (design or otherwise) will typically propose bad ideas for a while. That’s okay. The trick is to provide a framework for generating ideas that makes new team members think about the elements that typically make for good growth ideas, to give feedback on the ideas submitted, and to have the ideas submitted as either metrics wins or user experience wins. The Pinterest template focused on:

  • How many people would see this experience if it were built? This is usually by far the most important criterion for a successful growth idea. Any experience has to be seen by a lot of people to have a big possible impact on the business.
  • Has the company tried something in this area before? If so, what were the results? This helps us make sure we use any previous learnings and avoid making the same mistakes. Just because we tried something before doesn’t mean it’s a bad idea now, though, as growth environments change frequently.
  • How much effort is required for this idea? On growth, we try to do the least amount of work needed to validate that an idea is worth pursuing.

There is no reason designers can’t be key contributors to a growth team, but expecting it to happen automatically is usually a recipe for failure. I hope some of these tips can help you create a thriving cross-functional growth team with all the right disciplines involved. If you’ve had any other issues integrating design into your team, I’d love to hear about them in the comments.

Currently listening to Sorry I Make You Lush by Wagon Christ.

Don’t Become a Victim of One Key Metric

October 12th, 2016

The One Key Metric, North Star Metric, or One Metric That Matters has become standard operating procedure in startups as a way to manage a growing business. Pick a metric that correlates the most with success, and make sure it is an activity metric, not a vanity metric. In principle, this solves a lot of problems. It has people chasing problems that affect user engagement instead of top line metrics that just look nice for the business. I have seen it abused multiple times though, and I’ll point to a few examples of how it can go wrong.

Let’s start with Pinterest. Pinterest is a complicated ecosystem. It involves content creators (the people who make the content we link to), content curators (the people who bring the content into Pinterest), and content consumers (the people who view and save that content). Similar to a marketplace, all of these have to work in concert to create a strong product. If no new content comes in, there are fewer new things to save or consume, leading to a less engaging experience. Pinterest has tried various times over the years to optimize this complex ecosystem using one key metric. At first, it was MAUs. Then it became clear that the company could be optimizing for marginal usage instead of for very engaged users. So, the company then thought about what metric really showed a person got value out of what Pinterest showed them. This led to the creation of the WARC, a weekly active repinner or clicker. A repin is a save of content already on Pinterest. A click is a clickthrough to the source of the content from Pinterest. Both indicate Pinterest showed you something interesting. Requiring a weekly action made it impossible to optimize for marginal activity.

There are two issues at play here. The first is the combination of two actions: a repin and a click. This creates what our head of product calls false rigor. You can run an experiment that increases WARCs but actually trades off repins for clicks, or vice versa, and not even realize it because the combined metric increased. Take that to the extreme, and the algorithm optimizes for clickbait images instead of really interesting content, and the metrics make it appear that engagement is increasing. It might be, but it is an empty-calorie form of engagement that will affect real engagement very negatively over the long term.
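
To make the false rigor concrete, here is a quick sketch with invented numbers (not real Pinterest data), treating repinners and clickers as disjoint groups so the combined count is simply their sum:

```python
# Hypothetical illustration: a combined metric can go up while one of its
# components quietly degrades. Numbers are invented, not Pinterest data.
control = {"weekly_repinners": 1_000_000, "weekly_clickers": 400_000}
variant = {"weekly_repinners": 900_000, "weekly_clickers": 560_000}  # clickbait-heavier ranking

def combined_metric(cohort):
    # Simplification: treat repinners and clickers as disjoint groups, so the
    # combined "repinner or clicker" count is just the sum of the two.
    return cohort["weekly_repinners"] + cohort["weekly_clickers"]

print(combined_metric(control))  # 1,400,000
print(combined_metric(variant))  # 1,460,000 -> the rollup says the experiment "won"
print(variant["weekly_repinners"] - control["weekly_repinners"])  # -100,000 repinners, hidden by the rollup
```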

The second issue is how it ignores the supply side of the network entirely. No team wants to spend time on increasing unique content or surfacing new content more often when there is tried and true content that we know drives clicks and repins. This will cause content recycling and stale content for a service that wants to provide new ideas. Obviously, Pinterest doesn’t use WARCs anymore as its one key metric, but the search for one key metric at all for a complex ecosystem like Pinterest over-simplifies how the ecosystem works and prevents anyone from focusing on understanding the different elements of that ecosystem. You want the opposite to be true. You want everyone focused on understanding how different elements work together in this ecosystem. The one key metric can make you think that is not important.

Another example is Grubhub vs. Seamless. These were very similar businesses with different key metrics. Grubhub never subscribed entirely to the one key metric philosophy, so we always looked at quite a few metrics to analyze the health of the business. But if we were forced to boil it down to one, it would be revenue. Seamless used gross merchandise volume. On the surface, these two appear to be the same. If you break the metrics down though, you’ll notice one difference, and it had a profound impact on how the businesses ran.

One way to think of it is that revenue is a subset of GMV, and therefore GMV is a better metric to focus on. Another way to think of it is the reverse. Revenue equals GMV multiplied by the marketplace’s commission rate. So, what did they do differently because of this difference? Well, Seamless optimized for orders and order size, as that increased GMV. Grubhub optimized for orders, order size, and average commission rate. So, while Seamless would show restaurants in alphabetical order in their search results, Grubhub sorted restaurants by the average commission we made from their orders. Later on, Grubhub had the opportunity to rank on a restaurant’s average commission along with its conversion rate, to maximize both the probability that an order would happen and the commission for the business. When GrubHub and Seamless became one company, this was one of the first changes made to the Seamless model, as it would drastically increase revenue for the business even though it didn’t affect GMV.
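
A minimal sketch of the two ranking approaches, with made-up restaurants and rates (this is not GrubHub’s actual algorithm or data): alphabetical order ignores what each impression is worth, while ranking by expected revenue per impression combines conversion rate, order size, and commission.

```python
# Hypothetical restaurants: conversion_rate is the chance a searcher orders after
# seeing the listing, commission_rate is the marketplace's cut of the order value.
restaurants = [
    {"name": "Alpha Pizza", "conversion_rate": 0.020, "avg_order_value": 25.0, "commission_rate": 0.125},
    {"name": "Beta Thai",   "conversion_rate": 0.035, "avg_order_value": 30.0, "commission_rate": 0.150},
    {"name": "Gamma Diner", "conversion_rate": 0.030, "avg_order_value": 20.0, "commission_rate": 0.200},
]

def expected_gmv(r):
    # What one search impression is worth in order volume (the GMV view).
    return r["conversion_rate"] * r["avg_order_value"]

def expected_revenue(r):
    # The same impression weighted by the marketplace's commission (the revenue view).
    return expected_gmv(r) * r["commission_rate"]

alphabetical = sorted(restaurants, key=lambda r: r["name"])
by_revenue = sorted(restaurants, key=expected_revenue, reverse=True)

print([r["name"] for r in alphabetical])  # ['Alpha Pizza', 'Beta Thai', 'Gamma Diner']
print([r["name"] for r in by_revenue])    # ['Beta Thai', 'Gamma Diner', 'Alpha Pizza']
```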

This is not to say that revenue is a great one key metric. It may be better than GMV, but it’s not a good one. Homejoy, a home cleaning service, optimized for revenue. Their team found it was easier to increase revenue by driving first time use instead of repeat engagement. As a result, their retention rates were terrible, and they eventually shut down.

Startups are complicated businesses. Convincing everyone at the company that only one metric matters oversimplifies what is important to work on, and it can create tradeoffs that companies don’t realize they are making. Figure out the portfolio of metrics that matter for the business and track them all religiously. You will always have to make tradeoffs between metrics in business, but they should be made explicitly and not hide opportunities.

Currently listening to A Mineral Love by Bibio.

The Right Way To Set Goals for Growth

August 16th, 2016

Many people know growth teams experiment with their product to drive growth. But how should growth teams set goals? At Pinterest, we’ve experimented with how we set goals too. I’ll walk you through where we started, some learning along the way, and the way we try to set goals now.

Mistake #1: Seasonality
Flash back to early 2014. I started product managing the SEO team at Pinterest. The goal we set was a 30% improvement in SEO traffic. Two weeks in, we hit the goal. Time to celebrate, right? No. The team hadn’t done anything. We saw a huge seasonal lift that raised traffic without any intervention from the team. Teams should take credit for what they do, not for what happens naturally. What happens when seasonality drops traffic 20%? Does the team get blamed for that? (We did, the first time.) So, we built an SEO experiment framework to actually track our contribution.
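
The idea behind that framework is to credit the team only with the lift measured in experiments against holdouts, and let the seasonal baseline move on its own. A rough sketch of the attribution, with hypothetical numbers:

```python
# Hypothetical attribution of a quarter's change in SEO traffic. Only lift
# measured in experiments against holdout groups is credited to the team;
# everything else is treated as seasonality or external movement.
traffic_start_of_quarter = 1_000_000   # weekly SEO visits at the start
traffic_end_of_quarter = 1_300_000     # weekly SEO visits at the end

experiment_measured_lift = 80_000      # summed weekly lift from SEO experiments vs. holdouts

total_change = traffic_end_of_quarter - traffic_start_of_quarter   # +300,000
team_contribution = experiment_measured_lift                       # +80,000
seasonal_and_external = total_change - team_contribution           # +220,000

print(f"Team-attributed lift: {team_contribution:,}")
print(f"Seasonal/external change: {seasonal_and_external:,}")
```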

Mistake #2: Only Using Experiment Data
So, we then started setting goals that were based entirely on experiment results. Our key result would look something like “Increase traffic 20% in Germany Q/Q non-seasonally as measured by experiments,” with a raw number representing what 20% meant. Let’s say it was 100K. Around this time though, we also started investing a lot in infrastructure as a growth team. For example, we created local landing pages from scratch. We had our local teams fix a lot of linking issues. You can’t run an experiment on brand-new pages; the control is zero. So, at the end of the quarter, we looked at our experiments and only saw a lift of around 60K. When we looked at our German site though, traffic was up 300%. The landing pages had started accruing a lot of local traffic not accounted for in our experiment data.

Mistake #3: Not Factoring In Mix Shift
In that same quarter, we beat our traffic and conversion goals, but came up short on the company goal for signups. Why did that happen? Well, one of the major factors was mix shift. What does mix shift mean? Well, if you grow traffic in a lower converting country (Germany) faster than in a higher converting country (U.S.), you will hit traffic goals, but not signup goals. Also, if you end up shifting between the page types you drive traffic to and convert on (Pin pages convert worse than boards, for example), or if you shift between platforms (start driving more mobile traffic, which converts lower, but has higher activation), you will miss goals.
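
Here is the mix-shift arithmetic in miniature, with hypothetical conversion rates: total traffic grows nicely, but because the growth comes from the lower-converting country, signups barely move.

```python
# Hypothetical mix shift: traffic grows, but it shifts toward a lower-converting
# country, so signups barely move even though the traffic goal is hit.
before = {"US": {"visits": 1_000_000, "signup_rate": 0.06},
          "DE": {"visits": 200_000, "signup_rate": 0.02}}
after = {"US": {"visits": 950_000, "signup_rate": 0.06},
         "DE": {"visits": 450_000, "signup_rate": 0.02}}

def totals(period):
    visits = sum(c["visits"] for c in period.values())
    signups = sum(c["visits"] * c["signup_rate"] for c in period.values())
    return visits, signups

print(totals(before))  # (1200000, 64000.0)
print(totals(after))   # (1400000, 66000.0) -> roughly +17% traffic, only about +3% signups
```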

Mistake #4: Setting Percent Change Goals
On our Activation team, we set goals that looked like “10% improvement in activation rate.” That sounds like a lofty goal, right? Well, let’s do the math. Let’s say you have an activation rate of 20%. What most people read when they see a 10% improvement is “oh, you’re going to move it to 30%.” But that’s not what it means. It’s a relative percent change, meaning the goal is really a two percentage point improvement, from 20% to 22%. With this tactic, you can set goals that look impressive but don’t actually move the business forward.

Mistake #5: Goaling on Rates
Speaking again about that activation goal, there are actually two issues. The last paragraph covered the first, the percent change. The rate itself is also an issue. An activation rate is made of two numbers: activated users / total users. There are two ways to move that metric in either direction: change activated users or change total users. What happens when you goal on rates is you end up with an activation team that wants fewer users so it can hit its rate goal. So, if the traffic or conversion teams identify ways to bring in more users at slightly lower activation rates, the activation team misses its goal.
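
A small sketch with invented numbers shows how these two issues compound: the rate goal gets “hit” while the absolute number of activated users, the thing the business actually cares about, goes down.

```python
# Hypothetical: an activation-rate goal is "hit" while the absolute number of
# activated users actually falls.
baseline = {"total_new_users": 1_000_000, "activated_users": 200_000}    # 20.0% rate
this_quarter = {"total_new_users": 900_000, "activated_users": 189_000}  # 21.0% rate

def activation_rate(cohort):
    return cohort["activated_users"] / cohort["total_new_users"]

print(f"{activation_rate(baseline):.1%} -> {activation_rate(this_quarter):.1%}")  # 20.0% -> 21.0%, a 5% relative gain
print(this_quarter["activated_users"] - baseline["activated_users"])              # -11,000 activated users
```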

Best Approach: Set Absolute Goals
What you really care about for a business like Pinterest is increasing the total number of activated users. At real scale, you also care about decreasing churned users, because for many businesses re-acquiring churned users is harder than acquiring someone for the first time. So, those should be the goals: an increase in activated users and a decrease in churned users. Absolute numbers are what matter in growth. What we do now at Pinterest is set absolute goals, and we make sure we account for seasonality, mix shift, experiment data, and infrastructure work to hit those goals.

Currently listening to RY30 Trax by µ-ziq.

When Experimenting, Know If You’re Optimizing or Diverging

June 21st, 2016

When I first joined the growth team at Pinterest, in one of the early syncs a fellow PM presented a new onboarding experience for iOS. The experience was well reasoned and an obvious improvement over how we were currently educating new users on how to get value out of the product. After the presentation, someone on the team asked how the experiment for it was going to work. At that point, the meeting devolved into a debate about how they would break all these new experiences into chunks and what each experience needed to measure. It was like watching a car wreck take place. I knew something had gone seriously wrong, but I didn’t know how to change it. The team had an obviously superior product experience, but couldn’t get to a point where they were ready to put it in front of users because they lacked an experimentation plan to measure what would cause the improvement in performance.

After I took over a new team in growth, I was asked to present a product strategy for it to the CEO and the heads of product, eng, and design. Instead of writing about the product strategy, I wrote an operational strategy first. It was just as important for the team and this group of executives to know how we operated as it was to know what we planned to do. One of the components of the operational plan was the mix of optimization vs. divergent thinking. Growth frequently suffers from a local maximum problem: it finds a playbook that works and optimizes it until the gains become more and more marginal, making the team less effective and hiding new opportunities not based on the original tactic. Playbooks are great, and when you have one, you should optimize every component of it for maximum success. But optimization shouldn’t blind teams to new opportunities. In the operational strategy, the first section was about questioning assumptions. If an experiment direction showed three straight results of increasingly marginal improvements, the team had to trigger a project in the same area with a totally divergent approach.

This is where that earlier growth meeting ran into trouble. When tackling a divergent approach as an experiment, you will start in one of two ways. One is that the team steps back, comes up with a much better solution than what it was optimizing before, and is pretty confident it will beat the control. The other is that the team comes up with a few different approaches and isn’t sure which ones, if any, will work. In both cases, it’s impossible to change just one variable from the control to isolate the impact of changes in the experience.

In this scenario, the right solution is not to isolate tons of variables in the new experience vs. the control group. It is to ship an MVP of the totally new experience vs. the control. You do a little bit more development work, but get much closer to validation or invalidation of the new approach. Then, if that new approach is successful, you can look at your data to see which interactions might be driving the lift, and take away components to isolate the variables after you have a winning holistic design.

If you’re optimizing an existing direction, isolating variables is key to learning what impacts your goals. When going in new directions though, isolating variables sets back learning time and may actually prevent you from finding the winning experience. So, when performing experiments, designate whether you are in optimization mode or divergent mode, and pick the appropriate experimentation plan for each.

Currently listening to Oh No by Jessy Lanza.

The Present and Future of Growth

June 7th, 2016

Quite a few people ask me about the future of growth. The idea of having a team dedicated to growing the usage of a product is still a fairly new construct for organizations. More junior folks, or people less involved with growth, always ask about the split between marketing and growth. More senior folks always ask about the split between growth and core product. Growth butts heads with both sides.

Why do more senior folks tend to focus on the difference between core product and growth rather than marketing? For this, I’ll take a step back. Now, I’m a marketer by trade. I have an undergraduate degree in marketing and an MBA with a concentration in marketing. So I consider everything marketing: product, growth, research, and I’ve written about that. I used to see what was happening in tech as marketing’s death by a thousand cuts. I now see it more as marketing’s definition having gotten so broad, and each individual component so complicated, that it can by no means be managed by one group in a company.

So if marketing is being split into different, more focused functions, growth teams aren’t really butting heads with the remaining functions that are still called marketing over responsibilities like branding. They are butting heads with the core product team over the allocation of resources and real estate for the product.

So how do growth teams and core product teams split those work streams today, and what does the future look like? The best definition I can give to that split for most companies today is that growth teams focus on getting the maximum number of users to experience the current value of the product, or removing the friction that prevents people from experiencing that current value, while core product teams focus on increasing the value of the product. So, when products are just forming, there is no growth team, because the product is just beginning to try to create value for users. During the growth phase, introducing more people to the current value of the product becomes more important and happens in parallel with improving the value of the core product. For late stage companies, core product teams need to introduce totally new value into the product so that growth isn’t saturated.

My hope is that in the future, this tradeoff between connecting people to current value, improving current value, and creating totally new value is all managed deftly by one product team. That team can either have product people naturally managing the tradeoffs between these three pillars, or three separate teams that ebb and flow in size depending on the strategic priorities of the organization. All three of these initiatives – connecting people to current value, improving current value, and creating new value – are important to creating a successful company, but at different stages of a company, one or two tend to be more important than another.

We should evolve into product organizations that can naturally detect which of these three functions adds the most value at a particular point in time, fund them appropriately, and socialize the reasons for that into the organization so these different functions don’t butt heads in the future. I believe that is the product team of the future. I now believe this is more likely than marketers evolving to manage branding, research, performance marketing, and product effectively under one organization.

Currently listening to Good Luck And Do Your Best by Gold Panda.

If It Ain’t Fixed, Don’t Break It

April 11th, 2016

Frequently, products achieve popularity out of nowhere. People don’t realize why or how a product got so popular, but it did. Now, much of the time, this is from years of hard work no one ever saw. As our co-founder at GrubHub put it, “we were an overnight success seven years in the making.” But sometimes, it really just does happen without people, inside or outside the company, knowing why. Especially with social products, sometimes things just take off. When you’re in one of these situations, you can do a couple of things to your product: not change it until you understand why it’s successful now, or try to harness what you understand into something better that fits your vision. This second approach can be a killer for startups, and I’ve seen it happen multiple times.

Let’s take two examples in the same space: Reddit and Digg. Both launched within six months of each other with missions to curate the best stories across the internet. Both became popular in sensational, though somewhat different, ways, but Digg was clearly in breakout mode.

What happened after the end of that graph is a pretty interesting AB test. Digg kept changing things up, launching redesigns and changing policies. Some of these might even have been experiments that showed positive metric increases. Reddit kept the same design and the same features, allowing new “features” to come from the community via subreddits, like AMAs. By the launch of Digg’s major redesign in August of 2010 (intended to take on elements from Twitter), Reddit had exploded ahead of Digg.

This is what the long term result of these two strategies looks like: Digg is a footnote of the internet, and Reddit is now a major force.

Now, neither of these companies is an ideal scenario. The best option in the situation these companies found themselves in is to deeply understand the value your product provides and to which customers, and to completely devote your team to increasing and expanding that value over time. But, if you can’t figure out exactly why something is working, it is better to do nothing than to start messing with your product in a way that may adversely affect the user experience. This has become one of my unintuitive laws of startups: if it ain’t fixed, don’t break it. If you don’t know why something is working (meaning it’s fixed and not a variable), do nothing else but explore why the ecosystem works, and don’t change it until you do. If you can’t figure it out, it’s better to change nothing, like Reddit and Craigslist, than to take a shot in the dark like Digg.

Currently listening to Sisters by Odd Nosdam.

The Mobile Equation

April 7th, 2016

One under-represented area of growth optimization is the mobile web to app handoff and tradeoff. This area of growth is about the key decision of what to do when someone arrives at your website on a mobile device. Do you attempt to get them to sign up or transact on your mobile website? Do you prompt for a mobile app download? Do you make mobile web available at all? Different companies have made different decisions here. What I want to talk about is how to make that decision with data instead of just on a “strategic” whim.

To do this, you need to know your company’s mobile equation. The mobile equation is: when someone lands on my mobile website and I optimize for an app install instead of mobile web usage, do I end up with more or fewer engaged users? The way to answer this question is to experiment. Steer some people toward mobile web usage and some people toward the mobile app download. Cohort these users, and see which group has more engagement over the long term.
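
A minimal sketch of that comparison, with invented rates (not Pinterest numbers): funnel each group through signup (or install) and activation, then compare how many long term engaged users each path produces per mobile web visitor.

```python
# Hypothetical mobile equation: for each visitor who lands on the mobile site,
# which path produces more long-term engaged users?
paths = {
    "optimize_for_mobile_web": {"signup_rate": 0.10, "activation_rate": 0.20},
    "optimize_for_app_install": {"signup_rate": 0.06, "activation_rate": 0.45},
}

visitors = 1_000_000
for name, p in paths.items():
    engaged = visitors * p["signup_rate"] * p["activation_rate"]
    print(f"{name}: {engaged:,.0f} engaged users per {visitors:,} visitors")
# optimize_for_mobile_web:  20,000 engaged users
# optimize_for_app_install: 27,000 engaged users -> more friction up front, but more engaged users
```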

Keep in mind, though, that engagement on each path can also be optimized, and where the baseline ends up is the result of multiple factors:

  • the quality of mobile app onboarding
  • the quality of mobile website onboarding
  • the quality of the mobile website itself
  • the quality of the mobile app itself
  • the quality of the mobile app prompt
  • where you prompt for app download
  • the country of the user
  • the landing page of the mobile website

At Pinterest, when we ran this baseline, even though prompting for the app download decreased signups significantly, we still ended up with more engaged users by optimizing for the mobile app download because the activation rate was so much higher. That meant more people got long term value from Pinterest in exchange for a little more friction up front. So, we got to work optimizing all of the steps of the mobile app funnel. We tested over 15 different app interstitial concepts. We redesigned the mobile app signup flow multiple times. We made the mobile app faster. We also started preserving the context from what you were looking at on mobile web to what we first showed you in the mobile app. We saw significant increases in engaged use from all of these experiments.

Learn your mobile equation. It will help drive your strategy as well as some key growth opportunities.

Currently listening to Rojus by Leon Vynehall.

Hiring Startup Executives

February 16th, 2016

I was meeting with a startup founder last week, and he started chatting about some advice he got after his latest round of investment about bringing in a senior management team. He then said he had spent the last year doing that. I stopped him right there and asked, “Are you batting .500?” Only about half of those executives were still at the company, and the company had generally promoted from within to fill those roles after the executives left. The reason I was able to ask about that batting average is that I have seen this happen at many startups before. The new investor asks them to beef up their management team, so the founders recruit talent from bigger companies, and the company experiences, as this founder put it, “organ rejection” way too often.

This advice from investors to scaling companies is very common, but I wish those investors would provide more advice on who actually is a good fit for startup executive roles. Startups are very special animals, and they have different stages. Many founders look for executives at companies they want to emulate someday, but don’t test whether that executive can scale down to their smaller environment. There are many executives that are great for public companies but terrible for startups, and many executives that are great at one stage of a startup but terrible for others. What founders need to screen for, I would argue above all else, is adaptability and pragmatism.

Why is adaptability important? Because it will be tested every day, starting on day one. The startup will have less process, less infrastructure, and a different way of accomplishing things than the executive is used to. Executives that are poor fits for startups will try to copy and paste the approach from their (usually much bigger) former company without adapting it to stage, talent, or business model. It’s easy for founders to be fooled by this early on because they think, “this is why I hired this person – to bring in best practices.” That is wrong. Great startup executives spend all their time starting out learning how the organization works so they can create new processes and ways of accomplishing things that enhance what the startup is already doing. When we brought on a VP of Marketing at GrubHub, she spent all her time soaking up what was going on and not making any personnel changes. It turns out she didn’t need to make many to be successful. We were growing faster, had a new brand, and had better coverage of our marketing initiatives after adding only two people and one consultant in the first year.

Why is pragmatism important? As many startups forgot over the last couple of years, startups are on a timer. The timer is the amount of runway you have, and what the startup needs to do is find a sustainable model before that timer gets to zero. Poor startup executives have their way of doing things, and that way is usually correlated with needing to build a very big team. They will want to do this as soon as possible, which accelerates burn, shortening the runway before they’ve done anything to speed up the search for a sustainable model. I remember meeting with a new startup exec and having her run me through her plan for building a team. She was in maybe her second week, and by the end of our conversation I counted at least 15 hires she needed to make. I thought, “this isn’t going to work.” She lasted about six months. A good startup executive learns before hiring, and tries things before committing to them fully. Once they know something works, they try to build scale and infrastructure around it. A good startup executive thinks in terms of costs: opportunity costs, capital costs, and payroll. Good executives will trade on opportunity costs and capital costs before payroll, because salaries are generally the most expensive and the hardest to change without serious morale implications (layoffs, salary reductions, etc.).

Startup founders shouldn’t feel like batting .500 is good enough in executive hiring. Let’s all strive to improve that average by searching for the right people from the start and testing for adaptability and pragmatism. You’ll hire a better team, cause less churn on your team, and be more productive.