When Experimenting, Know If You’re Optimizing or Diverging

When I first joined the growth team at Pinterest, a fellow PM presented a new onboarding experience for iOS in one of the early syncs. The experience was well-reasoned and an obvious improvement over how we were educating new users on how to get value out of the product. After the presentation, someone on the team asked how the experiment for it would work. At that point, the meeting devolved into a debate about how to break all of these new experiences into chunks and what each experience needed to measure. It was like watching a car wreck take place. I knew something had gone seriously wrong, but I didn't know how to change it. The team had an obviously superior product experience, but couldn't get to the point of putting it in front of users because it lacked an experimentation plan to measure what would cause the improvement in performance.

After I took over a new team in growth, I was asked to present a product strategy for it to the CEO and the heads of product, eng, and design. Instead of writing about the product strategy, I wrote an operational strategy first. It was just as important for the team and this group of executives to know how we operated as it was to know what we planned to do. One of the components of the operational plan was the mix of optimization vs. divergent thinking. Growth frequently suffers from a local maxima problem, where a team finds a playbook that works and optimizes it until the gains become more and more marginal, making the team less effective and hiding new opportunities that aren't based on the original tactic. Playbooks are great, and when you have one, you should optimize every component of it for maximum success. But optimization shouldn't blind teams to new opportunities. The first section of the operational strategy was about questioning assumptions: if an experiment direction showed three straight results with more and more marginal improvements, the team had to trigger a project in the same area with a totally divergent approach.
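If it helps to make that trigger concrete, here's a rough sketch of the rule in Python. This isn't code we actually ran; the lift numbers and streak length are illustrative, with the streak of three matching the rule above.

```python
def should_go_divergent(lifts, streak=3):
    """Return True when the last `streak` results in an experiment
    direction show smaller and smaller lifts, i.e. diminishing returns.

    `lifts` is a list of relative metric lifts (0.04 == +4%), ordered
    oldest to newest. The streak of three mirrors the rule described
    above; everything else here is just for illustration.
    """
    if len(lifts) < streak:
        return False
    recent = lifts[-streak:]
    # Each result must be smaller than the one before it.
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))


# A direction whose wins keep shrinking -> time for a divergent project.
print(should_go_divergent([0.08, 0.05, 0.03, 0.01]))  # True
# Mixed results -> keep optimizing.
print(should_go_divergent([0.02, 0.06, 0.04]))        # False
```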

This is where our previous growth meeting ran into trouble. When tackling a divergent approach as an experiment, you will start in one of two directions. One is that the team steps back, comes up with a much better solution than what it was optimizing before, and is pretty confident it will beat the control. The other is that the team determines a few different approaches and isn't sure which, if any, will work. In both cases, it's impossible to change just one variable from the control to isolate the impact of changes in the experience.

In this scenario, the right solution is not to isolate tons of variables in the new experience vs. the control group. It is to ship an MVP of the totally new experience vs. the control. You do a little more development work, but you get much closer to validating or invalidating the new approach. Then, if the new approach is successful, you can look at your data to see which interactions might be driving the lift, and take away components to isolate the variables after you have a winning holistic design.
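As a rough sketch of what that plan can look like in practice (the variant names, metric, component tests, and bucketing logic below are all hypothetical, not Pinterest's actual experiment framework):

```python
import hashlib


def assign(user_id: str, experiment: str, variants: list) -> str:
    """Deterministically bucket a user into one variant by hashing
    the user id with the experiment name. Purely illustrative."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]


# Divergent mode: ship the holistic MVP of the new experience vs. the
# control. Don't try to isolate each changed element yet.
divergent_test = {
    "name": "onboarding_new_experience_mvp",
    "variants": ["control", "new_experience_mvp"],
    "primary_metric": "activation_rate",  # hypothetical goal metric
}

# Follow-up (only if the MVP wins): peel off individual components to
# learn which interactions are driving the lift.
isolation_tests = [
    {"name": "mvp_minus_tutorial", "variants": ["new_experience_mvp", "mvp_without_tutorial"]},
    {"name": "mvp_minus_prompts", "variants": ["new_experience_mvp", "mvp_without_prompts"]},
]

print(assign("user_123", divergent_test["name"], divergent_test["variants"]))
```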

If you're optimizing an existing direction, isolating variables is key to learning what impacts your goals. When going in new directions, though, isolating variables slows down learning and may actually prevent you from finding the winning experience. So, when running experiments, designate whether you are in optimization mode or divergent mode, and pick the appropriate experimentation plan for each.

Currently listening to Oh No by Jessy Lanza.