Monthly Archives: February 2015

I saw an example of good leadership yesterday

When I think about good leadership, often I think about big, bold leadership.  People like Marissa Mayer or Elon Musk – big risk takers who make bold moves.  Sure, they are inspiring and there is value to my business education in following them, but it’s hard to consider them role models. Aspiring to be “like Marissa Mayer” or “like Elon Musk” is not helpful to me on a daily basis.

Yesterday, I watched a very small activity that I’m not sure anyone else noticed, and it hit me hard that *this* is what good, everyday leadership looks like.  Here’s what happened.

Our CTO was giving a technical talk in Cambridge, with our Nashua team joining via Google Hangouts.  We do this kind of broadcast bi-weekly for our engineering iteration meetings, and they are riddled with issues: bad audio, screens that don’t share, wifi problems, and any number of other similar glitches.

I watched as our SVP of Engineering – an executive with arguably one of the most important roles in executing our strategy – logged in to the Hangout from Cambridge to see the version of the screen his team in Nashua was seeing. He had the CTO correct his screen sharing and camera a few times.  Then I watched as he ran over to his desk and grabbed his headset to hear the audio as it was being heard in Nashua.  He attended the entire talk this way.

It’s a minor thing, right?  But it’s great leadership – leadership that shows a complete lack of ego, a desire to be inclusive of remote team members, and a commitment to getting done what has to get done.

The opportunities for these sorts of things are probably more common in a startup, where there isn’t really administrative support staff and everyone pitches in – but I also know of startups where the SVP of Engineering wouldn’t deign to do something so mundane.

My lesson of the day: good leadership isn’t only big, bold leadership.

Critical Mass

Last week at Infinio we added a few members to the sales and marketing teams.

I’m not sure I could have predicted it, but it was enough people to suddenly feel like there is critical mass on the sales and marketing floor.  There’s now a constant buzz of noise, someone is always on the phone with a customer, and our Friday afternoon sports/movies/music debates just got a lot more lively.  Another indicator?  There’s more than one destination for lunch each day.

When I was at Dell there was a time when my team grew from 3 people to 7, inorganically. Suddenly one person’s dental appointment or sick kid no longer cancelled the team meeting; my team 1x1s took nearly an entire day.

As a manager, having a team grow like that added immeasurable complexity, but it also added immeasurable value.  When it came to brainstorming, or allocating projects to people, or even sending someone to get something proofread or looked at, there were options.  Delegating became something other than zero-sum.

It’s exciting to see this happen at Infinio.  To have responsibilities that were held by one person grow large enough that they split into a few people’s domains, to have a few people performing the same function rather than just one person in each role, and to feel like when something comes up, there’s a particular person to go to for assistance – these are all exciting signs of growth.

I’ve heard early members of startups talk about the good old days when they were small and agile and knew everyone.  We’re still in that phase, I think.  But, delightfully, a little bigger.

Statistical Significance

The other day I was in a meeting looking at sales numbers.  We were comparing the performance of two different queues of leads, and someone said something like “Clearly, Queue A is giving us a better yield.”

The numbers were pretty small and we only had a week’s worth of data.  “I don’t know,” I said, “is that difference really statistically significant?”

“Of course it is.  It’s twice as big.”

In the moment I let it go, but I knew that it wasn’t a true assessment of statistical significance. Let’s say there were only 4 names in each queue.  If Queue A gave us 1 lead and Queue B gave us 2 leads, then “twice as many” wouldn’t feel like a conclusion.  Conversely, if there were 1,000 names in each queue, and one gave us 400 leads while the other gave us 800 leads, then we could comfortably draw the conclusion.  But what about all the in-betweens?

All of this drove me to do some research on what “statistical significance” really means.  As I’ve been learning, much of marketing is actually pretty numbers-driven, so knowing what numbers “matter” is important to making good decisions.

Here’s the first definition that came up, from wikiHow: “Statistical significance is the number, called a p-value, that tells you the probability of your result being observed, given that a certain statement (the null hypothesis) is true. If this p-value is sufficiently small, the experimenter can safely assume that the null hypothesis is false.”


OK, first I teased out what “null hypothesis” means – it’s the baseline that assumes that there is no impact of the variable, or no difference in two populations.  In my example, the null hypothesis would be that the yield of Queue A is the same as the yield of Queue B.  Any difference in their yields is based purely on randomness.

As I read more, there seem to be three interrelated concepts that feed into whether something is significant:

Sample Size: How many activities are we looking at?

How big are each of the queues?

Confidence Interval: How precisely honed in on the conclusion are we?

We’re comfortable saying that if the yields are within 5% of each other then they are, for practical purposes, the same.

Confidence Level: How sure are we about our conclusion?

We’re 90% sure that the difference we observed reflects a real difference in the larger population (and thus whether we should make a decision based on it).

So the way these interact is: if you want a higher confidence level (i.e., to be more sure of your conclusion), then you have to accept a wider confidence interval (i.e., accept a greater range, like 8% rather than 5%).  To make that interval narrower, you need a larger sample size.
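That interaction is easy to see in a quick Python sketch (the 50% observed yield, the sample sizes, and the z-values here are illustrative assumptions, not our actual numbers):

```python
from math import sqrt

def margin_of_error(p, n, z):
    """Half-width of the confidence interval around an observed proportion p,
    for a sample of size n and a z-value matching the confidence level."""
    return z * sqrt(p * (1 - p) / n)

# Suppose we observe a 50% yield (p = 0.5)
print(margin_of_error(0.5, 100, 1.645))  # 90% confidence, n=100: about +/-8.2%
print(margin_of_error(0.5, 100, 1.96))   # 95% confidence, n=100: about +/-9.8% (more sure, wider)
print(margin_of_error(0.5, 400, 1.96))   # 95% confidence, n=400: about +/-4.9% (bigger sample, narrower)
```

Raising the confidence level widens the interval; quadrupling the sample size cuts the interval in half.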

Statistical significance means that we are at least 95% sure that the results are due to the nature of the different populations, not to randomness.

Getting back to our example, the thing we’d be testing is “Is the yield from Queue A greater than that of Queue B?”  We’d define “greater than” by calculating the confidence interval of each result and checking that the intervals don’t overlap.  And we’d need a big enough sample size to be 95% sure that these results were repeatable.

One thing that helped me understand this more concretely is this calculator provided by KISSMetrics.

Let’s say I have a sample size of 100.

If Queue A yields 90 and Queue B yields 80, then with 98% certainty we can say Queue A is better.

But let’s say Queue A yields 40 and Queue B yields 50.  Then the certainty is only 92%, which is not considered “statistically significant.”  We’re only 92% sure that this is not due to randomness.
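Out of curiosity, those calculator numbers can be reproduced with a standard one-sided two-proportion z-test.  This Python sketch is my own reconstruction – I’m assuming the calculator does something close to this, but it may use a slightly different method:

```python
from math import sqrt, erf

def certainty(conv_a, conv_b, n):
    """How sure are we that the better-performing queue is really better?
    One-sided two-proportion z-test; each queue has n names."""
    p_a, p_b = conv_a / n, conv_b / n
    pooled = (conv_a + conv_b) / (2 * n)        # combined conversion rate
    se = sqrt(pooled * (1 - pooled) * (2 / n))  # standard error of the difference
    z = abs(p_a - p_b) / se
    return 0.5 * (1 + erf(z / sqrt(2)))         # normal CDF of z

print(round(certainty(90, 80, 100), 2))  # 0.98 – significant at the 95% bar
print(round(certainty(50, 40, 100), 2))  # 0.92 – not significant at the 95% bar
```

Same inputs, same 98% and 92% answers – so a 10-point gap can be significant or not depending entirely on where in the range it sits.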

This video is a great resource specifically about marketing, as is this page from UT, which is a little more formal mathematically.

My take-home conclusion is the same as where I started: it’s not always obvious whether something is “statistically significant” without doing some serious math.

Marketing: Probabilistic, not deterministic

In college, one of the courses I enjoyed the most was Operations Research.  There were two versions of this course, a deterministic one and a probabilistic one.  I always regret only having taken the former.

The deterministic course taught you to solve problems like this, “If it costs $X to make metal widgets over 3 weeks with a profit of $A, and $Y to make wooden widgets over 5 weeks with a profit of $B, what is the optimal mix of metal and wooden widgets to make to maximize profit?”

The probabilistic course taught you to solve problems like this, “If 75% of widgets with a Flaw X fail, and only 2% of those without Flaw X fail, and 15% of widgets have Flaw X, what is the chance of any new widget failing?”
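For what it’s worth, that widget problem is just the law of total probability, and the arithmetic fits in a few lines of Python:

```python
# P(fail) = P(fail | flaw) * P(flaw) + P(fail | no flaw) * P(no flaw)
p_flaw = 0.15
p_fail_given_flaw = 0.75
p_fail_given_no_flaw = 0.02

p_fail = p_fail_given_flaw * p_flaw + p_fail_given_no_flaw * (1 - p_flaw)
print(p_fail)  # ~0.1295, i.e. about a 13% chance that any new widget fails
```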

This all came to mind because a few days ago a colleague and I were discussing how marketing is totally probabilistic.  There’s no set of events, materials, and interactions that will guarantee a certain outcome.  Everything we’re doing in marketing is to increase probabilities.

Marketing (I’m talking about demand generation / marketing communications marketing, not product marketing) is all about reaching a large audience and successively focusing in on the people in that audience who are most open to learning more, then have the specific pain we’re solving, then are making a decision about how to solve the problem in the near-term.

(If you need a crash course on this, Atlassian did a series of blog posts on it a few years back.)

Optimizing marketing is about (a) increasing the size of the total audience, and (b) increasing the conversion rate for each phase of the funnel.  That is, how can we go to the right events/write the right whitepapers/invite people to the right webinars so that more of the people looking for a solution know about us, and more of the ones who know about us see us as a solution for their shortlist.
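As a toy illustration of what “increasing the conversion rate for each phase” means numerically, here’s a sketch with completely made-up funnel stages and rates:

```python
# Hypothetical funnel: each stage keeps only a fraction of the previous one
audience = 10_000
rates = {
    "visits the website":       0.20,
    "downloads a whitepaper":   0.10,
    "attends a webinar":        0.25,
    "puts us on the shortlist": 0.30,
}

n = float(audience)
for stage, rate in rates.items():
    n *= rate
    print(f"{stage}: {n:.0f}")
# Ends with roughly 15 shortlists; growing the audience or any single
# conversion rate grows the final number proportionally.
```

With these made-up rates, 10,000 names turn into about 15 shortlists – which is why every phase of the funnel is worth optimizing, and why none of it comes with a guarantee.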

But that’s what’s kind of crazy about marketing: It’s always “how can we find more” but there’s no concept of “how can we find all.”  Sure, there are industry benchmarks (things like X% of visitors to your website should visit at least Y pages, and Z% of people who attend a webinar are likely to buy within N months), but those are merely goals, not guarantees.

In short, there’s no formula that says “Go to VMworld, send these two whitepapers, have these three conversations, and then invite the customer to this private event, then they will buy your product.”

Marketing gets a bad rap – but it shouldn’t.  It’s a key function in a company, and is a lot harder than just writing some press releases and choosing booth graphics for an event.  But thinking through this leads me to wonder if its lack of respect is rooted in its probabilistic – rather than deterministic – nature.

What’s the deal with hotels not providing toothpaste?

In the past 16 hours, I’ve checked into not one, but (due to a wee snowstorm in Boston) two hotels.  One was mid-range, and one slightly higher-end.  Neither one had toothpaste in the bathroom.

In fact, I can’t remember checking into any hotels ever – anywhere – that had toothpaste in the bathroom.  Mouthwash, very, very occasionally.  But not toothpaste.

Lest you think this is a Seinfeld-esque meditation on “What’s the deal with hotels and toothpaste”, read on. There’s a marketing lesson here.

A few years ago, Slate had a comprehensive article on why there isn’t toothpaste in hotel rooms.  The author interviews several people in the industry and offers several theories, none of which he really seems to like: toiletries are refilled from a big vat, and toothpaste can’t be; it’s too expensive; it’s not an aspirational cosmetic; it’s a conspiracy to generate tips for the bellmen who have to bring it up.

Whether it’s a conspiracy, an economic decision, or something else, what I see is an interesting marketing opportunity – at least an opportunity for analysis.

One of the things marketers do is create “awareness” around a brand.  This isn’t the kind of marketing that helps someone who is down to deciding between Toyota and Honda, this is the kind of marketing that Mercury did a few years ago with “You’ve got to put Mercury on your list.”  It’s making sure that people are aware that the brand exists, so that when it’s time to make the “shortlist” (in this case, cruise through the CVS toothpaste aisle and choose one) the brand is top of mind.

I don’t know a lot about how consumers choose to purchase products (although I can tell you a whole lot about how IT buyers do it), but I’ve got to believe that having someone try a toothpaste they haven’t used before increases their likelihood of buying it in the future.  Right now I think that toiletries in hotels are funded by the hotels as an amenity.  Some of the higher-end hotels have higher-end brands like Bliss or L’Occitane.

But what if Colgate (or Crest or Aquafresh or AIM) paid for their toothpaste to go into hotels? Couldn’t that help the toothpaste companies?  Let’s say I always just buy what’s on sale; or perhaps I always buy Colgate because that is what I grew up brushing with.  Wouldn’t this be one of the only chances to have me try something different?  And couldn’t it be at least as effective per marketing $ as telling me that 9 out of 10 dentists recommend something?

It seems like that would be something worth piloting, although measuring the efficacy could be difficult.  If you picked just one city, you’d have no way to track the impact on everyone’s buying habits when they went home and bought Brand X at their local pharmacy.  Unless you gave them a coupon, which tracked their purchase.  Or chose a destination where you know where the guests are from (e.g. Disney during NJ’s school vacation time).

In any case – I can’t tell you how often I’ve forgotten toothpaste and yearned for it in my hotel room at 11pm.  I’d be a fan of the first brand to help me out.