Roster cuts

It’s football season (Hallelujah! Go Pats!) and going into the season, every team just finished making the roster cuts necessary to get down to 53 men.

Hmmm…something about that sounds familiar…OH, right!

It’s VMworld and going into Tech Field Day, we just finished making the slide deck cuts necessary to get our presentation down to 53 minutes.

The morning before TFD, we killed a slide.  It wasn’t a bad slide, but there was discussion of how the deck was already long, the speaking slot short, and the slide perhaps a bit redundant. All told, the deck was 24 slides, and I think we created and didn’t use another 24 slides as we crafted and refined our story.

The presentation went great – Peter and Scott did an amazing job talking about our architecture and demonstrating the new features of version 2.0 of our product.  Check it out here.

But the last slide we killed stayed with me. Not because it was the greatest slide, or the worst slide, but because I think it says something interesting that, as the product marketer, I didn't capture compellingly enough to justify keeping it in the deck.

Here’s the original slide:

[Slide: six key pieces of the datacenter, with a factoid about each]
See? It’s pretty interesting – it takes six important pieces of the datacenter and provides a factoid about each one. After that, we focused on why there is so much pressure on storage, since that is the problem our software solves.

[Slide: why there is so much pressure on storage]
After that, the story continues with a discussion of how different vendors are solving the problem of pressure on storage, and how our solution is differentiated from them. We ended up removing the first slide above and just starting the story with the second slide.

What I didn’t capture right in the deck is the interconnectedness of these components. I’m going to take storage out of it for a moment, because that’s the component whose change we want to analyze. So, if you look at what is enabling Applications and Virtualization to change in these ways, it’s the changes in CPU, Memory, and Networking. Conversely, the innovation in those areas is being driven by the Application and Virtualization growth.

Kind of like this. (And to get really technical about it, I’d say that 10GbE networking is even more impactful for cloud-based apps than it is for virtualization, but we can keep the image simpler.)

[Diagram: CPU, Memory, and Networking enabling Application and Virtualization growth – and being driven by it]
Now we can look at what is driving all the pressure on storage. CPU, Memory, and 10GbE are enabling applications to get faster (and scale out) and virtualization to consolidate more densely. But the 10GbE network isn’t just enabling applications and virtualization; it’s also the pipe directly to the storage – so it’s a direct factor as well:

[Diagram: CPU, Memory, and 10GbE driving pressure on storage, with 10GbE also the direct pipe to storage]
And one final touch on that slide – the major disruption in storage itself, flash technologies dropping in price, is causing pressure on storage. Flash means more I/O can be processed on the storage, IF the storage processor can handle the uptick in IOPS. And flash needs special handling for things like garbage collection and wear leveling. So that’s more pressure storage is putting “on itself.”

[Diagram: flash adding the pressure storage puts “on itself”]
So that itself would have been a better first slide – and in fact it might have negated the need for the second slide.

Stay tuned for another post early next week on what else is interesting about this slide that didn’t quite make the roster…err…deck.
