Every day for the first 12 years of my education, my father sent me off to school with a single piece of advice: “Ask good questions.”
He has a few other standards (doesn’t everyone’s dad?), but this one has always stuck with me. I think it’s a big part of why I ended up in technology.
Right now I’m in the phase of learning where the good questions are dumb questions. Because when we say, “I have a dumb question,” what are we really saying?
- “I want to ask about something that I think that you think that I should already know”
- “I want to ask about something that seems really basic but I don’t understand”
- “I want to ask about something that I think everyone else in the room already knows”
- “I want to ask about something that seems to be accepted but I don’t believe.”
- “I want to ask about something that you didn’t explain very well.”
I love asking dumb questions. Because they usually aren’t that dumb. And right now they are the key to my building a strong foundation of knowledge and capability.
So I think they are good questions. Here are some of the ones I’ve asked recently:
1. Why is our product called a “content-based cache?” Aren’t all caches caching content?
Well, yeah. Usually. But when we say “content-based cache,” it is in contrast to a “location-based cache.”
Location-based cache says “oh, you asked for the data in slot 7 already. I have that slot 7 data, here it is.”
Content-based cache says “oh, you asked for the data whose hash is xyzabc. I have the data whose hash is xyzabc, here it is.”
The benefit of using a content-based caching scheme is that if the same data is in two places, it only has to be stored in the cache once. That makes the cache’s logical size larger than its physical size.
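Here's a minimal sketch of that deduplication idea (the class and names are illustrative, not Infinio's actual implementation): the cache keys data by a hash of its contents, and keeps a separate index from location to hash, so two locations holding the same data share one entry.

```python
import hashlib

class ContentCache:
    """Toy content-based cache: data is stored once per unique hash."""

    def __init__(self):
        self.store = {}   # content hash -> data (the actual cached bytes)
        self.index = {}   # location -> content hash

    def put(self, location, data):
        key = hashlib.sha1(data).hexdigest()
        self.store[key] = data        # duplicate data maps to the same key
        self.index[location] = key

    def get(self, location):
        key = self.index.get(location)
        return self.store.get(key)

cache = ContentCache()
cache.put("slot7", b"hello")
cache.put("slot9", b"hello")  # same content, different location
print(len(cache.store))       # 1 -- stored once despite two locations
```

A location-based cache would instead key `store` directly by `"slot7"` and `"slot9"`, storing the identical bytes twice.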
2. So is a write-through cache a write cache or not?
Write caching comes in a few flavors. Write-back cache is probably what you think of when you think of write caching: when writes come in, they are written to the cache immediately, and at some later point they get moved to more permanent storage (or not).
Write-through cache is what Infinio does. It doesn’t mean that we aren’t caching writes – in fact, every write that comes in is added to the cache. However, the original write is also committed to the storage system immediately. This “warms” the cache by putting the most recently written data into it. However, unlike write-back cache, there’s no point in time in which the data is only in cache and not also on disk.
Write-through cache is not as fast as write-back cache, because the data is being written all the way to disk, not just to local (faster) cache. But it is less risky because all writes are going through to disk. For our solution, it also means you don’t have to make any changes to your storage system because snapshots, replication, etc., all work exactly the same way as before Infinio.
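The contrast between the two policies can be sketched in a few lines (all names here are illustrative; `backing` stands in for the storage system):

```python
class WriteThroughCache:
    """Every write warms the cache AND commits to storage immediately."""

    def __init__(self, backing):
        self.cache = {}
        self.backing = backing

    def write(self, key, value):
        self.cache[key] = value    # warm the cache with the new data
        self.backing[key] = value  # ...and commit to storage right away


class WriteBackCache:
    """Writes land in cache only; storage is updated later, on flush."""

    def __init__(self, backing):
        self.cache = {}
        self.dirty = set()
        self.backing = backing

    def write(self, key, value):
        self.cache[key] = value    # fast: cache only
        self.dirty.add(key)        # the storage write is deferred

    def flush(self):
        for key in self.dirty:
            self.backing[key] = self.cache[key]
        self.dirty.clear()


disk = {}
wt = WriteThroughCache(disk)
wt.write("a", 1)
print("a" in disk)  # True -- there is never a moment when the data is cache-only
```

With the write-back version, `"a" in disk` would be `False` until `flush()` ran; that window is exactly the risk the post describes.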
3. What’s all this discussion about Offload vs. Acceleration?
My first day, I sat in on a meeting where for 30 minutes we discussed whether we do offload or acceleration. I say “we” but I had no idea what was going on. There were metaphors involving cars and engines that were not helping me understand anything any better. I asked someone to explain it but it felt like all they were doing was describing the product again.
As it turns out, here’s what they were talking about:
The core function of our product is to offload read requests from the storage system by serving them from our cache. There are a lot of benefits that come out of this offload:
1. Some reads are much, much faster, because the data you’re requesting is in your local piece of the distributed cache.
2. Some reads are much faster, because the data you’re requesting is somewhere else in the distributed cache.
3. Because fewer requests are going to the storage system, the requests that do reach it can be served more quickly, since the storage system is less busy.
Technically, the first two are acceleration and #3 is offload. You’re welcome – now you don’t have to go to a meeting about this.
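The three cases above amount to a tiered read path, which a toy sketch makes concrete (every name here is assumed for illustration; this is not Infinio's code):

```python
def read(key, local_cache, peer_caches, storage, stats):
    """Serve a read from the nearest tier that has the data."""
    if key in local_cache:          # case 1: local hit (acceleration)
        stats["local_hits"] += 1
        return local_cache[key]
    for peer in peer_caches:        # case 2: hit elsewhere in the
        if key in peer:             #         distributed cache (acceleration)
            stats["remote_hits"] += 1
            return peer[key]
    stats["storage_reads"] += 1     # case 3: miss -- falls through to storage;
    return storage[key]             # keeping this count low is the offload
```

Acceleration shows up in how fast cases 1 and 2 return; offload shows up in `stats["storage_reads"]` staying small, which is what leaves the storage system less busy.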
I feel pretty confident that this “dumb question” phase is far from over. I’ll share my next few shortly.