Design Cache Q: What if we shard among machines with 16GB of RAM?
The discussion seems a bit shallow. Why is 5500 QPS a reasonable number?
I think the idea here is to show that 550 QPS is lower than 2300 QPS, and so better compared to the previous approach. It is just a matter of showing the different back-of-envelope calculations and arriving at a reasonable number.
You forgot a 0 after your numbers: 10M / 430 is 23k. I do not think 5500 QPS is a reasonable number on its own. This part is something like: "OK, but if it does not work, you can add machines."
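A quick sketch of the corrected arithmetic. The 10M and 430 figures come from the comment above; the machine count `k` is a hypothetical value for illustration, since the thread does not state how many machines produce the 5500 QPS figure:

```python
# Back-of-envelope QPS check using the numbers from the discussion.
total_requests = 10_000_000   # 10M requests
seconds = 430                 # time window from the comment above
total_qps = total_requests / seconds
print(round(total_qps))       # ~23k QPS, i.e. 23000, not 2300

# Sharding across k machines divides the per-machine QPS.
# k = 4 is a hypothetical count, chosen only to show the effect.
k = 4
print(round(total_qps / k))   # roughly the same ballpark as 5500 QPS
```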
In this line, "all the currently cached data becomes VALID and all requests would have to", shouldn't this be INVALID?
Does this 1875 shards mean 1875 machines, each with 16 GB of RAM?
Yes. We would have 1875 machines with 16 GB of RAM each.
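The 1875 figure works out to about 30 TB of total cached data (the total dataset size is an inference from these numbers, not stated explicitly in the thread):

```python
# Each shard is one machine holding its slice of the data in RAM.
ram_per_machine_gb = 16
num_shards = 1875
total_gb = num_shards * ram_per_machine_gb
print(total_gb)  # 30000 GB, i.e. ~30 TB of cached data overall
```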
"Perform well" is a very short-sighted term. Much better is to say "perform better", or even: "consistent hashing will require some cache entries to be invalidated, but their expected number is N/k, where N is the number of entries and k is the previous number of shards". Yet there is a better solution than plain consistent hashing: use more shards than servers and distribute the shards among the servers. In this case, adding a server only requires moving a few shards to the new server.
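A minimal sketch of the "more shards than servers" idea described above: keys map to a fixed set of logical shards, and shards (not keys) are assigned to servers, so adding a server moves only a few shards and the entries on unmoved shards stay valid. All names, the shard count, and the rebalancing strategy are illustrative assumptions:

```python
# Fixed logical shards mapped onto a variable set of servers.
NUM_SHARDS = 64  # fixed, chosen larger than any expected server count

def shard_of(key: str) -> int:
    # Stable within one process; production code would use a stable
    # hash (e.g. crc32), since Python's hash() is randomized per run.
    return hash(key) % NUM_SHARDS

def add_server(assignment: dict, new_server: str) -> dict:
    """Move just enough shards from the busiest servers to the new one."""
    assignment = dict(assignment)
    target = NUM_SHARDS // (len(set(assignment.values())) + 1)
    for _ in range(target):
        loads = {}
        for shard, srv in assignment.items():
            loads.setdefault(srv, []).append(shard)
        busiest = max(loads, key=lambda s: len(loads[s]))
        assignment[loads[busiest][0]] = new_server
    return assignment

initial = {s: f"srv-{s % 3}" for s in range(NUM_SHARDS)}   # 3 servers
rebalanced = add_server(initial, "srv-new")                # now 4 servers
moved = sum(1 for s in range(NUM_SHARDS) if initial[s] != rebalanced[s])
print(f"moved {moved} of {NUM_SHARDS} shards")  # only ~1/4 of shards move
```

Because `shard_of` depends only on the fixed `NUM_SHARDS`, no key ever changes shard; only the shard-to-server map changes, which is why most cached entries survive the reshard.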
Also, requests do not need to hit the DB to warm up the cache; that would be a disaster. You can load data from the DB into the cache based on the keys that were cached before the shards were changed.
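A sketch of that warm-up idea: pre-load a remapped shard from the DB using the key list recorded before the reshard, instead of letting every client miss fall through to the database. `db_fetch` and `cache_put` are hypothetical stand-ins, not a real API:

```python
# Warm a remapped shard in the background from a recorded key list.
def db_fetch(key):
    # Placeholder for a real database read.
    return f"value-for-{key}"

cache = {}

def cache_put(key, value):
    cache[key] = value

def warm_shard(previously_cached_keys):
    # A real warmer would rate-limit this loop so the bulk load
    # does not itself overload the database.
    for key in previously_cached_keys:
        cache_put(key, db_fetch(key))

warm_shard(["user:1", "user:2", "user:3"])
print(len(cache))  # 3 entries pre-loaded; reads now hit cache, not DB
```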
What does a shard mean here? Is it the same as a server, or can there be multiple shards on one server?
I did not understand how one server manages multiple shards, and why a single server managing multiple shards is better than a single machine simply handling more data.
With a lower main-memory size, the CPU cycles required for access lower as well. How?