About the Design Cache category
I see a problem in this design. Here we are assuming 10M requests per second
What is the kind of QPS we expect for the system?
Will our machines be able to handle a QPS of 23,000?
How would you implement HashMap?
What if we shard among machines with 16GB of RAM?
I was asked this problem in Google's interview for an L5 position
Consistency vs Availability?
Please explain Number of shards = 30 * 1000 / 16 = 1875
10M requests across 420 machines gives ~23,810 QPS per machine
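The arithmetic behind the two capacity questions above can be checked with a quick sketch. The figures are assumptions taken from the titles themselves: 30 TB of data to cache, 16 GB of RAM per machine, and 10M requests per second spread across 420 machines.

```python
# Back-of-the-envelope capacity math (assumed figures from the thread titles:
# 30 TB total data, 16 GB RAM per machine, 10M QPS over 420 machines).

total_data_gb = 30 * 1000        # 30 TB expressed in GB
ram_per_machine_gb = 16
num_shards = total_data_gb // ram_per_machine_gb
print(num_shards)                # 1875 shards/machines needed to hold the data

total_qps = 10_000_000
machines = 420
qps_per_machine = total_qps / machines
print(round(qps_per_machine))    # ~23810 QPS per machine
```

This is why several threads ask whether a single machine can sustain ~23K QPS: dividing 10M QPS over only 420 machines leaves each one with a heavy load.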
What is the 4 in the statement below?
A very good resource for designing distributed cache
What if we never had to remove entries from the LRU cache because we had enough space — what would you use to support get and set?
What is the meaning of "1M QPS" here?
Confused about the statement that QPS of 23K is not easily feasible
How is the QPS calculated here? I am not getting the exact calculation. Can somebody explain?
Diagram is the first problem
I don't understand how you are updating the linked list in the LRU cache
What is the number of machines required to cache?
Benefits of Write Through Cache
LRU cache on a single machine which is multi threaded - how does the LRU part work?
Where to submit solution
Submit Design Problem
How would a LRU cache work on a single machine which is multi threaded?
LRU for a distributed cache?
How would you prioritize above operations to keep latency to a minimum for our system?
What happens when a machine handling a shard goes down?
How would a LRU cache work on a single machine which is single threaded?
What about sharding algorithms? How does the caller know which server to go to?