How would an LRU cache work on a single machine that is single-threaded?


#1

Design Cache Q: How would an LRU cache work on a single machine that is single-threaded?


#2

The question is a bit unclear. What is meant by "a machine which is single threaded"? Do you mean the machine has only one core, so threads can never actually run in parallel, or that the app itself isn't multi-threaded? In either case, being single-threaded means that parallel reads and writes to the cache won't be possible.


#3

Whatever your assumption, the cache will be unable to handle multiple threads, so it is not a thread-safe cache at this point. It's generally a good idea to start with a very minimal, simple design so that the candidate gets the functionality correct. So first we can build a simple single-threaded cache, then a somewhat more complex thread-safe cache, and then we can think about distributing the cache over many machines as a single entity.


#4

Would a priority queue work here instead of a doubly linked list?


#5

A priority queue would add an unnecessary O(log n) factor to every operation, seriously reducing performance in a large system.
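
To make the cost concrete, here is a sketch of a heap-backed LRU, assuming Java; the class name and the lazy-deletion scheme are illustrative, not something prescribed in this thread. Java's PriorityQueue has no decrease-key, so one common workaround pushes a fresh timestamp on every access and discards stale entries at eviction time. That is still O(log n) per operation, versus O(1) for the doubly linked list:

import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Hypothetical heap-backed LRU: every get/put re-inserts into the heap,
// costing O(log n); stale stamps are skipped lazily at eviction time.
class HeapLruCache<K, V> {
    private record Stamp<K>(long tick, K key) {}

    private final int capacity;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Long> latest = new HashMap<>();   // newest tick per key
    private final PriorityQueue<Stamp<K>> heap =
            new PriorityQueue<>((a, b) -> Long.compare(a.tick, b.tick));
    private long clock = 0;

    HeapLruCache(int capacity) { this.capacity = capacity; }

    V get(K key) {
        if (!values.containsKey(key)) return null;
        touch(key);                        // O(log n) heap insert on every read
        return values.get(key);
    }

    void put(K key, V value) {
        values.put(key, value);
        touch(key);                        // O(log n) on every write
        while (values.size() > capacity) evictOne();
    }

    private void touch(K key) {
        latest.put(key, ++clock);
        heap.add(new Stamp<>(clock, key)); // older stamps for this key become stale
    }

    private void evictOne() {
        Stamp<K> top = heap.poll();        // O(log n) pop
        Long fresh = latest.get(top.key);
        if (fresh != null && fresh == top.tick) {  // evict only on a live stamp
            values.remove(top.key);
            latest.remove(top.key);
        }
    }
}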


#6

Can we use a LinkedHashMap instead of a doubly linked list and a hash map?
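
For what it's worth, in Java a LinkedHashMap already maintains exactly this hash map + doubly linked list combination internally. A minimal sketch, assuming Java (the class name here is illustrative): construct it with accessOrder = true and override removeEldestEntry, and it behaves as an LRU cache.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache on top of LinkedHashMap (class name is hypothetical).
class LinkedHashLru<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LinkedHashLru(int capacity) {
        super(capacity, 0.75f, true);  // accessOrder = true: order entries by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;      // called by put(); evicts the least recently used entry
    }
}

For example, with capacity 2, after put("a", 1), put("b", 2), get("a"), a put("c", 3) evicts "b", since "b" is now the least recently used entry.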


#7

I understand the evict operation would take constant time. How about the retrieve operation? Won't it be linear? The value in the map is a node of the list, and as far as I know the only way to reach a node in a linked list is by walking from the head.


#8

Just as each list node holds pointers to its next and previous nodes, the map holds a pointer to the exact list node that in turn holds the actual value for a key. This is unlike the usual pattern of storing the value against the key in the map itself, but it gives us access to the list node in constant time, and the node gives us access to the value in constant time again. We could have stored the value in the map directly, but we also need a data structure that maintains the order in which keys were accessed, for the purpose of LRU, and that is the doubly linked list: the list maintains the order, and the double links let us shift a recently accessed node (delete + insert at the front) in constant time again.

list = [Node1, Value1] <-> [Node2, Value2] <-> [Node3, Value3]
front = [Node1]
map = {Key1: Node1, Key2: Node2, Key3: Node3}

So, to say the least, this setup (hash map + doubly linked list) is the best possible arrangement, giving us constant time for all read, write, and delete operations on the cache, and thus solving the latency problem as well as it can be solved.
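
A minimal sketch of that design, assuming Java; the class and method names are illustrative, not from this thread. Sentinel head and tail nodes keep the unlink/insert logic free of null checks:

import java.util.HashMap;
import java.util.Map;

// Hand-rolled LRU: HashMap for O(1) lookup, doubly linked list for O(1)
// recency reordering. Assumes capacity >= 1.
class LruCache<K, V> {
    private static final class Node<K, V> {
        K key; V value; Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private final Node<K, V> head = new Node<>(null, null); // sentinel: most recently used side
    private final Node<K, V> tail = new Node<>(null, null); // sentinel: least recently used side

    LruCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    V get(K key) {
        Node<K, V> node = map.get(key);   // O(1) jump straight to the list node
        if (node == null) return null;
        unlink(node);                     // O(1) thanks to prev/next pointers
        insertAfterHead(node);            // move to the "recently used" end
        return node.value;
    }

    void put(K key, V value) {
        Node<K, V> node = map.get(key);
        if (node != null) {
            node.value = value;
            unlink(node);
        } else {
            if (map.size() == capacity) { // evict from the "least recently used" end
                Node<K, V> lru = tail.prev;
                unlink(lru);
                map.remove(lru.key);
            }
            node = new Node<>(key, value);
            map.put(key, node);
        }
        insertAfterHead(node);
    }

    private void unlink(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void insertAfterHead(Node<K, V> node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }
}

Note that each node stores its key as well as its value; that is what lets eviction remove the matching map entry in constant time too.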