A read/write request arrives for a KEY, which can be a userId or a table rowId, depending on how we shard the database.
To be clear, each point on the ring represents a hash value, and the first shard encountered clockwise from a request's hash serves that request.
As I understand it, there are two ways of implementing this modified consistent hashing, depending on how we place the duplicate shard positions:
- Use different hash functions (or salted keys) per shard to scatter its positions around the ring as randomly as possible, so that when one server fails its load is spread ALMOST EQUALLY across the remaining servers (with good randomness, the next position clockwise won't always belong to the same server).
For example: s1 s2 s3 s1 s3 s2 s1 … . If s1 fails, its load falls on both s2 and s3 at different points on the ring.
To be clear, these are not multiple instances of s1, only multiple mapping positions for s1 on the ring.
- Run actual multiple instances of each shard in parallel, in a master-slave fashion.
(Just an alternative that comes to mind when dealing with failures.)
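The first option (multiple ring positions per shard, often called virtual nodes) can be sketched roughly like this. This is a minimal illustration, not a production implementation; the class and method names, the vnode count, and the use of MD5 as the hash are all my own assumptions:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Hypothetical sketch: a ring where each shard owns many positions."""

    def __init__(self, vnodes=50):
        self.vnodes = vnodes
        self.ring = []  # sorted list of (hash_value, shard) pairs

    def _hash(self, key):
        # Any well-distributed hash works; MD5 is just an assumption here.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_shard(self, shard):
        # Salting the shard name gives it `vnodes` pseudo-random positions,
        # interleaving it with the other shards (the s1 s2 s3 s1 s3 ... pattern).
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{shard}#{i}"), shard))

    def remove_shard(self, shard):
        self.ring = [(h, s) for h, s in self.ring if s != shard]

    def get_shard(self, key):
        # First shard clockwise from the key's hash serves the request.
        idx = bisect.bisect_right(self.ring, (self._hash(key),))
        return self.ring[idx % len(self.ring)][1]
```

Removing a shard then redistributes only that shard's keys, and (with enough virtual nodes) spreads them across the surviving shards rather than dumping them all on one neighbor.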