
International Journal of Leading Research Publication
E-ISSN: 2582-8010
Optimizing Read Performance in Distributed Systems with Lease-Based Latency
| Author(s) | Naveen Srikanth Pasupuleti |
| --- | --- |
| Country | United States |
| Abstract | In distributed systems such as etcd, read index latency plays a critical role in determining the responsiveness and efficiency of read operations. etcd is a highly consistent and reliable key-value store that uses the Raft consensus algorithm to ensure strong consistency across nodes. In such systems, read operations must often be linearizable, meaning they reflect the most recent committed writes. To achieve this, etcd implements a mechanism called the read index operation, which allows a follower node to serve a read request without becoming the leader while still ensuring the data it returns is up to date and consistent. The read index process involves a follower contacting the leader to determine the highest committed index; the leader then confirms with a quorum of the cluster that it is still the leader, ensuring a majority has acknowledged the latest state before the read is served. This guarantees linearizability but introduces latency, especially as the cluster grows: every read involves inter-node communication, adding overhead compared to simpler or stale reads. Read index operations are therefore well suited to strongly consistent reads but less ideal for high-throughput or low-latency scenarios. As the number of nodes in an etcd cluster grows, read index latency tends to increase because of the coordination cost of maintaining consistency across more nodes. In a small cluster of three nodes, the round-trip communication needed to validate a read is relatively quick, but in larger clusters of seven or more nodes the communication and consensus overhead significantly increases the time it takes to confirm a read, resulting in higher latency. In benchmarks and test environments, read index latency has been observed to grow steadily with cluster size. This elevated latency becomes a bottleneck for systems that require fast and frequent reads, such as service discovery, dynamic configuration updates, or distributed locking. In real-world use cases, high read index latency can degrade application performance and responsiveness; to address it, some developers use stale reads or caching techniques, which reduce latency but compromise consistency. In summary, while etcd’s read index mechanism ensures strong data consistency, it comes with the drawback of higher latency, especially in larger or busier clusters, and careful architectural choices and optimizations are required to balance consistency with performance in applications relying on etcd for real-time reads. This paper addresses this issue with a lease-based read mechanism that reduces read latency. |
| Field | Engineering |
| Published In | Volume 4, Issue 2, February 2023 |
| Published On | 2023-02-03 |
| Cite This | Optimizing Read Performance in Distributed Systems with Lease-Based Latency - Naveen Srikanth Pasupuleti - IJLRP Volume 4, Issue 2, February 2023. DOI 10.70528/IJLRP.v4.i2.1583 |
| DOI | https://doi.org/10.70528/IJLRP.v4.i2.1583 |
| Short DOI | https://doi.org/g9p7mv |
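
To make the two read paths described in the abstract concrete, the Go sketch below models a quorum-confirmed read-index read alongside a lease-based read that the leader serves locally while its lease is still valid. This is an illustrative sketch only, not etcd's or Raft's actual code: the `node` type and the `quorumAck`, `waitApplied`, and `leaseUntil` names are hypothetical simplifications.

```go
// Illustrative model of read-index reads versus lease-based reads.
// All names here are hypothetical simplifications for this sketch.
package main

import (
	"errors"
	"fmt"
	"time"
)

type node struct {
	isLeader     bool
	commitIndex  uint64
	appliedIndex uint64
	// leaseUntil: time until which this leader assumes no rival leader
	// exists; it is extended each time a heartbeat reaches a quorum.
	leaseUntil time.Time
	store      map[string]string
}

// readIndexRead models the quorum-confirmed path: capture the commit index,
// confirm leadership with a majority, wait for the state machine to catch up,
// then serve the read. The extra round trip is where latency grows with
// cluster size.
func (n *node) readIndexRead(key string) (string, error) {
	if !n.isLeader {
		return "", errors.New("not leader: ask the leader for a read index")
	}
	readIndex := n.commitIndex
	if !n.quorumAck() { // one heartbeat round trip to a majority of nodes
		return "", errors.New("could not confirm quorum")
	}
	n.waitApplied(readIndex)
	return n.store[key], nil
}

// leaseRead models the lease-based path: while the leader's lease is valid
// it answers locally, skipping the quorum round trip entirely.
func (n *node) leaseRead(key string) (string, error) {
	if !n.isLeader || time.Now().After(n.leaseUntil) {
		return n.readIndexRead(key) // lease expired: fall back to the safe path
	}
	n.waitApplied(n.commitIndex)
	return n.store[key], nil
}

func (n *node) quorumAck() bool        { return true } // stub for the heartbeat round trip
func (n *node) waitApplied(idx uint64) {}              // stub: block until appliedIndex >= idx

func main() {
	leader := &node{
		isLeader:   true,
		leaseUntil: time.Now().Add(500 * time.Millisecond),
		store:      map[string]string{"/config/feature": "on"},
	}
	v, err := leader.leaseRead("/config/feature")
	fmt.Println(v, err) // served locally while the lease is valid
}
```

In this model the latency difference comes from the `quorumAck` round trip: the read-index path pays it on every read, while the lease-based path pays it only when the lease has to be re-established, which is why lease-based reads scale better as the cluster grows.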
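
The consistency-versus-latency trade-off the abstract mentions also surfaces at the client. With the official etcd Go client (go.etcd.io/etcd/client/v3), a plain Get is linearizable and goes through the read-index path, while the WithSerializable() option returns a faster but possibly stale read from the contacted member's local state. The endpoint and key below are placeholder values.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Default Get is linearizable: it pays the read-index coordination cost.
	strong, err := cli.Get(ctx, "/config/feature")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("linearizable:", strong.Kvs)

	// WithSerializable skips the quorum confirmation: lower latency, but the
	// result may be stale relative to the latest committed write.
	stale, err := cli.Get(ctx, "/config/feature", clientv3.WithSerializable())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("serializable:", stale.Kvs)
}
```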


A CrossRef DOI is assigned to each research paper published in our journal. The IJLRP DOI prefix is 10.70528/IJLRP.

All research papers published on this website are licensed under the Creative Commons Attribution-ShareAlike 4.0 International License, and all rights belong to their respective authors/researchers.
