
International Journal of Leading Research Publication
E-ISSN: 2582-8010 • Impact Factor: 9.56
A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal
Reinforcement Learning Scheduler to Improve Kubernetes Performance and Scalability
Author(s): Kanagalakshmi Murugan
Country: United States
Abstract:

Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. It provides a powerful and extensible architecture that enables organizations to run complex, distributed systems reliably and efficiently. One of the core components driving Kubernetes' operational efficiency is its scheduler, which assigns newly created pods to suitable nodes within a cluster. The scheduler plays a critical role in balancing workloads across the available resources to ensure performance, reliability, and optimal resource utilization.

The default scheduler, known as the kube-scheduler, follows a multi-step process to make scheduling decisions. First, it filters nodes based on resource requirements and constraints such as CPU, memory, affinity rules, taints, and tolerations. Once the set of feasible nodes is identified, a scoring mechanism ranks these nodes, taking into account factors such as resource balance, data locality, and pod affinity. The node with the highest score is then selected, and the pod is bound to it. This decision-making process is designed to be pluggable, allowing custom schedulers or policy extensions to be added for more specific use cases.

For workloads whose requirements go beyond what rule-based policies capture, alternative scheduling approaches are explored, including heuristics, constraint solvers, and, increasingly, machine learning and reinforcement learning (RL) techniques. RL-based schedulers learn scheduling policies by interacting with the environment and receiving feedback in the form of rewards or penalties based on outcomes such as pod latency, resource utilization, or SLA compliance. They can outperform traditional approaches by identifying non-obvious patterns and continuously evolving their policies based on operational data.

The Kubernetes scheduler is a foundational element of the platform that significantly influences application performance and cluster efficiency. With the growing complexity of containerized workloads, there is a clear need to explore intelligent scheduling mechanisms that go beyond rule-based methods. Reinforcement learning provides a promising direction for future innovation, enabling smarter and more adaptable scheduling strategies for next-generation Kubernetes deployments. This paper demonstrates the resulting performance improvement at the scheduler level.
Keywords: Kubernetes, scheduler, containers, orchestration, deployment, clustering, scalability, latency, scheduling, automation, reinforcement learning, optimization, resources, performance.
Published In: Volume 3, Issue 5, May 2022
Published On: 2022-05-06
Cite This: Reinforcement Learning Scheduler to Improve Kubernetes Performance and Scalability - Kanagalakshmi Murugan - IJLRP Volume 3, Issue 5, May 2022. DOI 10.70528/IJLRP.v3.i5.1641
DOI: https://doi.org/10.70528/IJLRP.v3.i5.1641
Short DOI: https://doi.org/g9rv6t
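The filter → score → bind pipeline and the RL feedback loop described in the abstract can be illustrated with a minimal sketch. This is not kube-scheduler's actual implementation: the `Node`/`Pod` dataclasses, the scoring weights, and the `BanditWeightTuner` (an epsilon-greedy multi-armed bandit standing in for a full RL policy) are all simplified, hypothetical constructions for illustration only.

```python
import random
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_free: float   # free CPU cores (illustrative units)
    mem_free: float   # free memory in GiB (illustrative units)

@dataclass
class Pod:
    name: str
    cpu_req: float
    mem_req: float

def filter_nodes(pod, nodes):
    """Filtering step: keep only nodes with enough free CPU and memory."""
    return [n for n in nodes
            if n.cpu_free >= pod.cpu_req and n.mem_free >= pod.mem_req]

def score_node(pod, node, weights):
    """Scoring step: weighted sum of residual-capacity and balance terms,
    loosely analogous to kube-scheduler's balanced-allocation scoring."""
    cpu_left = (node.cpu_free - pod.cpu_req) / node.cpu_free
    mem_left = (node.mem_free - pod.mem_req) / node.mem_free
    balance = 1.0 - abs(cpu_left - mem_left)  # prefer even CPU/mem usage
    return (weights["cpu"] * cpu_left
            + weights["mem"] * mem_left
            + weights["balance"] * balance)

def schedule(pod, nodes, weights):
    """Bind step: pick the highest-scoring feasible node and reserve resources."""
    feasible = filter_nodes(pod, nodes)
    if not feasible:
        return None  # pod stays pending, as in Kubernetes
    best = max(feasible, key=lambda n: score_node(pod, n, weights))
    best.cpu_free -= pod.cpu_req
    best.mem_free -= pod.mem_req
    return best.name

class BanditWeightTuner:
    """Toy RL layer: each candidate weight profile is a bandit arm.
    Rewards (e.g. negative pod latency, SLA compliance) update a running
    average value per arm; epsilon-greedy selection balances exploration
    and exploitation."""
    def __init__(self, profiles, epsilon=0.1):
        self.profiles = profiles
        self.values = [0.0] * len(profiles)
        self.counts = [0] * len(profiles)
        self.epsilon = epsilon

    def select(self):
        if random.random() < self.epsilon:
            i = random.randrange(len(self.profiles))   # explore
        else:
            i = max(range(len(self.profiles)),
                    key=lambda j: self.values[j])      # exploit
        return i, self.profiles[i]

    def update(self, i, reward):
        # Incremental running-average update of the arm's estimated value.
        self.counts[i] += 1
        self.values[i] += (reward - self.values[i]) / self.counts[i]
```

In use, each scheduling episode would select a weight profile, schedule pods with it, observe an outcome-based reward, and call `update`, so that over time the tuner favors the profile yielding the best observed outcomes. A production RL scheduler would instead learn a policy over a much richer state space, but the feedback structure is the same.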
A CrossRef DOI is assigned to each research paper published in the journal; the IJLRP DOI prefix is 10.70528/IJLRP.
All research papers published on this website are licensed under Creative Commons Attribution-ShareAlike 4.0 International License, and all rights belong to their respective authors/researchers.
