To date, battery optimization for embedded systems remains a crucial subject. Most existing work focuses on transmission control without taking the characteristics of the batteries themselves into account, even though an improvement of 70% has been reported by exploiting the battery recovery effect. In this paper, the recovery phenomenon is exploited to design an algorithm that optimizes both the lifetime of the battery and the performance of the studied system. Algorithms from the dynamic programming and reinforcement learning fields are considered first: whereas dynamic programming assumes that detailed prior information is available, reinforcement learning treats this information as unknown and requires long computation times to converge toward an optimal policy. The contribution of this paper is a new Rapid Learning Algorithm (RLA) that combines features of both dynamic programming and reinforcement learning. RLA exploits a reduced model of the system instead of exploring the whole, heavy system state model as dynamic programming does, which shortens the RLA run time. Based on a stochastic battery model, the simulation results obtained with RLA are compared to those of the dynamic programming and reinforcement learning algorithms under the same conditions. By taking the recovery effect into account, this paper shows that both the computation time and the system performance are greatly improved when RLA is adopted.
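To give a concrete feel for the dynamic-programming side of the comparison, the sketch below runs value iteration on a toy battery model with a recovery effect: idling lets one charge unit recover with some probability, while transmitting spends a unit and earns reward. All state sizes, probabilities, and rewards here are illustrative assumptions, not values or models taken from the paper.

```python
def value_iteration(capacity=10, p_recover=0.3, gamma=0.95, tol=1e-6):
    """Value iteration (dynamic programming) on a toy recovery-aware battery.

    States: remaining charge 0..capacity.
    Actions: 'transmit' (spend 1 charge unit, reward 1) or 'idle'
    (reward 0, recover 1 unit with probability p_recover -- a stand-in
    for the battery recovery effect). Parameters are hypothetical.
    """
    V = [0.0] * (capacity + 1)
    while True:
        delta = 0.0
        for s in range(capacity + 1):
            # Idling: charge may recover one unit (capped at capacity).
            q_idle = gamma * (p_recover * V[min(s + 1, capacity)]
                              + (1 - p_recover) * V[s])
            # Transmitting: impossible on an empty battery.
            q_tx = -float('inf') if s == 0 else 1.0 + gamma * V[s - 1]
            v_new = max(q_idle, q_tx)
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < tol:
            break
    # Extract the greedy policy from the converged value function.
    policy = []
    for s in range(capacity + 1):
        q_idle = gamma * (p_recover * V[min(s + 1, capacity)]
                          + (1 - p_recover) * V[s])
        q_tx = -float('inf') if s == 0 else 1.0 + gamma * V[s - 1]
        policy.append('transmit' if q_tx > q_idle else 'idle')
    return V, policy
```

Sweeping every state on every iteration is exactly the cost the abstract attributes to dynamic programming; RLA's reduced model is meant to avoid such exhaustive sweeps, and reinforcement learning would instead estimate the same values from sampled transitions without knowing `p_recover`.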
How to Cite
ASSAOUY, M., Zytoune, O., & Ouadou, M. (2022). Rapid Learning Optimization Approach for Battery Recovery-Aware Embedded System Communications. International Journal of Communication Networks and Information Security (IJCNIS), 12(3). https://doi.org/10.17762/ijcnis.v12i3.4774 (Original work published December 21, 2020)