Base Station Power Optimization for Green Networks Using Reinforcement Learning

Next-generation mobile networks must provide high data rates, extremely low latency, and support for high connection density. Meeting these requirements will require many more base stations, and this growth creates an energy consumption problem, so "green" approaches to network operation will gain importance. Reducing the energy consumption of base stations is essential for going green, and it also helps service providers reduce operational expenses. However, achieving energy savings without degrading quality of service is a major challenge. To address this issue, we propose a machine learning based intelligent solution that also incorporates a network simulator. We develop a reinforcement learning model using the deep deterministic policy gradient (DDPG) algorithm. Our model frequently updates the policy of the network switches so that packets are forwarded to base stations operating at an optimized power level. The policies produced by the network controller are evaluated with a network simulator to verify the balance between energy consumption reduction and quality of service. The reinforcement learning model continuously learns and adapts to changing conditions in the dynamic network environment, yielding a more robust and realistic set of intelligent network management policies. Our results demonstrate that energy efficiency can be improved by 32% and 67% in dense and sparse scenarios, respectively.
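The control loop described above can be illustrated with a toy environment in the OpenAI Gym style. Everything here is an illustrative assumption, not the paper's actual simulator or reward function: the state is per-base-station traffic load, the continuous action is a per-base-station transmit power level (the quantity a DDPG actor would output), and the reward trades energy cost against a quality-of-service penalty.

```python
import random

class BaseStationEnv:
    """Toy sketch of the control loop (hypothetical names and reward):
    state  = per-base-station traffic load in [0, 1]
    action = per-base-station transmit power level in [0, 1]
    reward = negative normalized energy cost, minus a QoS penalty
             whenever power falls short of the offered load."""

    def __init__(self, n_bs=4, seed=0):
        self.n_bs = n_bs
        self.rng = random.Random(seed)
        self.load = [self.rng.random() for _ in range(n_bs)]

    def step(self, power):
        # Normalized energy use grows with the chosen power levels.
        energy_cost = sum(power) / self.n_bs
        # QoS is penalized when a base station's power is below its load.
        qos_penalty = sum(max(0.0, l - p)
                          for l, p in zip(self.load, power)) / self.n_bs
        reward = -energy_cost - 2.0 * qos_penalty  # assumed QoS weighting
        # Traffic changes each step; the agent must keep adapting.
        self.load = [self.rng.random() for _ in range(self.n_bs)]
        return self.load, reward

env = BaseStationEnv()
state = env.load
for _ in range(3):
    # Placeholder policy: match power to observed load. A trained DDPG
    # actor network would produce this continuous action instead.
    action = list(state)
    state, reward = env.step(action)
    print(round(reward, 3))
```

With this reward shape, a learned policy is pushed toward the lowest power level that still covers the offered load, which is the energy/QoS balance the abstract describes.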


