Deep Reinforcement Learning for Communication Flow Control in Wireless Mesh Networks

Qingzhi Liu, Long Cheng*, Adele Lu Jia, Cong Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

The wireless mesh network (WMN) is one of the most promising technologies for Internet of Things (IoT) applications because of its self-adaptive and self-organizing nature. To meet different communication performance requirements in WMNs, traditional approaches have to program flow control strategies explicitly. As a result, the performance of WMNs is significantly affected by the dynamic properties of the underlying networks in real applications. To provide a more flexible solution, in this article we present, for the first time, how emerging Deep Reinforcement Learning (DRL) can be applied to communication flow control in WMNs. Moreover, unlike a general DRL-based networking solution in which the network properties are pre-defined, we leverage the adaptive nature of WMNs and propose a self-adaptive DRL approach. Specifically, our method can reconstruct a WMN during the training of a DRL model. In this way, the trained DRL model can capture more properties of WMNs and achieve better performance. As a proof of concept, we have implemented our method with a self-adaptive Deep Q-learning Network (DQN) model. The evaluation results show that the presented solution significantly improves the communication performance of data flows in WMNs compared to a static benchmark solution.
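The core idea in the abstract — re-generating the network during training so the learned policy captures topology dynamics rather than one fixed configuration — can be illustrated with a minimal sketch. The code below is NOT the authors' implementation: it uses tabular Q-learning as a lightweight stand-in for the paper's DQN, and a toy 6-node mesh with link delays re-sampled every episode as a stand-in for WMN reconstruction. All names, the topology, and the reward shaping are illustrative assumptions.

```python
import random

# Toy mesh: nodes 0..5, node 5 is the gateway/sink. The link set is fixed,
# but link delays are re-sampled every episode -- a minimal stand-in for the
# paper's idea of reconstructing the WMN during DRL training.
NODES = 6
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def random_mesh():
    """Re-sample link delays for a fixed ring-plus-chords topology."""
    links = [(i, i + 1) for i in range(NODES - 1)] + [(0, 2), (1, 3), (2, 5)]
    delay = {}
    for u, v in links:
        d = random.uniform(1.0, 5.0)
        delay[(u, v)] = delay[(v, u)] = d  # symmetric links
    return delay

def neighbors(delay, u):
    return [v for (a, v) in delay if a == u]

Q = {}  # (node, next_hop) -> estimated value
random.seed(0)

for episode in range(2000):
    delay = random_mesh()        # reconstruct the network each episode
    node = 0
    for _ in range(20):          # bounded flow-routing episode
        if node == NODES - 1:
            break
        nbrs = neighbors(delay, node)
        if random.random() < EPS:                      # explore
            nxt = random.choice(nbrs)
        else:                                          # exploit
            nxt = max(nbrs, key=lambda v: Q.get((node, v), 0.0))
        # Reward: negative link delay, plus a bonus for reaching the gateway.
        r = -delay[(node, nxt)] + (10.0 if nxt == NODES - 1 else 0.0)
        future = max((Q.get((nxt, w), 0.0) for w in neighbors(delay, nxt)),
                     default=0.0)
        q = Q.get((node, nxt), 0.0)
        Q[(node, nxt)] = q + ALPHA * (r + GAMMA * future - q)  # Q-learning update
        node = nxt

# Greedy rollout: the learned next-hop policy routes a flow from node 0.
path, node = [0], 0
while node != NODES - 1 and len(path) < 10:
    delay = random_mesh()
    node = max(neighbors(delay, node), key=lambda v: Q.get((node, v), 0.0))
    path.append(node)
print(path)
```

Because the delays vary across episodes, the learned Q-table reflects average link quality rather than one snapshot, which is the (greatly simplified) analogue of the paper's claim that reconstructing the WMN during training lets the model capture more network properties. The paper's actual method replaces the Q-table with a deep network over richer state features.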

Original language: English
Article number: 9351833
Pages (from-to): 112-119
Number of pages: 8
Journal: IEEE Network
Volume: 35
Issue number: 2
DOIs
Publication status: Published - 1 Mar 2021

