Top Read Articles
Artificial intelligence for satellite communication: A review
Fares Fourati, Mohamed-Slim Alouini
  2021, 2 (3): 213-243.   DOI: 10.23919/ICN.2021.0015

Satellite communication offers the prospect of service continuity over uncovered and under-covered areas, service ubiquity, and service scalability. However, several challenges must first be addressed to realize these benefits, as the resource management, network control, network security, spectrum management, and energy usage of satellite networks are more challenging than those of terrestrial networks. Meanwhile, artificial intelligence (AI), including machine learning, deep learning, and reinforcement learning, has been steadily growing as a research field and has shown successful results in diverse applications, including wireless communication. In particular, the application of AI to a wide variety of satellite communication aspects has demonstrated excellent potential, including beam-hopping, anti-jamming, network traffic forecasting, channel modeling, telemetry mining, ionospheric scintillation detection, interference management, remote sensing, behavior modeling, space-air-ground integration, and energy management. This work thus provides a general overview of AI, its diverse sub-fields, and its state-of-the-art algorithms. Several challenges facing diverse aspects of satellite communication systems are then discussed, and their proposed and potential AI-based solutions are presented. Finally, an outlook of the field is drawn, and future steps are suggested.

True-data testbed for 5G/B5G intelligent network
Yongming Huang, Shengheng Liu, Cheng Zhang, Xiaohu You, Hequan Wu
  2021, 2 (2): 133-149.   DOI: 10.23919/ICN.2021.0002

Future beyond fifth-generation (B5G) and sixth-generation (6G) mobile communications will shift from facilitating interpersonal communications to supporting the internet of everything (IoE), where intelligent communications with full integration of big data and artificial intelligence (AI) will play an important role in improving network efficiency and providing high-quality service. As a rapidly evolving paradigm, AI-empowered mobile communications demands large amounts of data acquired from real network environments for systematic test and verification. Hence, we build the world’s first true-data testbed for 5G/B5G intelligent network (TTIN), which comprises 5G/B5G on-site experimental networks, data acquisition & data warehouse, and AI engine & network optimization. In the TTIN, true network data acquisition, storage, standardization, and analysis are available, which enable system-level online verification of B5G/6G-oriented key technologies and support data-driven network optimization through a closed-loop control mechanism. This paper elaborates on the system architecture and module design of TTIN. Detailed technical specifications and some of the established use cases are also showcased.
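The closed-loop, data-driven optimization that TTIN supports can be caricatured as a measure-analyze-reconfigure loop. The toy network model and the capacity-tuning rule below are purely illustrative assumptions, not TTIN's actual modules:

```python
# Illustrative closed-loop control sketch: measure, analyze, reconfigure,
# repeat. The "network" dict and the tuning rule are toy assumptions.

def measure(network):
    """Data acquisition stand-in: observe current load vs. capacity."""
    return network["load"] / network["capacity"]

def optimize(network, utilization, target=0.7):
    """AI-engine stand-in: nudge capacity toward the target utilization."""
    network["capacity"] *= utilization / target
    return network

network = {"load": 90.0, "capacity": 100.0}
for _ in range(5):                     # closed-loop control iterations
    u = measure(network)
    network = optimize(network, u)
print(round(measure(network), 2))      # settles at the 0.7 target
```

In a real testbed the "measure" step would read the data warehouse and "optimize" would be a learned policy; the loop structure is the point here.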

Blockchain-enabled fog resource access and granting
Gang Liu, Jinsong Wu, Ting Wang
  2021, 2 (2): 108-114.   DOI: 10.23919/ICN.2021.0009

Fog computing is a new computing paradigm that meets ubiquitous massive access and latency-critical applications by moving processing capability closer to end users. Its geographically distributed/floating features, with potential autonomy requirements, introduce new challenges to the traditional methodology of network access control. In this paper, a blockchain-enabled fog resource access and granting solution is proposed to tackle the unique requirements brought by fog computing. The smart contract concept is introduced to enable dynamic and automatic credential generation and delivery for an independent offer of fog resources. A per-transaction negotiation mechanism allows the fog resource provider to dynamically publish an offer and facilitates the choice of the preferred resource by the end user. Decentralized authentication and authorization relieve the processing pressure brought by massive access and avoid single-point failure. Our solution can be extended and used in multi-access, and especially multi-carrier, scenarios in which centralized authorities are absent.
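The per-transaction publish/choose/grant flow described above might be sketched as follows. The in-memory offer list and the hash-derived credential are illustrative stand-ins for the actual ledger and on-chain contract logic:

```python
# Sketch of per-transaction negotiation for fog resources. The offer
# list stands in for the ledger; the SHA-256-derived credential stands
# in for a smart contract's output. All names here are illustrative.
import hashlib

offers = []   # published fog-resource offers (stand-in for the ledger)

def publish_offer(provider, cpu, price):
    """Provider side: dynamically publish an offer."""
    offer = {"provider": provider, "cpu": cpu, "price": price}
    offers.append(offer)
    return offer

def choose_offer(max_price):
    """End-user side: pick the cheapest offer within budget."""
    candidates = [o for o in offers if o["price"] <= max_price]
    return min(candidates, key=lambda o: o["price"]) if candidates else None

def grant_credential(offer, user):
    """Contract-style step: deterministically derive an access credential."""
    blob = f'{offer["provider"]}:{offer["cpu"]}:{user}'.encode()
    return hashlib.sha256(blob).hexdigest()[:16]

publish_offer("fog-A", cpu=2, price=5)
publish_offer("fog-B", cpu=4, price=3)
best = choose_offer(max_price=4)
print(best["provider"])                   # fog-B
print(len(grant_credential(best, "u1")))  # 16
```

Because the credential is derived deterministically from the agreed offer, any party can re-verify it, which is the property the decentralized scheme relies on.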

A flexible scheduling algorithm for the 5th-generation networks
Lanlan Li, Wentao Shao, Xin Zhou
  2021, 2 (2): 101-107.   DOI: 10.23919/ICN.2020.0017

At present, the 5th-Generation (5G) wireless mobile communication standard has been released. 5G networks efficiently support enhanced mobile broadband traffic, ultra-reliable low-latency communication traffic, and massive machine-type communication. However, a major challenge for 5G networks is to achieve effective Radio Resource Management (RRM) strategies and scheduling algorithms that meet quality of service requirements. The Proportional Fair (PF) algorithm is widely used in existing 5G scheduling technology. In the PF algorithm, the RRM assigns a priority to each user served by the gNodeB, and the existing priority metrics mainly focus on the flow rate. The purpose of this study is to explore how to improve the throughput of 5G networks and to propose new scheduling schemes; to this end, the packet delay of the data flow is included in the priority metric. The Vienna 5G System-Level (SL) simulator is a MATLAB-based SL simulation platform used to facilitate the research and development of 5G and beyond mobile communications. This paper presents a new scheduling algorithm based on the analysis of different scheduling schemes for radio resources using the Vienna 5G SL simulator.
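A delay-aware PF priority of the kind the abstract describes can be illustrated roughly as follows. The multiplicative delay weighting and all parameter names are assumptions for illustration, not the paper's actual formula:

```python
# Sketch of a Proportional Fair (PF) priority metric extended with
# packet delay. The delay factor (1 + delay / delay_budget) is an
# illustrative assumption, not the paper's exact metric.

def pf_delay_priority(inst_rate, avg_rate, head_delay, delay_budget):
    """Priority of one user: classic PF ratio scaled by queuing delay."""
    pf = inst_rate / max(avg_rate, 1e-9)            # classic PF term
    delay_factor = 1.0 + head_delay / delay_budget  # grows as packets age
    return pf * delay_factor

def schedule(users):
    """Pick the user with the highest priority for the next resource block."""
    return max(users, key=lambda u: pf_delay_priority(
        u["inst_rate"], u["avg_rate"], u["head_delay"], u["delay_budget"]))

users = [
    {"id": 1, "inst_rate": 10.0, "avg_rate": 5.0, "head_delay": 1.0, "delay_budget": 10.0},
    {"id": 2, "inst_rate": 8.0,  "avg_rate": 2.0, "head_delay": 8.0, "delay_budget": 10.0},
]
print(schedule(users)["id"])  # 2: lower average rate and older packets win
```

Pure PF would already favor user 2 here; the delay factor widens the gap as that user's packets age, which is the throughput/latency trade-off the study explores.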

Routing strategy of reducing energy consumption for underwater data collection
Jiehong Wu, Xichun Sun, Jinsong Wu, Guangjie Han
  2021, 2 (3): 163-176.   DOI: 10.23919/ICN.2021.0012

Underwater Wireless Sensor Networks (UWSNs) are widely used in many fields, such as regular marine monitoring and disaster warning. However, UWSNs are still subject to various limitations and challenges: ocean interference and noise are high, bandwidths are narrow, and propagation delays are long. Sensor batteries have limited energy and are difficult to replace or recharge. Accordingly, the design of routing protocols is one solution to these problems. Aiming at reducing and balancing network energy consumption and effectively extending the life cycle of UWSNs, this paper proposes a Hierarchical Adaptive Energy-efficient Clustering Routing (HAECR) strategy. First, the strategy divides the network into hierarchical regions based on the depth of each sensor node in three-dimensional (3D) space. Second, sensor nodes form different competition radii based on their own attributes and remaining energy, and nodes in the same layer compete freely to form clusters of different sizes. Finally, the transmission path between clusters is determined according to comprehensive factors, such as link quality, and the optimal route is then planned. A simulation experiment is conducted over a 3D monitoring region. The simulation results show that the HAECR clustering strategy is superior to LEACH and UCUBB in terms of balancing and reducing energy consumption, extending the network lifetime, and increasing the number of data transmissions.
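The first two ingredients the abstract names, depth-based layering and an energy-scaled competition radius, might look roughly like this. The layer height and the linear radius scaling are illustrative assumptions, not HAECR's actual formulas:

```python
# Illustrative sketch of depth-based layer assignment and an
# energy-scaled competition radius. The 100 m layer height and the
# linear scaling are assumptions for illustration only.

def layer_of(depth, layer_height=100.0):
    """Assign a node to a hierarchical layer by its depth in 3D space."""
    return int(depth // layer_height)

def competition_radius(r_max, energy, energy_max):
    """Radius shrinks as remaining energy drops, so low-energy nodes
    end up heading smaller clusters (assumed linear scaling)."""
    return r_max * (energy / energy_max)

print(layer_of(250.0))                       # depth 250 m -> layer 2
print(competition_radius(50.0, 40.0, 80.0))  # half energy -> radius 25.0
```

Nodes in the same layer would then compete for cluster-head roles within these radii, which is how cluster sizes come to differ across the network.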

Distributed reinforcement learning based framework for energy-efficient UAV relay against jamming
Weihang Wang, Zefang Lv, Xiaozhen Lu, Yi Zhang, Liang Xiao
  2021, 2 (2): 150-162.   DOI: 10.23919/ICN.2021.0010

Unmanned aerial vehicle (UAV) networks are vulnerable to jamming attacks, which may cause severe damage such as communication outages. Due to energy constraints, the source UAV cannot blindly increase its transmit power, and the complex, highly mobile network topology prevents the destination UAV from evading the jammer by flying at will. To maintain communication with a limited battery capacity in the presence of a greedy jammer, we propose in this paper a distributed reinforcement learning (RL) based energy-efficient framework for UAV networks with constrained energy under jamming attacks, improving communication quality while minimizing the total energy consumption of the network. This framework enables each relay UAV to independently select its transmit power based on historical state-related information, without knowing the moving trajectories of the other UAVs or the jammer; the location and battery level of each UAV need not be shared with other UAVs. We also propose a deep RL based anti-jamming relay approach for UAVs with portable computation equipment, such as a Raspberry Pi, to achieve higher and faster performance. We study the Nash equilibrium (NE) and the performance bounds based on the formulated power control game. Simulation results show that the proposed schemes can reduce both the bit error rate (BER) and the energy consumption of the UAV network compared with the benchmark method.
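Independent per-relay power selection of this kind could, for instance, be realized with tabular Q-learning, as sketched below. The discrete power levels, state space, and reward shape are illustrative assumptions, not the paper's framework:

```python
# Minimal tabular Q-learning sketch of per-relay transmit-power
# selection. Power levels, state design, and reward are assumptions
# for illustration only.
import random

POWER_LEVELS = [0.1, 0.5, 1.0]   # candidate transmit powers (W), assumed

class RelayAgent:
    def __init__(self, n_states, lr=0.1, gamma=0.9, eps=0.1):
        self.q = [[0.0] * len(POWER_LEVELS) for _ in range(n_states)]
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, state):
        """Epsilon-greedy choice over power levels, using only local state."""
        if random.random() < self.eps:                # explore
            return random.randrange(len(POWER_LEVELS))
        row = self.q[state]                           # exploit
        return row.index(max(row))

    def learn(self, s, a, r, s_next):
        """Standard one-step Q-learning update."""
        target = r + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.lr * (target - self.q[s][a])

def reward(sinr, power, energy_weight=0.5):
    """Toy reward: communication quality minus an energy penalty."""
    return sinr - energy_weight * power

agent = RelayAgent(n_states=4)
a = agent.act(state=0)
agent.learn(0, a, reward(sinr=2.0, power=POWER_LEVELS[a]), s_next=1)
```

Each relay holds its own table and learns from its own history, so nothing about other UAVs' positions or batteries needs to be exchanged, matching the framework's privacy property.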

Harmonious wireless networks: Perspective of interference management
Wei Liu
  2021, 2 (3): 198-204.   DOI: 10.23919/ICN.2021.0013

This paper elaborates on the harmonious wireless network from the perspective of interference management. The coexistence of useful and interfering signals is beneficial in terms of the throughput of the entire wireless network. Useful signals and interfering signals are complementary and in juxtaposition to each other in the context of a single communication link, and are in symbiosis within the framework of the network. The philosophy behind this can be described by the traditional Chinese symbol of “yin” and “yang”. A wireless network with optimal performance must be a harmonious network in which the interfering and useful signals coexist in an optimal balance. Interference management plays a critical role in achieving this balance, and sophisticated interference management techniques should be designed to improve system performance.

Multitarget tracking control algorithm under local information selection interaction mechanism
Jiehong Wu, Jinghui Yang, Weijun Zhang, Jiankai Zuo
  2021, 2 (2): 91-100.   DOI: 10.23919/ICN.2021.0011

This study focuses on the problem of multitarget tracking. To address the problems of current tracking algorithms, namely the time consumed by subgroup separation and the uneven group sizes of unmanned aerial vehicles (UAVs) tracking each target, a multitarget tracking control algorithm under a local information selection interaction mechanism is proposed. First, on the basis of the location, number, and perceived target information of neighboring UAVs, a temporary leader selection strategy is designed to realize the local follow-up movement of UAVs when the UAVs cannot fully perceive the target. Second, in combination with the basic rules of cluster movement and target information perception factors, distributed control equations are designed to achieve a rapid gathering of UAVs and consistent tracking of multiple targets. Lastly, simulation experiments are conducted in two- and three-dimensional spaces. For a given number of UAVs, the clustering time of the proposed algorithm is less than 3 s, and the probability of equal UAV subgroup sizes after group separation is over 78%.

Model-based reinforcement learning for router port queue configurations
Ajay Kattepur, Sushanth David, Swarup Kumar Mohalik
  2021, 2 (3): 177-197.   DOI: 10.23919/ICN.2021.0016

Fifth-generation (5G) systems have brought about new challenges toward ensuring Quality of Service (QoS) in differentiated services. These include low latency applications, scalable machine-to-machine communication, and enhanced mobile broadband connectivity. In order to satisfy these requirements, the concept of network slicing has been introduced to generate slices of the network with specific characteristics. To meet the requirements of network slices, routers and switches must be effectively configured to provide priority queue provisioning, resource contention management, and adaptation. Configuring routers from vendors such as Ericsson, Cisco, and Juniper has traditionally been an expert-driven process with static rules for individual flows, which is prone to suboptimal configurations under varying traffic conditions. In this paper, we model the internal ingress and egress queues within routers via a queuing model. The effects of changing the queue configuration with respect to priority, weights, flow limits, and packet drops are studied in detail. This is used to train a model-based Reinforcement Learning (RL) algorithm that generates optimal policies for flow prioritization, fairness, and congestion control. The efficacy of the RL policy output is demonstrated in scenarios involving ingress queue traffic policing, egress queue traffic shaping, and one-hop router coordinated traffic conditioning. It is also evaluated on a real application use case in which a statically configured router proved suboptimal for the desired QoS requirements. Such automated configuration of routers and switches will be critical for 5G deployments with varying flow requirements and traffic patterns.
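The queue behaviors under study, weights, flow limits, and packet drops, can be sketched with a toy weighted-round serving model. The weights and limits below are illustrative choices, not any vendor's configuration or the paper's queuing model:

```python
# Toy model of egress queues with weighted service, a per-queue flow
# limit, and tail drops. Weights and limits are illustrative only.
from collections import deque

class EgressQueue:
    def __init__(self, weight, limit):
        self.buf = deque()
        self.weight = weight   # share of service in each round
        self.limit = limit     # flow limit: max queued packets
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.buf) >= self.limit:
            self.drops += 1    # tail drop when the flow limit is hit
        else:
            self.buf.append(pkt)

def serve_round(queues):
    """One weighted round: each queue sends up to `weight` packets."""
    sent = []
    for q in queues:
        for _ in range(q.weight):
            if q.buf:
                sent.append(q.buf.popleft())
    return sent

hi = EgressQueue(weight=3, limit=4)   # high-priority slice, larger share
lo = EgressQueue(weight=1, limit=4)   # best-effort slice
for i in range(5):
    hi.enqueue(f"h{i}")   # 5th packet exceeds the limit -> dropped
    lo.enqueue(f"l{i}")
print(serve_round([hi, lo]))  # ['h0', 'h1', 'h2', 'l0']
print(hi.drops)               # 1
```

An RL policy in this setting would adjust `weight` and `limit` per queue from observed drops and delays, rather than leaving them as static expert-set rules.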

An intelligent wireless transmission toward 6G
Ping Zhang, Lihua Li, Kai Niu, Yaxian Li, Guangyan Lu, Zhaoyuan Wang
  2021, 2 (3): 244-257.   DOI: 10.23919/ICN.2021.0017

With the deployment and commercial application of 5G, researchers have started to think about 6G, which could meet more diversified and deeper intelligent communication requirements. In this paper, a 6G concept featuring four physical elements, i.e., man, machine, object, and genie, is introduced, with the genie explained as a new element toward 6G. This paper focuses on realizing the genie as an intelligent wireless transmission toward 6G, including semantic information theory, end-to-end artificial intelligence (AI) joint transceiver design, intelligent wireless transmission block design, and user-centric intelligent access. A comprehensive state of the art of each key technology is presented, and the main open questions as well as some novel suggestions are given. The genie will work comprehensively in 6G wireless communication and other major industrial verticals, while its realization is concrete and proceeds step by step. It is envisioned that a genie-based wireless communication link will work with high intelligence and perform better than one controlled manually.

Intelligent throughput stabilizer for UDP-based rate-control communication system
Michiko Harayama, Noboru Miyagawa
  2021, 2 (3): 205-212.   DOI: 10.23919/ICN.2021.0014

In view of the successful application of deep learning, mainly in the field of image recognition, deep learning applications are now being explored in the fields of communication and computer networks. In these fields, systems have been developed using proper theoretical calculations and procedures. However, due to the large amount of data to be processed, such processing takes time, and deviations from theory sometimes occur due to uncertain disturbances. Therefore, deep learning or nonlinear approximation by neural networks may be useful in some cases. We have studied a user datagram protocol (UDP) based rate-control communication system called the simultaneous multipath communication system (SMPC), which measures throughput over a group of packets at the destination node and feeds it back to the source node continuously. By comparing the throughput with the recorded transmission rate, the source node detects congestion on the transmission route and adjusts the packet transmission interval. However, the throughput fluctuates as packets pass through the route, and if it is fed back directly, the transmission rate fluctuates greatly, causing the throughput fluctuation to become even larger; in addition, the average throughput becomes even lower. In this study, we tried to stabilize the transmission rate by incorporating prediction and learning performed by a neural network. The prediction is performed using the throughput measured by the destination node, and the result is learned so as to generate a stabilizer. A simple moving average method and stabilizers using three types of neural networks, namely multilayer perceptrons, recurrent neural networks, and long short-term memory, were built into the transmission controller of the SMPC. The results showed that not only was the fluctuation reduced, but the average throughput also improved. Together, the results demonstrate that deep learning can be used to predict and output stable values from data with complicated time fluctuations that are difficult to analyze.
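The simple moving average baseline compared in the study can be sketched directly; it smooths the throughput fed back to the sender so the transmission rate does not chase every fluctuation. The window size here is an illustrative choice:

```python
# Simple moving average smoother for the throughput feedback loop.
# The window size of 4 samples is an illustrative assumption.
from collections import deque

class MovingAverageStabilizer:
    def __init__(self, window=4):
        self.samples = deque(maxlen=window)  # keeps only the last `window` values

    def update(self, throughput):
        """Feed one measured throughput sample; return the smoothed value."""
        self.samples.append(throughput)
        return sum(self.samples) / len(self.samples)

stab = MovingAverageStabilizer(window=4)
noisy = [10.0, 2.0, 10.0, 2.0, 10.0]   # heavily jittered measurements
smoothed = [stab.update(t) for t in noisy]
print(smoothed[-1])  # 6.0: the jitter is averaged out
```

The neural stabilizers in the paper play the same role but can also predict the next value, which is what lets them beat plain averaging on complicated time fluctuations.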
