The Internet of Things (IoT) is a vast world of connected objects, ranging from small, low-resource devices, such as sensors, to fully functional computing devices, such as servers and routers in the core network. With the emergence of new IoT-based applications, such as smart transportation, smart agriculture, and healthcare, considerable effort is needed to balance the use of IoT resources, namely Computing, Communication, and Caching. This paper provides an overview of the convergence of Computing, Communication, and Caching (CCC) by covering IoT technology trends. First, we give a snapshot of technology trends in communication, computing, and caching, and we describe the convergence in sensors, devices, and gateways. Addressing the aspect of convergence, we discuss the relationships among CCC technologies in collecting, indexing, processing, and storing data in the IoT. We also introduce the three dimensions of the IoT based on CCC and explore existing technologies that help resolve the bottlenecks caused by the large number of physical devices in the IoT. Finally, we propose future research directions and open problems in the convergence of communication, computing, and caching with sensing and actuating devices.
High Altitude Platform (HAP) systems comprise airborne base stations deployed between 20 km and 50 km in altitude to provide wireless access to devices over large areas. This paper introduces two types of applications using HAP systems: one with the HAP Station (HAPS) and the other with the HAPS acting as an International Mobile Telecommunication (IMT) Base Station (HIBS). The HAP system with HAPS has already received wide recognition from academia and industry and is considered an effective solution for providing internet access between fixed points in suburban and rural areas as well as in emergencies. HAP systems with HIBS serving IMT user terminals have only recently started to draw attention from researchers. The HIBS application is anticipated to be a mobile service application that complements the IMT requirements of cell phones and other mobile user terminals in areas that the service field of the HAPS application cannot reach. After describing and characterizing the two types of systems, coexistence studies and simulation results using both the Power Flux Density (PFD) mask and separation distance based methods are presented. This paper also predicts future trends in the evolution paths of HAP systems, along with challenges and possible solutions from the standpoint of system architectures and spectrum regulation.
The Internet of Vehicles (IoV) is a distributed network of connected cars, roadside infrastructure, wireless communication networks, and central cloud platforms. Wireless recommendations play an important role in the IoV network, for example, recommending appropriate routes, driving strategies, and content. In this paper, we review some of the key techniques in recommendation and discuss the opportunities and challenges of deploying these wireless recommendations in the IoV.
Satellite communication offers the prospect of service continuity over uncovered and under-covered areas, service ubiquity, and service scalability. However, several challenges must first be addressed to realize these benefits, as the resource management, network control, network security, spectrum management, and energy usage of satellite networks are more challenging than those of terrestrial networks. Meanwhile, artificial intelligence (AI), including machine learning, deep learning, and reinforcement learning, has been steadily growing as a research field and has shown successful results in diverse applications, including wireless communication. In particular, the application of AI to a wide variety of satellite communication aspects has demonstrated excellent potential, including beam-hopping, anti-jamming, network traffic forecasting, channel modeling, telemetry mining, ionospheric scintillation detecting, interference managing, remote sensing, behavior modeling, space-air-ground integrating, and energy managing. This work thus provides a general overview of AI, its diverse sub-fields, and its state-of-the-art algorithms. Several challenges facing diverse aspects of satellite communication systems are then discussed, and their proposed and potential AI-based solutions are presented. Finally, an outlook on the field is drawn, and future steps are suggested.
For a future scenario where everything is connected, cognitive technology can be used for spectrum sensing and access, and emerging coding technologies can be used to address the erasure of packets caused by dynamic spectrum access and to realize cognitive spectrum collaboration among users in massive-connection scenarios. Machine learning technologies are being increasingly used in the implementation of smart networks. In this paper, after an overview of several key technologies in cognitive spectrum collaboration, a joint optimization algorithm for dynamic spectrum access and coding is proposed and implemented using reinforcement learning, and the effectiveness of the algorithm is verified by simulations, thus providing a feasible research direction for the realization of cognitive spectrum collaboration.
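As a rough, self-contained illustration of the reinforcement-learning component described above, the following sketch learns to access the least-occupied of several channels. The channel occupancy probabilities, reward definition, and hyperparameters are illustrative assumptions; the paper's joint access-and-coding algorithm is substantially richer.

```python
import random

# Epsilon-greedy Q-learning sketch for dynamic channel access.
# Occupancy probabilities and hyperparameters are illustrative assumptions.
N_CHANNELS = 4
BUSY_PROB = [0.9, 0.8, 0.7, 0.1]   # channel 3 is usually idle

def step(channel, rng):
    """Reward 1 for a collision-free transmission, else 0."""
    busy = rng.random() < BUSY_PROB[channel]
    return 0.0 if busy else 1.0

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * N_CHANNELS
    for _ in range(episodes):
        # explore with probability eps, otherwise exploit the best estimate
        if rng.random() < eps:
            a = rng.randrange(N_CHANNELS)
        else:
            a = max(range(N_CHANNELS), key=q.__getitem__)
        r = step(a, rng)
        q[a] += alpha * (r - q[a])   # stateless (bandit-style) Q-update
    return q

q = train()
best = max(range(N_CHANNELS), key=q.__getitem__)
```

Under these assumed occupancy probabilities, the learned Q-values come to favor the mostly idle channel.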
Energy source and circuit cost are two critical challenges for the future development of the Internet of Things (IoT). Backscatter communications offer a potential solution for conveniently obtaining power and reducing cost for sensors in the IoT, and researchers are paying close attention to the technology. Backscatter technology originated during the Second World War and has since been widely applied in the logistics domain. Recently, both academia and industry have proposed a series of new types of backscatter technologies for communications and the IoT. In this paper, we review the history of both the IoT and backscatter, describe the new types of backscatter, demonstrate their applications, and discuss the open challenges.
With next-generation networks driving the confluence of multimedia, broadband, and broadcast services, Cognitive Radio (CR) networks are positioned as a preferred paradigm to address spectrum capacity challenges. CRs address these issues through dynamic spectrum access. However, the main challenges faced by CRs pertain to achieving spectrum efficiency. As a result, spectrum efficiency improvement models based on spectrum sensing and sharing have attracted considerable research attention in recent years, including CR learning models, network densification architectures, massive Multiple Input Multiple Output (MIMO), and beamforming techniques. This paper provides a survey of recent CR spectrum efficiency improvement models and techniques, developed to support ultra-reliable low-latency communications that are resilient to surges in traffic and competition for spectrum. These models and techniques, broadly speaking, enable a wide range of functionality, from enhanced mobile broadband to large-scale Internet of Things (IoT) type communications. In addition, given the strong correlation between the typical size of a spectrum block and the achievable data rate, the models studied in this paper are applicable in the ultra-high frequency bands. This study therefore provides a thorough review of CRs and direction for future investigations into newly identified 5G research areas, applicable in industry and academia.
The Internet of Radio-Light (IoRL) is a cutting-edge system paradigm for enabling seamless 5G service provision in indoor environments, such as homes, hospitals, and museums. The system draws on an innovative architecture built on the synergy between the Radio Access Network (RAN) technologies of millimeter-Wave (mmWave) communications and Visible Light Communications (VLC) to improve network throughput, latency, and coverage compared with existing efforts. The aim of this paper is to introduce the IoRL system architecture and present the key technologies and techniques utilised at each layer of the system. Special emphasis is given to detailing the IoRL physical layer (Layer 1) and Medium Access Control layer (MAC, Layer 2) by describing their unique design characteristics and interfaces, as well as the robust IoRL methods for improving the estimation accuracy of user positioning that rely on uplink mmWave and downlink VLC measurements.
Mobile Edge Computing (MEC) is one of the most promising techniques for next-generation wireless communication systems. In this paper, we study the problem of dynamic caching, computation offloading, and resource allocation in cache-assisted multi-user MEC systems with stochastic task arrivals. There are multiple computationally intensive tasks in the system, and each Mobile User (MU) needs to execute a task either locally or remotely in one or more MEC servers by offloading the task data. Popular tasks can be cached in MEC servers to avoid duplicate offloading. The cached contents can be obtained through user offloading, fetched from a remote cloud, or fetched from another MEC server. The objective is to minimize the long-term average of a cost function, defined as a weighted sum of energy consumption, delay, and cache content fetching costs. The weighting coefficients associated with the different metrics in the objective function can be adjusted to balance the tradeoff among them. The optimum design is performed with respect to four decision parameters: whether to cache a given task, whether to offload a given uncached task, how much transmission power should be used during offloading, and how many MEC resources should be allocated for executing a task. We propose to solve the problem by developing a dynamic scheduling policy based on Deep Reinforcement Learning (DRL) with the Deep Deterministic Policy Gradient (DDPG) method. A new decentralized DDPG algorithm is developed to obtain the optimum designs for multi-cell MEC systems by leveraging the cooperation among neighboring MEC servers. Simulation results demonstrate that the proposed algorithm outperforms existing strategies, such as the Deep Q-Network (DQN).
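The weighted-sum cost and the local-versus-offload comparison described above can be sketched as follows. The energy and delay models, the effective-capacitance constant, and all parameter values are illustrative assumptions; the paper optimizes these decisions jointly with caching and power control via DDPG rather than with a per-task greedy rule.

```python
# Sketch of a weighted-sum cost and a greedy local-vs-offload comparison
# for a single task. Models and constants are illustrative assumptions.
def cost(energy_j, delay_s, fetch_cost, w_e=1.0, w_d=1.0, w_f=1.0):
    """Weighted sum of energy, delay, and cache-fetching cost."""
    return w_e * energy_j + w_d * delay_s + w_f * fetch_cost

def offload_decision(task_bits, cpu_cycles, f_local, f_mec, rate_bps,
                     p_tx=0.5, kappa=1e-27, cached=False):
    # local execution: CPU dynamic-power energy model, no transmission
    local = cost(kappa * f_local**2 * cpu_cycles, cpu_cycles / f_local, 0.0)
    # offloading: transmit energy + uplink delay + remote execution delay;
    # a task cached at the MEC server skips the data upload entirely
    t_up = 0.0 if cached else task_bits / rate_bps
    remote = cost(p_tx * t_up, t_up + cpu_cycles / f_mec, 0.0)
    return "local" if local <= remote else "offload"
```

For instance, a compute-heavy task (many cycles, little data) favors offloading to the faster MEC server, while a data-heavy task with few cycles favors local execution.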
Although Successive Interference Cancellation (SIC) decoding is widely adopted in Non-orthogonal Multiple Access (NOMA) schemes for the recovery of user data at acceptable complexity, imperfect SIC causes Error Propagation (EP), which can severely degrade system performance. In this work, we propose an SIC-free NOMA scheme for pulse modulation based Visible Light Communication (VLC) downlinks serving two types of users with different data rate requirements. Low bit-rate users adopt on-off keying, whereas high bit-rate users use Multiple Pulse Position Modulation (MPPM). High bit-rate users exploit soft-decision decoding to decode MPPM signals, which fundamentally eliminates the detrimental effect of EP; the scheme is also simpler and faster to execute than the conventional SIC decoding scheme. Expressions for the symbol error rate and achievable data rate of both types of users are derived. Monte Carlo simulation results are provided to confirm the theoretical results.
Fog Radio Access Networks (F-RANs) have been considered a groundbreaking technique for supporting Internet of Things services by leveraging edge caching and edge computing. However, current contributions to computation offloading and resource allocation are inefficient; moreover, they consider only the static communication mode, and the increasing demand for low-latency services and high throughput poses tremendous challenges in F-RANs. A joint problem of mode selection, resource allocation, and power allocation is formulated to minimize latency under various constraints. We propose a Deep Reinforcement Learning (DRL) based joint computation offloading and resource allocation scheme that achieves a suboptimal solution in F-RANs. The core idea of the proposal is that the DRL controller intelligently decides whether to process the generated computation task locally at the device level or to offload it to a fog access point or cloud server, and it allocates an optimal amount of computation and power resources on the basis of the serving tier. Simulation results show that the proposed approach significantly reduces latency and increases throughput in the system.
Underwater Wireless Sensor Networks (UWSNs) are widely used in many fields, such as regular marine monitoring and disaster warning. However, UWSNs are still subject to various limitations and challenges: ocean interference and noise are high, bandwidths are narrow, and propagation delays are long. Sensor batteries have limited energy and are difficult to replace or recharge. Accordingly, the design of routing protocols is one solution to these problems. Aiming to reduce and balance network energy consumption and effectively extend the life cycle of UWSNs, this paper proposes a Hierarchical Adaptive Energy-efficient Clustering Routing (HAECR) strategy. First, the strategy divides hierarchical regions based on the depth of each sensor node in three-dimensional (3D) space. Second, sensor nodes form different competition radii based on their own relevant attributes and remaining energy, and nodes in the same layer compete freely to form clusters of different sizes. Finally, the transmission path between clusters is determined according to comprehensive factors, such as link quality, and the optimal route is then planned. The simulation experiment is conducted in a 3D monitoring range. The simulation results show that the HAECR clustering strategy is superior to LEACH and UCUBB in terms of balancing and reducing energy consumption, extending the network lifetime, and increasing the number of data transmissions.
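The cluster-head competition in the second step above can be sketched as follows. The competition-radius formula, its weights, and the node values are illustrative assumptions rather than HAECR's actual equations: each node derives a radius from its depth and remaining energy, and it becomes a head only if no stronger node has already claimed a head role within that radius.

```python
import math

# Cluster-head election sketch: radius shrinks with depth and low energy.
# Formula, weights, and node values are illustrative assumptions.
def competition_radius(r_max, depth, max_depth, energy, e_max, w=0.5):
    return r_max * (1 - w * depth / max_depth) * (energy / e_max)

def elect_heads(nodes, r_max=30.0, max_depth=100.0, e_max=1.0):
    heads = []
    # nodes with more remaining energy win the competition first
    for n in sorted(nodes, key=lambda n: -n["energy"]):
        r = competition_radius(r_max, n["depth"], max_depth,
                               n["energy"], e_max)
        if all(math.dist(n["pos"], h["pos"]) > r for h in heads):
            heads.append(n)
    return heads

nodes = [
    {"id": "n1", "pos": (0.0, 0.0, 10.0), "depth": 10.0, "energy": 0.9},
    {"id": "n2", "pos": (1.0, 0.0, 10.0), "depth": 10.0, "energy": 0.5},
    {"id": "n3", "pos": (100.0, 0.0, 50.0), "depth": 50.0, "energy": 0.8},
]
heads = elect_heads(nodes)
```

Here the low-energy node adjacent to a high-energy one joins its cluster instead of becoming a head, while the distant node heads its own cluster.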
Video transmission in Internet-of-Things (IoT) systems must guarantee video quality and reduce the packet loss rate and delay with limited resources to satisfy the requirements of multimedia services. In this paper, we propose a reinforcement learning based energy-efficient IoT video transmission scheme that protects against interference, in which the base station controls the transmission actions of the IoT device, including the encoding rate, the modulation and coding scheme, and the transmit power. The State-Action-Reward-State-Action (SARSA) reinforcement learning algorithm is applied to choose the transmission action based on the observed state (the queue length of the buffer, the channel gain, the previous bit error rate, and the previous packet loss rate) without knowledge of the transmission channel model at the transmitter or the receiver. We also propose a deep reinforcement learning based energy-efficient IoT video transmission scheme that uses a deep neural network to approximate the Q-value, further accelerating the learning of the optimal transmission action and improving video transmission performance. Moreover, both the performance bounds of the proposed schemes and their computational complexity are theoretically derived. Simulation results show that the proposed schemes increase the peak signal-to-noise ratio and decrease the packet loss rate, delay, and energy consumption relative to the benchmark scheme.
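The tabular SARSA update at the heart of the first scheme can be sketched as follows. The toy environment and hyperparameters are illustrative assumptions; the paper's state space (buffer length, channel gain, error rates) and action space (encoding rate, MCS, power) are far richer.

```python
import random
from collections import defaultdict

# On-policy SARSA sketch. Environment and hyperparameters are
# illustrative assumptions, not the paper's formulation.
def sarsa(env_step, actions, episodes=200, steps=50,
          alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)

    def policy(s):
        if rng.random() < eps:
            return rng.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = 0
        a = policy(s)
        for _ in range(steps):
            s2, r = env_step(s, a, rng)
            a2 = policy(s2)  # the action actually taken next (on-policy)
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
            s, a = s2, a2
    return Q

def toy_env(s, a, rng):
    # transmitting (a=1) earns reward 1; idling (a=0) earns 0
    return 0, float(a)

Q = sarsa(toy_env, actions=[0, 1])
```

The defining SARSA trait is that the bootstrap target uses the action the policy actually selects next, rather than the greedy maximum used by Q-learning.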
Fog computing is a new computing paradigm for meeting ubiquitous massive access and latency-critical applications by moving processing capability closer to end users. Its geographically distributed, floating nature, together with potential autonomy requirements, introduces new challenges to the traditional methodology of network access control. In this paper, a blockchain-enabled fog resource access and granting solution is proposed to tackle the unique requirements brought by fog computing. The smart contract concept is introduced to enable dynamic and automatic credential generation and delivery for the independent offering of fog resources. A per-transaction negotiation mechanism supports the fog resource provider in dynamically publishing an offer and facilitates the end user's choice of the preferred resource. Decentralized authentication and authorization relieve the processing pressure brought by massive access and avoid single points of failure. Our solution can be extended and used in multi-access, and especially multi-carrier, scenarios in which centralized authorities are absent.
At present, the 5th-Generation (5G) wireless mobile communication standard has been released. 5G networks efficiently support enhanced mobile broadband traffic, ultra-reliable low-latency communication traffic, and massive machine-type communication. However, a major challenge for 5G networks is to devise effective Radio Resource Management (RRM) strategies and scheduling algorithms that meet quality of service requirements. The Proportional Fair (PF) algorithm is widely used in existing 5G scheduling technology. In the PF algorithm, RRM assigns a priority to each user served by the gNodeB, and the existing priority metrics mainly focus on the flow rate. The purpose of this study is to explore how to improve the throughput of 5G networks and to propose new scheduling schemes; to this end, the packet delay of the data flow is incorporated into the priority metric. The Vienna 5G System-Level (SL) simulator is a MATLAB-based SL simulation platform used to facilitate the research and development of 5G and beyond mobile communications. This paper presents a new scheduling algorithm based on the analysis of different scheduling schemes for radio resources using the Vienna 5G SL simulator.
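The classic PF priority metric, and one way of folding packet delay into it, can be sketched as follows. The delay weighting `beta` and the multiplicative form of the delay term are illustrative assumptions, not the paper's actual metric.

```python
# Proportional Fair priority sketch with an optional head-of-line delay
# term. The delay weighting (beta) is an illustrative assumption.
def pf_priority(inst_rate, avg_throughput, hol_delay=0.0, beta=0.0):
    base = inst_rate / max(avg_throughput, 1e-9)   # classic PF metric
    return base * (1.0 + beta * hol_delay)

def schedule(users, beta=0.1):
    """Pick the user with the highest priority for this resource block."""
    return max(users, key=lambda u: pf_priority(
        u["rate"], u["avg"], u.get("delay", 0.0), beta=beta))

users = [
    {"id": "A", "rate": 10.0, "avg": 5.0, "delay": 0.0},
    {"id": "B", "rate": 10.0, "avg": 5.0, "delay": 20.0},
]
chosen = schedule(users)
```

With `beta = 0`, this reduces to plain PF; with a positive `beta`, a user whose packets have waited longer wins ties against an otherwise identical user.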
Future beyond fifth-generation (B5G) and sixth-generation (6G) mobile communications will shift from facilitating interpersonal communications to supporting the internet of everything (IoE), where intelligent communications with full integration of big data and artificial intelligence (AI) will play an important role in improving network efficiency and providing high-quality service. As a rapidly evolving paradigm, AI-empowered mobile communications demand large amounts of data acquired from real network environments for systematic test and verification. Hence, we build the world's first true-data testbed for 5G/B5G intelligent network (TTIN), which comprises 5G/B5G on-site experimental networks, data acquisition & data warehouse, and AI engine & network optimization. In the TTIN, true network data acquisition, storage, standardization, and analysis are available, enabling system-level online verification of B5G/6G-oriented key technologies and supporting data-driven network optimization through a closed-loop control mechanism. This paper elaborates on the system architecture and module design of the TTIN. Detailed technical specifications and some of the established use cases are also showcased.
This study focuses on the problem of multitarget tracking. To address the problems of current tracking algorithms, namely the time consumed by subgroup separation and the uneven group sizes of unmanned aerial vehicles (UAVs) assigned to targets, a multitarget tracking control algorithm based on local information selection interaction is proposed. First, on the basis of the location, number, and perceived target information of neighboring UAVs, a temporary leader selection strategy is designed to realize the local follow-up movement of UAVs when they cannot fully perceive the target. Second, in combination with the basic rules of cluster movement and target information perception factors, distributed control equations are designed to achieve rapid gathering of UAVs and consistent tracking of multiple targets. Lastly, simulation experiments are conducted in two- and three-dimensional spaces. For a given number of UAVs, the clustering time of the proposed algorithm is less than 3 s, and the probability of equal UAV subgroup sizes after group separation exceeds 78%.
Fifth-generation (5G) systems have brought about new challenges in ensuring Quality of Service (QoS) for differentiated services. These include low-latency applications, scalable machine-to-machine communication, and enhanced mobile broadband connectivity. To satisfy these requirements, the concept of network slicing has been introduced to generate slices of the network with specific characteristics. To meet the requirements of network slices, routers and switches must be effectively configured to provide priority queue provisioning, resource contention management, and adaptation. Configuring routers from vendors such as Ericsson, Cisco, and Juniper has traditionally been an expert-driven process with static rules for individual flows, which is prone to suboptimal configurations under varying traffic conditions. In this paper, we model the internal ingress and egress queues within routers via a queuing model. The effects of changing queue configuration with respect to priority, weights, flow limits, and packet drops are studied in detail. This is used to train a model-based Reinforcement Learning (RL) algorithm to generate optimal policies for flow prioritization, fairness, and congestion control. The efficacy of the RL policy output is demonstrated in scenarios involving ingress queue traffic policing, egress queue traffic shaping, and one-hop router-coordinated traffic conditioning. It is evaluated on a real application use case, wherein a statically configured router proved suboptimal with respect to the desired QoS requirements. Such automated configuration of routers and switches will be critical for 5G deployments with varying flow requirements and traffic patterns.
Unmanned aerial vehicle (UAV) networks are vulnerable to jamming attacks, which may cause severe damage such as communication outages. Owing to energy constraints, the source UAV cannot blindly increase its transmit power, and the complex, highly mobile network topology prevents the destination UAV from evading the jammer by flying at will. To maintain communication with limited battery capacity in UAV networks in the presence of a greedy jammer, in this paper we propose a distributed reinforcement learning (RL) based energy-efficient framework for energy-constrained UAV networks under jamming attacks, which improves communication quality while minimizing the total energy consumption of the network. This framework enables each relay UAV to independently select its transmit power based on historical state-related information without knowing the moving trajectories of the other UAVs or the jammer; the location and battery level of each UAV need not be shared with other UAVs. We also propose a deep RL based anti-jamming relay approach for UAVs equipped with portable computation hardware, such as the Raspberry Pi, to achieve higher and faster performance. We study the Nash equilibrium (NE) and the performance bounds of the formulated power control game. Simulation results show that the proposed schemes reduce both the bit error rate (BER) and the energy consumption of the UAV network compared with the benchmark method.
As a pioneering information technology, the Internet of Things (IoT) aims to build an infrastructure of embedded devices and networks of connected objects, offering an omnipresent ecosystem and interaction across billions of smart devices, sensors, and actuators. The deployment of the IoT calls for decentralized power supplies, self-powered sensors, and wireless transmission technologies, which bring both opportunities and challenges to existing solutions, especially as the network scales up. Triboelectric Nanogenerators (TENGs), recently developed for mechanical energy harvesting and mechanical-to-electrical signal conversion, naturally combine energy harvesting with information sensing and have demonstrated high potential in various IoT applications. This article provides a comprehensive review of TENG-enabled IoT and discusses its most popular and significant divisions. Firstly, the basic principle of the TENG is re-examined. Subsequently, a comprehensive and detailed review of the concept of the IoT is given, followed by the scientific development of TENG-enabled IoT. Finally, the future of this evolving area is addressed.
In view of the successful application of deep learning, mainly in the field of image recognition, deep learning applications are now being explored in the fields of communication and computer networks. In these fields, systems have been developed using well-founded theoretical calculations and procedures. However, owing to the large amount of data to be processed, proper processing takes time, and deviations from theory sometimes occur owing to uncertain disturbances. Therefore, deep learning or nonlinear approximation by neural networks may be useful in some cases. We have studied a user datagram protocol (UDP) based rate-control communication system called the simultaneous multipath communication system (SMPC), which measures throughput over groups of packets at the destination node and continuously feeds it back to the source node. By comparing the throughput with the recorded transmission rate, the source node detects congestion on the transmission route and adjusts the packet transmission interval. However, the throughput fluctuates as packets traverse the route, and if it is fed back directly, the transmission rate fluctuates greatly, amplifying the throughput fluctuation and lowering the average throughput. In this study, we tried to stabilize the transmission rate by incorporating prediction and learning performed by a neural network. The prediction is performed using the throughput measured by the destination node, and the result is learned so as to generate a stabilizer. A simple moving average method and stabilizers using three types of neural networks, namely multilayer perceptrons, recurrent neural networks, and long short-term memory, were built into the transmission controller of the SMPC. The results showed that not only was fluctuation reduced but the average throughput also improved.
Together, the results demonstrate that deep learning can be used to predict and output stable values from data with complicated time fluctuations that are difficult to analyze.
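The simple moving average baseline mentioned above, against which the neural-network stabilizers are compared, can be sketched in a few lines. The window size and sample throughput values are illustrative assumptions.

```python
from collections import deque

# Simple-moving-average stabilizer for noisy throughput feedback.
# Window size and sample values are illustrative assumptions.
class SmaStabilizer:
    def __init__(self, window=3):
        self.buf = deque(maxlen=window)

    def update(self, throughput):
        """Fold in a new measurement and return the smoothed value."""
        self.buf.append(throughput)
        return sum(self.buf) / len(self.buf)

stab = SmaStabilizer(window=3)
smoothed = [stab.update(x) for x in [10.0, 14.0, 6.0, 10.0]]
```

Feeding the smoothed sequence back to the source, instead of the raw one, damps the rate oscillations the abstract describes; the neural-network stabilizers replace this fixed average with a learned predictor.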
As Internet of Things (IoT) applications become more prevalent and grow in use, a limited set of wireless communication methods may be unable to deliver information dependably and robustly. Adaptive communication and interoperability over a variety of wireless communication media are necessary to meet the requirements of large-scale IoT applications. This paper utilizes Named Data Networking (NDN), an up-and-coming Information-Centric Network architecture, to interconnect differing communication links at the network layer, and it implements dynamic forwarding strategies and routing mechanisms that aid in the efficient dissemination of information. This work targets the creation of an interface technique to allow NDN to be transported via LoRa, achieved by coupling LoRa and WiFi using the NDN Forwarding Daemon (NFD) to create a universal ad hoc network. This network has the capacity for long-range, multi-hop Device-to-Device (D2D) communication together with compatibility with other network communication media. Testing of the system in a real environment has shown that the newly created ad hoc network can communicate over a radius of several kilometers while making use of the features provided by NDN to capitalize upon the various links available for the efficient dissemination of data. Furthermore, the network leverages NDN features to enable content-based routing within the LoRa network.
We rederive from first principles and generalize the theoretical framework of the nonlinear Gaussian noise model to the case of coherent optical systems with multiple fiber types per span and ideal Nyquist spectra. We focus on the accurate numerical evaluation of the integral for the nonlinear noise variance for hybrid fiber spans. This task consists of addressing four computational aspects: (1) adopting a novel transformation of variables (other than using hyperbolic coordinates) that changes the integrand to a form more appropriate for numerical quadrature; (2) evaluating analytically the integral at its lower limit, where the integrand presents a singularity; (3) dividing the interval of integration into subintervals of size π and approximating the integral over each subinterval using various algorithms; and (4) deriving an upper bound for the relative error when the interval of integration is truncated in order to accelerate computation. We apply the proposed analytical model to the performance evaluation of coherent optical communication systems with hybrid fiber spans composed of quasi-single-mode and single-mode fiber segments. More specifically, the model is used to optimize the lengths of the optical fiber segments that compose each span in order to maximize the system performance. We check the validity of the optimal fiber segment lengths per span provided by the analytical model by using Monte Carlo simulation, where the Manakov equation is solved numerically using the split-step Fourier method. We show that the analytical model predicts the lengths of the optical fiber segments per span with satisfactory accuracy, so that the system performance, in terms of the Q-factor, is within 0.1 dB of the maximum given by Monte Carlo simulation.
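Steps (3) and (4) of the numerical strategy, splitting the integration range into π-sized subintervals and truncating once a tail bound falls below a tolerance, can be sketched on a stand-in integrand with a known closed form (∫₀^∞ e^(−x) sin²x dx = 0.4). The integrand and the choice of composite Simpson's rule per subinterval are illustrative assumptions, not the model's actual kernel or quadrature.

```python
import math

# Composite Simpson's rule over [a, b] with n (even) panels.
def simpson(f, a, b, n=128):
    h = (b - a) / n
    s = (f(a) + f(b)
         + 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
         + 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2)))
    return s * h / 3

# Sum pi-sized blocks until the analytic tail bound for [a, inf)
# drops below the tolerance, mirroring steps (3) and (4) above.
def integrate_by_pi_blocks(f, tail_bound, tol=1e-10, max_blocks=200):
    total, a = 0.0, 0.0
    for _ in range(max_blocks):
        total += simpson(f, a, a + math.pi)
        a += math.pi
        if tail_bound(a) < tol:
            break
    return total

f = lambda x: math.exp(-x) * math.sin(x) ** 2
# |integral over [a, inf)| <= integral of e^{-x} over [a, inf) = e^{-a}
val = integrate_by_pi_blocks(f, tail_bound=lambda a: math.exp(-a))
```

The tail bound lets the loop stop after a handful of blocks with a rigorously controlled truncation error, which is the point of step (4).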
This paper elaborates on the harmonious wireless network from the perspective of interference management. The coexistence of useful signals and interfering signals is beneficial in terms of the throughput of the entire wireless network. Useful signals and interfering signals are complementary and juxtaposed in the context of a single communication link, and they are in symbiosis within the framework of the network. The philosophy behind this can be described by the traditional Chinese symbol of “yin” and “yang”. A wireless network with optimal performance must be a harmonious network in which interfering and useful signals coexist in an optimal balance. Interference management plays a critical role in achieving this balance, and sophisticated interference management techniques should be designed to improve system performance.
Nowadays, the Edge Information System (EIS) has received much attention. In the EIS, Distributed Machine Learning (DML), which requires fewer computing resources, can implement many artificial intelligence applications efficiently. However, owing to the dynamic network topology and fluctuating transmission quality at the edge, worker node selection significantly affects the performance of DML. In this paper, we focus on the Internet of Vehicles (IoV), one of the typical scenarios of the EIS, and take the DML-based High Definition (HD) mapping and intelligent driving decision model as an example. The worker selection problem is modeled as a Markov Decision Process (MDP) that maximizes the aggregate performance of the DML model, which relates to the timeliness of the local model, the transmission quality of model parameter uploading, and the effective sensing area of the worker. A Deep Reinforcement Learning (DRL) based solution is proposed, called the Worker Selection based on Policy Gradient (PG-WS) algorithm. The policy mapping from the system state to the worker selection action is represented by a deep neural network. Episodic simulations are built, and the REINFORCE algorithm with baseline is used to train the policy network. Results show that the proposed PG-WS algorithm outperforms other comparison methods.
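The REINFORCE-with-baseline training loop mentioned above can be sketched on a toy worker selection problem. The softmax policy over per-worker scores, the noisy utilities, and the hyperparameters are illustrative assumptions; the paper trains a deep policy network over the full system state rather than a tabular score vector.

```python
import math
import random

# REINFORCE with a running-average baseline on a toy worker-selection
# task. Utilities and hyperparameters are illustrative assumptions.
def softmax(theta):
    m = max(theta)
    e = [math.exp(t - m) for t in theta]
    z = sum(e)
    return [x / z for x in e]

def train(utilities, rounds=4000, lr=0.05, seed=0):
    rng = random.Random(seed)
    theta = [0.0] * len(utilities)
    baseline = 0.0
    for t in range(1, rounds + 1):
        probs = softmax(theta)
        a = rng.choices(range(len(theta)), weights=probs)[0]
        r = utilities[a] + rng.gauss(0.0, 0.1)  # noisy observed utility
        baseline += (r - baseline) / t          # running-average baseline
        adv = r - baseline                      # advantage reduces variance
        for i in range(len(theta)):             # gradient of log softmax
            g = (1.0 if i == a else 0.0) - probs[i]
            theta[i] += lr * adv * g
    return softmax(theta)

probs = train([0.2, 0.5, 0.9])  # worker 2 has the highest expected utility
```

Subtracting the baseline leaves the gradient unbiased while shrinking its variance, which is why the paper pairs REINFORCE with a baseline rather than using raw returns.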
The Space-Terrestrial Integrated Network (STIN) is considered a promising paradigm for realizing worldwide wireless connectivity in sixth-generation (6G) wireless communication systems. Unfortunately, excessive interference in the STIN degrades the wireless links and leads to poor performance, a bottleneck that prevents its commercial deployment. In this article, the crucial features and challenges of interference in the STIN are comprehensively investigated, and some candidate solutions for Interference Management (IM) are summarized. As traditional IM techniques are designed for single-application scenarios or specific types of interference, they cannot meet the requirements of the STIN architecture. To address this issue, we propose a self-adaptive IM method that reaps the potential benefits of the STIN and is applicable to both rural and urban areas. A number of open issues and potential challenges for IM are discussed, providing insights into future research directions related to the STIN.
The edge caching resource allocation problem in Fog Radio Access Networks (F-RANs) is investigated. An incentive mechanism is introduced to motivate Content Providers (CPs) to participate in the resource allocation procedure. We formulate the interaction between the cloud server and the CPs as a Stackelberg game, where the cloud server sets nonuniform prices for the Fog Access Points (F-APs) while the CPs lease the F-APs for caching their most popular contents. Then, by exploiting the multiplier penalty function method, we transform the constrained optimization problem of the cloud server into an equivalent unconstrained one, which is further solved by using the simplex search method. Moreover, the existence and uniqueness of the Nash Equilibrium (NE) of the Stackelberg game are analyzed theoretically. Furthermore, we propose a uniform pricing based resource allocation strategy by eliminating the competition among the CPs, and we also theoretically analyze the factors that affect the uniform pricing strategy of the cloud server. We also propose a global optimization-based resource allocation strategy by further eliminating the competition between the cloud server and the CPs. Simulation results are provided to quantify the proposed strategies, showing their efficiency in pricing and resource allocation.
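The leader-follower structure of such a pricing game can be sketched in a few lines. This toy model is an illustration only: the CP utilities, cost coefficients, capacity value, and the grid-search solver below are all assumptions, not the paper's formulation (which uses the multiplier penalty function method together with the simplex search method).

```python
# Toy Stackelberg pricing game: the cloud server (leader) picks a unit
# price p for F-AP caching capacity; each CP (follower) best-responds
# with the demand that maximizes its concave utility
#   u_i * d - p * d - (c_i / 2) * d^2,  i.e.  d_i(p) = max(0, (u_i - p) / c_i).
U = [4.0, 3.0, 2.5]   # CPs' per-unit content valuations (assumed)
C = [1.0, 0.8, 1.2]   # CPs' quadratic cost coefficients (assumed)
CAP = 6.0             # total caching capacity at the F-APs (assumed)

def demand(p):
    """Followers' best responses to the leader's price p."""
    return [max(0.0, (u - p) / c) for u, c in zip(U, C)]

def revenue(p):
    """Leader's payoff; infeasible prices violate the capacity constraint."""
    d = demand(p)
    return p * sum(d) if sum(d) <= CAP else float("-inf")

# Leader's problem solved here by a simple grid search over p in [0, 5].
best_p = max((i / 1000 * 5 for i in range(1001)), key=revenue)
```

Embedding the followers' closed-form best responses inside the leader's objective is what makes the game solvable as a single-level optimization, which is the same reduction the Stackelberg formulation exploits.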
With the deployment and commercial application of 5G, researchers have started to think about 6G, which could meet more diversified and deeper intelligent communication requirements. In this paper, a 6G concept featuring four physical elements, i.e., man, machine, object, and genie, is introduced. Genie is explained as a new element toward 6G. This paper focuses on the realization of genie as intelligent wireless transmission toward 6G, including semantic information theory, end-to-end artificial intelligence (AI) joint transceiver design, intelligent wireless transmission block design, and user-centric intelligent access. A comprehensive state-of-the-art review of each key technology is presented, and the main open questions as well as some novel suggestions are given. Genie will work comprehensively in 6G wireless communication and other major industrial verticals, while its realization will be concrete and step-by-step. It is envisioned that a genie-based wireless communication link will work with high intelligence and perform better than one controlled manually.
Unlimited and seamless coverage as well as ultra-reliable and low-latency communications are vital for connected vehicles, in particular for new use cases like autonomous driving and vehicle platooning. In this paper, we propose a novel Space-Air-Ground integrated vehicular network (SAGiven) architecture to gracefully integrate the multi-dimensional and multi-scale context information and network resources from satellites, High-Altitude Platform stations (HAPs), low-altitude Unmanned Aerial Vehicles (UAVs), and terrestrial cellular communication systems. One of the key features of the SAGiven is the reconfigurability of heterogeneous network functions as well as network resources. We first give a comprehensive review of the key challenges of this new architecture and then provide some up-to-date solutions to those challenges. Specifically, the solutions cover the following topics: (1) space-air-ground integrated network reconfiguration under dynamic space resource constraints; (2) multi-dimensional sensing and efficient integration of multi-dimensional context information; (3) real-time, reliable, and secure communications among vehicles and between vehicles and the SAGiven platform; and (4) a holistic integration and demonstration of the SAGiven. Finally, it is concluded that the SAGiven can play a key role in future autonomous driving and Internet-of-Vehicles applications.
Machine learning techniques such as artificial neural networks are seeing increased use in the examination of communication network research questions. Central to many of these research questions is the need to classify packets and improve visibility. Multi-Layer Perceptron (MLP) neural networks and Convolutional Neural Networks (CNNs) have been used to successfully identify individual packets. However, some datasets create instability in neural network models. Machine learning can also be subject to data injection and misclassification problems. In addition, when attempting to address complex communication network challenges, extremely high classification accuracy is required. Neural network ensembles can work towards minimizing or even eliminating some of these problems by comparing results from multiple models. After ensemble tuning, training time can be reduced, and a viable and effective architecture can be obtained. Because of their effectiveness, ensembles can be utilized to defend against data poisoning attacks attempting to create classification errors. In this work, ensemble tuning and several voting strategies are explored that consistently result in classification accuracy above 99%. In addition, ensembles are shown to be effective against these types of attacks by maintaining accuracy above 98%.
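The voting strategies described above can be sketched as follows. This is a minimal illustration with an assumed model interface (each "model" is a callable mapping a packet feature vector to either a class label or a class-probability dictionary); it is not the paper's architecture.

```python
from collections import Counter

def majority_vote(models, x):
    """Hard voting: each model casts one vote for a class label."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

def weighted_vote(models, weights, x):
    """Soft voting: sum per-class probabilities scaled by each model's weight."""
    scores = {}
    for m, w in zip(models, weights):
        for cls, p in m(x).items():   # m(x): dict mapping class -> probability
            scores[cls] = scores.get(cls, 0.0) + w * p
    return max(scores, key=scores.get)

# Three toy "models": two agree, one (e.g., poisoned) disagrees;
# the ensemble masks the single bad vote.
models = [lambda x: "video", lambda x: "video", lambda x: "dns"]
assert majority_vote(models, None) == "video"
```

This masking effect is exactly why an ensemble can tolerate a poisoned member: a data poisoning attack must corrupt a majority of the models (or a large weighted share in soft voting) before the ensemble's output changes.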
Power is an important part of the energy industry, relating to the national economy and people's livelihood, and it is of great significance to ensure the secure and stable operation of power transmission and distribution systems. Based on Wireless Sensor Network (WSN) technology and combined with the monitoring and operating requirements of power transmission and distribution systems, this paper puts forward an application system for the monitoring, inspection, security, and interactive service of a layered power transmission and distribution system. Furthermore, this paper demonstrates system verification projects in Wuxi, Jiangsu Province, and the Lianxiangyuan Community in Beijing; the system has since been widely deployed nationwide.
Network slicing is a key technology to support the concurrent provisioning of heterogeneous Quality of Service (QoS) in the 5th Generation (5G)-beyond and the 6th Generation (6G) networks. However, effective slicing of the Radio Access Network (RAN) is very challenging due to the diverse QoS requirements and dynamic conditions in the 6G networks. In this paper, we propose a self-sustained RAN slicing framework, which integrates the self-management of network resources with multiple granularities, the self-optimization of slicing control performance, and self-learning to achieve an adaptive control strategy under unforeseen network conditions. The proposed RAN slicing framework is hierarchically structured, decomposing the RAN slicing control into three levels, i.e., network-level slicing, next generation NodeB (gNodeB)-level slicing, and packet-scheduling-level slicing. At the network level, network resources are assigned to each gNodeB at a large timescale with coarse resource granularity. At the gNodeB level, each gNodeB adjusts the configuration of each slice in the cell at the large timescale. At the packet-scheduling level, each gNodeB allocates radio resources among the users in each network slice at a small timescale. Furthermore, we utilize the transfer learning approach to enable the transition from model-based control to autonomic and self-learning RAN slicing control. With the proposed RAN slicing framework, the QoS performance of emerging services is expected to be dramatically enhanced.
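The three-level timescale decomposition described above can be sketched as nested allocation steps. All constants, load shares, and allocation rules below (proportional splits at the two large-timescale levels, round-robin at the small timescale) are assumptions chosen for illustration, not the paper's control policies.

```python
# Toy three-level RAN slicing decomposition.
TOTAL_PRBS = 100                        # network-level resource budget (assumed)
GNODEBS = {"gnb0": 0.6, "gnb1": 0.4}    # coarse per-gNodeB load shares (assumed)
SLICES = {"eMBB": 0.7, "URLLC": 0.3}    # per-cell slice configuration (assumed)

def network_level():
    """Large timescale: assign PRBs to each gNodeB at coarse granularity."""
    return {g: int(TOTAL_PRBS * share) for g, share in GNODEBS.items()}

def gnodeb_level(prbs):
    """Large timescale: a gNodeB splits its budget across its slices."""
    return {s: int(prbs * w) for s, w in SLICES.items()}

def scheduling_level(slice_prbs, users):
    """Small timescale: round-robin PRB allocation among a slice's users."""
    alloc = {u: 0 for u in users}
    for i in range(slice_prbs):
        alloc[users[i % len(users)]] += 1
    return alloc

per_gnb = network_level()
per_slice = gnodeb_level(per_gnb["gnb0"])
per_user = scheduling_level(per_slice["URLLC"], ["ue1", "ue2", "ue3"])
```

The point of the decomposition is that each level only re-runs at its own timescale: the inner scheduling loop executes every few slots, while the two outer allocations are recomputed far less often, which keeps the per-slot control overhead small.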