An Intelligent Energy-Efficient Data Routing Scheme for Wireless Sensor Networks Utilizing Mobile Sink
Journal profile
Wireless Communications and Mobile Computing provides the R&D communities working in academia and the telecommunications and networking industries with a forum for sharing research and ideas in this fast moving field.
Editor spotlight
Chief Editor Dr. Cai is an Associate Professor in the Department of Computer Science at Georgia State University, USA, and an Associate Director of the INSPIRE Center.
Latest Articles
A Novel Hybrid Feature Selection with Cascaded LSTM: Enhancing Security in IoT Networks
The rapid growth of the Internet of Things (IoT) has created a situation where a huge amount of sensitive data is constantly being created and transmitted by many devices, making data security a top priority. In the complex network of IoT, detecting intrusions becomes a key part of strengthening security. Since IoT environments can be affected by a wide range of cyber threats, intrusion detection systems (IDS) are crucial for quickly finding and dealing with potential intrusions as they happen. IDS datasets can have a wide range of features, from just a few to several hundred or even thousands. Managing such large datasets is a significant challenge, requiring substantial computational power and leading to long processing times. To build an efficient IDS, this article introduces a combined feature selection strategy using recursive feature elimination and information gain. A cascaded long short-term memory (LSTM) network is then used to improve attack classification. This method achieved accuracies of 98.96% and 99.30% on the NSL-KDD and UNSW-NB15 datasets, respectively, for binary classification. This research provides a practical strategy for improving the effectiveness and accuracy of intrusion detection in IoT networks.
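The two selection criteria named in the abstract can be sketched in a few lines. This is a minimal, self-contained illustration (not the paper's implementation): information gain for discrete features, used as the score inside a recursive-elimination loop; all names and the toy data are illustrative.

```python
# Hypothetical sketch of the hybrid idea: rank features by information gain,
# then recursively drop the weakest until `keep` features remain.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_col, labels):
    """H(y) - H(y | x) for one discrete feature column."""
    total = entropy(labels)
    n = len(labels)
    cond = 0.0
    for value in set(feature_col):
        subset = [y for x, y in zip(feature_col, labels) if x == value]
        cond += len(subset) / n * entropy(subset)
    return total - cond

def select_features(X_cols, y, keep):
    """Recursively eliminate the lowest-gain feature until `keep` remain."""
    remaining = dict(X_cols)  # feature name -> column of values
    while len(remaining) > keep:
        worst = min(remaining, key=lambda f: information_gain(remaining[f], y))
        del remaining[worst]
    return sorted(remaining)

# Toy example: f1 perfectly predicts y, f2 is pure noise.
X = {"f1": [0, 0, 1, 1], "f2": [0, 1, 0, 1]}
y = [0, 0, 1, 1]
print(select_features(X, y, keep=1))  # -> ['f1']
```

In practice the paper's pipeline would operate on hundreds of NSL-KDD/UNSW-NB15 features and feed the survivors to the cascaded LSTM; the sketch only shows why low-gain features are safe to drop.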
Resource Scheduling in URLLC and eMBB Coexistence Based on Dynamic Selection Numerology
This paper focuses on the resource allocation problem of multiplexing two different service scenarios, enhanced mobile broadband (eMBB) and ultrareliable low-latency communication (URLLC), in 5G New Radio, using a dynamic numerology structure, mini-slot scheduling, and puncturing to achieve optimal resource allocation. To obtain the optimal channel resource allocation under URLLC user constraints, this paper establishes a channel model and divides the problem into two convex optimization subproblems: (a) eMBB resource allocation and (b) URLLC scheduling. We also determine the numerology value at the beginning of each time slot with the help of deep reinforcement learning to achieve flexible resource scheduling. The proposed algorithm is verified in simulation. The results show that, for the same URLLC packet arrival rate, the dynamic numerology selection proposed in this paper improves the data transmission rate of eMBB users and reduces the latency of URLLC services compared with a fixed-numerology scheme, while reasonable resource allocation ensures the reliability of both URLLC and eMBB communication.
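The "numerology" being selected per slot has a simple structure in 5G NR: subcarrier spacing scales as 15 × 2^μ kHz and slot duration shrinks as 1/2^μ ms, which is why choosing a larger μ can cut URLLC latency at the cost of fewer symbols per subcarrier band. A minimal illustration (the learning agent itself is beyond an abstract-level sketch):

```python
# 5G NR numerology: each index mu doubles the subcarrier spacing and
# halves the slot duration relative to mu = 0 (15 kHz, 1 ms).
def numerology(mu):
    return {"scs_khz": 15 * 2 ** mu, "slot_ms": 1.0 / 2 ** mu}

for mu in range(3):
    print(mu, numerology(mu))
# mu=0: 15 kHz / 1.0 ms, mu=1: 30 kHz / 0.5 ms, mu=2: 60 kHz / 0.25 ms
```

The paper's deep reinforcement learning agent would pick `mu` at each slot boundary based on the observed URLLC/eMBB load, trading slot granularity against spectral layout.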
Reliability-Constrained Task Scheduling for DAG Applications in Mobile Edge Computing
The development of the internet of things (IoT) and 6G has given rise to numerous computation-intensive and latency-sensitive applications, which can be represented as directed acyclic graphs (DAGs). However, executing these applications poses a huge challenge for user equipment (UE) constrained in computational power and battery capacity. In this paper, considering the different requirements of various task scenarios, we aim to optimize the execution latency and energy consumption of the entire mobile edge computing (MEC) system. The system consists of a single UE and multiple heterogeneous MEC servers, which together improve the execution efficiency of a DAG application; the execution reliability of the application is treated as a constraint. Building on the strong search capability of the cuckoo search (CS) algorithm, Pareto optimality theory, and our previously proposed improved multiobjective cuckoo search (IMOCS) algorithm, we improve the initialization process and the update strategy of the external archive and propose a reliability-constrained multiobjective cuckoo search (RCMOCS) algorithm. According to the simulation results, the proposed RCMOCS algorithm obtains better Pareto frontiers and achieves satisfactory performance while ensuring execution reliability.
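The external-archive update mentioned above rests on the standard Pareto-dominance test. A hedged sketch, assuming two minimized objectives (latency, energy); the function names are illustrative and not from the paper:

```python
# Pareto dominance for minimization: a dominates b if it is no worse in
# every objective and strictly better in at least one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep the archive mutually non-dominated after offering a candidate."""
    if any(dominates(s, candidate) for s in archive):
        return archive  # candidate is dominated: discard it
    # Otherwise admit it and evict anything it dominates.
    return [s for s in archive if not dominates(candidate, s)] + [candidate]

archive = [(3.0, 1.0), (1.0, 3.0)]          # (latency, energy) pairs
archive = update_archive(archive, (2.0, 2.0))  # non-dominated: kept
archive = update_archive(archive, (2.5, 2.5))  # dominated by (2.0, 2.0): dropped
print(sorted(archive))  # -> [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
```

RCMOCS layers a reliability constraint on top of this: a candidate schedule would only be offered to the archive if its estimated execution reliability meets the threshold.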
Low Latency 5G IP Transmission Backhaul Network Architecture: A Techno-Economic Analysis
The steeply rising demand for mobile data drives the investigation of the transmission backhaul network architecture and cost for the fifth generation (5G) of mobile technologies. The proposed backhaul architecture facilitates high throughput, low latency, scalability, low cost of ownership, and high-capacity backhaul for 5G mobile technologies. This paper presents a transmission backhaul network architecture for 5G; the proposed internet protocol (IP) transmission backhauling architecture comprises the data center, core network, distribution network, and access (IP radio access) network. Mathematical models for the capital expenditure (Capex), operational expenditure (Opex), and total cost of ownership (TCO) of the data center IP core network, IP distribution network, and IP access network are presented, as well as a model for the entire backhauling architecture. The results show that the number of IP sites is directly proportional to Capex and inversely related to Opex. The sensitivity analysis shows that bandwidth is directly proportional to Capex, Opex, and TCO in the IP core network, and that the number of data centers is directly proportional to the Capex, Opex, and TCO of the entire backhauling architecture.
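The TCO relation underlying such techno-economic models is simple to state: TCO over a planning horizon is the one-off Capex plus cumulative Opex. A minimal sketch; the unit costs below are placeholders, not figures from the paper:

```python
# TCO = Capex + Opex over the planning horizon. Per-site unit costs here
# are illustrative placeholders only.
def tco(n_sites, capex_per_site, opex_per_site_per_year, years):
    capex = n_sites * capex_per_site                     # one-off deployment cost
    opex = n_sites * opex_per_site_per_year * years      # recurring operating cost
    return capex + opex

print(tco(n_sites=100, capex_per_site=50_000,
          opex_per_site_per_year=5_000, years=5))  # -> 7500000
```

The paper's per-segment models refine each term (site counts, bandwidth, data centers) for the IP core, distribution, and access networks separately, but the additive Capex + Opex structure is the same.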
Federated Medical Learning Framework Based on Blockchain and Homomorphic Encryption
Federated learning-based sharing of private medical data can promote the development of intelligence in the medical industry, but federated learning itself still suffers from a single point of failure and privacy leakage of intermediate parameters. To address these problems, this paper proposes a privacy-protection framework for medical data based on blockchain and cross-silo federated learning. Cross-silo federated learning establishes a collaborative training platform for multiple medical institutions, enhancing the privacy of medical data, while blockchain and smart contracts realize decentralized federated learning, building trust between mutually distrustful institutions and eliminating the single point of failure. In addition, a secure aggregation scheme based on threshold homomorphic encryption prevents privacy leakage during parameter transmission. The experimental and analytical results show that the accuracy of the proposed scheme is consistent with the original federated learning scheme; it effectively counters the single-point-of-failure and inference-attack problems of federated learning, improves system robustness, and is suitable for medical scenarios with stringent security and accuracy requirements.
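The goal of secure aggregation is that the server learns only the sum of client updates, never an individual one. The paper achieves this with threshold homomorphic encryption; as a much simpler stand-in for the same property, here is a pairwise additive-masking sketch (illustrative only, not the paper's scheme):

```python
# Each pair of clients shares a random mask that one adds and the other
# subtracts, so individual updates are hidden but the masks cancel in the sum.
import random

def mask_updates(updates, seed=0):
    rng = random.Random(seed)  # stands in for pairwise-agreed shared randomness
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.random()
            masked[i] += m   # client i adds the shared mask
            masked[j] -= m   # client j subtracts it
    return masked

updates = [0.2, 0.5, 0.3]
masked = mask_updates(updates)
print(round(sum(masked), 10))  # masks cancel: the server recovers the true sum 1.0
```

Threshold homomorphic encryption improves on this sketch by tolerating client dropout: any qualified subset of key holders can jointly decrypt the aggregate, which pure pairwise masking cannot do without extra recovery rounds.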
Performance Modeling of Hyperledger Fabric 2.0: A Queuing Theory-Based Approach
Hyperledger Fabric (shortened to Fabric) is an open-source, enterprise-level, permissioned distributed ledger technology platform with a highly modular, configurable architecture. It supports writing smart contracts in general-purpose programming languages and has become the preferred choice for enterprise-level blockchain applications. However, the transaction throughput of the Fabric system remains a critical factor restricting the further application of this technology. It is therefore necessary to evaluate and optimize the performance of the Fabric blockchain platform, and existing performance modeling methods need improvement in compatibility and effectiveness. To address this, we propose a performance-compatible modeling method for Fabric based on queuing theory, which accounts for a limited transaction pool and for situations where node groups are attacked. Using Fabric 2.0 as an example, we establish a model of the transaction process in the Fabric network. By analyzing the model's continuous-time three-dimensional Markov process, we solve the system's stationary equations and obtain analytical expressions for performance indicators such as system throughput, steady-state queue length, and average response time. We conducted extensive analyses and simulations to verify the model's accuracy and validity, and we believe this approach can be extended to other blockchain systems.
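The performance indicators named above (throughput, steady-state queue length, response time) can be illustrated with a textbook M/M/1 queue, a far simpler model than the paper's three-dimensional Markov chain; the formulas below are standard queuing theory, not the paper's results:

```python
# M/M/1 steady state: utilization rho = lam/mu, mean number in system
# L = rho/(1-rho), mean response time W = L/lam (Little's law).
def mm1(lam, mu):
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu
    L = rho / (1 - rho)   # mean number of transactions in the system
    W = L / lam           # mean response time via Little's law
    return {"rho": rho, "L": L, "W": W}

print(mm1(lam=2.0, mu=4.0))  # -> {'rho': 0.5, 'L': 1.0, 'W': 0.5}
```

The paper's model replaces this single queue with a three-dimensional state (capturing the bounded transaction pool and attacked node groups), but the output quantities are derived from the stationary distribution in the same spirit.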