Automatic Modulation Recognition (AMR) is an important part of spectrum management. Existing work and datasets focus on variety in the transmitted modulations while applying only rudimentary channel effects. We propose a new AMR dataset that restricts itself to a few common modulations but introduces wide variation in the propagation channel. Simple scenarios containing rural and urban areas are randomly generated using Simplex noise, and a receiver/transmitter pair is placed in each scenario. The 3GPP channel model is combined with the propagation vector from the scenario generator to simulate a signal propagating across the generated terrain. This dataset brings more realism to the AMR task and will allow machine learning models to adapt to changing environments.
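The scenario-generation step described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: a smoothed random field stands in for Simplex noise, and the grid size, smoothing factor, and the 0.5 urban threshold are all assumptions.

```python
import numpy as np

def generate_terrain(size=64, seed=0, smooth=4):
    """Stand-in for Simplex noise: a coarse random field upsampled
    into blobby terrain. (The paper uses Simplex noise; this is a
    simplified, hypothetical substitute.)"""
    rng = np.random.default_rng(seed)
    coarse = rng.random((size // smooth, size // smooth))
    # Nearest-neighbour upsampling via a Kronecker product.
    return np.kron(coarse, np.ones((smooth, smooth)))

def place_pair(terrain, seed=0):
    """Randomly place a transmitter/receiver pair on the terrain grid."""
    rng = np.random.default_rng(seed)
    h, w = terrain.shape
    tx = (int(rng.integers(h)), int(rng.integers(w)))
    rx = (int(rng.integers(h)), int(rng.integers(w)))
    return tx, rx

terrain = generate_terrain()
tx, rx = place_pair(terrain)
# Cells above an (assumed) threshold are treated as "urban", the rest "rural";
# the resulting terrain map would feed the channel model.
urban_fraction = (terrain > 0.5).mean()
```

In the actual pipeline, the generated terrain along the transmitter-receiver path would parameterize the 3GPP channel model rather than being used directly.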
Automatic Modulation Recognition (AMR) is critical for identifying modulation types in wireless communication systems. Recent advancements in deep learning have facilitated the integration of learning algorithms into AMR techniques. However, this integration typically follows a centralized approach that requires collecting and processing all training data on high-powered computing devices, which may prove impractical for bandwidth-limited wireless networks. In response to this challenge, this study introduces two distributed learning-based AMR methods in which multiple receivers collaborate to perform AMR tasks. The TeMuRAMRD 2023 dataset, uniquely suited to multi-receiver AMR tasks, is employed to support this investigation. Within this distributed sensing environment, multiple receivers collaborate in identifying modulation types from the same RF signal, each possessing a partial perspective of the overall environment. Experimental results demonstrate that centralized AMR with six receivers attains an accuracy of 91%, while individual receivers achieve a notably lower accuracy of around 41%. The two proposed distributed learning-based AMR methods offer noteworthy trade-offs. The first, based on consensus voting among six receivers, achieves marginally lower accuracy than the centralized model while reducing bandwidth demands to 1/256th of it. In the second method, each receiver shares its feature map, which is then aggregated by a central node; this approach reduces bandwidth to 1/8th of the centralized approach. These findings highlight the capacity of distributed AMR to achieve high accuracy while effectively addressing the constraints of bandwidth-limited wireless networks.
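The consensus-voting method in the first approach can be sketched in a few lines. This is a minimal illustration under stated assumptions: the tie-breaking rule (first-seen label) and the example modulation labels are not from the paper.

```python
from collections import Counter

def consensus_vote(predictions):
    """Majority vote over per-receiver modulation predictions.
    Ties break toward the first-seen label (a simplifying assumption)."""
    return Counter(predictions).most_common(1)[0][0]

# Six receivers classify the same RF signal locally; only the class
# labels (a few bits each) are shared, not raw IQ samples -- hence the
# large bandwidth savings relative to centralizing all sample data.
receiver_preds = ["QPSK", "QPSK", "8PSK", "QPSK", "BPSK", "QPSK"]
consensus_vote(receiver_preds)  # -> "QPSK"
```

The second method would instead transmit each receiver's intermediate feature map to a central aggregation node, trading some of this bandwidth saving for accuracy.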
Neural networks continue to be vulnerable to adversarial attacks. In addressing this, two primary defensive strategies have emerged based on network composition: those targeting individual networks and those grounded in ensemble-based strategies. While merging both strategies is ideal, on edge devices a combined defense that scales with ensemble size could significantly increase inference latency. Many ensemble-based approaches in the literature offer robust protection while necessitating large ensemble sizes. To address the challenge of deploying ensemble-based adversarial defenses on edge devices, this work introduces the Categorized Ensemble Networks (CAEN) training methodology. CAEN's foundation lies in two observations: (1) under adversarial conditions, models frequently confuse conceptually contrastive classes with each other, and (2) assigning soft label values to contrastive class pairs enhances network resilience against adversarial attacks. Building on these insights, CAEN first identifies contrastive classes under Projected Gradient Descent (PGD) attacks through a confusion matrix. It then formulates the problem of pairing contrastive classes across ensemble members as an Integer Linear Program (ILP). Following this, CAEN applies soft label assignments to the identified contrastive class pairs during ensemble training. A CAEN ensemble is formed by averaging the outputs of the independently trained ensemble members. CAEN training surpasses current state-of-the-art robust ensemble training techniques, achieving an average 1.11X/1.57X improvement in robust accuracy against white-box and black-box attacks. Additionally, by limiting ensembles to just two member networks, CAEN training produces ensembles that offer robust protection while reducing runtime FLOPs by 16% compared to SOTA, making CAEN ensembles suitable for deployment on edge devices.
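The soft-label assignment for contrastive class pairs can be sketched as below. This is a hypothetical illustration: the smoothing amount `epsilon` and the example class pairing are assumptions, not values from the paper.

```python
import numpy as np

def soft_labels(num_classes, pairs, epsilon=0.1):
    """Build per-class target vectors where each class in a contrastive
    pair shares `epsilon` of its label mass with its partner.
    (epsilon and the pairing are illustrative assumptions.)"""
    targets = np.eye(num_classes)
    for a, b in pairs:
        targets[a, a] -= epsilon
        targets[a, b] += epsilon
        targets[b, b] -= epsilon
        targets[b, a] += epsilon
    return targets

# Suppose the PGD confusion matrix shows classes 3 and 5 are frequently
# confused; they form a contrastive pair for one ensemble member.
T = soft_labels(10, [(3, 5)])
```

In CAEN, an ILP would decide which contrastive pairs are assigned to which ensemble member; each member is then trained against its own soft-label targets before the ensemble's outputs are averaged.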
With the advent of deep learning, the list of applications for Deep Convolutional Neural Networks (DCNNs) has grown ever longer. The field of Multi-Task Learning (MTL) attempts to optimize many-task systems, improving performance through optimization algorithms and structural changes to these networks. However, we have found that current MTL optimization algorithms often impose burdensome computation overheads, require meticulously labeled datasets, and do not adapt to tasks with significantly different loss distributions. We propose a new MTL optimization algorithm: Batch Swapping with Multiple Optimizers (BSMO). We utilize single-task labeled data to train a multi-task hard parameter sharing (HPS) network by swapping tasks at the batch level. This dramatically increases the flexibility and scalability of training an HPS network by allowing for per-task datasets and augmentation pipelines. We demonstrate the efficacy of BSMO versus current SOTA algorithms by benchmarking across contemporary benchmarks and networks.
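The batch-level task swapping described above can be sketched as a scheduling routine. This is a hypothetical sketch of the scheduling idea only: the function name, round-robin ordering, and toy loaders are assumptions, and the per-task optimizer steps on a shared backbone are omitted.

```python
from itertools import cycle, islice

def batch_swap_schedule(task_batches, steps):
    """Interleave batches from per-task loaders at the batch level:
    successive training steps round-robin across tasks, and each task
    keeps its own dataset, augmentation pipeline, and optimizer
    (a simplified sketch of BSMO-style scheduling)."""
    task_order = cycle(task_batches)
    iters = {name: iter(cycle(batches)) for name, batches in task_batches.items()}
    return [(name, next(iters[name])) for name in islice(task_order, steps)]

# Two single-task datasets of different sizes feed one HPS network.
loaders = {"seg": ["s0", "s1"], "depth": ["d0", "d1", "d2"]}
batch_swap_schedule(loaders, steps=4)
# -> [("seg", "s0"), ("depth", "d0"), ("seg", "s1"), ("depth", "d1")]
```

In a full training loop, each scheduled batch would drive a forward/backward pass through the shared trunk plus that task's head, with that task's optimizer applying the update.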