Research Projects
Probabilistic Signal Processing for Integrated Sensing, Communications, and Detection (SPICED)
SecureG
Secure Beamforming and Tracking in 5G/NextG Systems: Attacks and Countermeasures
SecureG
Adaptive Modular LLMs for Trustworthy and Efficient On-Device Intelligence in 6G Networks
SecureG
Wireless Geofencing with Passive Reconfigurable Intelligent Surfaces
SecureG
Machine Learning Driven Beam Management for 5G/NextG
SpectrumG
QoS-Aware Resource Management for Coexistence of 5G NR-U and WiFi
SpectrumG
Synthesis of RF Coverage Maps Using Generative AI
SpectrumG
Hardware-accelerated Machine Learning Designs for Energy-efficient Real-time Signal Intelligence at the Edge
SmartG
Multi-Agent Framework for Advanced Topic Intelligence and Search Optimization
SmartG
Who Did What? Learning and Reconstructing O-RAN Conflicts with Graph Neural Networks
OpenG
QoS-Aware Scheduling for Coexistence of 5G NR-U and WiFi
Lead PI: Brian L. Mark, George Mason University
Abstract:
The rapid growth in wireless connectivity demands, driven by diverse applications ranging from mobile communications to IoT networks, has led to an increasing reliance on unlicensed spectrum. The unlicensed bands are shared by multiple Radio Access Technologies (RATs), such as 5G New Radio in Unlicensed Spectrum (NR-U) and Wi-Fi, which introduce complex interference dynamics that can degrade network performance if not managed effectively. The primary challenge lies in achieving high network performance while maintaining fairness among different technologies sharing the same spectrum. In this context, the coexistence of 5G NR-U and Wi-Fi networks introduces significant complexities. The two technologies use distinct channel access mechanisms: 5G NR-U relies on Listen Before Talk (LBT) procedures, while Wi-Fi employs Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). These mechanisms must operate harmoniously to minimize inter-network collisions and ensure equitable resource allocation. However, this is non-trivial due to the diverse Quality of Service (QoS) requirements, traffic patterns, and contention behaviors of the two networks. For instance, 5G NR-U can have stringent latency and reliability requirements, particularly for high-priority traffic, which can conflict with the opportunistic nature of Wi-Fi's channel access mechanism.

We propose a constrained reinforcement learning (CRL) framework to dynamically adjust the coexistence parameters of 5G NR-U and Wi-Fi networks with the objective of optimizing overall network performance subject to the QoS constraints of high-priority traffic. The CRL framework is based on a state augmentation learning approach in which the wireless network state is augmented with dual variables at each time step, providing dynamic inputs to the learning algorithm.
Preliminary results from applying the framework to the channel access contention window parameter show that the delay constraints of high-priority traffic can be satisfied while optimizing fairness across the two networks. In the proposed project, we plan to advance the CRL framework for wireless coexistence and extend it to incorporate power control at the physical layer to further enhance overall network performance while providing QoS guarantees.
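The state-augmentation idea above can be sketched with a toy primal-dual loop: the dual variable (Lagrange multiplier) for the high-priority delay constraint is appended to the state the learner sees, and is updated by dual ascent on the observed constraint violation. Everything here, including the delay and fairness models, the contention-window dynamics, and the threshold "policy", is a hypothetical placeholder, not the project's actual learning algorithm.

```python
def toy_delay(cw):
    """Hypothetical stand-in: a larger contention window yields a larger HP delay."""
    return 0.5 * cw

def toy_fairness(cw):
    """Hypothetical stand-in: a larger CW gives Wi-Fi more airtime (fairer)."""
    return min(1.0, cw / 32.0)

def crl_step(cw, lam, delay_budget, eta=0.1):
    """One primal-dual iteration on the contention-window parameter."""
    # Augmented state a real CRL agent would condition on: (observation, dual variable).
    state = (toy_delay(cw), toy_fairness(cw), lam)
    # Toy "policy": shrink the CW when the dual variable signals delay pressure,
    # otherwise grow it toward better fairness.
    cw_next = max(2, cw - 2) if lam > 0.5 else min(64, cw + 2)
    # Dual ascent: the multiplier grows while the delay constraint is violated.
    violation = toy_delay(cw_next) - delay_budget
    lam_next = max(0.0, lam + eta * violation)
    return cw_next, lam_next, state

cw, lam = 32, 0.0
for _ in range(50):
    cw, lam, _ = crl_step(cw, lam, delay_budget=6.0)
print(cw, round(toy_delay(cw), 2))  # CW oscillates near the largest value meeting the budget
```

The point of the sketch is the feedback structure: the primal update (the CW adjustment) only sees the constraint through the dual variable carried in the augmented state.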
Adaptive Modular LLMs for Trustworthy and Efficient On-Device Intelligence in 6G Networks
Lead PI: Michael Wu, University of Arizona
Abstract:
The convergence of Large Language Models (LLMs) and 6G wireless technology marks a transformative leap in artificial intelligence and communications. LLMs have demonstrated remarkable capabilities in natural language processing, content generation, and complex decision-making across numerous domains, while 6G promises unprecedented speeds, ultra-low latency, and massive connectivity. Their integration has the potential to revolutionize sectors from IoT and smart cities to healthcare and autonomous vehicles, enabling sophisticated automation and ambient intelligence. However, a fundamental challenge hinders this vision: state-of-the-art LLMs have grown too large for efficient deployment on edge devices. Traditional compression methods like pruning and quantization, while reducing model size, often degrade performance by stripping the vast knowledge encoded in LLMs. This limitation creates a critical gap between the promise of LLM-6G integration and its practical feasibility.

We propose an innovative approach that fundamentally differs from model compression techniques. Instead of reducing model size, our solution disassembles LLMs into smaller, self-functional modules that can operate independently on distributed devices while maintaining the ability to dynamically reassemble via high-speed 6G networks for collaborative processing. This approach enables flexible deployment across 6G networks, balancing local processing for latency-sensitive tasks with collaborative computation for complex operations. Through this framework, with the following three thrusts, we aim to unlock the full potential of LLM-6G integration while optimizing resource utilization and maintaining model performance and trustworthiness.
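The disassemble-and-reassemble deployment idea can be illustrated with a toy placement routine: each module carries a resource tag, latency-sensitive modules are pinned on-device, and modules that exceed the device's memory budget are assigned to an edge server reachable over the (here only named, not simulated) 6G link. The module names and the cost model are illustrative assumptions only, not the project's actual decomposition.

```python
MODULES = {
    "tokenizer":     {"mem_mb": 50,   "latency_sensitive": True},
    "intent_head":   {"mem_mb": 120,  "latency_sensitive": True},
    "core_reasoner": {"mem_mb": 4000, "latency_sensitive": False},
    "summarizer":    {"mem_mb": 900,  "latency_sensitive": False},
}

def place_modules(device_budget_mb):
    """Greedy placement: pin latency-sensitive modules on-device first,
    then fill the remaining memory budget; offload whatever does not fit."""
    placement, used = {}, 0
    ordered = sorted(MODULES.items(),
                     key=lambda kv: (not kv[1]["latency_sensitive"], kv[1]["mem_mb"]))
    for name, spec in ordered:
        if used + spec["mem_mb"] <= device_budget_mb:
            placement[name] = "device"
            used += spec["mem_mb"]
        else:
            placement[name] = "edge_server"  # reassembled over the 6G link
    return placement

plan = place_modules(device_budget_mb=1500)
print(plan)  # heavy reasoning module is offloaded, light modules stay local
```

A real system would also weigh link rate and per-module invocation frequency; the greedy split above only shows the shape of the decision.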
Wireless Geofencing with Passive Reconfigurable Intelligent Surfaces
Lead PI: Jacek Kibilda, Virginia Tech
Abstract:
Wireless geofencing refers to the external ability to control the availability of wireless services within a given locality, e.g., an indoor conference room hosting a classified meeting. Conventionally, wireless geofencing is accomplished by adjusting transmitter configurations, which may be possible for local WiFi deployments but is challenging to achieve with cellular base stations operated by independent network carriers. In this case, active jammers could be used effectively; however, their operation may be limited by the licenses required to transmit in the cellular spectrum. In this project, we propose using passive reconfigurable intelligent surfaces (RISs) to control the coverage of wireless services within a confined indoor area. The proposal builds on our preliminary research, which shows that a passive RIS controller can exploit the initial access protocol to hinder outdoor-to-indoor cellular coverage. In this project, we aim to extend the design of the RIS controller to a more realistic network setting, with a high-fidelity outdoor-to-indoor wireless propagation scenario and a large-scale network setup. As such, our short- to mid-term objectives are to: i) develop a high-fidelity simulation model involving outdoor-to-indoor propagation and RIS, ii) propose a machine learning-based approach that leverages local data to control geofencing capabilities accurately, and iii) perform a simulation campaign to characterize the performance. In the long run, we plan to validate the proposed geofencing approach through experiments with a passive RIS in the CCI xG Testbed.
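A minimal sketch of the passive-RIS mechanism: choose per-element phase shifts so the RIS-reflected path combines destructively with the direct path at an indoor point, suppressing outdoor-to-indoor coverage there. The two-path channel gains, the element count, and the 2-bit phase codebook are toy assumptions, not the project's high-fidelity propagation setup.

```python
import cmath, itertools

direct = 0.8 * cmath.exp(1j * 0.3)            # direct outdoor-to-indoor path gain
ris_paths = [0.5 * cmath.exp(1j * 1.1),       # per-element cascaded path gains
             0.4 * cmath.exp(1j * 2.0)]
codebook = [0, cmath.pi / 2, cmath.pi, 3 * cmath.pi / 2]  # 2-bit phase control

def received_power(phases):
    """|direct + sum of phase-shifted RIS reflections|^2 at the indoor point."""
    total = direct + sum(g * cmath.exp(1j * p) for g, p in zip(ris_paths, phases))
    return abs(total) ** 2

# A passive RIS has no amplifiers, so the controller can only search the small
# discrete phase codebook; here that search is exhaustive.
best = min(itertools.product(codebook, repeat=len(ris_paths)),
           key=received_power)
print(received_power(best) < abs(direct) ** 2)  # reflections partly cancel the direct path
```

With realistic element counts the exhaustive search becomes infeasible, which is one motivation for the learning-based controller in objective (ii).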
Hardware-accelerated Machine Learning Designs for Energy-efficient Real-time Signal Intelligence at the Edge
Lead PI: Marwan Krunz, University of Arizona
Abstract:
Deep neural networks (DNNs) have been successfully applied in many domains, including image classification, language models, speech analysis, autonomous vehicles, wireless communications, bioinformatics, and others. Their success stems from their ability to handle vast amounts of data and infer patterns without making assumptions on the underlying dynamics that produced the data. Cloud providers operate large data centers with high-speed computers that continuously perform DNN computations, with huge energy consumption that rivals that of some industries and nations. In addition to being used in solving large-scale problems, DNNs are now being considered for recognition and inference applications in battery-operated systems such as smartphones and embedded devices. Thus, there is a critical need to improve the energy efficiency of DNNs.

The main objective of this project is twofold: (1) Design and evaluate a radically innovative energy-efficient hardware/software framework for on-chip implementation of DNNs, and (2) customize this framework for new DNNs that enable real-time signal classification in next-generation wireless systems. By integrating processing elements within memory chips, the energy consumption of a DNN can be significantly reduced, and more computations can be done faster. The hardware-accelerated DNN designs provided by this project will facilitate rapid identification of wireless transmissions (e.g., radar, 5G, LTE, Wi-Fi, microwave, satellite, and others) in a shared-spectrum scenario, enabling better use of the spectrum and facilitating accurate detection of adversarial and rogue signals.
To achieve a 10x-100x reduction in DNN energy consumption, we propose a holistic approach that encompasses: (1) new circuit designs that leverage emerging ‘CMOS+X’ technologies; (2) a novel near-memory architecture in which processing elements are seamlessly integrated with traditional Dynamic RAM (DRAM); (3) novel 3D-matrix-based per-layer DNN computations and data-layout optimizations for kernel weights; and (4) algorithms and hardware/software co-design tailored for near-real-time DNN-based signal classification in next-generation wireless systems.
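A back-of-envelope illustration of why the data-layout optimization in item (3) matters: DRAM reads are cheap within an open row and expensive across row activations, so storing kernel weights in the order the per-layer computation consumes them reduces activations. The row size, matrix shape, and access pattern below are toy values, not the project's architecture.

```python
ROW_WORDS = 8  # words per DRAM row (toy value)

def row_activations(addresses):
    """Count row-buffer misses for a sequence of word addresses."""
    acts, open_row = 0, None
    for a in addresses:
        row = a // ROW_WORDS
        if row != open_row:
            acts += 1
            open_row = row
    return acts

# A 4x8 weight matrix stored row-major but consumed column-by-column by the
# compute array: every weight read jumps to a different DRAM row.
rows, cols = 4, 8
col_major_consumption = [r * cols + c for c in range(cols) for r in range(rows)]
naive = row_activations(col_major_consumption)

# Layout optimization: store the weights in consumption (column-major) order,
# so reads stream sequentially through each DRAM row.
optimized = row_activations(list(range(rows * cols)))
print(naive, optimized)  # far fewer activations with the consumption-order layout
```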
Who Did What? Learning and Reconstructing O-RAN Conflicts with Graph Neural Networks
Lead PI: Joao Santos, Virginia Tech
Abstract:
Mobile networks are transitioning toward open, disaggregated architectures with distributed components. A key example of this evolution is O-RAN, which enables third-party xApps on the Near-RT RIC to enhance the capabilities of the RAN. However, this flexibility introduces new security vulnerabilities, as xApps can independently modify control parameters, potentially leading to performance degradation and instability. To detect potential conflicts, we proposed and validated a data-driven solution in which we represent the relationships between xApps, control parameters, and KPIs as conflict graphs, and leverage Graph Neural Networks (GNNs) and knowledge graphs to learn these relationships from data collected on the state of the RAN. By introducing graph-based definitions of different types of conflicts, we can apply well-understood graph labeling techniques to autonomously detect conflicts based on the structure of the conflict graph. If funded, we will advance our conflict detection framework by (i) investigating causal relationships to improve conflict detection accuracy and trace the root causes of conflicts; and (ii) exploring anomaly detection to increase robustness against data contamination and allow us to identify anomalous xApps, parameters, and KPIs. We seek industry advice to refine our capabilities, laying the foundation for an autonomous, scalable solution that ensures more resilient future mobile networks.
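The graph-based conflict definitions can be made concrete with a toy conflict graph: xApps, control parameters, and KPIs are nodes; edges record "xApp tunes parameter" and "parameter affects KPI". A direct conflict (two xApps steering the same parameter) and an indirect conflict (parameters coupled through a shared KPI) then fall out of the structure. In the project this labeling is learned by a GNN from RAN data; the hand-written traversals below only illustrate the definitions, and all node names are hypothetical.

```python
tunes = {                  # xApp -> parameters it modifies
    "mobility_xapp": ["handover_threshold", "tx_power"],
    "energy_xapp":   ["tx_power"],
    "slicing_xapp":  ["prb_allocation"],
}
affects = {                # parameter -> KPIs it influences
    "handover_threshold": ["drop_rate"],
    "tx_power":           ["throughput", "energy"],
    "prb_allocation":     ["throughput"],
}

def direct_conflicts():
    """Parameters shared by more than one xApp in the conflict graph."""
    by_param = {}
    for xapp, params in tunes.items():
        for p in params:
            by_param.setdefault(p, []).append(xapp)
    return {p: sorted(apps) for p, apps in by_param.items() if len(apps) > 1}

def indirect_conflicts():
    """Pairs of distinct parameters coupled through a shared KPI."""
    pairs = set()
    for p1 in affects:
        for p2 in affects:
            if p1 < p2 and set(affects[p1]) & set(affects[p2]):
                pairs.add((p1, p2))
    return pairs

print(direct_conflicts())    # tx_power is steered by two xApps
print(indirect_conflicts())  # prb_allocation and tx_power share a KPI
```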
Multi-Agent Framework for Advanced Topic Intelligence and Search Optimization
Lead PI: Marwan Krunz, University of Arizona
Abstract:
In the era of digital-first branding, intelligent content discovery, and real-time search optimization, businesses must navigate a complex topic universe spanning brand marketing, SEO, and multi-domain content intelligence. The evolution of semantic search, algorithmic shifts, and dynamic user behavior demands a move from static keyword-based strategies to adaptive, multi-modal, agentic intelligence systems. To address this, we propose the Agentic Orchestrations for Topic Intelligence Management System (AOTIMS): a multi-agent, multi-modal AI framework that autonomously curates, analyzes, and optimizes brand- and search-relevant topics. AOTIMS integrates Retrieval-Augmented Generation (RAG), vector databases, reinforcement learning, and multi-modal agentic reasoning, ensuring scalability, adaptability, and intelligent governance of topic ecosystems.
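The retrieval step underpinning a RAG-based topic pipeline can be sketched in miniature: topics are embedded (here as bag-of-words counts standing in for a learned vector database) and the topics closest to a query are retrieved for an agent to reason over. The corpus and scoring are illustrative stand-ins, not AOTIMS components.

```python
import math
from collections import Counter

TOPICS = {
    "seo_basics":      "search ranking keywords optimization engine",
    "brand_voice":     "brand identity tone messaging audience",
    "semantic_search": "semantic search embeddings ranking intent",
}

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Return the k topics most similar to the query: the RAG retrieval step."""
    q = Counter(query.lower().split())
    scored = sorted(TOPICS,
                    key=lambda t: cosine(q, Counter(TOPICS[t].split())),
                    reverse=True)
    return scored[:k]

top = retrieve("improve search ranking")
print(top)
```

In a production pipeline the count vectors would be replaced by learned embeddings in a vector database, and the retrieved topics would feed downstream generation agents.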
Machine Learning Driven Beam Management for 5G/NextG
Lead PI: Marwan Krunz, University of Arizona
Abstract:
The design and optimization of next-generation wireless systems require an efficient beam management strategy to support high-frequency communications, particularly in the millimeter-wave (mmWave) and sub-THz bands. Operating over these bands enables high data rates but poses significant challenges due to the highly directional nature of transmissions and their susceptibility to blockages. Effective beam management is essential for maintaining robust connectivity in dynamic environments. Traditional beam management strategies rely on exhaustive beam sweeping and measurement-based feedback, which introduce substantial overhead, latency, and computational complexity. Managing beams efficiently is even more challenging in multi-user and multi-cell scenarios due to increased interference, mobility, and dynamic channel conditions.

This project proposes a machine learning (ML) driven approach for beam prediction and optimization, aiming to reduce the overhead associated with conventional beam training methods. Novel ML models for predicting the optimal beam for communication are proposed using a reduced beam search space. These models will leverage historical and real-time signal measurements, minimizing the need for frequent feedback from mobile users, thereby improving latency and overall system efficiency.
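The reduced-search-space idea can be sketched as follows: instead of exhaustively sweeping all beams, keep an exponential moving average (EMA) of each beam's historical RSRP and sweep only the top-k predicted candidates. The EMA is a deliberately simple stand-in for the project's ML prediction models, and the beam count and RSRP values are made up.

```python
NUM_BEAMS = 16

def update_history(ema, measurements, alpha=0.3):
    """Fold a new round of per-beam RSRP measurements into the EMA 'model'."""
    return [alpha * m + (1 - alpha) * e for e, m in zip(ema, measurements)]

def candidate_beams(ema, k=4):
    """Predicted search space: the k beams with the best historical RSRP."""
    return sorted(range(NUM_BEAMS), key=lambda b: ema[b], reverse=True)[:k]

def select_beam(ema, live_rsrp, k=4):
    """Sweep only the reduced candidate set, cutting measurement overhead."""
    cands = candidate_beams(ema, k)
    return max(cands, key=lambda b: live_rsrp[b])

# Toy usage: beams 4-6 have historically been strong, beam 6 is best right now.
ema = [0.0] * NUM_BEAMS
for _ in range(10):
    meas = [1.0 if b in (4, 5, 6) else 0.1 for b in range(NUM_BEAMS)]
    meas[5] = 2.0
    ema = update_history(ema, meas)
live = [0.1] * NUM_BEAMS
live[6] = 3.0
print(select_beam(ema, live))  # only 4 of 16 beams are measured, yet beam 6 is found
```

The overhead saving is the ratio k/NUM_BEAMS; the risk, which the project's richer ML models address, is that the true best beam falls outside the predicted set after an abrupt blockage.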
Synthesis of RF Coverage Maps Using Generative AI
Lead PI: Marwan Krunz, University of Arizona
Abstract:
The design and optimization of next-generation (NextG) wireless systems require a comprehensive understanding of the radio frequency (RF) environment in which these systems operate. NextG networks must coexist with other wireless systems, such as radar and SATCOM, while efficiently sharing the heavily congested mid-band spectrum (e.g., 1–7 GHz) and the upper mid-band spectrum (FR3, from 7 to 24 GHz). Additionally, they will operate in the millimeter-wave (mmWave) and sub-THz bands, including current 5G bands (24, 28, 37, 39, and 43 GHz) and frequencies above 100 GHz. RF maps, which provide a 2D/3D representation of received signals, are critical for network planning, spectrum-access coordination, and protocol adaptation in all these bands. However, obtaining RF maps through site surveys is labor-intensive, costly, and often impractical. One approach for producing RF maps is the use of ray-tracing simulations, which model signal propagation using high-frequency approximations of Maxwell's equations. While ray tracing can provide detailed insights, it incurs significant computational overhead and often requires simplifications that impact accuracy. Moreover, this method cannot extrapolate results for parameter settings beyond its design scope.

To overcome these limitations, we have been exploring generative AI techniques for RF map synthesis. Our previous research includes the development of RecuGAN, a GAN-based model that captures the impact of individual attributes such as transmitter location, antenna array size, and center frequency on 2D RF maps for indoor scenarios. Once trained, RecuGAN can generate RF maps for specific attribute values, enabling data-driven RF modeling.
Additionally, RecuGAN associates attribute values with classes of RF maps in an unsupervised manner, allowing for the synthesis of diverse RF maps without requiring labeled data. Another key effort is RADIANCE, a GAN-based framework that generates extrapolated 2D RF maps by learning from training data with known attribute values and predicting RF maps for attribute values not found in the training dataset. Our preliminary results show that RADIANCE can be used to synthesize RF maps for new center frequencies, antenna patterns, transmitter locations, and floor plans, highlighting its generalization capabilities.

Building on these prior efforts, this project proposes the development of advanced generative models for 2D and 3D RF map synthesis in both indoor and outdoor environments. Moreover, we will develop techniques to apply these RF maps in network planning and optimization, enabling more efficient deployment and management of NextG wireless systems.
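The attribute-conditioning interface behind models like RecuGAN and RADIANCE can be shown schematically: a generator maps (noise, attributes) to a 2D map, and once the attribute dependence is learned it can be queried at attribute values outside the training set. The "generator" below is a hand-built free-space path-loss toy, not a trained GAN; it only illustrates the conditioning interface, and all parameter values are made up.

```python
import math, random

def toy_generator(z, tx_xy, freq_ghz, size=8):
    """Produce a size x size 'RF map' (received signal strength in dB-like
    units) conditioned on transmitter location and center frequency; z seeds
    small stochastic variation in the role of GAN noise."""
    rng = random.Random(z)
    tx_x, tx_y = tx_xy
    grid = []
    for y in range(size):
        row = []
        for x in range(size):
            d = max(1.0, math.hypot(x - tx_x, y - tx_y))
            # Free-space-like trend: loss grows with distance and frequency.
            rss = (-20 * math.log10(d)
                   - 20 * math.log10(freq_ghz)
                   + rng.gauss(0, 0.5))
            row.append(rss)
        grid.append(row)
    return grid

# Query at attribute values as if "outside the training range", mimicking the
# extrapolation use case RADIANCE targets.
m = toy_generator(z=42, tx_xy=(1, 1), freq_ghz=28.0)
print(len(m), len(m[0]), m[1][1] > m[7][7])  # strongest signal near the transmitter
```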
Probabilistic Signal Processing for Integrated Sensing, Communications, and Detection (SPICED)
Lead PI: Daniel J. Jakubisin, Virginia Tech
Abstract:
A driving use case for 6G wireless technology is integrated sensing and communications (ISAC). ISAC has the potential to unlock new network applications and capabilities, driving subscriber/revenue growth in the next generation. ISAC is also a highly relevant technology for networks supporting defense applications, opening the door to rich analytics from the network. A key challenge of ISAC is to efficiently accomplish both tasks within limited spectrum resources. While existing work has placed significant attention on waveform design, in this project we focus on receiver design. The 6G receiver will be responsible for performing communications demodulation and decoding, sensing estimation, and signal detection. We draw from the iterative receiver concept to jointly perform the demodulation, sensing, and detection tasks using probabilistic information, with the goal of high-resolution sensing. Through this work, we will identify and develop receiver structures and algorithms that will be critical to 6G ISAC.
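The iterative-receiver principle can be shown with a deliberately tiny example: a demodulator produces probabilistic soft information (log-likelihood ratios, LLRs), a simple repetition "decoder" forms extrinsic information, and that prior is fed back to refine demodulation. BPSK over noise with a length-3 repetition code is a stand-in for the joint demodulation/sensing/detection loop the project targets; all values are made up.

```python
def demod_llr(y, noise_var, prior_llr=0.0):
    """LLR of a BPSK symbol (+1 encodes bit 0) given observation y and a prior."""
    return 2 * y / noise_var + prior_llr

def repetition_extrinsic(llrs):
    """Extrinsic LLR per position: the sum of the *other* copies' LLRs."""
    total = sum(llrs)
    return [total - l for l in llrs]

# Bit 0 sent as (+1, +1, +1); the second copy is badly corrupted by noise.
received, noise_var = [0.9, -0.2, 1.1], 0.5
llrs = [demod_llr(y, noise_var) for y in received]
for _ in range(2):  # iterate: decoder -> demodulator -> decoder
    ext = repetition_extrinsic(llrs)
    llrs = [demod_llr(y, noise_var, prior_llr=e)
            for y, e in zip(received, ext)]
bits = [0 if l > 0 else 1 for l in llrs]
print(bits)  # the corrupted copy is rescued once soft information is exchanged
```

In the project's setting the exchanged beliefs would additionally couple sensing estimation and signal detection to the demodulator, rather than just a channel decoder.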
Secure Beamforming and Tracking in 5G/NextG Systems: Attacks and Countermeasures
Lead PI: Marwan Krunz, University of Arizona
Abstract:
5G NR enhances the security of legacy networks but also introduces new vulnerabilities, particularly in the beamforming and beam tracking processes. In 5G NR, Channel State Information Reference Signals (CSI-RS) and Sounding Reference Signals (SRS) are used for downlink and uplink channel estimation, respectively. These signals play a crucial role in beamforming and modulation and coding scheme (MCS) selection. This project focuses on studying adversarial attacks on the digital MIMO beamforming and precoding process. Specifically, we will investigate tampering and jamming attacks that distort the CSI estimation used for MIMO precoding, as well as attacks on the reinforcement learning (RL) process used for beam tracking and MCS selection. Countermeasures and defense mechanisms will also be investigated.
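A small numerical illustration of why CSI tampering matters for precoding: a matched-filter (MRT) precoder built from a poisoned channel estimate misdirects the beam, and the intended user's received power drops. The 4-antenna channel model and the additive perturbation are toy assumptions for exposition only, not the attack model studied in the project.

```python
import cmath

h = [cmath.exp(1j * 0.4 * k) for k in range(4)]  # true estimated channel

def mrt_precoder(h_est):
    """Unit-norm matched-filter precoder: w = conj(h_est) / ||h_est||."""
    norm = sum(abs(x) ** 2 for x in h_est) ** 0.5
    return [x.conjugate() / norm for x in h_est]

def rx_power(h_true, w):
    """|h^T w|^2 at the intended user."""
    return abs(sum(a * b for a, b in zip(h_true, w))) ** 2

clean = rx_power(h, mrt_precoder(h))

# Tampering attack: an adversarial distortion is added to the CSI before the
# base station computes its precoder.
h_poisoned = [x + 1.5 * cmath.exp(1j * 2.5 * k) for k, x in enumerate(h)]
attacked = rx_power(h, mrt_precoder(h_poisoned))
print(round(clean, 2), round(attacked, 2))  # beamforming gain degrades under tampering
```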