I am a PhD student in the Bradley Department of Electrical and Computer Engineering at Virginia Tech, advised by Dr. Alkan Soysal at Wireless@VT. My research focuses on secure wireless systems and lies at the intersection of wireless communication and adversarial machine learning.
Before joining Virginia Tech, I earned a Master of Science in Computer Science and Software Engineering from Auburn University, where I worked with Dr. Jingyi Zheng on machine learning techniques for biomedical image analysis and scalp EEG signal analysis.
Our Channel Distribution Information (CDI)-aware Generative Adversarial Network (GAN) is designed to address the unique challenges of adversarial attacks in wireless communication systems. The generator maps random input noise to the feature space, producing perturbations intended to deceive a target modulation classifier. Two discriminators play complementary roles: one enforces that the perturbations follow a Gaussian distribution, making them indistinguishable from Gaussian noise, while the other ensures that the perturbations account for realistic channel effects and resemble no-channel perturbations.
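One rough way to formalize this dual-discriminator setup is a standard min-max GAN objective with a classifier-fooling term. The notation below is mine, not taken from the published work: $G$ is the generator, $D_g$ the Gaussian-ness discriminator, $D_c$ the channel discriminator, $f$ the target modulation classifier with cross-entropy loss $\mathcal{L}_{\mathrm{CE}}$, $h$ a sampled channel realization, and $\delta_0$ a no-channel perturbation; the actual losses and weighting in the work may differ.

```latex
\min_{G}\;\max_{D_g,\,D_c}\;
  \mathbb{E}_{n \sim \mathcal{N}}\!\big[\log D_g(n)\big]
+ \mathbb{E}_{z}\!\big[\log\!\big(1 - D_g(G(z))\big)\big]
+ \mathbb{E}_{\delta_0}\!\big[\log D_c(\delta_0)\big]
+ \mathbb{E}_{z,\,h}\!\big[\log\!\big(1 - D_c(h \odot G(z))\big)\big]
- \lambda\,\mathbb{E}_{x,\,z}\!\big[\mathcal{L}_{\mathrm{CE}}\!\big(f(x + G(z)),\, y\big)\big]
```

The first two terms push generated perturbations toward Gaussian noise, the next two push channel-distorted perturbations toward no-channel ones, and the final term (with weight $\lambda$) rewards the generator for degrading the target classifier.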
We explore channel-aware adversarial attacks on DNN-based modulation classifiers in wireless environments, focusing on the robustness of these attacks with respect to channel distribution and path-loss parameters. We examine two scenarios: one in which the attacker has instantaneous channel knowledge and another in which the attacker relies only on statistical channel information. In both cases, we study channels subject to Rayleigh fading alone, Rayleigh fading with shadowing, and Rayleigh fading with both shadowing and path loss. Our findings reveal that the distance between the attacker and the legitimate receiver largely dictates the success of an adversarial machine learning (AML) attack: without precise estimation of the channel and path-loss parameters, adversarial attacks are likely to fail.
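To illustrate why attacker-receiver distance dominates, the following minimal sketch samples a composite channel power gain from the three effects named above: deterministic path loss, log-normal shadowing, and Rayleigh fading (so the power gain is exponentially distributed). All parameter values (path-loss exponent 3.0, 8 dB shadowing, 1 m reference distance) are hypothetical and for illustration only, not taken from our experiments.

```python
import math
import random

def channel_gain(distance_m: float,
                 path_loss_exp: float = 3.0,
                 shadow_sigma_db: float = 8.0,
                 ref_distance_m: float = 1.0) -> float:
    """Sample one composite channel power gain.

    Composite gain = path loss x log-normal shadowing x Rayleigh fading.
    Parameter values are hypothetical, chosen only for illustration.
    """
    # Deterministic path loss relative to a reference distance: (d0/d)^alpha
    path_loss = (ref_distance_m / distance_m) ** path_loss_exp
    # Log-normal shadowing: Gaussian in dB, converted to a linear factor
    shadowing = 10.0 ** (random.gauss(0.0, shadow_sigma_db) / 10.0)
    # Rayleigh fading: |h|^2 is exponentially distributed with unit mean
    fading = random.expovariate(1.0)
    return path_loss * shadowing * fading

# Average received perturbation power falls off sharply with distance,
# which is why the attacker-receiver distance largely dictates whether
# an adversarial perturbation survives the channel.
random.seed(0)
near = sum(channel_gain(10.0) for _ in range(20000)) / 20000   # attacker 10 m away
far = sum(channel_gain(100.0) for _ in range(20000)) / 20000   # attacker 100 m away
```

With a path-loss exponent of 3, moving the attacker from 10 m to 100 m costs roughly 30 dB of average perturbation power, dwarfing the randomness contributed by fading and shadowing.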