Massive MIMO technology, leveraging beamforming and precoding, significantly enhances spectral and energy efficiency when channel state information (CSI) is available at the transmitter. However, the large-scale arrays envisioned for millimeter-wave and terahertz communications drastically increase the downlink CSI training overhead and the uplink feedback overhead, necessitating efficient CSI feedback designs in FDD wireless systems. Much like image compression, after the UE estimates the downlink CSI, the CSI is encoded into a low-dimensional codeword stream, transmitted to the base station, and decoded there to recover the downlink CSI. Recently, deep learning, particularly autoencoders, has been widely studied as an efficient CSI feedback approach, outperforming traditional compressive sensing methods. However, several challenges remain unaddressed in learning-based CSI feedback frameworks.
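The encode/feedback/decode pipeline described above can be sketched in a few lines of numpy. Random linear projections stand in for the learned encoder and decoder here, and all dimensions (32 antennas, 64 subcarriers, a 128-dimensional codeword) are illustrative assumptions rather than the dissertation's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

N_ANT, N_SUB = 32, 64        # antennas x subcarriers (illustrative)
CODEWORD_DIM = 128           # low-dimensional feedback budget

# Complex downlink CSI estimated at the UE, stacked as real/imag parts.
H = rng.standard_normal((N_ANT, N_SUB)) + 1j * rng.standard_normal((N_ANT, N_SUB))
x = np.concatenate([H.real.ravel(), H.imag.ravel()])   # 4096-dim CSI vector

# Stand-ins for a trained autoencoder: a random linear encoder and a
# pseudoinverse "decoder" (a real system would learn both end to end).
W_enc = rng.standard_normal((CODEWORD_DIM, x.size)) / np.sqrt(x.size)
codeword = W_enc @ x                                   # UE -> BS feedback
x_hat = np.linalg.pinv(W_enc) @ codeword               # BS-side recovery

compression_ratio = x.size / CODEWORD_DIM
nmse = np.sum((x - x_hat) ** 2) / np.sum(x ** 2)
print(f"compression ratio = {compression_ratio:.0f}x, NMSE = {nmse:.3f}")
```

With an untrained random projection the recovery error is naturally high; the point of the learned encoder/decoder is to exploit CSI structure that this linear stand-in ignores.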
In this dissertation, we address six critical issues in learning-based CSI feedback frameworks. We explore the exploitation of uplink/downlink reciprocity in FDD systems. The energy, delay, and angles of arrival and departure of the uplink and downlink channels are highly correlated, yet the uplink CSI available at the base station is seldom used to reduce the uncertainty of downlink CSI recovery. We propose a deep learning CSI feedback framework that leverages this reciprocity, with a redesigned loss function for jointly encoding CSI magnitudes and phases. Evaluations show superior performance and better utilization of FDD reciprocity compared with previous works.
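One plausible form of a loss that jointly penalizes magnitude and phase errors is sketched below. The weighting parameter `alpha` and the exact functional form are assumptions for illustration, not the dissertation's redesigned loss:

```python
import numpy as np

def magnitude_phase_loss(H_true, H_pred, alpha=0.5):
    """Joint loss over CSI magnitudes and phases (illustrative form;
    `alpha` is a hypothetical weight between the two terms)."""
    mag_err = np.mean((np.abs(H_true) - np.abs(H_pred)) ** 2)
    # 1 - cos(phase difference) is insensitive to 2*pi wrap-around.
    phase_err = np.mean(1.0 - np.cos(np.angle(H_true) - np.angle(H_pred)))
    return alpha * mag_err + (1.0 - alpha) * phase_err

rng = np.random.default_rng(1)
H = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
loss_perfect = magnitude_phase_loss(H, H)                   # 0 for exact recovery
loss_rotated = magnitude_phase_loss(H, H * np.exp(1j * 0.1))  # small phase offset
print(loss_perfect, loss_rotated)
```

Splitting the penalty this way lets the training signal weight phase fidelity (critical for beamforming direction) separately from magnitude fidelity.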
We also address the reduction of pilot transmission overhead. Given limited pilot resources, it is impractical to transmit pilots for all antenna ports in a massive MIMO system. We introduce a beam-based pilot precoding approach and a deep learning CSI feedback framework that jointly minimize the pilot transmission overhead and the CSI feedback overhead from the UE. Furthermore, we propose scalable CSI encoding. Existing frameworks often encode and decode the full CSI with a single autoencoder, leading to heavy models and poor scalability. Recognizing the low correlation between widely spaced antennas, we propose a scalable deep learning CSI feedback framework that follows a divide-and-conquer principle and encodes the CSI subarray by subarray. This dynamic compression approach achieves a significant model-size reduction while maintaining downlink CSI recovery performance.
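The subarray-by-subarray idea can be illustrated with a small numpy sketch: one compact encoder is shared across all subarrays, and its parameter count is compared against a single full-CSI encoder of equal total feedback size. The dimensions and the linear-encoder stand-in are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

N_ANT, N_SUB, N_GROUPS = 64, 32, 4   # antennas, subcarriers, subarrays
CODE_PER_GROUP = 32                  # codeword dimensions per subarray

# Real-valued stand-in for the CSI matrix (illustrative).
H = rng.standard_normal((N_ANT, N_SUB))

# One small encoder, shared by every subarray (divide and conquer).
sub_dim = (N_ANT // N_GROUPS) * N_SUB          # 16 antennas x 32 subcarriers
W_sub = rng.standard_normal((CODE_PER_GROUP, sub_dim)) / np.sqrt(sub_dim)

# Split the array into contiguous subarrays and encode each independently.
subarrays = H.reshape(N_GROUPS, N_ANT // N_GROUPS, N_SUB)
codewords = [W_sub @ sub.ravel() for sub in subarrays]
feedback = np.concatenate(codewords)           # total feedback: 4 x 32 = 128

# Model-size comparison with a single full-CSI encoder of equal output size.
full_params = (N_GROUPS * CODE_PER_GROUP) * (N_ANT * N_SUB)
shared_params = W_sub.size
print(f"encoder parameters: full={full_params}, "
      f"shared subarray={shared_params} ({full_params // shared_params}x smaller)")
```

Because the shared encoder's input is a factor of `N_GROUPS` smaller and its weights are reused, the parameter count drops quadratically in the number of subarrays for this linear sketch, which is the intuition behind the model-size reduction.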
We further tackle the reduction of training costs. Deep learning models typically require extensive data collection and must be customized for different channel types. Inspired by the simplicity and generality of JPEG in image compression, we propose a JPEG-based CSI feedback approach that requires no prior training, adapts to various channels, and offers recovery performance comparable to learning-based methods. Additionally, we improve performance on frequency-selective channels. Current learning-based models struggle to recover CSI in frequency-selective channels because of sparse pilot placement. We propose an uplink-CSI-assisted CSI upsampling module at the base station that is compatible with most previous explicit CSI feedback frameworks. Finally, to ensure adherence to standardized feedback, we note that current learning-based CSI feedback methods do not strictly follow standardized CSI feedback, which makes industry adoption challenging. We propose a lightweight, efficient precoder upsampler as a plug-in module for the base station, enhancing performance in high delay-spread channels.
For prospective researchers, we suggest two directions for bridging artificial intelligence research and the cellular communications industry. Addressing channel aging in CSI feedback is crucial: the time lag between downlink CSI training and downlink data transmission may render precoders outdated. Future research should focus on learning-based CSI and precoder prediction that adapts to non-linear channel variations. Additionally, in practical FDD systems, the base station lacks exact CSI knowledge from the UE and relies on the fed-back precoders. Researchers should therefore develop precoder-based user scheduling algorithms that avoid interference and maximize system throughput.