Welcome to the Zhejiang Academy of Agricultural Sciences Institutional Knowledge Base!

CDFAN: Cross-Domain Fusion Attention Network for Pansharpening

Document type: Foreign-language journal article

Authors: Ding, Jinting 1; Xu, Honghui 2; Zhou, Shengjun 3

Affiliations: 1.Hangzhou City Univ, Sch Informat & Elect Engn, Hangzhou 310015, Peoples R China

2.Zhejiang Univ Technol, Sch Comp Sci & Technol, Hangzhou 310023, Peoples R China

3.Zhejiang Acad Agr Sci, Hangzhou 310021, Peoples R China

Keywords: remote sensing; pansharpening; discrete wavelet transform; attention; information theory; mutual information

Journal: ENTROPY (impact factor: 2.0; five-year impact factor: 2.2)

ISSN:

Year/Volume/Issue: 2025, Vol. 27, Issue 6

Pages:

Indexed in: SCI

Abstract: Pansharpening provides a computational solution to the resolution limitations of imaging hardware by enhancing the spatial quality of low-resolution multispectral (LRMS) images using high-resolution panchromatic (PAN) guidance. From an information-theoretic perspective, the task involves maximizing the mutual information between PAN and LRMS inputs while minimizing spectral distortion and redundancy in the fused output. However, traditional spatial-domain methods often fail to preserve high-frequency texture details, leading to entropy degradation in the resulting images. Frequency-based approaches, on the other hand, struggle to effectively integrate spatial and spectral cues, often neglecting the underlying distributions of information content across domains. To address these shortcomings, we introduce a novel architecture, termed the Cross-Domain Fusion Attention Network (CDFAN), specifically designed for the pansharpening task. CDFAN is composed of two core modules: the Multi-Domain Interactive Attention (MDIA) module and the Spatial Multi-Scale Enhancement (SMCE) module. The MDIA module utilizes the discrete wavelet transform (DWT) to decompose the PAN image into frequency sub-bands, which are then employed to construct attention mechanisms across both the wavelet and spatial domains. Specifically, wavelet-domain features are used to formulate query vectors, while key features are derived from the spatial domain, allowing attention weights to be computed over multi-domain representations. This design facilitates more effective fusion of spectral and spatial cues, contributing to superior reconstruction of high-resolution multispectral (HRMS) images. Complementing this, the SMCE module integrates multi-scale convolutional pathways to reinforce spatial detail extraction at varying receptive fields. Additionally, an Expert Feature Compensator is introduced to adaptively balance contributions from different scales, thereby optimizing the trade-off between local detail preservation and global contextual understanding. Comprehensive experiments conducted on standard benchmark datasets demonstrate that CDFAN achieves notable improvements over existing state-of-the-art pansharpening methods, delivering enhanced spectral-spatial fidelity and producing images with higher perceptual quality.
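The cross-domain attention idea in the abstract can be illustrated with a minimal sketch: a single-level Haar DWT splits a PAN patch into four sub-bands, whose per-pixel coefficients act as wavelet-domain query tokens, while spatial-domain features from the (already downsampled) LRMS patch supply the keys and values. The tensor shapes, the single attention head, and the random projection matrices below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4  # approximation
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4  # horizontal detail
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4  # vertical detail
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4  # diagonal detail
    return a, h, v, d

def cross_domain_attention(pan, lrms, dim=16, seed=0):
    """Toy single-head attention: queries from PAN wavelet sub-bands,
    keys/values from spatial-domain LRMS features (hypothetical shapes)."""
    rng = np.random.default_rng(seed)
    subbands = np.stack(haar_dwt2(pan), axis=-1)        # (H/2, W/2, 4)
    q_feat = subbands.reshape(-1, 4)                    # wavelet-domain tokens
    k_feat = lrms.reshape(-1, lrms.shape[-1])           # spatial-domain tokens
    Wq = rng.standard_normal((4, dim))                  # random projections stand in
    Wk = rng.standard_normal((k_feat.shape[-1], dim))   # for learned linear layers
    Wv = rng.standard_normal((k_feat.shape[-1], dim))
    q, k, v = q_feat @ Wq, k_feat @ Wk, k_feat @ Wv
    logits = q @ k.T / np.sqrt(dim)                     # cross-domain similarity
    attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)            # softmax over spatial tokens
    return attn @ v                                     # fused, wavelet-guided features

# Toy inputs: an 8x8 PAN patch and a 4x4 LRMS patch with 4 spectral bands,
# so wavelet-domain and spatial-domain token counts match (16 each).
pan = np.arange(64, dtype=float).reshape(8, 8)
lrms = np.ones((4, 4, 4))
out = cross_domain_attention(pan, lrms)
print(out.shape)  # (16, 16): one fused vector per wavelet-domain token
```

In the full network, the random projections would be learned, the attention would run per channel group over DWT sub-bands of deep features rather than raw pixels, and the fused tokens would be folded back into the HRMS reconstruction branch.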
