Abstract
Stereo image Super-Resolution (StereoSR) aims to reconstruct high-resolution images from low-resolution stereo image pairs by fully exploiting the complementary information between the left and right views. Although existing StereoSR methods have achieved good performance, they do not yet fully utilize intra-view and cross-view information, and they suffer from large numbers of network parameters. To address these problems, this paper proposes a lightweight attention-based fusion network, called the Dual-dimensional Interactive Parallax Attention network (DIPAnet). The proposed network introduces an effective Dual-dimensional Interactive Parallax Attention Module (DIPAM), which employs a Spatial Channel Fusion Module (SCFM) to capture complementary information along both the spatial and channel dimensions. Meanwhile, several lightweight Omni-Scale Aggregation Groups (OSAGs) constitute the backbone of the network for extracting intra-view features. Extensive comparison experiments and an ablation study demonstrate that the proposed DIPAnet achieves competitive results and outperforms several state-of-the-art StereoSR methods.