Rain-Guided Network Recovers Details in Single-Image Deraining

Beijing Zhongke Journal Publishing Co. Ltd.

Rain streaks of different shapes, sizes, and directions obscure background scenes in images, causing degradation such as intensity fluctuation, color distortion, or even content alteration. Such degradation impairs the visual quality of an image and hurts the performance of many outdoor computer vision systems that require high-quality inputs. Effective image deraining methods are therefore needed. In this study, we addressed the problem of single-image rain removal.

We propose a novel unrolling rain-guided detail recovery network (URDRN) for single-image deraining. In the proposed URDRN model, an effective rain clue is used as guidance to recover the texture details lost to over-deraining. In addition, to extract rain accurately, a context aggregation attention network (CAAN) is introduced to fully exploit global high-level semantic information, as global information has been shown to help rain extraction. Moreover, the proposed URDRN is unrolled into two sub-networks, which has two benefits: in each sub-network, the data fidelity term of the imaging model is guaranteed and reinforced by the network input, and rain/image priors are implicitly captured from the data by the corresponding sub-network structure. Our contributions are summarized as follows:
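The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: the class names, channel counts, and layer choices are assumptions, and the plain convolutional stacks merely stand in for the paper's CAAN and detail-recovery sub-networks. It shows the data flow only: one sub-network extracts a rain layer from the rainy image, and a second sub-network recovers background detail guided by that rain clue, with the additive imaging model O = B + R enforcing data fidelity.

```python
import torch
import torch.nn as nn


class RainEstimator(nn.Module):
    """Stand-in for the rain-extraction sub-network (the paper's CAAN).

    A real CAAN aggregates global context with attention; this small
    convolutional stack only illustrates the input/output contract:
    rainy image in, estimated rain layer out.
    """

    def __init__(self, ch: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, rainy: torch.Tensor) -> torch.Tensor:
        return self.body(rainy)


class DetailRecovery(nn.Module):
    """Stand-in for the rain-guided detail-recovery sub-network.

    It takes the rainy image together with the rain clue and refines
    the coarse background obtained from the imaging model O = B + R.
    """

    def __init__(self, ch: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, rainy: torch.Tensor, rain: torch.Tensor) -> torch.Tensor:
        coarse = rainy - rain  # data-fidelity term: B = O - R
        # The rain clue guides where detail needs to be restored.
        return coarse + self.body(torch.cat([rainy, rain], dim=1))


def derain(rainy: torch.Tensor,
           rain_net: nn.Module,
           detail_net: nn.Module) -> torch.Tensor:
    rain = rain_net(rainy)           # stage 1: extract the rain layer
    return detail_net(rainy, rain)   # stage 2: rain-guided detail recovery
```

In this sketch, each sub-network's input ties it back to the imaging model (the rainy image and the estimated rain layer), which is the sense in which the unrolled structure reinforces the data fidelity term while the learned layers capture rain/image priors from data.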

• Unlike other deraining approaches that recover lost details through the regularization of a complex loss function in a unified framework, or that simply ignore further background detail recovery, our approach uses a rain clue to guide detail recovery effectively.

• Unlike other deep-learning-based deraining methods that ignore the data fidelity term and priors hidden in images, the proposed model is unrolled into two sub-networks in a unified framework, bridging the gap between data learning and optimization to a certain degree.

• Extensive experiments demonstrate that the proposed model outperforms other state-of-the-art models on both synthetic and real rain images in terms of both subjective visual experience and objective evaluation metrics.

Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and edited for clarity, style, and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).