2025, Vol. 55, No. 01, pp. 150-167
Restoration of Traditional Chinese Paintings Based on Twin Cascade Spatial Filtering
Foundation: National Natural Science Foundation of China (62471390, 62306237); Key Research and Development Program of Shaanxi Province (2024GX-YBXM-149); Graduate Innovation Program of Northwest University (CX2024204, CX2024206)
DOI: 10.16152/j.cnki.xdxbzr.2025-01-013
Author affiliations:

School of Information Science and Technology, Northwest University; School of Arts, Northwest University

Abstract:

As a precious part of the cultural heritage, traditional Chinese paintings have endured the passage of time and the effects of various natural factors, and frequently exhibit cracking, damage, and fading. Although deep learning frameworks have made remarkable progress in natural image inpainting, most of them rely heavily on convolutional weight sharing and translation invariance, and therefore struggle to capture the distinctive spatial characteristics of paintings with complex layouts and abstract structures. To address this problem, a twin cascade spatial filtering (TCSF) prediction method is proposed for the restoration of traditional Chinese paintings. TCSF adopts a hierarchical decoding strategy to parse the hierarchical features of a painting at multiple scales and cascades spatial filtering prediction to obtain restoration kernels, recovering the pixels of missing regions in a coarse-to-fine manner. To accurately recover missing structure and brushstroke information in regions where feature information is scarce, a spatial encoding mechanism is further introduced: coordinate matrices are obtained by spatially encoding the filtered feature maps, and this coordinate encoding is injected into the filter prediction process to provide a spatial reference when missing pixels are recovered, thereby improving the accuracy and visual quality of the restoration results. In the experiments, representative traditional Chinese painting images were selected for training, and a mural dataset and the Places dataset were added to evaluate the generalization ability of the model. Unlike the masks used in existing work, this study extracts damage masks from real damaged painting images to simulate deterioration more realistically. Qualitative and quantitative results show that the proposed method achieves good restoration performance on traditional Chinese paintings and offers useful insights for digital art restoration and cultural heritage preservation.
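The per-pixel filtering prediction and coordinate injection described in the abstract can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the authors' implementation: the module name PredictiveFiltering, the 3x3 kernel size, the channel counts, and the softmax normalization of the predicted kernels are hypothetical choices; only the general idea (predicting a per-pixel filter kernel from features concatenated with CoordConv-style coordinate channels, then applying it to recover pixels) follows the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F


def coord_channels(b, h, w, device):
    # Normalized (x, y) coordinate maps in [-1, 1], shape (b, 2, h, w).
    ys = torch.linspace(-1.0, 1.0, h, device=device)
    xs = torch.linspace(-1.0, 1.0, w, device=device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack([grid_x, grid_y], dim=0)              # (2, h, w)
    return grid.unsqueeze(0).expand(b, -1, -1, -1)           # (b, 2, h, w)


class PredictiveFiltering(nn.Module):
    # Predicts a per-pixel k x k kernel from features plus coordinates, then filters the image.

    def __init__(self, feat_ch=64, k=3):
        super().__init__()
        self.k = k
        # The kernel head sees the feature map concatenated with the 2 coordinate channels.
        self.kernel_head = nn.Conv2d(feat_ch + 2, k * k, kernel_size=3, padding=1)

    def forward(self, image, feat):
        b, c, h, w = image.shape
        coords = coord_channels(b, h, w, image.device)
        # Per-pixel kernels, softmax-normalized over the k*k taps.
        kernels = F.softmax(self.kernel_head(torch.cat([feat, coords], dim=1)), dim=1)  # (b, k*k, h, w)
        # Unfold the image into k x k neighborhoods and apply the predicted kernels.
        patches = F.unfold(image, self.k, padding=self.k // 2)           # (b, c*k*k, h*w)
        patches = patches.view(b, c, self.k * self.k, h, w)
        return (patches * kernels.unsqueeze(1)).sum(dim=2)               # (b, c, h, w)


if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)
    feat = torch.rand(1, 64, 64, 64)
    print(PredictiveFiltering()(img, feat).shape)  # torch.Size([1, 3, 64, 64])

In a cascade, such a module would be applied at several decoder scales, with each level refining the output of the coarser one, which is how the coarse-to-fine recovery described above could be realized.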

Keywords: image inpainting; spatial filtering prediction; traditional Chinese painting restoration; damage masks of cultural relic images

Basic information:

CLC number: K879.4; J212; TP391.41

Citation:

XUE W Z, DONG X Y, HU Q Y, et al. Restoration of traditional Chinese paintings based on twin cascade spatial filtering[J]. Journal of Northwest University (Natural Science Edition), 2025, 55(01): 150-167. DOI: 10.16152/j.cnki.xdxbzr.2025-01-013.
