Impact Factor (2025): 6.9
DOI Prefix: 10.47001/IRJIET
This paper explains multi-focus image fusion methods. Multi-focus image fusion addresses the limited depth of field of optical lenses: camera optics can render only objects near the focal plane sharply, so regions outside that plane appear blurred because their high-frequency information is degraded. As a result, a single photograph is sharp only in particular locations rather than across the whole scene. The primary objective of multi-focus image fusion is to overcome this depth-of-field limitation by blending two or more partially focused images of the same scene into a single, fully focused image.
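The blending idea described above can be sketched as a simple pixel-selection scheme: compute a local sharpness cue (here the absolute discrete Laplacian, one common focus measure) at every pixel of both source images, and keep the pixel from whichever image is sharper at that location. This is a minimal illustrative toy, not any specific published method; the function names `local_laplacian` and `fuse_multifocus` are hypothetical, and the images are plain nested lists of grayscale values.

```python
def local_laplacian(img, x, y):
    """Absolute discrete Laplacian at (x, y): a simple per-pixel sharpness cue.

    Border pixels are handled by clamping neighbour coordinates to the image.
    """
    h, w = len(img), len(img[0])
    c = img[y][x]
    total = 0.0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx = min(max(x + dx, 0), w - 1)
        ny = min(max(y + dy, 0), h - 1)
        total += img[ny][nx] - c
    return abs(total)

def fuse_multifocus(img_a, img_b):
    """Pixel-wise selection fusion: at each pixel, keep the value from the
    source image whose local neighbourhood is sharper (ties go to img_a)."""
    h, w = len(img_a), len(img_a[0])
    return [
        [img_a[y][x]
         if local_laplacian(img_a, x, y) >= local_laplacian(img_b, x, y)
         else img_b[y][x]
         for x in range(w)]
        for y in range(h)
    ]
```

Practical methods replace this per-pixel decision with region- or block-level focus maps and post-process the decision map (e.g., with consistency filtering) to avoid isolated mis-selected pixels, but the select-the-sharper-source principle is the same.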
Country: Iraq
IRJIET, Volume 7, Issue 11, November 2023 pp. 385-399