Nakshatra-Drishti: A Supervised Learning Approach for Low Light Image Enhancement Using Convolutional Neural Networks
Abstract
Images captured under low-light conditions pose significant challenges for downstream analysis because of quality degradation, including noise, loss of scene content, and inaccurate colour and contrast reproduction. In this paper, we propose a supervised convolutional neural network (CNN) model, Nakshatra-Drishti, designed to enhance low-light images, videos, and real-time camera feeds. The model is trained on paired low-light/normal-light datasets and evaluated extensively on standard benchmarks, where it achieves strong results. We also introduce a user-friendly web-based application that improves image perception in poorly illuminated environments, supporting more effective artificial-intelligence analysis and decision-making.
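The abstract describes the approach only at a high level, and the network's internal layers are not specified here. As a minimal, illustrative sketch of the paired-supervision setup it outlines, the following PyTorch snippet maps a low-light RGB image to an enhanced output and optimizes a pixel-wise L1 reconstruction loss against its normal-light counterpart. The layer widths, the loss choice, and the `EnhancerCNN` and `train_step` names are assumptions for illustration, not the Nakshatra-Drishti design.

```python
# Illustrative sketch only: a generic supervised CNN trained on paired
# (low-light, normal-light) images. Architecture and loss are assumptions,
# not the paper's actual model.
import torch
import torch.nn as nn

class EnhancerCNN(nn.Module):
    """Small fully convolutional network mapping a low-light RGB image
    to an enhanced RGB image at the same resolution."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),  # keep outputs in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

def train_step(model, optimizer, low, normal):
    """One supervised step on a batch of (low-light, normal-light) pairs."""
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(low), normal)  # pixel-wise reconstruction loss
    loss.backward()
    optimizer.step()
    return loss.item()

model = EnhancerCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Random tensors stand in for a real paired dataset of low/normal exposures
# of the same scenes.
low = torch.rand(4, 3, 128, 128)
normal = torch.rand(4, 3, 128, 128)
print(train_step(model, optimizer, low, normal))
```

In practice, the random tensors above would be replaced by batches drawn from a paired dataset in which each low-light frame has a well-exposed reference of the same scene, which is what makes the fully supervised loss applicable.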