PEOPLE IDENTIFICATION AND TRACKING IN VIDEO FILES USING MACHINE AND DEEP LEARNING ALGORITHMS


Authors

  • LUBNA THANOON ALKAHLA
  • MAHER KHALAF HUSSEIN
  • ALHAN ANWER YONIS

Keywords:

Viola-Jones method, Feature extraction, PCA, Deep learning, VGG16-CNN.

Abstract

This paper presents a people identification and tracking system that verifies whether given persons appear in a video file. The system proceeds in several stages: the video file is converted into a set of frames, face regions are detected using the Viola-Jones method, and features of these faces are extracted with two techniques: deep feature extraction using the pre-trained VGG16 convolutional neural network (VGG16-CNN), and shallow feature extraction using the Principal Component Analysis (PCA) method. Persons are then classified with a multilayer perceptron neural network. The system was tested on five video files of different durations, each containing a different number of people. Results evaluated through the confusion matrix show that the system achieves high classification accuracy and a good detection rate when the VGG16-CNN algorithm is used for feature extraction. Measured by precision, recall, and accuracy, the system proved robust, achieving a classification accuracy of 93%, a precision of 100%, and a recall of 93%.
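As a minimal illustration (not the authors' code), the precision, recall, and accuracy figures reported above can be derived from a binary confusion matrix as sketched below; the matrix values here are hypothetical and chosen only to show the calculation.

```python
import numpy as np

# Hypothetical binary confusion matrix: rows = actual class, cols = predicted class
# [[true_negatives, false_positives],
#  [false_negatives, true_positives]]
cm = np.array([[7, 0],
               [2, 28]])

tn, fp = cm[0]
fn, tp = cm[1]

precision = tp / (tp + fp)        # 28 / 28 = 1.00
recall    = tp / (tp + fn)        # 28 / 30 ≈ 0.93
accuracy  = (tp + tn) / cm.sum()  # 35 / 37 ≈ 0.95

print(f"precision={precision:.2f} recall={recall:.2f} accuracy={accuracy:.2f}")
```

A perfect precision with a lower recall, as reported in the abstract, corresponds to a matrix with no false positives but some false negatives, as in this sketch.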

Published

2023-12-20

How to Cite

ALKAHLA, L. T., HUSSEIN, M. K., & YONIS, A. A. (2023). PEOPLE IDENTIFICATION AND TRACKING IN VIDEO FILES USING MACHINE AND DEEP LEARNING ALGORITHMS. ASES INTERNATIONAL JOURNAL OF HEALTH AND SPORTS SCIENCES (ISSN: 3023-5723), 1(1), 32–46. Retrieved from https://e-hssci.com/index.php/hssci/article/view/4