Optimizing Gait Energy Image-Based Gait Recognition through Silhouette Preprocessing
- HAPPID RIDWAN ILMI
- 14230010
ABSTRACT
- Name : Happid Ridwan Ilmi
- Student ID (NIM) : 14230010
- Study Program : Computer Science
- Faculty : Information Technology
- Level : Master's (S2)
- Specialization : Artificial Intelligence & Blockchain
- Title : Optimizing Gait Energy Image-Based Gait Recognition through Silhouette Preprocessing
Gait recognition is a non-invasive biometric that identifies individuals by their walking patterns and can be performed at a distance without direct interaction. This study aims to improve the performance of Gait Energy Image (GEI)-based gait recognition using deep learning. The datasets used are CASIA-B, CASIA-C, and NEWGAITDS. The proposed method includes silhouette segmentation with a bounding box, resizing with a fixed aspect ratio, and padding or cropping to keep the focus on the subject. The architecture is a Convolutional Neural Network (CNN) trained with a triplet loss to produce more discriminative embeddings. Evaluation with the Rank-1 metric shows high accuracy: CASIA-B reaches 97.4% (nm), 84.7% (bg), and 57.3% (cl); CASIA-C reaches 68.87% (fb), 87.74% (fq), and 66.98% (fs); and NEWGAITDS, without retraining, reaches 92.0% (nm), 74.7% (bg), and 58.7% (cl). These results show that a static GEI approach remains competitive when properly optimized. The source code is available at https://github.com/Happid/gait-recognition.
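The preprocessing pipeline described in the abstract (bounding-box silhouette cropping, fixed-aspect-ratio resizing with padding, and pixel-wise averaging of aligned silhouettes into a GEI) can be sketched as follows. This is a minimal NumPy illustration, not the thesis code: the function names, the 64×64 output size, and the nearest-neighbour resize are illustrative assumptions (the linked repository presumably uses OpenCV for the actual resize).

```python
import numpy as np

def preprocess_silhouette(mask, out_size=(64, 64)):
    """Crop a binary silhouette to its bounding box, pad the shorter
    side to the target aspect ratio, then downscale to out_size."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:  # empty frame: return a blank silhouette
        return np.zeros(out_size, dtype=np.float32)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    th, tw = out_size
    target_ratio = tw / th
    if w / h < target_ratio:   # too narrow: pad left/right with zeros
        pad = int(round(h * target_ratio)) - w
        crop = np.pad(crop, ((0, 0), (pad // 2, pad - pad // 2)))
    else:                      # too wide: pad top/bottom with zeros
        pad = int(round(w / target_ratio)) - h
        crop = np.pad(crop, ((pad // 2, pad - pad // 2), (0, 0)))
    # Nearest-neighbour resize (dependency-free stand-in for cv2.resize).
    H, W = crop.shape
    rows = np.arange(th) * H // th
    cols = np.arange(tw) * W // tw
    return crop[np.ix_(rows, cols)].astype(np.float32)

def gait_energy_image(masks, out_size=(64, 64)):
    """GEI: pixel-wise mean of aligned silhouettes over a gait cycle."""
    frames = [preprocess_silhouette(m, out_size) for m in masks]
    return np.mean(frames, axis=0)
```

Because every silhouette is cropped and padded the same way before averaging, the resulting GEI keeps the subject centered regardless of where they appear in the frame, which is the point of the bounding-box step.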
KEYWORDS
Gait Energy Image
REFERENCES
[1] J. Kim, B. Kim, and H. Lee, “Temporally Deformable Convolution for Gait Recognition,” IEEE Access, vol. 13, pp. 6475–6486, 2025, doi: 10.1109/ACCESS.2025.3526886.
[2] K. Liu, M. Bouazizi, Z. Xing, and T. Ohtsuki, “A Comparison Study of Person Identification Using IR Array Sensors and LiDAR,” Sensors, vol. 25, no. 1, p. 271, Jan. 2025, doi: 10.3390/s25010271.
[3] D. M. Ahmed and B. S. Mahmood, “Integration of Face and Gait Recognition via Transfer Learning: A Multiscale Biometric Identification Approach,” Trait. du Signal, vol. 40, no. 5, pp. 2179–2190, Oct. 2023, doi: 10.18280/ts.400535.
[4] J. Liu, Y. Ke, T. Zhou, Y. Qiu, and C. Wang, “GaitRGA: Gait Recognition Based on Relation-Aware Global Attention,” Sensors, vol. 25, no. 8, p. 2337, Apr. 2025, doi: 10.3390/s25082337.
[5] S. Qiao, C. Tang, H. Hu, W. Wang, A. Tong, and F. Ren, “Cross-view identification based on gait bioinformation using a dynamic densely connected spatial-temporal feature decoupling network,” Biomed. Signal Process. Control, vol. 104, p. 107494, Jun. 2025, doi: 10.1016/j.bspc.2025.107494.
[6] Z. Zhang, L. Tran, F. Liu, and X. Liu, “On Learning Disentangled Representations for Gait Recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 1, pp. 345–360, Jan. 2022, doi: 10.1109/TPAMI.2020.2998790.
[7] B. Ali, M. Bukhari, M. Maqsood, J. Moon, E. Hwang, and S. Rho, “An end-to-end gait recognition system for covariate conditions using custom kernel CNN,” Heliyon, vol. 10, no. 12, p. e32934, Jun. 2024, doi: 10.1016/j.heliyon.2024.e32934.
[8] K. Hasan et al., “MMF-Gait: A Multi-Model Fusion-Enhanced Gait Recognition Framework Integrating Convolutional and Attention Networks,” Symmetry (Basel)., vol. 17, no. 7, p. 1155, Jul. 2025, doi: 10.3390/sym17071155.
[9] D. R. M. Bastos and J. M. R. S. Tavares, “A scalable gait acquisition and recognition system with angle-enhanced models,” Expert Syst. Appl., vol. 269, p. 126499, Apr. 2025, doi: 10.1016/j.eswa.2025.126499.
[10] B. Yaprak and E. Gedikli, “Different gait combinations based on multi-modal deep CNN architectures,” Multimed. Tools Appl., vol. 83, no. 35, pp. 83403–83425, Mar. 2024, doi: 10.1007/s11042-024-18859-9.
[11] X. Shi, W. Zhao, H. Pei, H. Zhai, and Y. Gao, “Research on Gait Recognition Based on GaitSet and Multimodal Fusion,” IEEE Access, vol. 13, pp. 20017–20024, 2025, doi: 10.1109/ACCESS.2025.3533571.
[12] H. Chao, K. Wang, Y. He, J. Zhang, and J. Feng, “GaitSet: Cross-view Gait Recognition through Utilizing Gait as a Deep Set,” IEEE Trans. Pattern Anal. Mach. Intell., pp. 1–1, 2021, doi: 10.1109/TPAMI.2021.3057879.
[13] B. Yaprak and E. Gedikli, “Enhancing part-based gait recognition via ensemble learning and feature fusion,” Pattern Anal. Appl., vol. 28, no. 2, p. 98, Jun. 2025, doi: 10.1007/s10044-025-01478-x.
[14] I. R. Tinini Alvarez and G. Sahonero-Alvarez, “Cross-View Gait Recognition Based on U-Net,” in 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Jul. 2020, pp. 1–7. doi: 10.1109/IJCNN48605.2020.9207501.
[15] J. Huang, X. Wang, and J. Wang, “Gait recognition algorithm based on feature fusion of GEI dynamic region and gabor wavelets,” J. Inf. Process. Syst., vol. 14, no. 4, pp. 892–903, 2018, doi: 10.3745/JIPS.02.0088.
[16] S. R. Palla, G. Sahu, and P. Parida, “Human gait recognition using firefly template segmentation,” Comput. Methods Biomech. Biomed. Eng. Imaging Vis., vol. 10, no. 5, pp. 565–575, Sep. 2022, doi: 10.1080/21681163.2021.2012829.
[17] M. Asif, M. I. Tiwana, U. S. Khan, M. W. Ahmad, W. S. Qureshi, and J. Iqbal, “Human gait recognition subject to different covariate factors in a multi-view environment,” Results Eng., vol. 15, p. 100556, Sep. 2022, doi: 10.1016/j.rineng.2022.100556.
[18] M. Bilal, H. Jianbiao, H. Mushtaq, M. Asim, G. Ali, and M. ElAffendi, “GaitSTAR: Spatial–Temporal Attention-Based Feature-Reweighting Architecture for Human Gait Recognition,” Mathematics, vol. 12, no. 16, p. 2458, Aug. 2024, doi: 10.3390/math12162458.
[19] H. T. T. Vu et al., “A Review of Gait Phase Detection Algorithms for Lower Limb Prostheses,” Sensors, vol. 20, no. 14, p. 3972, Jul. 2020, doi: 10.3390/s20143972.
[20] J. N. Mogan, C. P. Lee, and K. M. Lim, “Ensemble CNN-ViT Using Feature-Level Fusion for Gait Recognition,” IEEE Access, vol. 12, pp. 108573–108583, 2024, doi: 10.1109/ACCESS.2024.3439602.
[21] X. Zhao, W. Zhang, T. Zhang, and Z. Zhang, “Cross-View Gait Recognition Based on Dual-Stream Network,” J. Adv. Comput. Intell. Intell. Informatics, vol. 25, no. 5, pp. 671–678, Sep. 2021, doi: 10.20965/jaciii.2021.p0671.
[22] C. Li, B. Wang, Y. Li, and B. Liu, “A Lightweight Pathological Gait Recognition Approach Based on a New Gait Template in Side-View and Improved Attention Mechanism,” Sensors, vol. 24, no. 17, p. 5574, Aug. 2024, doi: 10.3390/s24175574.
[23] E. T. Burges, Z. A. Oraibi, and A. Wali, “Gait Recognition Using Hybrid LSTM-CNN Deep Neural Networks,” J. Image Graph., vol. 12, no. 2, pp. 168–175, 2024, doi: 10.18178/joig.12.2.168-175.
[24] A. Khan et al., “Human Gait Recognition Using Deep Learning and Improved Ant Colony Optimization,” Comput. Mater. Contin., vol. 70, no. 2, pp. 2113–2130, 2022, doi: 10.32604/cmc.2022.018270.
[25] A. Sepas-Moghaddam and A. Etemad, “Deep Gait Recognition: A Survey,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 45, no. 1, pp. 264–284, Jan. 2023, doi: 10.1109/TPAMI.2022.3151865.
[26] Ç. Berke Erdaş, E. Sümer, and S. Kibaroğlu, “CNN-based severity prediction of neurodegenerative diseases using gait data,” Digit. Heal., vol. 8, p. 205520762210751, Jan. 2022, doi: 10.1177/20552076221075147.
[27] A. Parashar, A. Parashar, W. Ding, M. Shabaz, and I. Rida, “Data preprocessing and feature selection techniques in gait recognition: A comparative study of machine learning and deep learning approaches,” Pattern Recognit. Lett., vol. 172, pp. 65–73, Aug. 2023, doi: 10.1016/j.patrec.2023.05.021.
[28] T. Wei et al., “Metric-Based Key Frame Extraction for Gait Recognition,” Electronics, vol. 11, no. 24, p. 4177, Dec. 2022, doi: 10.3390/electronics11244177.
[29] “Video Capture Class Reference,” Open Source Computer Vision. Accessed: Jul. 26, 2025. [Online]. Available: https://docs.opencv.org/3.4/d8/dfe/classcv_1_1VideoCapture.html.
[30] K. Yoshino, K. Nakashima, J. Ahn, Y. Iwashita, and R. Kurazume, “RGB-Based Gait Recognition With Disentangled Gait Feature Swapping,” IEEE Access, vol. 12, pp. 115515–115531, 2024, doi: 10.1109/ACCESS.2024.3445415.
[31] “Models,” Rembg. Accessed: Jul. 26, 2025. [Online]. Available: https://github.com/danielgatis/rembg.
[32] C. Xu, S. Liu, Z. Yang, Y. Huang, and K.-K. Wong, “Learning Rate Optimization for Federated Learning Exploiting Over-the-Air Computation,” IEEE J. Sel. Areas Commun., vol. 39, no. 12, pp. 3742–3756, Dec. 2021, doi: 10.1109/JSAC.2021.3118402.
[33] I. Salehin and D.-K. Kang, “A Review on Dropout Regularization Approaches for Deep Neural Networks within the Scholarly Domain,” Electronics, vol. 12, no. 14, p. 3106, Jul. 2023, doi: 10.3390/electronics12143106.
[34] E. Hassan, M. Y. Shams, N. A. Hikal, and S. Elmougy, “The effect of choosing optimizer algorithms to improve computer vision tasks: a comparative study,” Multimed. Tools Appl., vol. 82, no. 11, pp. 16591–16633, May 2023, doi: 10.1007/s11042-022-13820-0.
[35] N. Das and S. Das, “Epoch and accuracy based empirical study for cardiac MRI segmentation using deep learning technique,” PeerJ, vol. 11, p. e14939, Mar. 2023, doi: 10.7717/peerj.14939.
[36] M. S. S. Sayeed, P. P. Min, and M. A. Bari, “Deep Learning Based Gait Recognition Using Convolutional Neural Network in the COVID-19 Pandemic,” Emerg. Sci. J., vol. 6, no. 5, pp. 1086–1099, Aug. 2022, doi: 10.28991/ESJ-2022-06-05-012.
[37] C. Fan et al., “GaitPart: Temporal Part-Based Model for Gait Recognition,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Jun. 2020, pp. 14213–14221. doi: 10.1109/CVPR42600.2020.01423.
[38] A. Anwar, “A Thesis Titled Deep Hypersphere Embedding with Laplacian Smooth Stochastic Gradient for Face Recognition,” Jul. 2024, doi: 10.13140/RG.2.2.11808.39682.
[39] J. Yin, A. Wu, and W.-S. Zheng, “Fine-Grained Person Re-identification,” Int. J. Comput. Vis., vol. 128, no. 6, pp. 1654–1672, Jun. 2020, doi: 10.1007/s11263-019-01259-0.
[40] Y. Makihara, M. S. Nixon, and Y. Yagi, “Gait Recognition: Databases, Representations, and Applications,” in Computer Vision, Cham: Springer International Publishing, 2021, pp. 1–13. doi: 10.1007/978-3-030-03243-2_883-1.
[41] K. Xu, X. Jiang, and T. Sun, “Gait Recognition Based on Local Graphical Skeleton Descriptor With Pairwise Similarity Network,” IEEE Trans. Multimed., vol. 24, pp. 3265–3275, 2022, doi: 10.1109/TMM.2021.3095809.
[42] H. Ye, T. Sun, and K. Xu, “Gait Recognition Based on Gait Optical Flow Network with Inherent Feature Pyramid,” Appl. Sci., vol. 13, no. 19, p. 10975, Oct. 2023, doi: 10.3390/app131910975.
[43] “Operations on arrays,” Open Source Computer Vision. Accessed: Jul. 26, 2025. [Online]. Available: https://docs.opencv.org/4.x/d2/de8/group__core__array.html.
[44] N. Chowdhury, P. P. Choudhury, and S. R. Moon, “Pneumonia stage analyzes through image processing,” Indones. J. Electr. Eng. Comput. Sci., vol. 36, no. 3, p. 1778, Dec. 2024, doi: 10.11591/ijeecs.v36.i3.pp1778-1786.
Details
This thesis was written by:
- Name : HAPPID RIDWAN ILMI
- NIM : 14230010
- Program : Computer Science
- Campus : Margonda
- Year : 2025
- Period : I
- Supervisor : Dr. Agus Subekti, M.T
- Assistant :
- Code : 0007.S2.IK.TESIS.I.2025
- Entered by : SGM
- Last updated : 08 December 2025
- Viewed : 87 times
ABOUT THE LIBRARY
The Universitas Nusa Mandiri E-Library is a digital platform providing access to information within the Universitas Nusa Mandiri campus environment, such as book collections, journals, e-books, and more.
INFORMATION
Address : Jln. Jatiwaringin Raya No.02 RT08 RW 013 Kelurahan Cipinang Melayu Kecamatan Makassar, East Jakarta
Email : perpustakaan@nusamandiri.ac.id
Operating Hours
Monday - Friday : 08.00 to 20.00 WIB
Lunch Break : 12.00 to 13.00 WIB
Evening Break : 18.00 to 19.00 WIB
Universitas Nusa Mandiri Library © 2020