Security of Deep Learning-Based Face Recognition: A Systematic Review of Adversarial Attacks and Defense Strategies (Systematic Literature Review)
Author(s)
(1) Fahmy Syahputra (Universitas Negeri Medan), Indonesia
(2) Elsa Sabrina (Universitas Negeri Medan), Indonesia
(3)* Andika Sitorus (Universitas Negeri Medan), Indonesia
(4) Khodijah May Nuri Lubis (Universitas Negeri Medan), Indonesia
(5) Frans Jhonatan Saragi (Universitas Negeri Medan), Indonesia
(6) Suci Nurrahma (Universitas Negeri Medan), Indonesia
(7) Novi Novanni Sinaga (Universitas Negeri Medan), Indonesia
(*) Corresponding Author
Abstract

Deep learning–based face recognition is widely adopted due to its strong performance, yet its susceptibility to attacks—particularly adversarial attacks—poses critical risks to the security and reliability of biometric systems. This study presents a Systematic Literature Review (SLR) to synthesize evidence on performance, vulnerabilities, and defense strategies in deep learning–based face recognition. The review follows PRISMA guidelines, including literature retrieval from reputable scholarly sources, deduplication, title/abstract screening, and full-text eligibility assessment based on predefined inclusion and exclusion criteria. Study quality is examined through critical appraisal, and findings are synthesized using thematic analysis, yielding four major themes: (1) model performance and factors influencing accuracy, (2) attack types and their impact on recognition outcomes, (3) defense mechanisms and their effectiveness, and (4) real-world deployment constraints (e.g., illumination, pose, image quality, and identity scale). The synthesis indicates that high accuracy does not necessarily imply high robustness; several defenses (e.g., adversarial training, attack detection, and robust learning) can improve resilience but may introduce trade-offs in computational cost and/or accuracy. This review provides a comparative synthesis and a conceptual model linking accuracy–attacks–defenses, and offers practical recommendations for model selection and security evaluation design. Limitations include heterogeneity in datasets and experimental protocols, inconsistent reporting metrics, and potential publication bias.
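To make the attack class discussed above concrete, the sketch below illustrates the Fast Gradient Sign Method (FGSM), one of the canonical adversarial attacks covered in this literature. It uses a toy linear classifier with synthetic data, not any model or dataset from the reviewed studies; all variable names and values are illustrative only.

```python
import numpy as np

# FGSM perturbs an input in the direction of the sign of the loss
# gradient, x_adv = x + eps * sign(dL/dx), bounded in L_inf norm by eps.
# Toy linear "match score" model standing in for a face-recognition head.

rng = np.random.default_rng(0)
w = rng.normal(size=64)          # toy weight vector
b = 0.1
x = rng.normal(size=64)          # toy input feature vector
y = 1.0                          # true label (genuine identity)

def loss(v):
    # Logistic loss for label y: L = log(1 + exp(-y * score))
    score = w @ v + b
    return np.log1p(np.exp(-y * score))

# Closed-form gradient of the logistic loss w.r.t. the input x
score = w @ x + b
grad_x = -y * w / (1.0 + np.exp(y * score))

eps = 0.1
x_adv = x + eps * np.sign(grad_x)   # FGSM step: maximize loss within the eps-ball

print(f"clean loss:       {loss(x):.4f}")
print(f"adversarial loss: {loss(x_adv):.4f}")
```

For this linear model the adversarial loss is strictly larger whenever eps > 0; adversarial training, one of the defenses surveyed, augments training batches with exactly such perturbed inputs.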
Keywords
adversarial attacks; deep learning; face recognition; robustness; systematic literature review.
Copyright (c) 2025 Andika Sitorus

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.













