Months ago — perhaps even more than a year ago — I applied to open an account with an online bank. After completing several stages of the application process, I was asked to upload identification documents and take a live selfie showing both my face and my ID card.
What appeared to be a simple step turned into a surprisingly exhausting process.
I tried repeatedly to capture an acceptable image. After many unsuccessful attempts, I finally managed to take a selfie in which both my face and the information on my identification document were visible. I even remember thinking that I looked relatively calm and presentable in that final photograph. By that point, however, I had already spent hours struggling with the application process and was mentally exhausted.
At one point I found myself thinking:
“I wish there were simply a physical bank branch where I could go and open an account in person.”
After all, online identity verification systems are supposed to make life easier. In my case, the process felt anything but easy.
Eventually, despite all the effort, my application was rejected.
What surprised me even more was that no meaningful explanation was provided. I tried repeatedly to learn the reason for the rejection, but every avenue seemed closed. I was informed that I could file a complaint with an ombudsman if I wished. After briefly researching the process, I realized that pursuing the matter would require significant additional time and effort, and I eventually decided to let it go.
Still, as a forensic document examiner, understanding why my application was rejected would have been professionally fascinating.
Naturally, I began considering possible explanations.
I do not believe the issue was related to financial history or creditworthiness. I have never had unpaid debts or unresolved credit obligations with any bank or individual.
The timing of the rejection — immediately after the facial and ID verification stage — led me to suspect that the problem originated there.
Several possibilities came to mind:
- The system may have concluded that the photograph on my identification document did not sufficiently match my live selfie.
- Alternatively, the authenticity of the identification document itself may have been questioned.
- It is also unclear whether the comparison was conducted by an artificial intelligence system, an automated biometric verification tool, or a human reviewer.
I still do not know the answer.
What I do know is that my application was rejected without a transparent explanation, and I did not have the time or energy to pursue a formal challenge.
This experience also raised broader questions for me:
How Reliable Are Online Identity Verification Systems?
Digital identity verification systems are increasingly used by banks, financial technology companies, immigration authorities, and online platforms. They are often presented as efficient, objective, and secure.
But how reliable are they in practice?
Independent and impartial research on this subject is increasingly necessary.
False rejections may be more common than publicly acknowledged. Human appearance changes over time. Lighting conditions, camera quality, facial expressions, stress, age, ethnicity, and even fatigue may influence verification outcomes.
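To see why such factors can tip a verification from pass to fail, consider a simplified sketch. Many automated systems reduce face matching to comparing embedding vectors against a fixed similarity threshold; everything below (the embeddings, the threshold value, the function names) is invented for illustration, not taken from any real vendor's system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors (illustrative, pure Python)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical cut-off chosen by the system operator.
THRESHOLD = 0.95

def verify(id_embedding, selfie_embedding, threshold=THRESHOLD):
    """Return (accepted, score). A small shift in score flips the outcome."""
    score = cosine_similarity(id_embedding, selfie_embedding)
    return score >= threshold, score

# Same person in both selfies, but poor lighting perturbs the second embedding.
id_photo    = [0.60, 0.80, 0.00]
good_selfie = [0.58, 0.81, 0.05]
dim_selfie  = [0.40, 0.85, 0.30]

print(verify(id_photo, good_selfie))  # accepted: score well above threshold
print(verify(id_photo, dim_selfie))   # rejected: same person, same threshold
```

The point of the sketch is not the specific numbers but the mechanism: the decision is a hard threshold on a continuous score, so ordinary variation in lighting, camera quality, or expression can move a genuine match across the line into a false rejection, with nothing in the output explaining why.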
I have also encountered public discussions and social media posts alleging problems such as racial profiling in facial verification systems — the idea that individuals may be judged differently based on appearance or perceived ethnic background.
Personally, I do not believe ethnic bias alone necessarily determines such outcomes. However, I do believe that when systems or reviewers are uncertain, unconscious bias and appearance-based assumptions may influence decisions.
Whether these systems are fully automated, AI-assisted, or partially dependent on human review, the lack of transparency creates a serious problem:
Individuals may be rejected by critical financial systems without ever understanding why.
A Need for Transparency
As societies increasingly move toward digital identity verification, transparency and accountability become essential.
People should not be left guessing why they were rejected.
And systems designed to improve efficiency should not unintentionally create new forms of exclusion, opacity, or unfairness.
My experience may ultimately have been a minor inconvenience. But it also illustrates a broader issue that deserves careful examination, both technologically and ethically.
Eyüp Aydoğdu
Forensic Document Examiner, Netherlands