
- A man was misidentified by a biometric surveillance system
- Despite all the documentation he offered, security officers stubbornly trusted the biometric system
- This case is only one among thousands, demonstrating how biometric surveillance technology becomes an instrument of torture and oppression
In 2023, a driver named Jason Killinger was wrongfully arrested at a casino in Reno, Nevada, after an AI facial recognition system misidentified him as a person who had previously been banned and entered into a watchlist database.
The incident began when the casino's security team, using Matchless Technologies' artificial intelligence facial recognition system (now UPS), flagged Killinger as a threat.
The system mistakenly matched his identity to that of "M.E.," a person designated as "prohibited" because of earlier legal issues.
To the technology, you will always be a potential criminal
Although Killinger presented a valid Nevada ID and vehicle registration, security personnel stubbornly relied on the AI system's assertion of a 100% accurate identification.
They falsely claimed that all of his documentation, despite being consistent, was insufficient evidence of his identity.
Worse, the responding police officer argued that Killinger had no evidence to support his identity and accused him of using fraudulent documents.
This underscores, once again, that "mechanical" stubbornness is often stronger in the people who rely on the technology than in the wonderful technologies deployed in the name of "security."
Killinger's lawsuit alleges that the officer fabricated evidence and skipped the fingerprint check that would have fully exonerated him, a decision that violated his right to due process.
The suit accuses the officer of civil rights violations, malicious prosecution, and filing a false report.
Biometric identification systems are not safe
The case highlights critical problems with biometric systems, which function primarily as tools of surveillance rather than as safe, reliable means of identification.
Even when it is accurate, facial recognition technology can be used to track people without their consent or full knowledge.
Moreover, these systems lack transparency and centralize control over identification in government or corporate hands, creating a de facto identity system that violates privacy.
Experts argue that this imbalance of power is compounded by the inherent, systemic failures of biometric systems: studies have found higher false positive rates for certain demographic groups as a result of algorithmic bias.
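To see why no deployed matcher can honestly promise "100% accuracy," consider a minimal back-of-the-envelope sketch. The visitor volume and false positive rate below are hypothetical placeholders, not figures from the Killinger case, Matchless Technologies, or any particular study; the point is only that even a tiny per-scan error rate produces a steady stream of innocent people flagged as threats.

```python
# Base-rate sketch: expected false matches from routine face scanning.
# All numbers are hypothetical illustrations, not measured values.

def expected_false_matches(visitors: int, false_positive_rate: float) -> float:
    """Expected number of innocent visitors falsely flagged."""
    return visitors * false_positive_rate

daily_visitors = 10_000   # people scanned per day at a busy venue (assumed)
fpr = 0.001               # 0.1% false positive rate per scan (assumed)

flagged = expected_false_matches(daily_visitors, fpr)
print(f"~{flagged:.0f} innocent visitors falsely flagged per day")
# -> ~10 innocent visitors falsely flagged per day, every day
```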
Even when the algorithms are accurate, their use can amplify state control and contribute to psychological harm, much as criminal justice systems disproportionately target certain groups of people.
Biometrics Foster Fear and Oppression
The case also highlights the broader social impact of biometric surveillance: it fosters fear and erodes individual autonomy.
As one expert points out, the relentless monitoring these technologies enable psychologically wears down and torments anyone who knows they are being tracked or has been misidentified.
This creates a cycle of anxiety, distrust, and a sense of "false guilt," the very environment that these oppressive and intrusive "technologies" generate.
In light of this, critics argue that such systems should be banned altogether. While some people resort to stopgap measures, such as avoiding heavily surveilled public spaces to limit their exposure, others call for comprehensive regulation to force genuine transparency onto these systems.
The case serves as a stark reminder of the dangers of relying on technology that lacks adequate verification and transparency.
Ultimately, Jason Killinger's story is a cautionary tale about the vast gap between the deployment of these technologies and the protection of human rights.
It highlights how advances in biometric systems can have profound social impacts, leaving individuals vulnerable to violence and oppression at the hands of those who neither respect nor trust them.
This is why many argue that such technologies should be banned altogether, at every level, and that any still in use must be subject to continuous review and regulation.
