
- An AI surveillance system misidentified a snack bag a teenager was carrying as a firearm.
- Biometric monitoring and identification create an environment of psychological torture for everyone subjected to these systems.
- The push for biometric monitoring of traits and behaviors goes beyond safety; it is also applied in examinations and on digital platforms.
- Organizations call for greater regulation; in reality, what is needed is the complete rejection of biometrics.
Taki Allen, an innocent 16-year-old boy, was deeply traumatized by an incident involving an AI-powered security system.
The incident took place outside Kenwood High School in Baltimore County, Maryland, on October 20, 2025.
Taki, who was carrying a bag of Doritos as he left school, was forcibly detained by police after the artificial intelligence system mistakenly identified him as carrying a firearm.
School and Developer Responses
This situation raises important concerns about the reliability of AI monitoring systems, particularly in relation to false positives and potential bias.
The incident drew immediate attention from both the school administration and the developers of the AI system.
School officials acknowledged the issue and issued apologies, while Omnilert, the company behind the AI, also released a statement expressing remorse.
The incident underlines how these systems produce confrontations that can escalate into traumatic situations and cause lasting psychological damage.
It also highlights how these technologies, wherever they are deployed, produce fear and trauma even for individuals not directly involved in an incident.
Psychological Trauma
The incident left Taki deeply traumatized, making it difficult for him to return home or go about everyday activities for fear of being flagged again. At the time of the incident, he feared he would be killed.
The situation is a stark reminder that misidentifications by these "wonderful" technologies serve only to oppress and to inflict deep trauma.
Taki's experience mirrors that of many others who have been misidentified in similar contexts.
Moreover, the consequences can go far beyond emotional distress: they can include physical harm, legal repercussions, or worse, when the authorities respond inappropriately, as they often do.
Another telling case of police obstinacy, legitimized by biometric systems and by legislation permitting the use of such digital systems, was the wrongful and unjust arrest of a driver named Jason Killinger at a casino in Reno, Nevada.
The media and bureaucrats frame these failures as the product of racial "prejudices" embedded in biometric systems. Yet that argument is demonstrably fallacious, given the great diversity among the large number of victims of these digital "tools."
The role of AI surveillance
Despite the seriousness of these incidents, "society" shows little concern about the biometric systems deployed in public settings.
The incident also raises broader questions about how biometric systems are used to monitor public spaces.
Another good example of these intrusive tools is proctoring systems, supposedly created to prevent students from committing exam fraud.
Many unscrupulous and authoritarian universities have introduced these biometric tools in recent years, violating privacy and abusing innocent, honest students by subjecting them, without justification, to constant surveillance and supervision.
But since this is "the great marvelous world of technology," students who refused to accept such violations were cunningly and perversely branded as problematic.
Very similar is the increasingly common imposition of requiring biometric data in order to use digital platforms.
The stated justification is security, but this is dishonest: biometric data is supposedly collected to prevent phishing and identity theft, yet it can be shown to be extremely easy to fake.
This leaves people in a state of extreme vulnerability: as long as biometric data is accepted as authentic, a single analysis error by an AI is enough for the system to react with brutal violence against innocent people, not to mention the torture and stress caused by surveillance and the loss of privacy.
Monotonous regulations
Experts and organizations call for greater responsibility in the deployment of these technologies.
They claim that the lack of regulation leads to widespread misuse or even to the installation of cameras with functions that expand beyond mere security measures and are perceived as threats to privacy.
They argue that while AI can play a crucial role, it should not be used indiscriminately without careful consideration.
And they underline the need for stronger monitoring mechanisms to ensure that these technologies are implemented responsibly and effectively in public spaces, where they can significantly affect people's lives.
Reality
The incident serves as a clear reminder of how even the most advanced systems can fail to adequately address real-world situations.
The media show these incidents as isolated cases. Institutions and officials feign dismay and apologize.
Organizations call for more regulation, when ironically it is the very same bureaucracy that is invested in these invasive technologies.
Corrupt institutions adopt them under the excuse of "speeding up" processes, including surveillance.
The only thing that can be done to prevent this tortuous situation from becoming routine and part of a "new normal" is to completely reject all kinds of biometric tools and digital surveillance.
