A high school student in Baltimore County was handcuffed and searched after his school's artificial intelligence security system falsely flagged his empty Doritos bag as a firearm.
According to CNN, the encounter occurred Monday (October 20) evening at Kenwood High School, where 17-year-old Taki Allen, who is Black, was waiting for a ride home after football practice. Allen told CNN affiliate WBAL that officers arrived in force after the school’s AI-powered gun detection system identified a “suspicious object.” Within moments, he said, multiple police cars surrounded him, and officers ordered him to kneel.
“They made me get on my knees, put my hands behind my back, and cuffed me,” Allen said. “They searched me, and they figured out I had nothing. Then, they went over to where I was standing and found a bag of chips on the floor.”
Allen said officers pointed their weapons in his direction. “The first thing I was wondering was, was I about to die? Because they had a gun pointed at me,” he recalled. “I was just holding a Doritos bag — they said it looked like a gun.”
Baltimore County police later confirmed that officers responded to “a report of a suspicious person with a weapon” but quickly determined that the student was unarmed. School administrators said the district’s security office had already canceled the AI alert after confirming there was no weapon, but the message had not reached police before they arrived.
In a statement to parents, Kenwood Principal Kate Smith said the school “understands how upsetting this was for the individual that was searched as well as the other students who witnessed the incident.” She added that ensuring student safety remains the district’s top priority. Baltimore County Superintendent Myriam Rogers called the incident “truly unfortunate” and pledged to review both the security system and its protocols.
The AI system involved, created by Omnilert, is used throughout Baltimore County schools to analyze video from existing surveillance cameras. The company said it regrets the incident but defended its technology, stating that the process “functioned as intended” by alerting humans to a potential threat for rapid review.
The incident underscores growing national debate over artificial intelligence in education. Many schools have turned to AI tools to help keep students safe, but critics say the technology can make serious mistakes and reflect racial bias, putting the very students it is supposed to protect at risk.