It's weird how object detection models are "AI" now. These models and their characteristic errors have been around for quite a while. The issue is vendors claiming there's no chance of errors. Ideally you'd have a second-pair-of-eyes system: accept that the AI has a tolerable false positive rate and have a human review its alerts. But of course, you can't fire people with AI that way, so why would we do the sane thing.
And of course there are policy wonks who would make gun ownership a human rights issue even though it's fundamentally unsafe to have such unrestricted gun ownership.
AI has also been around for quite a while. LLMs are hardly the first instance of AI we've seen, just the one that's suddenly getting all the hype. But yes, people trust it too much.
According to the article, they did have a human verify the images before sending the alert. Apparently they and the school still think they made the right call.