MIT-based research analyzed Amazon Ring footage using AI models such as GPT-4 to assess their police-intervention recommendations. The study revealed inconsistencies, including high false-positive rates and bias against minority neighborhoods, which could lead to unnecessary police calls in non-criminal situations.

https://www.notebookcheck.net/AI-systems-like-GPT-4-and-Gemini-misinterpreting-Ring-Camera-footage-could-lead-to-false-police-calls-especially-in-minority-neighborhoods.891320.0.html
"We've tried several LLMs that were not trained to identify crime and found out they can't identify crime" ... Well, okay? I guess.
AI analysis of video footage should be illegal. This is completely and utterly dystopian.