
AI systems like GPT-4 and Gemini misinterpreting Ring Camera footage could lead to false police calls, especially in minority neighborhoods

Susan Sarandon
2024-09-20 21:15:30


As more homeowners turn to smart security solutions like Amazon’s Ring cameras (currently $149.99 on Amazon), AI will play a bigger role in keeping homes safe. But a new study is raising concerns about whether these future AI systems might be too quick to call the cops, even when nothing criminal is happening.

Researchers from MIT and Penn State analyzed 928 publicly available Ring surveillance videos to see how AI models like GPT-4, Claude, and Gemini decide whether to contact law enforcement. The results showed that while 39% of the videos contained actual criminal activity, the models often failed to recognize it: in most cases they either stated that no crime had occurred or gave ambiguous responses. Despite this, they still recommended police intervention in some situations.

One of the study’s key findings was that the AI models reacted differently depending on the neighborhood. Even though the models weren’t given explicit details about the areas, they were more likely to suggest calling the police in majority-minority neighborhoods. In these areas, Gemini recommended police action in nearly 65% of cases where crimes occurred, compared to just over 51% in predominantly white neighborhoods. Additionally, the study noted that 11.9% of GPT-4’s police recommendations came on videos where no criminal activity had been annotated, raising questions about false positives.
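To make the reported statistics concrete, here is a minimal sketch of how metrics like these could be computed from annotated results. The record layout, field names, and toy data below are hypothetical illustrations, not the study’s actual code or dataset:

```python
from dataclasses import dataclass

@dataclass
class VideoResult:
    """One annotated Ring video plus a model's verdict (hypothetical schema)."""
    neighborhood: str        # "majority-minority" or "majority-white"
    crime_annotated: bool    # did human annotators mark a crime in the footage?
    recommended_police: bool # did the model suggest calling the police?

def recommendation_rate(results, neighborhood):
    """Share of crime-annotated videos in a neighborhood where police were recommended."""
    pool = [r for r in results
            if r.neighborhood == neighborhood and r.crime_annotated]
    if not pool:
        return 0.0
    return sum(r.recommended_police for r in pool) / len(pool)

def false_positive_share(results):
    """Share of all police recommendations made on videos with no annotated crime."""
    recs = [r for r in results if r.recommended_police]
    if not recs:
        return 0.0
    return sum(not r.crime_annotated for r in recs) / len(recs)

# Toy data, invented purely for illustration.
sample = [
    VideoResult("majority-minority", True, True),
    VideoResult("majority-minority", True, False),
    VideoResult("majority-white", True, True),
    VideoResult("majority-white", True, False),
    VideoResult("majority-white", False, True),
]

print(recommendation_rate(sample, "majority-minority"))  # 0.5
print(false_positive_share(sample))                      # 1/3 of recommendations
```

The 65% vs. 51% comparison in the study corresponds to `recommendation_rate` evaluated per neighborhood type, and the 11.9% figure for GPT-4 corresponds to `false_positive_share`.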


Interestingly, Amazon has also been exploring AI-driven features for its Ring systems, including advanced tools like facial recognition, emotional analysis, and behavior detection, as suggested by recent patents. In the future, AI could play a much bigger role in identifying suspicious activities or people, further expanding what our home security systems can do.

For homeowners using Ring cameras, there is no immediate cause for worry. As of now, Ring cameras have limited AI capabilities (mostly motion detection) and do not independently make such decisions. The advanced AI models used in the study, like GPT-4 and Claude, were applied externally to analyze Ring footage, not integrated into the cameras themselves. The gist of the research is that while future AI features could monitor your home far more closely, they may also be prone to errors, and those errors will have to be eliminated before such features become mainstream in upcoming Ring cameras.

Check out another study that covers AI’s bias against African American English dialects here.

