AI For Gun Regulation: Not the Best Solution?
Joyce Li | July 7, 2022

Though school districts are pouring money into AI-based gun detection products in light of recent shootings, widespread implementation of these products may not be beneficial for society overall.

Image 1: Gun-free sign outside of a school.

In the first six months of 2022, there were at least 198 mass shootings in the United States. Especially in light of the recent shootings at the Highland Park parade and at Robb Elementary School in Uvalde, many groups have begun lobbying for increased gun control regulations in response to rising gun violence throughout the country. In the meantime, schools and government agencies have turned their attention to AI products that can aid in gun detection. Existing technologies range from AI camera-feed scanners to 3D imaging and surveillance products. These products aim to detect guns quickly, before a shooting happens; the idea is that if people and law enforcement are notified that there is a gun in the area, bystanders can get to safety before any harm is done. However, given the technology's technical limitations and high prices, it is difficult to justify AI gun detection as the best solution to reducing gun violence in the US. Despite the beneficial intent behind these systems, there is much skepticism about whether they are ready for deployment.

Product Limitations

Many gun detection products are already on the market and actively used by schools and government agencies. However, these products have several limitations in their current use cases. For example, ZeroEyes scans school camera feeds using ML to identify up to 300 types of firearms. Because state-of-the-art algorithms cannot yet achieve 100% accuracy, the company employs military veterans around the clock to verify that flagged footage actually contains a gun. However well the technology may work, a major issue with the system is that the algorithm is only trained to detect guns in plain sight. This calls into question how useful the technology really is, given that shooters planning a large-scale attack do not typically carry their guns in plain sight beforehand.

Image 2: ZeroEyes gun detection example.
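
ZeroEyes has not published its implementation details, but the workflow described above, in which a model flags frames and a person verifies each one before any alert goes out, can be sketched roughly as follows. This is a minimal illustration; the confidence threshold, function names, and queue-based design are assumptions for the sketch, not ZeroEyes' actual system.

```python
# Minimal sketch of a human-in-the-loop gun-detection pipeline, modeled
# on the workflow described above. The threshold and structure here are
# illustrative assumptions; the real system is proprietary.

from dataclasses import dataclass
from queue import Queue


@dataclass
class Detection:
    frame_id: int
    label: str         # e.g. "pistol" or "rifle"
    confidence: float  # model score between 0 and 1


CONFIDENCE_THRESHOLD = 0.6  # hypothetical operating point
review_queue = Queue()      # flagged detections awaiting human review


def scan_frame(detections: list) -> None:
    """Queue likely firearm detections for human review instead of
    alerting automatically; the model alone is not trusted."""
    for det in detections:
        if det.confidence >= CONFIDENCE_THRESHOLD:
            review_queue.put(det)


def human_review_loop() -> None:
    """A trained reviewer confirms or rejects each flagged frame;
    only confirmed detections trigger an alert."""
    while not review_queue.empty():
        det = review_queue.get()
        if reviewer_confirms(det):  # the human-in-the-loop step
            dispatch_alert(det)


def reviewer_confirms(det: Detection) -> bool:
    # Placeholder: in the real workflow a person inspects the frame.
    print(f"Reviewing frame {det.frame_id}: {det.label} ({det.confidence:.0%})")
    return True


def dispatch_alert(det: Detection) -> None:
    print(f"ALERT: confirmed {det.label} in frame {det.frame_id}")


scan_frame([Detection(frame_id=17, label="pistol", confidence=0.82)])
human_review_loop()
```

The key design point is that no alert fires without human confirmation, which is exactly why the accuracy limitations above translate into round-the-clock staffing costs rather than a fully automated product.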

Another example comes from Evolv, which makes an AI-based metal detector that aims to flag weapons in public spaces. The system is already used as an entry scanner at sports venues such as SoFi Stadium and in North Carolina's Charlotte-Mecklenburg school system. When a suspicious object is detected, a bounding box is immediately drawn around the area where a person could be carrying the item, allowing security officers to follow up with a manual pat-down. Despite the tool's potential to reduce friction in the security process, it still has major technical limitations even as an established product. One major issue is that the AI still mistakes other metal objects, such as Google Chromebooks, for guns: quite a large error to make in a school setting. ML models remain far from perfect, and the fact that manual checks are still a large part of both products signals that gun-detection AI may not be ready for full adoption.
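
Evolv's detection model is proprietary, but the bounding-box step described above, drawing a box over the region where a flagged item might be so an officer knows where to pat down, is straightforward to sketch. The snippet below uses the common OpenCV drawing API; the frame, labels, scores, and coordinates are made up for illustration.

```python
import numpy as np
import cv2  # pip install opencv-python


def draw_flagged_regions(frame, detections):
    """Overlay a box and label on each region the scanner flags,
    directing security officers to a manual pat-down location."""
    for (x, y, w, h), label, score in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(frame, f"{label} {score:.0%}", (x, max(y - 8, 12)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    return frame


# Hypothetical output for one scanned person. Note that a false positive
# such as a Chromebook is drawn exactly like a real weapon: the overlay
# cannot distinguish them, which is why the manual follow-up check
# remains essential.
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
detections = [((250, 180, 90, 60), "possible weapon", 0.71)]
frame = draw_flagged_regions(frame, detections)
```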

Ethical Implications

In addition to the technical limitations of state-of-the-art gun detection algorithms, there are many questions about the ethics of such products once they are put into use. The main concern is how to ensure that these safety tools can be accessed equally and fairly by everyone. The justice principle of the Belmont Report emphasizes that the benefits of a product should be distributed fairly. Because these products have such steep costs (for reference, ZeroEyes costs $5,000/month for 200 cameras, and Evolv costs $5 million for 52 scanners), it does not seem that schools and other institutions in low-income communities can afford such security measures. These prices seem even more unfortunate given that areas with higher income inequality are more likely to see mass shootings. This also raises the question of why school districts, already notoriously underfunded in the US, should have to spend extra money simply to keep students safe at school.
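
To make those figures concrete, a quick back-of-envelope calculation from the prices quoted above gives the per-unit costs a district would face. These numbers are derived only from the quoted figures, not from either company's actual pricing tiers.

```python
# Per-unit costs derived from the figures quoted above.
zeroeyes_monthly, zeroeyes_cameras = 5_000, 200
evolv_total, evolv_scanners = 5_000_000, 52

print(f"ZeroEyes: ${zeroeyes_monthly / zeroeyes_cameras:.2f} per camera "
      f"per month, ${zeroeyes_monthly * 12:,} per year")
print(f"Evolv: ${evolv_total / evolv_scanners:,.0f} per scanner")
# ZeroEyes: $25.00 per camera per month, $60,000 per year
# Evolv: $96,154 per scanner
```

At roughly $96,000 per scanner, or $60,000 per year for a 200-camera deployment, these systems sit well outside the budgets of many underfunded districts.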

Another aspect of the Belmont Report holds that all subjects should be treated with respect, meaning they should be treated as autonomous agents and given the choice to participate in monitoring or to opt out. If this technology becomes widespread, to what extent should students or sports arena attendees be monitored? How does an institution obtain consent from these people to be scanned and to have their data used to train machine learning algorithms? There has always been a balance to strike between privacy and security, but with gun detection AI, people seem to have no choice about whether to opt in or out. This raises many questions about how much these AI products could harm their users despite the proposed benefits.

Looking Forward

While AI technologies have the potential to be useful one day, they do not seem ready to be the optimal solution to ending gun violence in America. These solutions need to surpass their current technical limitations, become affordable at scale, and address many ethical questions before they can be adopted widely. Companies and schools may be better off spending their money elsewhere: providing mental health resources for students and employees, funding institutional security measures, and lobbying for more comprehensive gun control laws.