The news came in quick succession this week. First, Toronto was considering adoption of the ShotSpotter gunshot detection system. Then, Toronto had decided to adopt it. As the latter article pointed out, after a spate of recent shootings, authorities felt that they had to take action, including this acquisition. The question is: Is ShotSpotter a good response?
Briefly, ShotSpotter is a kind of "gunfire locator," that is, a system in which a network of microphones installed in a given area relays what it picks up to a central system, which tries to distinguish gunshots from other noise and provide police with time and location reports of any positive results.
The basic notion and some reasons for it are presented in the following news report:
There's nothing like repeated gunshot noises to punch up a video about crime!
As the video suggests, ShotSpotter may produce more comprehensive data about gun violence and it may do so more promptly and accurately than other means, such as 9-1-1 calls.
At the same time, there are a number of criticisms. Some cities have not found the system to be cost-effective. Also, the system might be gamed, e.g., by people playing recordings of gunfire from car stereos.
Or, it could worsen relations between police and members of minority communities where ShotSpotter is likely to be installed, rather like another form of carding.
There is also the point that ShotSpotter does little to resolve the problem of gun violence: Time and money spent on it might produce better results if focused on prevention.
To these points, I would add one that I discussed in "Fairness and regulation of violence in technological design" (2011). To make a somewhat long story short, I pointed out that ShotSpotter's design creates a conflict of interest between social groups, in this case between the group surveilled by it and the broader community.
ShotSpotter works by comparing signals from its microphones to a library of gunshot profiles. If a signal matches a profile closely enough, the system generates a "positive," that is, a gunshot report. Otherwise, no gunshot report is generated.
Like other such systems, ShotSpotter can make two types of errors:
- False positives: Gunshot reports in response to non-gunshots, and
- False negatives: No reports in response to actual gunshots.
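The matching logic and the two error types can be sketched in a few lines of Python. The match scores, threshold, and events below are invented for illustration; ShotSpotter's actual matching algorithm and parameters are proprietary.

```python
def classify(match_score, threshold):
    """Report a gunshot when the signal matches a profile closely enough."""
    return match_score >= threshold

# Hypothetical events: (match score against gunshot profiles, actual gunshot?)
events = [
    (0.92, True),   # actual gunshot, strong match
    (0.75, False),  # firecracker that resembles a gunshot
    (0.40, False),  # car backfire, weak match
    (0.55, True),   # actual gunshot, partly muffled
]

THRESHOLD = 0.6  # assumed cutoff for a "close enough" match

for score, is_gunshot in events:
    reported = classify(score, THRESHOLD)
    if reported and not is_gunshot:
        print(f"score {score:.2f}: false positive")
    elif not reported and is_gunshot:
        print(f"score {score:.2f}: false negative")
    else:
        print(f"score {score:.2f}: correct")
```

With this (made-up) threshold, the firecracker produces a false positive and the muffled shot a false negative, matching the two error types above.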
False positives tend to bring police to scenes where no gunshots occurred. These incidents may spark hostile encounters: police arrive expecting to find shooters, while community members may not welcome the extra, unwarranted police presence.
False negatives tend to disappoint members of the broader community. After all, their concerns for public safety motivated the purchase of ShotSpotter in the first place, so it is irksome when gunshots go uninvestigated by police while the system is running.
At a given level of accuracy, there is a trade-off between errors, such that the less of one kind the system makes, the more of the other kind it makes. This trade-off puts the two social groups in a conflict: The more one group minimizes the errors it dislikes, the more the other group faces the errors it deplores. Such a situation raises the problem of fairness to the social groups involved.
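The trade-off can be illustrated by sweeping the detection threshold over hypothetical data (all scores and labels invented for illustration): lowering the threshold drives false negatives down and false positives up, and raising it does the reverse.

```python
# Hypothetical events: (match score against gunshot profiles, actual gunshot?)
events = [(0.95, True), (0.80, True), (0.72, False), (0.60, True),
          (0.55, False), (0.45, True), (0.30, False), (0.20, False)]

def error_counts(threshold):
    """Count both error types at a given detection threshold."""
    fp = sum(1 for score, shot in events if score >= threshold and not shot)
    fn = sum(1 for score, shot in events if score < threshold and shot)
    return fp, fn

for t in (0.3, 0.5, 0.7, 0.9):
    fp, fn = error_counts(t)
    print(f"threshold {t:.1f}: {fp} false positives, {fn} false negatives")
```

On these made-up numbers, the lowest threshold yields three false positives and no false negatives, and the highest yields the reverse: whichever group's preferred error is minimized, the other group's increases.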
ShotSpotter is a private company, so its performance data is proprietary and not available for public scrutiny or study. However, one informal study (commissioned by the company) suggests that the false positive rate is about 33% while false negative occurrence is "very rare". Such a result is what would be expected if the ratio of errors were set to favor the interests of public security, as noted above.
Is that fair?
Arguably, the configuration of ShotSpotter is unfair. The reason is that the sensitivity of the system normally seems to be set by police, whose interest lies in minimizing false negatives. The surveilled community, by contrast, normally seems to have no say in the matter, in spite of its conflicting interest in the system's effects. That procedure is not fair.
If ShotSpotter is ultimately adopted by the City of Toronto, its configuration should be determined through an open and unbiased process involving all parties with an interest in its operational effects. That would mean including members of any communities surveilled by the system. Furthermore, all data generated in its operation should be freely available to researchers and members of the public.