A number of interesting posts about privacy showed up today, and they made good reading together. Each concerns privacy in a particular sense: the control that people have over data about themselves.
In "The real costs of cheap surveillance", Professor Jonathan Weinberg of Wayne State summarizes how the increasing affordability of surveillance technology has increased intrusions on privacy by governments. License plate readers and facial recognition systems increasingly track people's movements, while increasing computing power allows data from disparate sources to be aggregated to construct detailed personal profiles.
Weinberg notes that privacy laws were instituted at a time when such capabilities could only be imagined: "The effort needed to collect that sort of data meant that governments would engage in surveillance only rarely, and only for compelling reasons. For most Americans, little about their everyday comings and goings, likes and dislikes, hopes and dreams was tabulated and collected in any central source. But that’s now changed."
In effect, the social default has flipped. We could once assume that close monitoring was exceptional, reserved for people under unusual suspicion. Now close monitoring is the norm, and privacy is limited to those who can afford their own island.
April Glaser writes in Slate about an argument that Google has successfully made in court to limit access to its data in a government investigation. The US Department of Labor has been investigating whether Google's pay practices are sexist, that is, whether men are paid more than women for similar work. As part of this investigation, the Department requested records for 21,000 Googlers.
Google resisted this request, arguing that turning over these records would put employee privacy at risk. After all, government records have been hacked before, so Googlers' records in the Department's possession could fall into the wrong hands. A court agreed, limiting the disclosure to 8,000 records.
Glaser raises two concerns. First, regulators simply cannot do their jobs if they are not allowed to obtain relevant records, risks of hacking notwithstanding. Second, limiting the number of records limits the chances that the investigation will find a problem. In general, larger data sets are needed to detect small effects, so the investigation may miss problems at Google even if they are present. Perhaps that was Google's intention in resisting the initial request.
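To make the sample-size point concrete, here is a minimal power-analysis sketch in Python. It assumes, purely for illustration, an even split of records between men and women and a small standardized pay gap (Cohen's d = 0.05); none of these figures come from the actual case.

```python
from math import sqrt
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test for a
    standardized mean difference d, via the normal approximation."""
    z_crit = norm.ppf(1 - alpha / 2)   # two-sided critical value
    ncp = d * sqrt(n_per_group / 2)    # noncentrality parameter
    # Probability the test statistic lands outside +/- z_crit
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# Hypothetical scenario: a pay gap of d = 0.05, records split evenly.
for total in (21_000, 8_000):
    p = power_two_sample(d=0.05, n_per_group=total // 2)
    print(f"{total:>6} records -> power ~ {p:.2f}")
```

Under these toy assumptions, the chance of detecting the gap falls from roughly 95% with 21,000 records to about 60% with 8,000; smaller samples make small disparities much easier to miss.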
Ciarán McMahon writes in Slate about Woebot, a chatbot that Facebook users can access as a form of therapy for mental health issues. There is some evidence that services like Woebot may be somewhat effective in helping people with such issues.
McMahon points out that there is also a privacy issue here. Because the Woebot service is delivered via Facebook Messenger, Facebook is privy to whatever people say to the service. Could Facebook use this information to target Woebot users with certain sorts of advertisements? It would seem crass, at least, for the social media giant to exploit vulnerable people in this way.
Facebook denies that it does this currently but will not say that it won't do so in the future. Perhaps it will and perhaps it won't. McMahon points out that Facebook once said that it "couldn't" combine WhatsApp data with its own after it bought the service. However, it was recently fined $122m by the European Commission for doing exactly that.
These stories illustrate how technological developments can make it hard for people to control data about themselves. Faced with governments or companies that command vast, distributed computing resources, individuals have little ability to control their personal data. Possession of such data gives institutions leverage over people, leverage that can be used for or against those people's interests. That is why appropriate regulation of privacy is an important social concern.