Intellectual Privacy #2: the harm of invasion; obligations of data collectors.

This is the second of two posts about Neil Richards’ book, “Intellectual Privacy: Rethinking Civil Liberties in the Digital Age”.

A few thoughts from the second half of the book.

The harm of intellectual privacy invasion.

One issue I take with Richards’ argument that invasion (or suspected invasion) of intellectual privacy will have a “seriously inhibiting effect upon the willingness to voice critical and constructive ideas” is that it rests on the assumption that the victims of such invasion know, or at least suspect, that their communications are being monitored. The harm Richards identifies seems to be the person’s subjective sense of invasion rather than the invasion itself. On that view, active monitoring of “sensitive data” would only produce Richards’ feared “chilling effect” where the victims are aware of the possibility that their communications are being monitored or that their data is being collected by the intermediary through which they communicate.

I don’t know what the purpose of the National Security Agency’s secret collection of personal information was, or whether it was justified, but it struck me that Richards mentions Edward Snowden’s leaks, and the publication of the fact that this collection occurred, without directly addressing where the harm to our intellectual privacy stemmed from. On page 185 he implies that the collection of the data itself harmed our intellectual privacy. But following his “information-sharing function of confidentiality” argument, the harm to intellectual privacy stems from our knowledge that the confidentiality of our communications is, or could be, compromised. In this sense, Snowden’s leaks create a kind of paradox: in seeking to raise public awareness of invasions of our intellectual privacy, he himself harmed the values underpinning the concept.

“Without a meaningful expectation of confidentiality, then, we would have fewer ideas, and those that we did have might be unlikely to be shared.” True. And when an invasion occurs, there will always be corresponding publication of that fact by a free and open press and by whistleblowers. But I think it is worth pointing out that, without such publication and the increased awareness it brings, people would continue to explore new ideas and voice novel opinions, unaware that their communications are not afforded the level of confidentiality they expect when they make them.

Data-collectors’ heightened obligations.

Richards identifies at least three obligations that should fall on intermediaries when they are dealing with and collecting “sensitive data” (ISP logs, search engine records, web-browsing habits, book and movie purchases, etc.).

First, he argues for a heightened notice requirement when intermediaries deal with sensitive data, as opposed to a term buried deep in a privacy policy that the user may never read. The European Union’s ePrivacy Directive requires websites that place cookies on users’ computers (which can be used to track movements on the web and collect sensitive data) to ask users to agree explicitly first. For that consent to be valid, it must be informed, specific, freely given and must constitute a real indication of the individual’s wishes (in practice, a pop-up element on the website with an “agree” button). Richards appears to advocate this sort of express agreement mechanism on page 164.
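To make the mechanics concrete, here is a rough sketch of what such an express-agreement gate might look like in code. This is my own illustration, not something Richards or the Directive prescribes; the element IDs and cookie name are hypothetical placeholders.

```typescript
// Minimal sketch of an explicit opt-in gate for tracking cookies.
// Nothing identifying the user is written until "Agree" is clicked.
// "cookie-banner", "cookie-agree" and "analytics_id" are invented names.

const CONSENT_KEY = "cookie_consent"; // remembers the user's choice

function hasConsented(): boolean {
  return localStorage.getItem(CONSENT_KEY) === "granted";
}

function startTracking(): void {
  // Only now is a tracking identifier set (placeholder value for illustration).
  document.cookie = `analytics_id=${Date.now()}; path=/; max-age=31536000`;
}

function showBanner(): void {
  const banner = document.getElementById("cookie-banner");
  const agree = document.getElementById("cookie-agree");
  if (!banner || !agree) return;

  banner.hidden = false;
  agree.addEventListener("click", () => {
    localStorage.setItem(CONSENT_KEY, "granted"); // record the express agreement
    banner.hidden = true;
    startTracking();
  });
}

// On page load: track only if consent was previously given; otherwise ask.
if (hasConsented()) {
  startTracking();
} else {
  showBanner();
}
```

The point of the gate is simply that the default is no collection, and agreement has to be an affirmative act rather than a term buried in a policy.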

However, I would dispute the effectiveness of such mechanisms in achieving Richards’ desired goal of obtaining informed consent. Broadly, once a user lands on a web page, it is either the page they are looking for and want to browse, or it isn’t (in which case they glance at it and go back to continue their search). At that point, the user’s primary goal is to access the information they came for, not to read a privacy or cookie policy. You visit a website to access the specific information that website contains, and I suggest this means the vast majority of users will consent to the collection of sensitive data without pausing for thought. At the moment of browsing, the value the user places on accessing the information probably exceeds the expected cost attached to the risk that the website’s collection of sensitive data will somehow harm their intellectual privacy, plus the cost of spending time reading the data collection policy. In this sense, disruptive consent forms make little difference to the browsing habits of web users (except, perhaps, the informed minority). A more sensible approach would be to offer users a tailored experience, i.e. “Agree” or “Browse without cookie collection”, as opposed to “Agree” or “OK, leave the website”. I’m not sure whether any websites do this.
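For what it’s worth, the trade-off I have in mind in the paragraph above can be written down as a crude back-of-the-envelope comparison. The figures below are invented purely for illustration; nothing here comes from Richards.

```typescript
// Back-of-the-envelope model of the consent decision described above.
// A user clicks "Agree" without reading when the value of reaching the
// content outweighs the expected privacy cost plus the cost of reading
// the policy. All figures are invented, in arbitrary "utility" units.

interface ConsentChoice {
  valueOfContent: number;      // benefit of accessing the page right now
  harmIfMisused: number;       // cost if the collected data ever harms the user
  probabilityOfHarm: number;   // user's perceived likelihood of that harm
  costOfReadingPolicy: number; // time cost of actually reading the policy
}

function clicksAgreeWithoutReading(c: ConsentChoice): boolean {
  const expectedCost = c.probabilityOfHarm * c.harmIfMisused + c.costOfReadingPolicy;
  return c.valueOfContent > expectedCost;
}

// Illustrative values: modest benefit, small perceived risk, non-trivial reading cost.
console.log(
  clicksAgreeWithoutReading({
    valueOfContent: 5,
    harmIfMisused: 100,
    probabilityOfHarm: 0.01,
    costOfReadingPolicy: 2,
  })
); // true: 5 > 0.01 * 100 + 2 = 3, so the user agrees without pausing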

Second, Richards argues that intermediaries through or to which we communicate sensitive data should be under the same sort of fiduciary duty of confidentiality as doctors or lawyers. If we accept Richards’ argument that duties of confidentiality inspire socially valuable communications, then this is probably a sensible idea. Modern collectors of sensitive data – the Googles and Facebooks – use the data primarily to improve their services for customers. Take Google, for example. Its development of targeted advertisements revolutionized the advertising industry, creating a two-sided advertising-supported market in which both the advertiser and the viewer derive positive value from the advertisements, as opposed to the classic market (magazines, television, newspapers) in which advertising is characterized as the price the user pays to view the content. If we believe that a heightened duty of confidentiality will inspire more disclosure of data to these firms, and that this will have socially beneficial effects (in the form of connecting buyers and sellers and creating transactions that would not otherwise take place), then I agree with Richards’ suggestion.

Richards also argues that general-purpose intermediaries (e.g. Google, Facebook) should be subject to heightened moral obligations to embrace free speech and privacy values on their platforms, drawing inspiration from the Library Bill of Rights. Can we expect Internet giants to act on moral impetus? Maybe, especially when the company’s values are closely associated with the personality of its CEO (think Zuckerberg); but I suspect the market has a better answer. The growth of services like DuckDuckGo indicates an awareness of sensitive data collection and a demand for services with zero-data-collection commitments. Google makes money from targeted advertising and relies on data collection to make its ads as relevant as possible to the individual user. But I’m sure it would rather have a user running searches on its engine without collecting their data than not have that user at all, which probably helps to explain why Google now allows users to delete their search history. Privacy is a selling point here, and every user of DuckDuckGo is probably an ex-Googler. So instead of appealing to the morality of multinational corporations with whose services our lives have become inextricably linked, perhaps we should let the market do the work.
