This last week I attended and presented at the Readex Digital Institute in Chester, Vermont. This was a fun conference; it was small enough to interact easily and informally with others, and diverse enough to trigger fascinating conversations among people with very different perspectives. I am also delighted that my Facebook friend, Meredith, is now a real-world friend as well; she has an account of the conference on her blog, and her own presentation is at slideshare.
The talk that I gave (here, per Meredith's account) concerned some speculations I have been slowly shaping about how reading will change, and the ramifications particularly for privacy. I had not originally planned to share it just yet, but after reading the recent BoingBoing post concerning the Canadian privacy commissioner's desire to see software architects design for privacy, I had second thoughts — coincidentally, I had also quoted the Canadian privacy commissioner.
When I gave the talk, as so often happens, much of what I found myself thinking about was not what I was presenting in my slides, so I have edited, added, and tinkered; I hope the result manages to eke across the threshold into intelligibility.
The talk, as it stands now, is about how the book as a commodity is being transformed through its conversion to digital, and how, as a consequence, reading is being converted from an often individual, solitary act to an inherently social act, embedded within the network. Not, to be plain, social as in a reading circle, but social in a (gag, I can't believe I am going to say it) Web 2.0 sense.
One of the consequences of this re-creation of reading is that privacy is at tremendous risk of itself becoming a commodity that must be purchased. Hence the Canadian commissioner's plea that we must architect for privacy, or as I would similarly express it, that we must accept the costs of engineering privacy into our applications and networks as a social obligation.
Perhaps the greatest threats to privacy come from the aggregators of such stuff as: found or corpus data like websites, books, articles, and news; user-contributed data; and the click-stream traces of our actions. Combined, these yield rich and beneficial applications. This is the trade-off: release of personal information in exchange for amazing new utility. The solution to this conundrum has often been rendered meanly. It is worth noting that Google's approach to the management of personal information is much the same as its approach to copyright: appropriation, with an opt-out option for those who are aware of, and object to, the taking.
But as many of us concerned with privacy are aware, privacy can readily exist even amidst applications and systems that assume the presence of user-identifying and user-generated information, through the creation and provision of mechanisms by which we as individuals can transparently control the release of information that should always be ours to own. Privacy is not a thing incarnate; it is a continuous construction, and that construction should reside in the hands of the people, not in the profit motives of search engines and advertising companies.
The ramifications for us are significant. It is up to us to shape what reading the next book looks like, and up to us to form our understanding of how we control our identity, our privacy, and our rights in the social space filling the firmament of the network.
The presentation is available via slideshare, or below.