Elisabeth A. Jones, The Information School, University of Washington, Anonymity in a World of Digital Books: Google Books, Privacy, and the Freedom to Read
Google’s vision of a virtual bookstore is to get every book online – searchable and free of charge. But there are fundamental differences to physical books: a potential reader in a public library can be seen by the other visitors. By walking through the history section, for example, a visitor reveals his or her interests, even though the other visitors may not pay attention, or the limits of human memory will soon make that information fade. Moreover, library staff follow fairly strict rules on how to handle the information they gather. Google Books, on the other hand, cannot forget, and the information it collects is potentially stored forever.
Vili Lehdonvirta, University of Tokyo and Helsinki Institute for Information Technology, A New Frontier in Digital Content Policy: Case Studies in the Regulation of Virtual Goods and Artificial Scarcity
People are attracted to virtual goods for much the same reasons that real goods attract people. Interestingly, although in a virtual world there is no inherent limit on supply, virtual goods are still real in their effects – scarcity has to be created artificially. (And people who claim to have fancy sports cars are bragging just the same when the car is virtual.) His thesis can be read online.
Sandra Braman, Department of Communication, University of Wisconsin-Milwaukee, Technical Design of the Internet and the Law: The First Decade
Sandra Braman’s presentation was about the first decade of the Internet (1969–1979) and the role of privacy. It is remarkable that one third of the recommendations and technical papers of that period dealt with the issue of privacy. This is not too astonishing given that the first concepts of the Internet were the result of a project funded by DARPA (Defense Advanced Research Projects Agency). One early header protocol transmitted, along with the payload, the offline location where the transmitted information was stored; this protocol was completely abandoned, for good privacy reasons.
Valerie Frissen, TNO Information Communication Technology, The Netherlands, Challenges of the future internet: towards a data-driven European society? A critical analysis of the European agenda for the future internet
Eric Schmidt (CEO of Google) stated that from the very beginning of mankind up to 2003, 5 EB (exabytes) of data were created. As of today (2010) we create that amount of information every two days, at an increasing pace. This leads not only to an explosion of information, but also to a deep and irreversible exposure of people’s privacy. Valerie Frissen presented 23andMe, a platform to which you can submit genetic samples (of yourself) for analysis. Based on pre-defined properties, the genetic profile is assigned to groups of interest. The European Community is re-assessing its Internet strategy, with funding programmes focusing on topics such as the Internet of Things, nomadic use of the Internet, privacy, and enhanced interoperability. According to Frissen, the strategy is too strongly tied to technology and lacks a societal perspective.
Jakob Linaa Jensen, Department of Information and Media Studies, University of Aarhus, Denmark, Citizenship 2.0. – changing aspects of citizenship in the age of digital media
Jakob started his presentation with an overview of the different definitions of citizenship across eras. The key question of the presentation was: what defines citizenship in a digital era? The experiences of citizenship are political efficacy, trust, connectedness, and political consumption. According to him, the factors that contribute to a classification of citizenship are political consumption, institutional belonging, geographical belonging, internal and external efficacy, and social capital. One important outcome of his survey is that the Internet reinforces existing behaviour patterns rather than changing them.
Michiko Yoshida, Graduate School of Economics, Kyoto University, Kyoto, Japan, Utilizing the regional SNS to participate in politics
The local SNS market in Japan is very fragmented: there are more than 500 sites, and communication happens in all of the 14 main dialects. The mortality rate of these sites is high. Since 2004, the Kumamoto prefecture has operated a government-driven SNS. The site is used for disaster preparation, information about child-rearing, local events, and education and enlightenment activities. Citizens freely give away information to improve the situation in their region.
Meelis Kitsing, Department of Political Science, National Center for Digital Government, University of Massachusetts Amherst, Political Economy of the Network Neutrality in the European Union
In his presentation, Meelis compared the net neutrality debate in the US and in Europe. According to him, there is little to no discussion of the topic in Europe, mostly due to the fragmentation of consensus-building at the policy level. He concluded that the contributions of the EU telecom package to enhancing network neutrality regulation are minimal.
Ismael Peña-López, Open University of Catalonia, presented a framework that classifies 49 countries based on a factor analysis of 14 publicly available databases, followed by a cluster analysis. According to him, these countries can be classified as leaders, drivers, laggards, and leapfroggers. Put briefly, a country classified as a leader does well on all the factors he identified. The exception are the leapfroggers, which do well only on certain aspects – for example low prices despite an underdeveloped ICT infrastructure – whereas lagging countries combine high prices with underdeveloped ICT infrastructures.
Consequently, ICT development happens in stages. One common pattern among leapfroggers is that they focus “outside”, i.e. on cheap telephone lines and mobile phone penetration. Public policies have a strong influence on whether a country ends up as a leader or as a laggard. According to him, those policies should mainly pursue strategies that incentivise demand through pull-based initiatives.
Viktor Mayer-Schönberger presented elements of his latest book, “Delete: The Virtue of Forgetting in the Digital Age”. A society built on digital data has lost the ability to “forget”. Google knows a lot about its users, as it stores every search query and thus remembers more than the users themselves would ever be able to memorize. Humans developed painting and writing in order to remember. However, forgetting is still the default (biological forgetting vs. digital remembering).
Storage density, processing power, and bandwidth all follow Moore’s law. In the digital age, remembering rather than forgetting has become the default. Power and time are the terms at stake. The Internet may create a global panopticon, with the threat of being watched without actually knowing it. Forgetting over time allowed us to re-assess situations and judge them from a different point of view; digitally enshrined information takes that ability away.
A perfect memory is a curse for people, as they stick to details rather than being able to generalise. Being able to draw on a perfect memory also makes it harder to forgive – with the danger of an unforgiving society as the result. One solution to this tendency is regulatory restriction of what information may be stored and for how long. Among the advantages of this approach is protection against an uncertain future: as we cannot foresee the future and information carries the risk of being repurposed, it may be better to store less rather than more.
But would digital abstinence be a realistic solution? Or would full contextualisation, i.e. storing as much as possible electronically, be the answer? If we could store everything including its context, the risk of inappropriately interpreting a situation would not exist. The cognitive adjustment would be to learn to devalue older information. Instead of changing human behaviour, it may be easier to change technology, e.g. via DRM.
Another technical solution would be an expiration date on information. Information loses its value as time proceeds, so we have to implement the time dimension in digital storage. One difference is that humans forget gradually, whereas digital deletion does not; the solution here is “digital rusting”. To make this a reality, we have to return to forgetting as the default and attach temporal information as metadata to support digital rusting. With technology, the negotiation costs can be lowered significantly.
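The idea of expiration metadata plus gradual “digital rusting” can be sketched in a few lines of Python. This is purely my own illustration of the concept, not an implementation from the talk: each record carries a creation and an expiration timestamp, retrieval weights decay linearly towards zero, and fully expired records are silently dropped – forgetting as the default.

```python
import datetime

class RustingStore:
    """Toy store where each record carries expiration metadata and its
    retrieval 'weight' decays gradually as the expiration date approaches."""

    def __init__(self):
        self.records = []  # list of (text, created, expires) tuples

    def put(self, text, lifetime_days):
        """Store a record together with its temporal metadata."""
        now = datetime.datetime.now()
        expires = now + datetime.timedelta(days=lifetime_days)
        self.records.append((text, now, expires))

    def get(self, now=None):
        """Return surviving records as (text, weight) pairs.

        weight falls linearly from 1.0 (fresh) to 0.0 (expired);
        expired records are omitted entirely - they are 'forgotten'."""
        now = now or datetime.datetime.now()
        results = []
        for text, created, expires in self.records:
            total = (expires - created).total_seconds()
            left = (expires - now).total_seconds()
            if left <= 0:
                continue  # past the expiration date: forgotten
            results.append((text, left / total))
        return results
```

For example, a search query stored with a ten-day lifetime would come back with weight roughly 0.5 after five days and would no longer be returned at all after day ten – mimicking gradual human forgetting rather than an abrupt delete.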
- Q: Shouldn’t we forgive rather than forget?
- A: The human brain works in such a way that it closely ties forgiving and forgetting together. It is only possible to forgive once you are able to forget.
- Q: Remembering is good: by remembering, we can remind politicians to stick to their promises.
- A: Acknowledged.