Workshops, 2nd day of the Berlin Symposium on Internet and Society.
Following the three principles of Open Government (transparency, participation and collaboration), lots of sites and projects were set up, but where is the research on this (where does the data come from, where does it flow, what sort of data is used, whom does it empower, who uses it, etc.)?
So far, it seems that there hasn’t been much detailed research on this. What would help is a theoretical framework for the socio-political context, and looking at other areas where similar processes have taken place. The presenters introduced several case studies and the hopes attached to them.
The first case study context is e-rulemaking, which is often cited as a revolution in citizen empowerment through mass participation. In terms of democratic participation, however, e-rulemaking is a failure, e.g. when hundreds of submitted comments are exactly the same. That said, the outcomes actually improved in quality beyond mere lobbying input: information flows were redirected, and people participated who would never have participated before.
The second case study is open science, or open source. There has been a significant amount of participatory input, but most of it is relatively concentrated. In the early 1990s there was hope for a democratisation of software development. However, the redirection of information flows created a shift within information intermediaries (the result being a shift in control over access). The third example, the blogosphere, was also seen as a way to open up journalism and to restructure information flows. But what are the consequences? Again, the same two effects can be observed: redirected information flows and a shift in control among intermediaries.
Open Data: a content analysis of 175 open data web applications
The analysis identified 215 “developers”, three quarters of them lone individuals. As for the question of who uses open data and why, there was a lack of community. Instead, a loose alliance of bloggers, politicians and charities was pursuing different goals and agendas (“there are lots of communities around things of open data, but not around open data itself”). Also, the idea of combining different data sources into one application was not very present.
Will dominant intermediaries ever form? The youth of the open data landscape means that intermediaries are still forming. In other contexts they do form (see the open source community or the blogosphere), and when they do, quite powerful intermediaries emerge – ones that are well known and regarded in the field. Centralisation can help to nurture involvement initially. What is also needed is a new data literacy, and research that focuses on the different functions of services. More empirical studies are needed, e.g. on how metadata categories are defined, or on the politics of the metadata game from a comparative perspective. Again, it was pointed out that open data in the public discourse tends to mean open data in the sense of statistics etc., not data following open data principles.
Fruitful questions from the audience (besides questioning the main issues and terms in a “good postmodernist style”, A. Bruns) included the question of an open data divide: what if the situation, i.e. the current social context, never changes and we will not be able to provide data in a way that is understandable for everyone? Whilst it is evident that we need some sort of data literacy, it is often unclear exactly what actions should be taken. Also, the concentration on intermediaries (related to the tone of the presentation) might reinforce a mindset that sees open data only as information provided by governments, rather than something more general that can also come from the people.
Issues that could be further explored from the research perspective were, for instance:
- What actually is open data?
- Data literacy
- Do different types of intermediaries lead to different outcomes? How does their role compare to intermediaries in other areas?
Afternoon Workshop: Foresight
Predicting the future is more than difficult. One example is the introduction of mobile phones, whose sales grew within just a few years far beyond anyone’s expectations. Despite the impossibility of real predictions, we can use concepts and methods to understand the future better. This workshop was also about how to bring more people into forecasting through methods like crowdsourcing.
The Berlin Institute for Future Studies and Technology Assessment has been using several methods, of which the Delphi method and scenario building are probably the most common. What are future studies? They are the scientific study of future developments, assuming that there are different possible futures. There are five core research lines currently under discussion. All of them are able to support further work in the field of internet and society.
1) Horizon Scanning is a foresight tool with the aim of identifying emerging issues to initiate research and to develop policy and responses (as opposed to predicting the future), to structure the general context.
2) Transdisciplinary Technology Forecasting (TTF): an approach valuable for envisioning alternative or desirable future developments (based on cross-discipline and cross-stakeholder cooperation).
3) Wild Cards: examples are the crisis of the global financial markets or the events of 9/11. Most people have experienced surprise situations and unforeseen developments that have altered their expectations.
4) Sourcing the Crowd: from the point of view of future studies, issues of data protection and privacy are important here.
5) Social Shaping of Technology: this approach was developed as a response and extension to ideas of techno-economic rationality and linear conceptions of technology development. The social shaping perspective explores the social choices involved in the co-evolution of technology and society.
Research questions can be categorised into two groups: a) providing long-term orientation and b) participation.
Cyberanthropology: Being human on the internet
How does the human being change when it is on the internet? 🙂 There is a new relationship between technology and lifeworld, evident in what we might call a virtualisation of our lifeworld. So is it still appropriate to speak of a virtual reality, and is the internet a “real” reality? In terms of intensity and concentration, there is no difference: both categories (online and offline experience) are a real part of our lifeworld. The concept that shaped the presented paper is that of the mimesis circle (Ricoeur).
We probably have very little knowledge about what reality or being human actually is, given our anthropocentric view of the world. There is also a significant difference between virtual avatars (as in Second Life) and the use of social networking sites. Questions to answer would be: is there a difference between feelings or emotions online and offline, and why do we need different forms of self-expression? The research presented was obviously at an early stage, so the authors are happy to receive comments on the research questions on the wall.
The main finding of the workshop, that online and offline worlds are intertwined, might feel a bit 1990s. 🙂 However, the point of the paper was to show what happens to the individual from a philosophical perspective – which left us all a bit confused; for instance, it would have been nice to show how one can actually apply the different philosophical concepts to the topic (maybe this is investigated in the paper in more detail).