
Open Data, Transparency and Open Innovation #cedem16

CeDEM16
CeDEM – the International Conference for e-Democracy and Open Government – brings together e-democracy, e-participation and open government specialists working in academia, politics, government and business to critically analyse the innovations, issues, ideas and challenges in the networked societies of the digital age. CeDEM16 will be held from May 18 to 20, 2016, at Danube University Krems.

» More about the CeDEM16
» All CeDEM16 Sessions

Open Data, Transparency and Open Innovation

Using Open Research Data for Public Policy Making: Opportunities of Virtual Research Environments (Anneke Zuiderwijk, Keith Jeffery, Daniele Bailo and Yi Yin)

Scientists are looking for ways to publish their data, while application builders are searching for data to use; the two rarely meet in the same place. This is where Research Infrastructures and Virtual Research Environments (VREs) come in.

These environments offer:

  • Data, tools, and resources/infrastructure
  • Collaboration and co-operation between researchers at intra- and inter-institutional levels
  • Preservation of data and the associated outputs

The overall idea is to combine data from different domains in order to foster multidisciplinary research.

Yet challenges and open issues remain regarding the realisation of this concept:

  • Open issues
    • Data context is essential (especially as certain disciplines have specific methods and approaches)
    • Heterogeneity in terms of semantics (concepts and terms are treated differently in different disciplines)
    • User experience in terms of data acquisition
    • Fast updates to reflect the current state of the art
    • Data quality affects all methods and results that build on the data
    • Data privacy, for researchers as well as individuals, is important
    • Software issues in terms of compatibility
  • What are the requirements for a VRE combining existing infrastructures?
  • Fusion of Open Government Data (OGD) with Open Research Data (ORD)
  • General empowerment of multidisciplinary research

Derived from a case study in the field of earth sciences, the following 13 requirements are crucial for the instantiation of a VRE (an illustrative sketch follows the list):

  • Data storage
  • Data accessing
  • Data computational services
  • Data curation
  • Data cataloguing
  • Linkage between VREs
  • User identification
  • Researcher or community collaboration support
  • User communities training and support services
  • Service interface
  • Simplicity and ease of use
  • Accounting service
  • Sustainable business model for long-term operation of a VRE
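
To make these requirements more tangible, here is a minimal sketch, my own illustration rather than anything from the paper, of how a VRE catalogue record might cover a few of them (storage, access, cataloguing, curation, linkage). All field names and values are hypothetical:

```python
from dataclasses import dataclass, field  # Python 3.9+

@dataclass
class CatalogueEntry:
    """Hypothetical VRE catalogue record touching several of the 13 requirements."""
    dataset_id: str        # unique identifier (data cataloguing)
    storage_uri: str       # where the data lives (data storage)
    access_protocol: str   # e.g. "HTTPS" or "OAI-PMH" (data accessing)
    discipline: str        # domain context, e.g. "earth sciences"
    curator: str           # responsible researcher (data curation)
    linked_vres: list[str] = field(default_factory=list)  # linkage between VREs

entry = CatalogueEntry(
    dataset_id="seismic-catalogue-2016",
    storage_uri="https://example.org/data/seismic-catalogue-2016.csv",
    access_protocol="HTTPS",
    discipline="earth sciences",
    curator="orcid:0000-0000-0000-0000",
    linked_vres=["neighbouring-earth-science-vre"],
)
print(entry.dataset_id, "->", entry.storage_uri)
```

A real VRE would of course express such records in an established metadata standard such as DCAT or CERIF rather than an ad-hoc class, but the fields map directly onto the requirements above.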

Regarding future developments, promotion, and adoption of VREs, it will be necessary to not only intensify the …

Presentation slides

Towards a Linked Data Publishing Methodology (Eduard Klein, Adrian Gschwend and Alessia C. Neuroni)

Businesses and organisations struggle with legacy data (events, POIs, GLAM data) in their repositories, as it limits the applicability of tools and techniques throughout the entire data life-cycle. Linked data approaches can help not only with interconnecting data, but also with general integration aspects and the overall enrichment strategy.

Overall, the two most important aspects are:

  • Higher usability
  • Sustainability

Klein et al. have therefore developed a methodology dedicated to sketching the necessary steps, together with the associated effort, to ease the planning and implementation of linked data approaches. The main goals of this methodology are:

  • A re-usable linked data publishing process
  • Complete planning of the necessary project skills
  • Documentation of essential tasks that helps answer:
    • How long will it take to develop use cases with this platform?
    • Which technical skills are necessary?
  • Better estimates for future projects

The seven steps of the suggested Linked Data Publishing Methodology (LIDAPUME) are:

  • Stakeholder analysis
  • Requirements analysis
  • Use case development
  • Data identification
  • Data modeling
  • Transformation configuration
  • Data processing

It will take several iterations over the above-mentioned steps to reach a final result for a particular application. The methodology is currently being tested for applicability in several projects. Feedback from the project participants can then be used not only to refine the methodology, but also, over time, to derive templates for particular application and data cases, which in turn will speed up the entire process and provide better means of sustainability.
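
To give a feel for the transformation-configuration and data-processing steps, here is a small sketch, not part of LIDAPUME itself, that converts a single legacy POI record into RDF using the Python rdflib library; the base URI and the schema.org vocabulary are assumptions made for the example:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical base URI and vocabulary for the example
EX = Namespace("http://example.org/poi/")
SCHEMA = Namespace("http://schema.org/")

# A legacy record as it might sit in a relational table or CSV export
legacy_record = {"id": "poi-42", "name": "Danube University Krems",
                 "lat": 48.409, "lon": 15.603}

g = Graph()
poi = EX[legacy_record["id"]]
g.add((poi, RDF.type, SCHEMA.Place))
g.add((poi, RDFS.label, Literal(legacy_record["name"])))
g.add((poi, SCHEMA.latitude, Literal(legacy_record["lat"])))
g.add((poi, SCHEMA.longitude, Literal(legacy_record["lon"])))

# Serialise as Turtle (returns a str in rdflib 6+)
print(g.serialize(format="turtle"))
```

In a full LIDAPUME-style project, the mapping from legacy fields to vocabulary terms would come out of the data identification and data modeling steps rather than being hard-coded like this.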

Presentation slides

Measuring the Promise of Open Data: Development of the Impact Monitoring Framework (Matthias Stürmer and Marcus M. Dapp)

The Open (Government) Data movement is currently gaining momentum, reflected in the growing number of open data portals. However, the overall status of Open Data has yet to be assessed, together with the actual impact created by publishing Open Data. At the moment there are no systematic approaches for assessing this impact, nor are suitable tools available. Organisations such as the ODI are working towards potential solutions, but these lean towards experts' personal views. From another angle, barcamps and hackathons bring people and data together, yet it is again hard to assess the exact impact of open data at these events. Likewise, looking at the Open Data and Open Source communities, the time and resources spent by the people involved are hardly documented, as a lot of the interaction happens anonymously. The overall question remains: were the effort and the resources spent actually worth it?

The framework presented by Matthias Stürmer and Marcus M. Dapp builds upon the concept of Social Return on Investment (SROI) and on the theory of change, which was adapted into a Theory of Change for Open Government Data.

The main steps of this framework include:

  • Input: all resources such as money, people, equipment and facilities, as well as the native proprietary data
  • Output: direct and tangible deliverables
  • Outcome (value chain): all direct and indirect consequences of the reuse of open data
  • Impact: results that occur due to the release of open data

These steps are then combined with the 14 high-value data categories from the G8 Open Data Charter:

  1. Companies
  2. Crime and Justice
  3. Earth observation
  4. Education
  5. Energy and Environment
  6. Finances and contracts
  7. Geospatial
  8. Global development
  9. Government accountability and democracy
  10. Health
  11. Science and Research
  12. Statistics
  13. Social mobility and welfare
  14. Transport and Infrastructure

The resulting methodology can be used, for example, for a retrospective analysis of open data activities. The derived knowledge and experience can then inform a release plan for open data, predicting the benefits and impacts of related actions. Furthermore, the framework can serve as a monitoring instrument that shows the impact caused over time. While the framework is not yet finished, the first conceptual steps are complete, and the project can now proceed to its second phase, where it will be tested on real use cases (finished projects) as well as on future endeavours in the realm of open data.
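
As a purely illustrative sketch (the paper presents the framework conceptually, without code), such a monitoring instrument could be pictured as a matrix that records indicators along the Input/Output/Outcome/Impact chain per G8 data category; all entries below are fictitious:

```python
from collections import defaultdict

STEPS = ["input", "output", "outcome", "impact"]

# matrix[category][step] -> list of recorded indicators (hypothetical structure)
matrix = defaultdict(lambda: {step: [] for step in STEPS})

# Fictitious entries for an open transport data release
matrix["transport"]["input"].append("2 FTE, portal hosting costs")
matrix["transport"]["output"].append("GTFS feed published on the portal")
matrix["transport"]["outcome"].append("3 third-party routing apps built on the feed")
matrix["transport"]["impact"].append("shorter trip-planning time for commuters")

for step in STEPS:
    print(f"{step}: {matrix['transport'][step]}")
```

Filled in retrospectively for finished projects, such a matrix supports the kind of analysis described above; filled in prospectively, it becomes the basis of a release plan.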

Presentation slides

