5. By using the Benchlearning approach, we can build on organisations' experiences to increase the impact of eInclusion activities.

Sections: Policy Context | Measuring Impact | The Concept of Benchlearning | Benchlearning Process

Benchlearning:
- Share knowledge
- Exchange experiences
- Contextualise data
- Find commonalities
- Test the relevance, feasibility and comparability of the chosen indicators
- Evaluate and adapt

Impact measurement:
- Accelerate improvement
- Recognize an impact
- Understand the drivers of impact (enablers, barriers, critical success factors)
- Comparable indicators
- Validated indicators
- Self-assessment tool
6. Therefore, a continuous cycle of learning and sharing has to be established (the Benchlearning Cycle).
Plan: select organisations on the basis of:
- Project type
- Leading actor
- Funding type
- Project objectives
- Target audience (elderly or low-skilled societal groups)
- Geographical scope
- Availability of data and evidence
- Data quality
- Level of innovation
- Willingness to participate
- Relevance / applicability of results

Collect: data gathering templates

Now: Compare and Analyse
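The Plan step's selection could be sketched as a simple filter over candidate organisations. This is a minimal, hypothetical illustration: the field names, the three hard criteria chosen (data availability, data quality, willingness to participate) and the quality scale are assumptions, not part of the study's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """One candidate organisation/project (illustrative fields only)."""
    name: str
    has_data: bool        # availability of data and evidence
    data_quality: int     # assumed scale: 1 (poor) .. 5 (excellent)
    willing: bool         # willingness to participate

def shortlist(candidates, min_quality=3):
    """Keep only candidates that satisfy the hard selection criteria."""
    return [c for c in candidates
            if c.has_data and c.willing and c.data_quality >= min_quality]

projects = [
    Candidate("Project A", has_data=True,  data_quality=4, willing=True),
    Candidate("Project B", has_data=True,  data_quality=2, willing=True),
    Candidate("Project C", has_data=False, data_quality=5, willing=True),
]
print([c.name for c in shortlist(projects)])  # → ['Project A']
```

In practice, the softer criteria (relevance, level of innovation, geographical scope) would feed a ranking rather than a hard filter; the sketch only shows the mechanical part of the selection.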
Behavioral success factors

Trust: Benchlearning requires that participants are willing to share sensitive information with each other: stories of failure, financial data, examples of moral hazard, etc. The collaborative network on which the study builds needs to cater for these sensitivities.

Learning by experiencing: The project must not stay at the theoretical level but must actively engage actors to make them 'experience'. 'Experiencing' is vitally important to make Benchlearning actors 'learn things the hard way' instead of limiting their work to abstract levels of analysis. As a simple metaphor: children have to fall to learn the dangers of speed and movement. Likewise, Benchlearning participants will need to challenge and test each other's initiatives to learn and progress effectively. Experience can come through peer reviewing, challenging and questioning (each other's policies, for example), monitoring and partnering. It is important that the study is organised as a project for mutual learning and learning from experience. Each actor must take an active role and get the opportunity to reflect on his or her experience. The consortium's facilitators will support this reflection process.

Structured knowledge work: There are three basic categories of knowledge: explicit knowledge, which is articulated by the 'owner'; implicit knowledge, which is not articulated but can be; and tacit knowledge, which is not articulated and of which the owner is not consciously aware. Each type of knowledge requires a different type of knowledge work. Explicit knowledge is commonly shared by the owner on his own initiative; implicit knowledge can be brought to the surface through question-and-answer sessions; tacit knowledge can only be decoded through mutual experiencing and observation.

Management of moral hazard: Benchlearning participants will be assigned an official role in the study but will at the same time also embody other role models.
Typically there will be 'talkers' and 'listeners', 'visionaries' and 'grounded implementers', 'IT specialists' and 'policy makers'. These groups are likely to adopt different behaviours, pursue different goals, and hold different visions of Benchlearning and different expectations of the project's outcome. In all this, it is important that participants are given sufficient room to express their viewpoints and do not find themselves in isolation.

Structural success factors

Clear link of study to organizational strategy: The study's raison d'être must be rooted in the agency's vision or strategy; something that the organization really must do, as opposed to a simple 'nice to have'. For example: increase the take-up of eGovernment services in remote areas by Y%, or ensure a participation rate of Z% in eHealth amongst the elderly. Commonly, the project goals will be derived from the organization's objectives: its response to an EU policy or Directive, a national law, a mission statement or vision, and similar. These high-level objectives help to frame the Benchlearning study and align it with goals that have been commonly agreed upon and adhered to.

Clear link of study to management approach: The study's ultimate goal is not the measurement itself. Rather, the focus must be on managing what has been measured and implementing related follow-up actions, including a potential re-design of policy. In this sense, the measurement must not be a one-off exercise but must become an inherent pillar of the strategic steering of the organization. Only then will the Benchlearning project's outcomes effectively be used, for example for decision making, human resource management and risk management, as part of a fully-fledged management cycle.

Scalability: To build the data gathering capabilities of organizations, data will need to be gathered at the level of single initiatives, i.e. bottom-up.
This can have advantages but also bears a distinct risk: lack of scalability. The main challenge will therefore be to identify, at an early stage of the study, how the data could be aggregated further, for example to the organizational, sectoral or even national level. Scalability in general can be named as one of the major challenges of any Benchlearning project.
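The bottom-up aggregation described above can be sketched mechanically: indicator values collected per initiative are rolled up to a higher level by grouping on a shared key. This is a hypothetical illustration only; the record fields, the "participants" indicator and the level names are assumptions, not the study's actual data model.

```python
from collections import defaultdict

# Illustrative indicator records gathered at single-initiative level
records = [
    {"org": "Org1", "region": "North", "participants": 120},
    {"org": "Org1", "region": "South", "participants": 80},
    {"org": "Org2", "region": "North", "participants": 50},
]

def aggregate(records, level):
    """Roll an initiative-level indicator up to a higher level ('org' or 'region')."""
    totals = defaultdict(int)
    for r in records:
        totals[r[level]] += r["participants"]
    return dict(totals)

print(aggregate(records, "org"))     # → {'Org1': 200, 'Org2': 50}
print(aggregate(records, "region"))  # → {'North': 170, 'South': 80}
```

The point of the sketch is that aggregation is only possible when every initiative records the same indicator with a shared key; agreeing on those common keys early in the study is precisely what the scalability challenge amounts to.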
Overall project goal: find impact indicators that are comparable and valuable across eInclusion programmes.

Day 1
Goal: introduce the study and find out what participants are really measuring and what measurement of impact means to them.

Morning:
- Presentation: Introduction to the Study (Dinand)
  - Context and content of the study
  - Concept of Benchlearning: what is it, why do we do it, what does the process look like and what do we expect from the participants
  - Workshop: why, what is the programme, follow-up and what do we expect from the participants; also capture the expectations of the participants to feed into the workshop (or do so at 3.)
- Presentations: Introduction of participants (participants)
  - Who are they and what are their main activities
  - What do they want to learn and what are they good at / experienced in
- Presentation: Preliminary results of the data gathering analysis (Gabriella/Richard, based on input from Trudy)
  - High-level overview of goals, input and activities of participants
  - Commonalities and differences between participants (as input for break-outs in the afternoon)
  - What the organizations are already measuring, why and how (as input for break-outs in the afternoon)

Afternoon:
- Discussion of initial thoughts on common strengths and weaknesses of the programmes (to be derived from the morning session and the expectations returned by the projects by email)
- Break-outs:
  - Identify two elements of other projects that could be useful to their own project / two learning elements (e.g. in measurement methods, analysing results, proving the value of the activities, setting learning goals, reaching target groups)
  - Explore how to build comparability into measurement results
  - Identify leading experiences / good practices
- Introduce partnering structure

Day 2
Goal: set up the learning structure and find concrete ways to measure impact and show a project's value.

Morning:
- Presentation: General methodology (Gabriella)
  - What methodology do we use
  - What do impact indicators look like (previous research on impact indicators?)
- Presentations on experiences with measurement by participants (e.g. Pane e Internet, Bibliotekas Pazangai, Commonwell). Both Pane e Internet and Pazangai have already confirmed.

Afternoon:
- Set up the learning structure:
  - Form learning teams
  - Define common goals
  - Define ways to learn from each other (flying circus, virtual community, peer review)
  - Define ways the Consortium can add value to the learning process