Evaluation methodology of practices of science communication
Evaluation Methodology of Practices in Science Communication
- Trial for Definition and Systematization of the Concept of Evaluation -
Hokkaido University, Gensei Ishimura
Evaluation of Practices in Science Communication: Present Condition
• In Japan, cases of practices in science communication have accumulated since 2004.
• However, the following are necessary in order to continue practices of higher quality:
 – Evaluation that is more acceptable and useful for stakeholders and interested parties
 – Systematization of evaluation methodology
What kind of problems are brought by a lack of systematization in the evaluation of practices?
1. We cannot share a platform on which to compare various practices.
2. It is difficult to develop good theories based on practices.
3. It is difficult to achieve accountability to citizens.
4. It is difficult for practitioners to acquire stable evaluation.
Why is it difficult to evaluate practices of science communication?
• The problems to be solved are not well defined.
 – Definitions of “science communication” or “practices” are, in reality, diverse.
 – The purposes of practices are also diverse.
 – A single practice includes multiple purposes, and the combination of purposes differs among practices.
Reasons for the Diversity
1. Diversity of actors
 – Diverse actors are involved: science teachers, curators of science museums and science centers, scientists, science journalists, science writers, PR staff of research institutes, science policy makers, industry, scholars in STS (science, technology, and society), citizens, and so on.
2. Complexity of objects
 – The objects of practices of science communication are themselves complicated, from the viewpoint of trans-science.
My Standpoint
• These diversities should not be “eliminated”; they are “essential” for science communication.
• Given the diversity in the purposes of practices, it is necessary and effective to try to systematize evaluation methodology for practices, in order to acquire a “common language” for evaluation in spite of that diversity.
Is it possible to “borrow” evaluation methodologies from other areas?
• It might seem possible to borrow evaluation methodologies from related fields, such as public relations, social marketing, or formal education, and apply them to practices of science communication.
Is it possible to “borrow” evaluation methodologies from other areas?
• However, in science communication:
 – mutuality is very important, unlike the traditional “one-way” communication methodology of public relations;
 – it is difficult to reach a basic consensus about “what is good”, unlike social marketing, which deals with goals such as health promotion or the conquest of poverty;
 – communication is conducted in much more diverse contexts than in formal education, where a curriculum framework exists, to some extent, to make students acquire practical knowledge and skills.
• Although it might be possible to partially borrow these evaluation methodologies from related areas, it is difficult to apply them directly to science communication (Ishimura 2011).
Hierarchical structure of purposes and means and its design possibility
• A practice of science communication has purposes and higher purposes.
• On the other hand, it has means (= lower purposes) to achieve those purposes, and lower means to achieve them in turn.
• If these composed a clear hierarchical structure which practitioners could design in advance, it would be possible to systematize evaluation methodology for practices.
Problems brought by “mutuality”
• However, if practitioners try to apply the concept of mutuality, which is said to be important in science communication, to their practices, they delegate some decisions about lower purposes to participants and stakeholders.
• The more mutually practitioners try to design practices, the higher the level of purposes, and the larger the part of decisions, they delegate.
Problems brought by “mutuality”
• Basically, evaluation is conducted in terms of the extent to which a given purpose has been achieved.
• It is therefore difficult to uniquely define the subject of evaluation when the purpose itself changes dynamically through interaction among participants and stakeholders.
• It is necessary to introduce a novel evaluation methodology appropriate for the concept of mutuality.
Definition of concepts
• Science communication
 – To improve the collective decision-making function of the entire society by conducting communications about science and technology
• Practices
 – To affect the decision-making of any individuals or organizations by conducting practical activities in society
• Practices of science communication
 – To improve the collective decision-making function of the entire society by affecting the decision-making of any individuals or organizations through practical communication activities about science and technology
• Evaluation
 – To transform information about any object into information that subjects can use to make decisions affecting that object
Definition of concepts
• Although these definitions are all very abstract and general, only such comprehensiveness allows them to be adapted in common to a variety of purposes and contents of practices, subjects of evaluation, and stakeholders, and thus enables the later systematization of evaluation methodology.
What is appropriate evaluation for practices of science communication?
Ex post facto re-interpretation of practices
• Practitioners of science communication make hypotheses about the appropriateness of relationships among the purposes and means of practices before they design them.
• This process is inevitable in order to design optimal practices.
• However, for the reasons mentioned above, it does not work well simply to insist on validating the hypotheses made in advance.
Ex post facto re-interpretation of practices
• Rather, based on the phenomena that occurred in the field of practice, evaluation methodology should be developed by re-interpreting for which purposes the practice was conducted and which means constituted it, and by extracting, as a result, a hierarchical purposes-means model (= the program theory).
• Although such re-interpretation tends to lay practitioners open to opportunistic self-praise, well-described program theories enable others to validate them.
Ex post facto re-interpretation of practices
• If program theories were explicitly described by the respective practitioners, those who have been cohabiting while living in different worlds might face an essential confrontation about the purposes of practices.
• However, only such a confrontation might lead to the next step of practices in science communication.
Program theory
• An element of “program evaluation”, the representative evaluation methodology for social programs (Rossi et al. 2004)
• A logical conceptualization of the relationship between purposes and means, and of the causal relationships which a given social program supposes
• A series of hypotheses about the relationships among the strategies and tactics adopted by the program, and about the expected social benefits
• It consists of the management plan of the program, the logic for bringing about intended outcomes, the theoretical rationale for implementing the program, etc.
• The conceptualization of the program theory is itself also a subject of evaluation.
 – That is to say, the clearer and more persuasive the conceptualization of the program is, the more easily evaluators can judge the functions and effects of the program on which the evaluation should focus.
• The “logic model” is one of the representative descriptions of a program theory.
Logic model
• A model of the functions and logical structure of the operations of a given program (Yasuda 2011)
• The model represents a constellation of processes, from the resources put into the program to its expected ripple effects, based on hypotheses about their causal relationships:
 inputs → activities → outputs → outcomes → impacts
Logic model
• Inputs
 – Invested resources directly or indirectly necessary to conduct the program
 • conductor of the program, provider of the services / budget / facilities / management / time / information / organization / social capital
• Activities
 – Activities which constitute the core of the program / organizational and structural foundations to support the activities
• Outputs
 – Products and situations produced by implementation of the activities
 • frequency of the activity / period of the activity / number of participants / number of distributed documents / page views of the website
• Outcomes (proximal / middle / distal)
 – Effects on participants or users brought about by their participation in the program or acceptance of the services
 – Mainly changes in behavior, attitude, motivation, knowledge, skills, etc.
 – Benefits which participants or users acquired
• Impacts
 – Derivative or secondary effects, after some period, in a broader area beyond the direct participants
(Yasuda (2011), partially modified)
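The five-stage chain described above can be sketched as a simple data structure. This is a hypothetical illustration only; the class name, stage names, and methods are my own and are not part of the program evaluation literature:

```python
from dataclasses import dataclass, field

# Canonical stage order of a logic model (after Yasuda 2011).
STAGES = ["inputs", "activities", "outputs", "outcomes", "impacts"]

@dataclass
class LogicModel:
    """A minimal logic model: each stage maps to a list of elements."""
    stages: dict = field(default_factory=lambda: {s: [] for s in STAGES})

    def add(self, stage: str, element: str) -> None:
        """Record one element (resource, activity, effect, ...) under a stage."""
        self.stages[stage].append(element)

    def chain(self) -> str:
        """Render the causal chain stage by stage, '-' marking empty stages."""
        return " -> ".join(
            f"{stage}: {', '.join(items) or '-'}"
            for stage, items in self.stages.items()
        )

# Usage sketch, with example elements loosely taken from the slides.
model = LogicModel()
model.add("inputs", "budget")
model.add("activities", "workshop")
model.add("outputs", "about 50 participants")
model.add("outcomes", "change in knowledge and awareness of participants")
model.add("impacts", "development of disaster prevention culture in the area")
print(model.chain())
```

Keeping the stages as an ordered mapping makes it easy to describe the same practice twice, once before and once after it is conducted, and compare the two descriptions.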
Cons of program theory
• Too “plan-oriented”
 – The theory presupposes:
 • The right purposes and goals are self-evident.
 • Appropriate means and sub-means for achieving the purposes and goals can be well planned in advance.
 • Planned means should be conducted well, and if they are not, the practitioners are responsible.
 • When the program is conducted, interference is negligibly small.
 • The results and effects of practices can be measured relatively easily.
• The human, time, and financial costs of evaluation are huge.
Evaluation of practices by ex post facto description of program theories
1. Make a program theory (logic model) before the given practice is conducted, or as a part of the practice.
2. Conduct the practice.
3. Describe the program theory again, as a re-interpretation of the practice, considering the various conditions which have been dynamically changed by interactions with participants and stakeholders.
4. Evaluate the practice by using the methodology of program theory from the viewpoint of the described “ex post facto program theory”.
5. Compare the “ex post facto program theories” of different practices.
• This corresponds to the Temporal Logic Model (den Heyer 2002).
• Considering their nature, practices of science communication are typical cases to which the temporal logic model should be applied for evaluation.
Under the condition of various actors and various, changing purposes:
 a series of versions of logic models of a practice
 → mutual comparisons, validations, and compensations
 → collective systematization of evaluations of various practices, as an accumulation of “whole logic models”
A case of ex post facto description of program theories
The practice class of communication for disaster prevention
• Background
 – Our educational program (CoSTEP) invites the public to a “project contest” and chooses appropriate proposals as yearly practice classes.
 – In 2012, “the practice class of communication for disaster prevention” was designed, which aimed to plan and conduct an educational program for tsunami disaster prevention in Hokkaido.
• Content of the practice
 – To conduct workshops in areas of Hokkaido where there is a considerable probability of tsunami disaster.
 – Dr. Nishimura, who specializes in the science of tsunami disaster and applied to the project contest, guides the students.
 – To collaborate with the team “TSUNA-SUP”, which has conducted similar disaster prevention workshops and is composed of CoSTEP graduates.
[Diagram: the teachers design and manage the educational program and support the students; the students design and manage practices of science communication and provide feedback to the teachers; the participants provide feedback to the students.]
Actors who are involved in the practice and their relationships
[Diagram: the teachers, Dr. Nishimura, the students, and the team “TSUNA-SUP” in collaboration, together with stakeholders in the local areas.]
A case of ex post facto description of program theories: the logic model in the beginning
[Figure: the initial logic model of the practice class of communication for disaster prevention, in the columns input → activity → output → outcome (proximal / middle / distal) → impact. Inputs: organization of practice, budget, yearly plan, one teacher, five students, the client of the project, human resources. Activities: planning, preparation, workshop 1, workshop 2, education, knowledge and skills development. Outputs: about 50 participants, acquisition of explicit knowledge of practice know-how, experience of practice. Outcomes: changes in participants’ knowledge and awareness leading to appropriate evacuation behavior; spread of practice know-how; achievement of the improvement of knowledge and skills leading to a better relationship between science and society. Impacts: development of a disaster prevention culture in the area; improvement of the quality of science communication. (*Not all cause-effect or purpose-means relationships are described.)]
A case of ex post facto description of program theories: the current logic model
[Figure: the same logic model re-described after the practice. Compared with the initial model, new elements appear: as inputs, the accumulation of past activities and stakeholders in the local areas; as outcomes, leadership of participants toward non-participants and an increase in the number of participants by word of mouth; and improvement of the educational program itself. (*Not all cause-effect or purpose-means relationships are described.)]
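The comparison between an initial and a current version of a logic model can be sketched as a per-stage set difference. This is a hypothetical illustration of the temporal logic model idea; the `diff_versions` helper is my own, and the element names are taken loosely from the case above:

```python
# Each version of a logic model is a dict mapping stage -> set of elements.
initial = {
    "inputs": {"budget", "yearly plan"},
    "outcomes": {"appropriate evacuation behavior"},
}
current = {
    "inputs": {"budget", "yearly plan", "accumulation of past activities"},
    "outcomes": {"appropriate evacuation behavior",
                 "leadership for non-participants"},
}

def diff_versions(old: dict, new: dict) -> dict:
    """Return, per stage, the elements added or removed between two versions."""
    changes = {}
    for stage in set(old) | set(new):
        added = new.get(stage, set()) - old.get(stage, set())
        removed = old.get(stage, set()) - new.get(stage, set())
        if added or removed:
            changes[stage] = {"added": sorted(added), "removed": sorted(removed)}
    return changes

for stage, change in diff_versions(initial, current).items():
    print(stage, change)
```

Accumulating such diffs across the versions of one practice, and across practices, is one concrete way the “mutual comparison” of logic models mentioned earlier could be operationalized.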
Discussion
1. How should we think of “change” in the ex post facto description of program theories?
 – A failure of planning?
 – A failure of conduct?
 – A considerable change of the plan?
2. Structure of program theory
 – In reality, it is not unidirectional but circulatory and hierarchical.
 – Application of system theory (Tahara, Takahashi 2012)
For constructive evaluation
• Evaluation tends to remind us of administration by those in power. However, it is practitioners who should equip their fields with an evaluation system, in order to build a seawall that prevents the fields from being eroded by the power of decision makers / resource providers in the narrow sense.
• It might be necessary for practitioners to take the initiative in proposing how evaluation should be conducted, rather than waiting for an evaluation system to be given from outside.
• It is important for citizens, as decision makers / resource providers in the broad sense, to support practices by providing various “resources”, of course only if they judge the practices to be valuable.
For constructive evaluation
• On the other hand, there is not always antagonism over resources between decision makers / resource providers and practitioners. Sometimes decision makers / resource providers simply need means of communication for evaluation with higher-level decision makers / resource providers. In that case, if practitioners provide them with evaluation information that helps this communication, the two can form a “united front”.
• Evaluation is not conducted by specific individuals or organizations alone. The community of practice and learning surrounding the given practices creates and shares the evaluation system. It is important to pursue the systematization of evaluation and the development of the community simultaneously.
For constructive evaluation
• On the other hand, from the viewpoint of a survival strategy for practitioners, although it might succeed in the short term, it is not appropriate in terms of the public good to rely excessively on optimizing communication for evaluation toward specific decision makers.
• Even from a utilitarian viewpoint, the possibility that specific decision makers change their judgment criteria, or that the decision makers themselves are replaced or disappear, poses a significant risk. A more appropriate long-term survival strategy is to develop multi-track communications for evaluation with diverse decision makers.
• It is necessary for practitioners to equip themselves with an evaluation system, in order to guard against self-satisfaction and to keep their minds open to the creativity of decision makers / resource providers in the broad sense.
• Evaluation is a quite creative conduct, in which values are found, interpreted, transmitted, and created.
Science communication as evaluation of science and technology
• Science communication as evaluation of science and technology
 – Participatory technology assessment
 • consensus conferences, deliberative polling, etc.
 – Science journalism, publishing scientific books
 – Scientific public relations, science education
• Nested structure of evaluations
 – Self-application of science communication
References
Ishimura, G. 2011: “Evaluation Methodology of Practices of Science Communication: Trial for Definition and Systematization of the Concept of Evaluation”, Japanese Journal of Science Communication, 10, 33-49.
den Heyer, M. 2002: “The Temporal Logic Model Concept”, The Canadian Journal of Program Evaluation, 17(2), 27-47.
Rossi, P. H. et al. 2004: Evaluation: A Systematic Approach, 7th edition.
Sato, T. 2010: “On Evaluation Methodology for Activities of Social Welfare Councils: Use of Logic Models in Program Evaluation”.
Tahara, K. and Takahashi, S. 2012: “A Case Analysis by Viable System Model for Program Evaluation”.
Yasuda, T. 2011: Program Evaluation.