
Research methods - PSYA1 psychology AS


  1. Research Methods (Whitehead / Arthur)
  2. Laboratory experiment: carried out in a lab setting.
     - Strengths: highest level of control; repeatable, therefore reliable if similar results are produced; sophisticated measuring equipment can be used in a lab; more control over variables (could lead to knowing the cause/effect).
     - Weaknesses: loss of validity (esp. ecological validity); demand characteristics.
  3. Field experiment: carried out in the participants' natural environment.
     - Strengths: improved ecological validity; reduction of demand characteristics (though there may still be some).
     - Weaknesses: less control over the IV and over measuring the DV, with the addition of EVs (extraneous variables); results cannot be generalised to other situations; often more costly (as things have to be arranged outside the lab – could include technical equipment).
  4. Natural experiment: the IV is naturally occurring.
     - Strengths: reduction of demand characteristics; the investigator doesn't intervene (however, the presence of an investigator could still affect participants' behaviour).
     - Weaknesses: loss of control – the investigator doesn't control the IV; a cause/effect relationship is difficult to establish.
  5. Correlation: a term that refers to the extent to which values co-vary. +1 = perfect positive correlation; -1 = perfect negative correlation (a Python sketch of this calculation appears after the slide list).
     - Strengths: measures the strength of relationships.
     - Weaknesses: no cause/effect can be measured.
  6. Observation: no deliberate manipulation of the variables.
     - Naturalistic observation: behaviour is observed in a natural environment, e.g. a school or workplace.
     - Lab-based observation: labs can be 'dressed up' to look more natural, like a playroom, where children can be observed through a one-way mirror.
  7. Observation – evaluation:
     - Strengths: good research data can be collected; ecological validity can be good.
     - Weaknesses: lack of control means a cause/effect relationship cannot be established; replication may be difficult due to uncontrolled variables; observer effects/demand characteristics; ecological validity may be lower in lab-based settings; costs can be high.
  8. Self report – questionnaires:
     - Question types: closed questions (tick boxes); open-ended questions ('What are your views on…?'); leading questions ('You love this PowerPoint, don't you?').
     - Strengths: simple; cheap and quite quick; the researcher doesn't intrude.
     - Weaknesses: ambiguous questions could be misconstrued; leading questions; social desirability bias.
  9. Self report – interviews:
     - Structured interviews: a fixed set of Q's.
     - Unstructured interviews: Q's aren't decided in advance.
     - Semi-structured interviews: some Q's are pre-prepared; however, the investigator is free to add more during the interview.
     - Strengths: flexible (in semi- and unstructured interviews); able to tackle personal topics.
     - Weaknesses: data can be misinterpreted; time consuming; interviewees may not be able to convey their thoughts; demand characteristics / social desirability bias.
  10. Case studies:
      - Strengths: in-depth studies; rich/interesting data; lots of data to choose from.
      - Weaknesses: not generalisable – they're unique to the individual (or small group); findings may be subjective.
  11. Quantitative & qualitative data:
      - Quantitative data: tends to be numerical; comes from things like tick boxes (easily processed).
      - Qualitative data: comes from longer-answer questions, often from interviews (gives more detail).
  12. Hypotheses:
      - Directional hypothesis: predicts the direction in which results will occur, e.g. 'More words are recalled from a list when using rehearsal as a mnemonic technique than when no technique is being used.'
      - Non-directional hypothesis: does not predict the direction of the outcome, e.g. 'There is a difference in the number of words recalled from word lists presented with or without background music.'
      - Null hypothesis: predicts that the IV will have no effect, e.g. 'Using mnemonic techniques will not improve memory.'
  13. Experimental design:
      - Independent groups: using different participants for each condition of the experiment.
      - Matched pairs: matching each participant with someone who is similar to them, and placing them in different conditions.
      - Repeated measures: exposing each participant to every condition, so the participants are (technically) their own controls.
  14. DV and IV:
      - Dependent variable: the variable that is assumed to be affected by the IV; changes in the DV are presumed to have been caused by the IV.
      - Independent variable: the variable that is manipulated by the experimenter and is presumed to affect the DV.
  15. Operationalising the variables: 'narrowing down the research focus' – working out the simplest possible IV and DV from a general statement such as 'Mnemonics improve memory'.
  16. Pilot study: a small-scale trial run of the actual experiment.
      - Allows the investigator to identify flaws in the experiment.
      - Tests for problems with the design of the experiment, the clarity of instructions for participants, and the measuring instruments.
      - Also allows the time scale of the actual experiment to be estimated.
  17. Extraneous variables: should be controlled (as far as possible) so that they do not affect the IV or the DV.
  18. Reliability & validity:
      - Reliability: test whether something is reliable by repeating it and seeing whether similar results are gathered.
      - Ecological validity: the extent to which the findings can be generalised beyond the research setting.
      - Population validity: the extent to which the findings can be generalised to other groups of people.
  19. Subjective & objective:
      - Subjective: 'based on or influenced by personal feelings, tastes, or opinions.'
      - Objective: based on measurable, scientific evidence, e.g. using blood samples would be considered objective.
  20. Ethical issues:
      - Informed consent (though sometimes presumptive consent is used, on the basis that the investigator believes consent would have been given).
      - Confidentiality.
      - Right to withdraw.
      - Deception.
      - Protection from harm.
      - Debriefing.
  21. Cost-benefit analysis: is the cost of the experiment worth the amount of data we would get?
  22. Types of sampling (see the sampling sketch after the slide list):
      - Random sampling: everyone in the population has an equal chance of participating, e.g. using a random number generator to select participants.
      - Opportunity sampling: using whoever is available, e.g. offering the investigation to everyone at a school; unlikely to generate a representative sample, so results can't be generalised beyond that group.
      - Volunteer sampling: people sign themselves up for the experiment (e.g. Milgram, 1963); unlikely to be generalisable, as people who volunteer tend to share a certain personality type.
  23. Demand characteristics:
      - Guessing what the experiment is going to measure and acting accordingly, which could distort results.
      - Acting out of character due to the surrounding environment.
      - Displaying social desirability bias.
      Investigator effects: when the investigator gets too involved in the experiment, causing a change in the results.
  24. Measures of central tendency and dispersion (see the Python sketch after the slide list):
      - Central tendency: mean (add up all the values and divide by how many there are); median (the middle value when the data are arranged numerically); mode (the most frequently occurring value).
      - Dispersion (shows the spread of the data): range (highest score minus lowest score); standard deviation.
  25. Points on the specification not covered in this PowerPoint:
      - How to reduce investigator effects.
      - How to present quantitative data in appropriate graphs.
      - Definitions of the types of reliability.
      - How to analyse and interpret correlational data.
      - The definition of content analysis and how to carry it out.
      - How to present qualitative data in a ticklist/table.
      - Pros and cons of the matched pairs, independent measures, and repeated measures designs.
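
To illustrate the correlation coefficient from slide 5, here is a minimal Python sketch that computes Pearson's r for a small made-up data set; the variable names and values are hypothetical and exist only for the example. A value near +1 indicates a strong positive correlation, a value near -1 a strong negative one, and, as the slide notes, neither tells us anything about cause and effect.

    from math import sqrt

    def pearson_r(xs, ys):
        # Pearson's r: covariance of x and y divided by the product of their spreads.
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        spread_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
        spread_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (spread_x * spread_y)

    # Hypothetical data: hours of revision and words recalled for six participants.
    revision_hours = [1, 2, 3, 4, 5, 6]
    words_recalled = [8, 11, 14, 13, 18, 20]
    print(round(pearson_r(revision_hours, words_recalled), 2))  # close to +1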
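
Slide 22's point that random sampling gives every member of the population an equal chance of being chosen can be reproduced with a random number generator, as the slide suggests. The sketch below is only an illustration; the participant list is hypothetical.

    import random

    # Hypothetical sampling frame: every member of the target population.
    population = [f"Participant {i}" for i in range(1, 101)]

    # Each member has an equal chance of selection. Opportunity and volunteer
    # samples are not drawn this way, which is why they are less likely to be
    # representative of the population.
    sample = random.sample(population, k=10)
    print(sample)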
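
The measures of central tendency and dispersion on slide 24 map directly onto Python's statistics module; the score list below is hypothetical and is only there to show how each measure is worked out.

    import statistics

    scores = [3, 5, 5, 6, 7, 8, 10]            # hypothetical data set

    mean = statistics.mean(scores)             # add up all the values, divide by how many there are
    median = statistics.median(scores)         # the middle value when arranged numerically
    mode = statistics.mode(scores)             # the most frequently occurring value
    data_range = max(scores) - min(scores)     # highest score minus lowest score
    sd = statistics.stdev(scores)              # standard deviation: spread of scores around the mean
    print(mean, median, mode, data_range, sd)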
