4 Jul 2022
Data categories are groupings of data with common characteristics or features. They are useful for managing data because certain data may be treated differently based on their classification. Understanding the relationships and dependencies between the different categories can help direct data quality efforts.

Krishna Krish Krish


- 2. Categorical Data Analysis. Textbook: "An Introduction to Categorical Data Analysis" by Alan Agresti
- 4. Scales of Measurement. Four scales of measurement: Nominal: no order (e.g. gender); Ordinal: ordered categories (e.g. income status); Interval: equal intervals with no true 0 (e.g. temperature); Ratio: equal intervals with a true 0 (e.g. height)
- 5. Categorical Data: Analysis Strategies. Hypothesis testing: Is there any association? Chi-square test, Fisher's exact test, etc. (Chapters 1, 2, 3). Modeling: What is the nature of the association? Logistic regression, log-linear models (Chapters 4, 5, 6, 7)
- 6. What is categorical data? The measurement scale for the response consists of a set of categories. Examples (variable: measurement scale) — Farm system: organic, non-organic; Education: good, average, poor; Food texture: very soft, soft, hard, very hard; Nutrition status: grade 1, 2, 3; KAP (public health): yes, no
- 7. Data analysis considered: The response variable(s) (dependent variable, or Y) is categorical; the explanatory variable(s) (independent, or X) may be categorical, continuous, or both. Example: does diabetes (categorical response) depend on the explanatory variables sex (categorical) and age (continuous)? Example: Y = diabetes (present/absent, or normal, mild, moderate, severe); X's = income, education, gender, age, sedentary lifestyle, heredity, etc.
- 8. Important Note. Methods designed for nominal variables give the same results no matter how the categories are listed. Methods for ordinal variables utilize the category ordering: whether we list the categories from low to high or from high to low is irrelevant to the substantive conclusions, but results would change if the categories were reordered in any other way. Methods designed for ordinal variables cannot be used with nominal variables. However, methods designed for nominal variables can be used with nominal or ordinal variables; if used with ordinal data, they discard the ordering information and can result in a serious loss of power. Quantity of information: nominal < ordinal < interval
- 9. Probability Distributions. For a continuous response variable – normal distribution. For a categorical response variable – binomial distribution or multinomial distribution
- 10. Binomial Distribution. n Bernoulli trials, each with two possible outcomes (success, failure); π = P(success), 1 − π = P(failure) for each trial; trials are independent; Y = number of successes out of n trials. Y has the binomial distribution: P(Y = y) = C(n, y) π^y (1 − π)^(n−y), y = 0, 1, 2, …, n
- 11. Example: Binomial Distribution. Vote (Democrat, Republican). Suppose π = P(Democrat) = 0.50. For n = 3 persons, let Y = number of Democratic votes. Then p(0) = 0.125, p(1) = 0.375, p(2) = 0.375, p(3) = 0.125
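The numbers on this slide can be verified with a short Python sketch of the binomial pmf from slide 10 (the helper name `binomial_pmf` is ours, not the slides'):

```python
from math import comb

def binomial_pmf(y, n, pi):
    """P(Y = y) for Y ~ Binomial(n, pi): C(n, y) * pi^y * (1 - pi)^(n - y)."""
    return comb(n, y) * pi**y * (1 - pi)**(n - y)

# Slide example: n = 3 voters, pi = P(Democrat) = 0.50
for y in range(4):
    print(y, binomial_pmf(y, 3, 0.5))  # 0.125, 0.375, 0.375, 0.125
```

The four probabilities sum to 1, as any pmf must.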
- 12. Multinomial Distribution. When each trial has more than two possible outcomes, the counts of outcomes in the various categories have a multinomial distribution. Let c denote the number of outcome categories; the binomial distribution is the special case with c = 2 categories.
- 13. Properties of the Multinomial Experiment. 1. The experiment consists of n identical trials. 2. There are k possible outcomes to each trial; these outcomes are called classes, categories, or cells. 3. The probabilities of the k outcomes, denoted by p1, p2, …, pk, remain the same from trial to trial, where p1 + p2 + … + pk = 1. 4. The trials are independent. 5. The random variables of interest are the cell counts n1, n2, …, nk: the numbers of observations that fall in each of the k classes.
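The five properties above pin down the multinomial pmf, which can be sketched directly from them in Python (the function name and the k = 3 example data are ours, chosen for illustration):

```python
from math import factorial, prod

def multinomial_pmf(counts, probs):
    """P(N1 = n1, ..., Nk = nk) for n = sum(counts) independent, identical
    trials with fixed cell probabilities probs (which must sum to 1)."""
    n = sum(counts)
    coef = factorial(n)              # multinomial coefficient n! / (n1! ... nk!)
    for c in counts:
        coef //= factorial(c)
    return coef * prod(p**c for p, c in zip(probs, counts))

# Hypothetical example: k = 3 cells, n = 4 trials
print(multinomial_pmf([2, 1, 1], [0.5, 0.3, 0.2]))  # ≈ 0.18
```

With k = 2 cells this reduces to the binomial pmf of slide 10, matching the remark that the binomial is the c = 2 special case.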
- 14. Statistical Inference for a Proportion. The parameters of binomial and multinomial distributions are estimated using the sample data. The method of estimation is maximum likelihood (ML) estimation. The likelihood function (denoted by l) is the probability of the observed data, expressed as a function of the parameter value.
- 15. Contd… Example: consider a binomial case with n = 2 and observed y = 1. The likelihood function is defined for π between 0 and 1. If π = 0, the probability of getting y = 1 is l(0) = 0; if π = 0.5, it is l(0.5) = 0.5
- 16. Maximum Likelihood. The maximum likelihood (ML) estimate is the parameter value at which the likelihood function takes its maximum. Example: l(π) = 2π(1 − π) is maximized at π̂ = 0.5, i.e., y = 1 in n = 2 trials is most likely if π = 0.5, so the ML estimate of π is π̂ = 0.50. In general, the ML estimate of π is p = y/n.
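The maximization on this slide can be checked numerically with a simple grid search over [0, 1] (a sketch for the slide's y = 1, n = 2 case; the names are ours):

```python
from math import comb

def likelihood(pi, y=1, n=2):
    """Binomial likelihood for the slide's case y = 1, n = 2: l(pi) = 2 pi (1 - pi)."""
    return comb(n, y) * pi**y * (1 - pi)**(n - y)

# Grid search over [0, 1]; the maximum should land at pi = y/n = 0.5
grid = [i / 1000 for i in range(1001)]
pi_hat = max(grid, key=likelihood)
print(pi_hat)  # 0.5
```

The grid maximum agrees with the general formula p = y/n = 1/2.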
- 17. Binomial likelihood functions for y = 0 successes and y = 6 successes in n = 10 trials. The result y = 6 in n = 10 trials is more likely to occur when π = 0.60 than when π equals any other value.
- 18. Significance Test for a Binomial Parameter. A significance test merely indicates whether a particular value for a parameter is plausible. The ML estimator of the binomial parameter π is the sample proportion p.
- 19. Confidence Intervals and Significance Tests. Three different methods to construct a CI and test statistic: the Wald method, the likelihood-ratio method, and the score method
- 20. Wald Test. Let β̂ be the ML estimator of a parameter β. The Wald test statistic for H0: β = β0 is z = (β̂ − β0) / SE(β̂), where SE is the standard error of the ML estimate. Under H0, z follows a standard normal distribution and z² follows a chi-squared distribution with df = 1. The z or chi-squared test using this test statistic is called a Wald test.
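For a binomial proportion the Wald statistic takes β = π and SE = √(p(1 − p)/n). A minimal sketch, using hypothetical data (60 successes in 100 trials) chosen by us for illustration:

```python
from math import sqrt

def wald_test(y, n, pi0):
    """Wald z statistic for H0: pi = pi0, with the SE evaluated at the
    ML estimate p = y/n (plugging in p, not pi0, is what makes it Wald)."""
    p = y / n
    se = sqrt(p * (1 - p) / n)
    return (p - pi0) / se

# Hypothetical data: 60 successes in 100 trials, testing H0: pi = 0.5
z = wald_test(60, 100, 0.5)
print(round(z, 3))  # 2.041
```

Here z² ≈ 4.17 exceeds the chi-squared(df = 1) critical value 3.84, so H0: π = 0.5 would be rejected at the 5% level.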
- 21. Likelihood Ratio Test. This alternative test uses the likelihood function through the ratio of two maximizations of it: 1. the maximum over the parameter values that satisfy the null hypothesis; 2. the maximum over the larger set of parameter values permitted when either the null or the alternative hypothesis may be true.
- 22. Contd.. Let l0 denote the maximized value of the likelihood function under the null hypothesis, and let l1 denote the maximized value more generally. For instance, when there is a single parameter β, l0 is the likelihood function calculated at β0, and l1 is the likelihood function calculated at the ML estimate β̂. Then l1 is always at least as large as l0, because l1 refers to maximizing over a larger set of possible parameter values. The likelihood-ratio test statistic is −2 log(l0/l1), which has an approximate chi-squared distribution in large samples.
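For the binomial proportion, −2 log(l0/l1) can be computed directly; a sketch using the same hypothetical data as the Wald example above (60 successes in 100 trials, a choice of ours, and it assumes 0 < y < n so the logs are defined):

```python
from math import comb, log

def lr_statistic(y, n, pi0):
    """Likelihood-ratio statistic -2 log(l0 / l1) for H0: pi = pi0 in the
    binomial model; l1 is the likelihood maximized at pi_hat = y/n."""
    def loglik(pi):
        return log(comb(n, y)) + y * log(pi) + (n - y) * log(1 - pi)
    return -2 * (loglik(pi0) - loglik(y / n))

# Hypothetical data: 60 successes in 100 trials, testing H0: pi = 0.5
print(round(lr_statistic(60, 100, 0.5), 3))  # 4.027
```

Note the statistic (≈ 4.03) is close to, but not identical to, the squared Wald statistic (≈ 4.17) for the same data, illustrating that the methods agree only asymptotically.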
- 23. Remarks. For ordinary regression models assuming a normal distribution for Y, the three tests provide identical results. In other cases, for large samples they behave similarly when H0 is true. The Wald CI often performs poorly in categorical data analysis unless n is quite large. For inference about proportions, the score method tends to perform better than the Wald method, in the sense of having actual error rates closer to the advertised levels. In practice, Wald inference is popular because of its simplicity and the ease of forming it from software output
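The Wald interval's poor performance is easiest to see at extreme proportions. A sketch comparing the Wald CI with the score (Wilson) CI, which is obtained by inverting the score test (the function names and the y = 0, n = 10 example are ours):

```python
from math import sqrt

def wald_ci(y, n, z=1.96):
    """95% Wald interval: p +/- z * sqrt(p(1-p)/n)."""
    p = y / n
    h = z * sqrt(p * (1 - p) / n)
    return (p - h, p + h)

def score_ci(y, n, z=1.96):
    """95% Wilson (score) interval: invert the score test instead of
    plugging the estimate p into the standard error."""
    p = y / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

# Extreme case: y = 0 successes in n = 10 trials
print(wald_ci(0, 10))   # (0.0, 0.0) -- degenerates to a single point
print(score_ci(0, 10))  # a sensible interval of positive width
```

With y = 0 the Wald interval collapses to the single point 0, while the score interval still has positive width, illustrating why the score method's actual error rates stay closer to the nominal level.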
- 24. Thank you