# 1. What are Type I and Type II errors in hypothesis testing? What would be examples of each?

2 Apr 2023

**Question 1.** What are Type I and Type II errors in hypothesis testing? What would be examples of each? Explain.

**Question 2.** What is the difference between statistical significance and practical significance? Why is a statistically significant difference not necessarily of practical importance to a business decision? Provide an example.

### Solution

1. Two kinds of errors can be made in significance testing: (1) a true null hypothesis can be incorrectly rejected, and (2) a false null hypothesis can fail to be rejected. The former is called a Type I error and the latter a Type II error. The two types of errors are summarized in the table below.

   | Decision | H₀ is true | H₀ is false |
   |---|---|---|
   | Reject H₀ | Type I error (false positive) | Correct decision |
   | Fail to reject H₀ | Correct decision | Type II error (false negative) |

   The Alpha-Fetoprotein (AFP) test carries both Type I and Type II error possibilities. The test screens the mother's blood during pregnancy for AFP to assess risk; abnormally high or low levels may indicate Down syndrome.

   - H₀: the patient is healthy. Hₐ: the patient is unhealthy.
   - Type I error (false positive): the test wrongly indicates Down syndrome, and the pregnancy may be terminated for no reason.
   - Type II error (false negative): the test is negative, yet the child is born with multiple anomalies.

2. Statistical significance means the observed mean differences are unlikely to be due to sampling error; practical significance asks whether the difference is large enough to be of value in a practical sense. One not especially flattering reason the distinction gets lost: perhaps 80% of business people have no idea what statistical significance means, and 80% of the rest think they do but wildly misinterpret it. Many are simply dismissive of what they see as "geek science," and the geeks speaking geekspeak don't help. (I'm reminded of the saying that "a million economists laid end to end would never reach a conclusion.")
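As a quick illustration of the Type I error definition above (a sketch I am adding, not part of the original solution): when the null hypothesis really is true, a two-sided test run at the 5% significance level rejects it, i.e. produces a false positive, about 5% of the time.

```python
# Sketch: simulating the Type I error rate of a z-test when H0 is TRUE.
# (Illustrative only; the population parameters below are made up.)
import random
import statistics

random.seed(42)

def one_sample_z_reject(sample, mu0, sigma, crit=1.96):
    """Two-sided z-test: reject H0 (mean == mu0) when |z| > 1.96 (alpha ~ 0.05)."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > crit

# H0 is true: every sample really comes from Normal(mean=100, sd=15).
trials = 5000
false_positives = sum(
    one_sample_z_reject([random.gauss(100, 15) for _ in range(30)], 100, 15)
    for _ in range(trials)
)
print(f"Type I error rate: {false_positives / trials:.3f}")  # typically near 0.05
```

The rejection rate hovers around the chosen significance level, which is exactly what "Type I error probability = α" means.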
But the real reason is that business people have to make decisions in the real (non-normally-distributed) world with less-than-perfect information, whereas statistics folks tend to stick to the extremes of "very probable" or "can't really say." For example, if we have to decide whether to make blue products or pink products, it may be difficult to say at a small significance level which one the public really prefers based on sampling. "Within the margin of error" doesn't help make decisions. We need some idea of which is more likely, even if it isn't gospel.
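To make the statistical-vs-practical distinction concrete (an assumed example; the survey numbers below are invented, not from the original answer): with a large enough sample, even a negligible preference gap becomes "statistically significant."

```python
# Sketch: a tiny effect in a huge sample is statistically significant
# but arguably not practically significant.
from math import erfc, sqrt

n = 1_000_000        # (assumed) number of shoppers surveyed
p_hat = 0.505        # (assumed) observed share preferring blue over pink
p0 = 0.5             # null hypothesis: no preference

# One-sample z-test for a proportion.
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# Two-sided p-value: 2 * P(Z > |z|) = erfc(|z| / sqrt(2)).
p_value = erfc(abs(z) / sqrt(2))

print(f"z = {z:.1f}, p-value = {p_value:.1e}")
# z is about 10, so the p-value is astronomically small -- yet the
# underlying gap is only 0.5 percentage points, which may be far too
# small to justify retooling a product line.
```

The test screams "significant," but the business question is whether a half-point preference gap is worth acting on, and that is a judgment the p-value cannot make.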