© 2008 Prentice-Hall, Inc.
Regression Models
© 2009 Prentice-Hall, Inc. 4 – 2
Introduction
 Regression analysis is a very valuable
tool for data analysts
 Regression can be used to
 Understand the relationship between
variables
 Predict the value of one variable based on
another variable
 Examples
 Determining best location for a new store
 Studying the effectiveness of advertising
dollars in increasing sales volume
© 2009 Prentice-Hall, Inc. 4 – 3
Introduction
 Regression analysis is a form of predictive
modelling technique that investigates the relationship between a dependent (target) variable and one or more independent (predictor) variables. This technique is used for forecasting, time series modelling, and finding causal relationships between the variables.
© 2009 Prentice-Hall, Inc. 4 – 4
Introduction
 The variable to be predicted is called the
dependent variable
 Sometimes called the response variable
 The value of this variable depends on
the value of the independent variable
 Sometimes called the explanatory or
predictor variable
Dependent variable = Independent variable + Independent variable
© 2009 Prentice-Hall, Inc. 4 – 5
Introduction
 Regression models are two types :
 Simple regression model and
 multiple regression model.
 Both are divided into linear and nonlinear
models.
© 2009 Prentice-Hall, Inc. 4 – 6
Scatter Diagram
 Graphing is a helpful way to investigate
the relationship between variables
 A scatter diagram or scatter plot is
often used
 The independent variable is normally
plotted on the X axis
 The dependent variable is normally
plotted on the Y axis
© 2009 Prentice-Hall, Inc. 4 – 7
Triple A Construction
 Triple A Construction renovates old homes
 They have found that the dollar volume of
renovation work is dependent on the area
payroll
SALES ($100,000s)   LOCAL PAYROLL ($100,000,000s)
6                   3
8                   4
9                   6
5                   4
4.5                 2
9.5                 5
Table 4.1
© 2009 Prentice-Hall, Inc. 4 – 8
Triple A Construction
Figure 4.1: Scatter diagram of sales ($100,000s) against payroll ($100 millions) for the Triple A Construction data.
© 2009 Prentice-Hall, Inc. 4 – 9
Triple A Construction
Figure 4.1 (annotated): the same scatter diagram with the fitted line, showing the mean Ȳ, a predicted value Ŷ, and the deviations Ŷ – Ȳ and Y – Ŷ for one point.
© 2009 Prentice-Hall, Inc. 4 – 10
Simple Linear Regression
Regression models are used to test if there is a relationship between variables (predict sales based on payroll)
There is some random error that cannot be predicted
Y = β0 + β1X + e
where
Y = dependent variable (response)
X = independent variable (predictor or explanatory)
β0 = intercept (value of Y when X = 0)
β1 = slope of the regression line
e = random error
© 2009 Prentice-Hall, Inc. 4 – 11
Simple Linear Regression
 True values for the slope and intercept are not
known so they are estimated using sample data
Ŷ = b0 + b1X
where
Ŷ = predicted value of the dependent variable (response)
X = independent variable (predictor or explanatory)
b0 = intercept (value of Ŷ when X = 0)
b1 = slope of the regression line
© 2009 Prentice-Hall, Inc. 4 – 12
Triple A Construction
 Triple A Construction is trying to predict sales
based on area payroll
Y = Sales
X = Area payroll
 The line chosen in Figure 4.1 is the one that
minimizes the errors
Error = (Actual value) – (Predicted value)
e = Y – Ŷ
© 2009 Prentice-Hall, Inc. 4 – 13
Least Squares Regression
Errors can be positive or negative so the average error could
be zero even though individual errors could be large.
Least squares regression minimizes the sum of the squared
errors.
Payroll Line Fit Plot: sales ($100,000s) plotted against payroll ($100,000,000s) with the least squares line.
© 2009 Prentice-Hall, Inc. 4 – 14
Triple A Construction
 For the simple linear regression model, the
values of the intercept and slope can be
calculated using the formulas below
Ŷ = b0 + b1X
X̄ = ΣX / n = average (mean) of X values
Ȳ = ΣY / n = average (mean) of Y values
b1 = Σ(X – X̄)(Y – Ȳ) / Σ(X – X̄)²
b0 = Ȳ – b1X̄
© 2009 Prentice-Hall, Inc. 4 – 15
Triple A Construction
Y     X    (X – X̄)²        (X – X̄)(Y – Ȳ)
6     3    (3 – 4)² = 1     (3 – 4)(6 – 7) = 1
8     4    (4 – 4)² = 0     (4 – 4)(8 – 7) = 0
9     6    (6 – 4)² = 4     (6 – 4)(9 – 7) = 4
5     4    (4 – 4)² = 0     (4 – 4)(5 – 7) = 0
4.5   2    (2 – 4)² = 4     (2 – 4)(4.5 – 7) = 5
9.5   5    (5 – 4)² = 1     (5 – 4)(9.5 – 7) = 2.5
ΣY = 42          ΣX = 24          Σ(X – X̄)² = 10     Σ(X – X̄)(Y – Ȳ) = 12.5
Ȳ = 42/6 = 7     X̄ = 24/6 = 4
Table 4.2
 Regression calculations
© 2009 Prentice-Hall, Inc. 4 – 16
Triple A Construction
Regression calculations
X̄ = ΣX/n = 24/6 = 4
Ȳ = ΣY/n = 42/6 = 7
b1 = Σ(X – X̄)(Y – Ȳ) / Σ(X – X̄)² = 12.5/10 = 1.25
b0 = Ȳ – b1X̄ = 7 – (1.25)(4) = 2
Therefore Ŷ = 2 + 1.25X
© 2009 Prentice-Hall, Inc. 4 – 17
Triple A Construction
Regression calculations
X̄ = ΣX/n = 24/6 = 4
Ȳ = ΣY/n = 42/6 = 7
b1 = Σ(X – X̄)(Y – Ȳ) / Σ(X – X̄)² = 12.5/10 = 1.25
b0 = Ȳ – b1X̄ = 7 – (1.25)(4) = 2
Therefore Ŷ = 2 + 1.25X
sales = 2 + 1.25(payroll)
If the payroll next year is $600 million:
Ŷ = 2 + 1.25(6) = 9.5, or sales of $950,000
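As an illustrative sketch (not part of the original slides), the same least squares calculation can be reproduced in a few lines of Python with NumPy; the array names payroll and sales are chosen here for clarity.

```python
import numpy as np

# Triple A Construction data (Table 4.1)
payroll = np.array([3, 4, 6, 4, 2, 5], dtype=float)   # X, in $100 millions
sales   = np.array([6, 8, 9, 5, 4.5, 9.5])            # Y, in $100,000s

x_bar, y_bar = payroll.mean(), sales.mean()            # 4 and 7
b1 = ((payroll - x_bar) * (sales - y_bar)).sum() / ((payroll - x_bar) ** 2).sum()
b0 = y_bar - b1 * x_bar

print(b0, b1)          # 2.0 and 1.25
print(b0 + b1 * 6)     # predicted sales for a $600 million payroll: 9.5
```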
© 2009 Prentice-Hall, Inc. 4 – 18
Measuring the Fit
of the Regression Model
 Regression models can be developed
for any variables X and Y
 How do we know the model is actually
helpful in predicting Y based on X?
 We could just take the average error, but
the positive and negative errors would
cancel each other out
 Three measures of variability are
 SST – Total variability about the mean
 SSE – Variability about the regression line
 SSR – Total variability that is explained by
the model
© 2009 Prentice-Hall, Inc. 4 – 19
Measuring the Fit
of the Regression Model
 Sum of the squares total
SST = Σ(Y – Ȳ)²
Sum of the squared error
SSE = Σe² = Σ(Y – Ŷ)²
Sum of squares due to regression
SSR = Σ(Ŷ – Ȳ)²
An important relationship
SST = SSR + SSE
© 2009 Prentice-Hall, Inc. 4 – 20
Measuring the Fit
of the Regression Model
Y     X    (Y – Ȳ)²             Ŷ                     (Y – Ŷ)²    (Ŷ – Ȳ)²
6     3    (6 – 7)² = 1         2 + 1.25(3) = 5.75    0.0625      1.563
8     4    (8 – 7)² = 1         2 + 1.25(4) = 7.00    1           0
9     6    (9 – 7)² = 4         2 + 1.25(6) = 9.50    0.25        6.25
5     4    (5 – 7)² = 4         2 + 1.25(4) = 7.00    4           0
4.5   2    (4.5 – 7)² = 6.25    2 + 1.25(2) = 4.50    0           6.25
9.5   5    (9.5 – 7)² = 6.25    2 + 1.25(5) = 8.25    1.5625      1.563
Ȳ = 7      Σ(Y – Ȳ)² = 22.5                           Σ(Y – Ŷ)² = 6.875    Σ(Ŷ – Ȳ)² = 15.625
           SST = 22.5                                 SSE = 6.875          SSR = 15.625
Table 4.3
© 2009 Prentice-Hall, Inc. 4 – 21
 Sum of the squares total
SST = Σ(Y – Ȳ)²
Sum of the squared error
SSE = Σe² = Σ(Y – Ŷ)²
Sum of squares due to regression
SSR = Σ(Ŷ – Ȳ)²
An important relationship: SST = SSR + SSE
SSR – explained variability
SSE – unexplained variability
Measuring the Fit of the Regression Model
For Triple A Construction
SST = 22.5
SSE = 6.875
SSR = 15.625
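A short sketch (again an assumption, not taken from the slides) showing how these three sums of squares are computed and that they satisfy SST = SSR + SSE for the Triple A data.

```python
import numpy as np

payroll = np.array([3, 4, 6, 4, 2, 5], dtype=float)
sales   = np.array([6, 8, 9, 5, 4.5, 9.5])

y_hat = 2 + 1.25 * payroll            # fitted values from the regression line
y_bar = sales.mean()

sst = ((sales - y_bar) ** 2).sum()    # total variability about the mean: 22.5
sse = ((sales - y_hat) ** 2).sum()    # variability about the regression line: 6.875
ssr = ((y_hat - y_bar) ** 2).sum()    # variability explained by the model: 15.625

print(sst, sse, ssr, np.isclose(sst, ssr + sse))
```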
© 2009 Prentice-Hall, Inc. 4 – 22
Measuring the Fit
of the Regression Model
Figure 4.2: Deviations about the regression line Ŷ = 2 + 1.25X on the sales vs. payroll scatter diagram, showing Y – Ȳ (total deviation), Ŷ – Ȳ (explained), and Y – Ŷ (unexplained).
© 2009 Prentice-Hall, Inc. 4 – 23
Coefficient of Determination
 The proportion of the variability in Y explained by
the regression equation is called the coefficient of
determination
 The coefficient of determination is r2
r² = SSR/SST = 1 – SSE/SST
For Triple A Construction
r² = 15.625/22.5 = 0.6944
 About 69% of the variability in Y is explained by
the equation based on payroll (X)
© 2009 Prentice-Hall, Inc. 4 – 24
Correlation Coefficient
 The correlation coefficient is an expression of the
strength of the linear relationship
 It will always be between +1 and –1
 The correlation coefficient is r
r = ±√r²
For Triple A Construction
r = √0.6944 = 0.8333
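A minimal sketch (assuming the sums of squares computed above) of r² and r; the sign of r is taken from the slope b1.

```python
import math

sst, sse, ssr = 22.5, 6.875, 15.625           # Triple A Construction
b1 = 1.25                                     # slope of the fitted line

r_squared = ssr / sst                         # ≈ 0.6944
r = math.copysign(math.sqrt(r_squared), b1)   # ≈ +0.8333

print(round(r_squared, 4), round(r, 4))
```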
© 2009 Prentice-Hall, Inc. 4 – 25
Correlation Coefficient
Figure 4.3: Four scatter plots of Y against X illustrating correlation:
(a) Perfect positive correlation: r = +1
(b) Positive correlation: 0 < r < 1
(c) No correlation: r = 0
(d) Perfect negative correlation: r = –1
© 2009 Prentice-Hall, Inc. 4 – 26
Using Computer Software
for Regression
Program 4.1A
© 2009 Prentice-Hall, Inc. 4 – 27
Using Computer Software
for Regression
Program 4.1B
© 2009 Prentice-Hall, Inc. 4 – 28
Using Computer Software
for Regression
Program 4.1C
© 2009 Prentice-Hall, Inc. 4 – 29
Using Computer Software
for Regression
Program 4.1D
© 2009 Prentice-Hall, Inc. 4 – 30
Using Computer Software
for Regression
Program 4.1D
Correlation coefficient is
called Multiple R in Excel
© 2009 Prentice-Hall, Inc. 4 – 31
Assumptions of the Regression Model
1. Errors are independent
2. Errors are normally distributed
3. Errors have a mean of zero
4. Errors have a constant variance
 If we make certain assumptions about the errors
in a regression model, we can perform statistical
tests to determine if the model is useful
 A plot of the residuals (errors) will often highlight
any glaring violations of the assumptions
© 2009 Prentice-Hall, Inc. 4 – 32
Residual Plots
 A random plot of residuals
Figure 4.4A: plot of the errors against X (random pattern)
© 2009 Prentice-Hall, Inc. 4 – 33
Residual Plots
 Nonconstant error variance
Errors increase as X increases, violating the
constant variance assumption
Figure 4.4B: plot of the errors against X (spread grows as X increases)
© 2009 Prentice-Hall, Inc. 4 – 34
Residual Plots
 Nonlinear relationship
Errors consistently increasing and then consistently
decreasing indicate that the model is not linear
Figure 4.4C: plot of the errors against X (errors rise and then fall)
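As an assumed illustration (not in the original deck), the Triple A residuals can be plotted against X with matplotlib to look for the patterns described in Figures 4.4A–4.4C.

```python
import numpy as np
import matplotlib.pyplot as plt

payroll = np.array([3, 4, 6, 4, 2, 5], dtype=float)
sales   = np.array([6, 8, 9, 5, 4.5, 9.5])

residuals = sales - (2 + 1.25 * payroll)   # e = Y - Y_hat

plt.scatter(payroll, residuals)
plt.axhline(0, linestyle="--", color="gray")
plt.xlabel("Payroll ($100 millions)")
plt.ylabel("Residual (error)")
plt.title("Residuals versus X")
plt.show()
```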
© 2009 Prentice-Hall, Inc. 4 – 35
Estimating the Variance
 Errors are assumed to have a constant
variance (σ²), but we usually don't know this
It can be estimated using the mean squared error (MSE), s²
s² = MSE = SSE / (n – k – 1)
where
n = number of observations in the sample
k = number of independent variables
© 2009 Prentice-Hall, Inc. 4 – 36
Estimating the Variance
 For Triple A Construction
s² = MSE = SSE/(n – k – 1) = 6.8750/(6 – 1 – 1) = 6.8750/4 = 1.7188
We can estimate the standard deviation, s
This is also called the standard error of the estimate or the standard deviation of the regression
s = √MSE = √1.7188 = 1.31
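A small sketch (using the Triple A numbers above) of the MSE and standard error calculation.

```python
import math

sse, n, k = 6.875, 6, 1      # Triple A Construction

mse = sse / (n - k - 1)      # estimate of the error variance, s^2 ≈ 1.7188
s = math.sqrt(mse)           # standard error of the estimate ≈ 1.31

print(round(mse, 4), round(s, 2))
```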
© 2009 Prentice-Hall, Inc. 4 – 37
Testing the Model for Significance
 When the sample size is too small, you
can get good values for MSE and r2 even if
there is no relationship between the
variables
 Testing the model for significance helps
determine if the values are meaningful
 We do this by performing a statistical
hypothesis test
© 2009 Prentice-Hall, Inc. 4 – 38
Testing the Model for Significance
 We start with the general linear model
Y = β0 + β1X + e
If β1 = 0, the null hypothesis is that there is
no relationship between X and Y
The alternate hypothesis is that there is a
linear relationship (β1 ≠ 0)
 If the null hypothesis can be rejected, we
have proven there is a relationship
 We use the F statistic for this test
© 2009 Prentice-Hall, Inc. 4 – 39
Testing the Model for Significance
 The F statistic is based on the MSE and MSR
MSR = SSR / k
where
k = number of independent variables in the model
The F statistic is
F = MSR / MSE
 This describes an F distribution with
degrees of freedom for the numerator = df1 = k
degrees of freedom for the denominator = df2 = n – k – 1
© 2009 Prentice-Hall, Inc. 4 – 40
Testing the Model for Significance
 If there is very little error, the MSE would
be small and the F-statistic would be large
indicating the model is useful
 If the F-statistic is large, the significance
level (p-value) will be low, indicating it is
unlikely this would have occurred by
chance
 So when the F-value is large, we can reject
the null hypothesis and accept that there is
a linear relationship between X and Y and
the values of the MSE and r2 are
meaningful
© 2009 Prentice-Hall, Inc. 4 – 41
Steps in a Hypothesis Test
1. Specify null and alternative hypotheses
   H0: β1 = 0
   H1: β1 ≠ 0
2. Select the level of significance (α). Common values are 0.01 and 0.05
3. Calculate the value of the test statistic using the formula
   F = MSR / MSE
© 2009 Prentice-Hall, Inc. 4 – 42
Steps in a Hypothesis Test
4. Make a decision using one of the following methods
a) Reject the null hypothesis if the test statistic is greater than the F-value from the table in Appendix D. Otherwise, do not reject the null hypothesis:
   Reject if Fcalculated > Fα, df1, df2
   df1 = k
   df2 = n – k – 1
b) Reject the null hypothesis if the observed significance level, or p-value, is less than the level of significance (α). Otherwise, do not reject the null hypothesis:
   p-value = P(F > calculated test statistic)
   Reject if p-value < α
© 2009 Prentice-Hall, Inc. 4 – 43
Triple A Construction
Step 1.
H0: β1 = 0 (no linear relationship between X and Y)
H1: β1 ≠ 0 (linear relationship exists between X and Y)
Step 2.
Select α = 0.05
Step 3.
Calculate the value of the test statistic
MSR = SSR/k = 15.625/1 = 15.625
F = MSR/MSE = 15.625/1.7188 = 9.09
© 2009 Prentice-Hall, Inc. 4 – 44
Triple A Construction
Step 4.
Reject the null hypothesis if the test statistic
is greater than the F-value in Appendix D
df1 = k = 1
df2 = n – k – 1 = 6 – 1 – 1 = 4
The value of F associated with a 5% level of
significance and with degrees of freedom 1
and 4 is found in Appendix D
F0.05,1,4 = 7.71
Fcalculated = 9.09
Reject H0 because 9.09 > 7.71
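The same test can be checked numerically; a sketch assuming SciPy is available (stats.f.ppf gives the critical value, stats.f.sf the p-value).

```python
from scipy import stats

ssr, sse, n, k = 15.625, 6.875, 6, 1     # Triple A Construction

msr = ssr / k
mse = sse / (n - k - 1)
f_calculated = msr / mse                              # ≈ 9.09

f_critical = stats.f.ppf(1 - 0.05, k, n - k - 1)      # F(0.05, 1, 4) ≈ 7.71
p_value = stats.f.sf(f_calculated, k, n - k - 1)      # ≈ 0.039

print(round(f_calculated, 2), round(f_critical, 2), round(p_value, 4))
```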
© 2009 Prentice-Hall, Inc. 4 – 45
Triple A Construction
Figure 4.5: F distribution with df1 = 1 and df2 = 4; the α = 0.05 critical value is F = 7.71 and the calculated statistic 9.09 lies in the rejection region.
 We can conclude there is a
statistically significant
relationship between X and Y
 The r2 value of 0.69 means
about 69% of the variability
in sales (Y) is explained by
local payroll (X)
© 2009 Prentice-Hall, Inc. 4 – 46
r2 coefficient of determination
 The F-test determines whether or not there
is a relationship between the variables.
 r2 (coefficient of determination) is the best
measure of the strength of the prediction
relationship between the X and Y variables.
• Values closer to 1 indicate a strong prediction
relationship.
• Good regression models have a low
significance level for the F-test and high r2
value.
© 2009 Prentice-Hall, Inc. 4 – 47
Coefficient Hypotheses
 Statistical tests of significance can be performed
on the coefficients.
 The null hypothesis is that the coefficient of X (i.e.,
the slope of the line) is 0, i.e., that X is not useful in predicting Y
P-values are the observed significance levels and can be used to test the null hypothesis
Values less than 5% lead to rejecting the null hypothesis and indicate that X is useful in predicting Y
 For a simple linear regression, the test of the
regression coefficients gives the same information
as the F-test.
© 2009 Prentice-Hall, Inc. 4 – 48
Analysis of Variance (ANOVA) Table
 When software is used to develop a regression
model, an ANOVA table is typically created that
shows the observed significance level (p-value)
for the calculated F value
 This can be compared to the level of significance
() to make a decision
            DF           SS    MS                      F         SIGNIFICANCE
Regression  k            SSR   MSR = SSR/k             MSR/MSE   P(F > MSR/MSE)
Residual    n – k – 1    SSE   MSE = SSE/(n – k – 1)
Total       n – 1        SST
Table 4.4
© 2009 Prentice-Hall, Inc. 4 – 49
ANOVA for Triple A Construction
 Because this probability is less than 0.05, we
reject the null hypothesis of no linear relationship
and conclude there is a linear relationship
between X and Y
Program 4.1D
(partial)
P(F > 9.0909) = 0.0394
© 2009 Prentice-Hall, Inc. 4 – 50
Multiple Regression Analysis
 Multiple regression models are
extensions to the simple linear model
and allow the creation of models with
several independent variables
Y = 0 + 1X1 + 2X2 + … + kXk + e
where
Y = dependent variable (response variable)
Xi = ith independent variable (predictor or explanatory
variable)
0 = intercept (value of Y when all Xi = 0)
I = coefficient of the ith independent variable
k = number of independent variables
e = random error
© 2009 Prentice-Hall, Inc. 4 – 51
Multiple Regression Analysis
 To estimate these values, a sample is taken
and the following equation is developed
Ŷ = b0 + b1X1 + b2X2 + … + bkXk
where
Ŷ = predicted value of Y
b0 = sample intercept (and is an estimate of β0)
bi = sample coefficient of the ith variable (and is an estimate of βi)
© 2009 Prentice-Hall, Inc. 4 – 52
Jenny Wilson Realty
 Jenny Wilson wants to develop a model to
determine the suggested listing price for houses
based on the size and age of the house
Ŷ = b0 + b1X1 + b2X2 + … + bkXk
where
Ŷ = predicted value of the dependent variable (selling price)
b0 = Y intercept
X1 and X2 = value of the two independent variables (square footage and age) respectively
b1 and b2 = slopes for X1 and X2 respectively
 She selects a sample of houses that have sold
recently and records the data shown in Table 4.5
© 2009 Prentice-Hall, Inc. 4 – 53
Jenny Wilson Realty
SELLING PRICE ($)   SQUARE FOOTAGE   AGE   CONDITION
95,000              1,926            30    Good
119,000             2,069            40    Excellent
124,800             1,720            30    Excellent
135,000             1,396            15    Good
142,000             1,706            32    Mint
145,000             1,847            38    Mint
159,000             1,950            27    Mint
165,000             2,323            30    Excellent
182,000             2,285            26    Mint
183,000             3,752            35    Good
200,000             2,300            18    Good
211,000             2,525            17    Good
215,000             3,800            40    Excellent
219,000             1,740            12    Mint
Table 4.5
© 2009 Prentice-Hall, Inc. 4 – 54
Jenny Wilson Realty
Program 4.2
Ŷ = 146631 + 44X1 – 2899X2
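An illustrative sketch (an assumption, not the textbook's Program 4.2) of fitting this two-variable model to the Table 4.5 data with NumPy's least squares solver.

```python
import numpy as np

# Jenny Wilson Realty data (Table 4.5)
price = np.array([95000, 119000, 124800, 135000, 142000, 145000, 159000,
                  165000, 182000, 183000, 200000, 211000, 215000, 219000], dtype=float)
sqft  = np.array([1926, 2069, 1720, 1396, 1706, 1847, 1950,
                  2323, 2285, 3752, 2300, 2525, 3800, 1740], dtype=float)
age   = np.array([30, 40, 30, 15, 32, 38, 27, 30, 26, 35, 18, 17, 40, 12], dtype=float)

# Design matrix: a column of ones for the intercept, then X1 (size) and X2 (age)
X = np.column_stack([np.ones(len(price)), sqft, age])
b0, b1, b2 = np.linalg.lstsq(X, price, rcond=None)[0]

print(b0, b1, b2)                    # intercept, size coefficient, age coefficient
print(b0 + b1 * 2000 + b2 * 20)      # predicted price for a 2,000 sq ft, 20-year-old house
```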
© 2009 Prentice-Hall, Inc. 4 – 55
Evaluating Multiple Regression Models
 Evaluation is similar to simple linear
regression models
 The p-value for the F-test and r2 are
interpreted the same
 The hypothesis is different because there
is more than one independent variable
 The F-test is investigating whether all
the coefficients are equal to 0
© 2009 Prentice-Hall, Inc. 4 – 56
Evaluating Multiple Regression Models
 To determine which independent
variables are significant, tests are
performed for each variable
H0: βi = 0
H1: βi ≠ 0
The test statistic is calculated and if the p-value is lower than the level of significance (α), the null hypothesis is rejected
© 2009 Prentice-Hall, Inc. 4 – 57
Jenny Wilson Realty
 The model is statistically significant
 The p-value for the F-test is 0.002
 r2 = 0.6719 so the model explains about 67% of
the variation in selling price (Y)
 But the F-test is for the entire model and we can’t
tell if one or both of the independent variables are
significant
 By calculating the p-value of each variable, we can
assess the significance of the individual variables
 Since the p-value for X1 (square footage) and X2
(age) are both less than the significance level of
0.05, both null hypotheses can be rejected
© 2009 Prentice-Hall, Inc. 4 – 58
Binary or Dummy Variables
 Binary (or dummy or indicator) variables
are special variables created for
qualitative data
 A dummy variable is assigned a value of
1 if a particular condition is met and a
value of 0 otherwise
 The number of dummy variables must
equal one less than the number of
categories of the qualitative variable
© 2009 Prentice-Hall, Inc. 4 – 59
Jenny Wilson Realty
 Jenny believes a better model can be developed if
she includes information about the condition of
the property
X3 = 1 if house is in excellent condition
= 0 otherwise
X4 = 1 if house is in mint condition
= 0 otherwise
 Two dummy variables are used to describe the
three categories of condition
 No variable is needed for “good” condition since
if both X3 and X4 = 0, the house must be in good
condition
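A sketch (an assumed encoding, following the X3/X4 definitions above) of building the two condition dummies and refitting the Jenny Wilson model.

```python
import numpy as np

# Jenny Wilson Realty data (Table 4.5), including the qualitative condition column
price = np.array([95000, 119000, 124800, 135000, 142000, 145000, 159000,
                  165000, 182000, 183000, 200000, 211000, 215000, 219000], dtype=float)
sqft  = np.array([1926, 2069, 1720, 1396, 1706, 1847, 1950,
                  2323, 2285, 3752, 2300, 2525, 3800, 1740], dtype=float)
age   = np.array([30, 40, 30, 15, 32, 38, 27, 30, 26, 35, 18, 17, 40, 12], dtype=float)
condition = ["Good", "Excellent", "Excellent", "Good", "Mint", "Mint", "Mint",
             "Excellent", "Mint", "Good", "Good", "Good", "Excellent", "Mint"]

# Two dummies for three categories; "Good" is the baseline (X3 = X4 = 0)
x3 = np.array([1.0 if c == "Excellent" else 0.0 for c in condition])
x4 = np.array([1.0 if c == "Mint" else 0.0 for c in condition])

X = np.column_stack([np.ones(len(price)), sqft, age, x3, x4])
coefficients = np.linalg.lstsq(X, price, rcond=None)[0]
print(coefficients)   # b0, then coefficients for size, age, excellent, mint
```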
© 2009 Prentice-Hall, Inc. 4 – 60
Jenny Wilson Realty
Program 4.3
© 2009 Prentice-Hall, Inc. 4 – 61
Jenny Wilson Realty
Program 4.3
Model explains about
90% of the variation
in selling price
F-value
indicates
significance
Low p-values
indicate each
variable is
significant
Ŷ = 121,658 + 56.43X1 – 3,962X2 + 33,162X3 + 47,369X4
© 2009 Prentice-Hall, Inc. 4 – 62
Model Building
 The best model is a statistically significant
model with a high r2 and few variables
As more variables are added to the model, the
r2-value usually increases
For this reason, the adjusted r2 value is often
used to determine the usefulness of an
additional variable
The adjusted r2 takes into account the number
of independent variables in the model
When variables are added to the model, the
value of r2 can never decrease; however, the
adjusted r2 may decrease.
© 2009 Prentice-Hall, Inc. 4 – 63
Model Building
The formula for r²
r² = SSR/SST = 1 – SSE/SST
The formula for adjusted r²
Adjusted r² = 1 – [SSE/(n – k – 1)] / [SST/(n – 1)]
 As the number of variables increases, the
adjusted r2 gets smaller unless the increase due
to the new variable is large enough to offset the
change in k
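A minimal sketch of the adjusted r² formula as a reusable function, with the Triple A values used only as an illustration.

```python
def adjusted_r_squared(sse: float, sst: float, n: int, k: int) -> float:
    """Adjusted r^2 = 1 - [SSE / (n - k - 1)] / [SST / (n - 1)]."""
    return 1 - (sse / (n - k - 1)) / (sst / (n - 1))

# Example: Triple A Construction (SSE = 6.875, SST = 22.5, n = 6, k = 1) gives ≈ 0.618,
# a little below the unadjusted r^2 of 0.6944 because a degree of freedom is spent on X
print(round(adjusted_r_squared(6.875, 22.5, 6, 1), 3))
```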
© 2009 Prentice-Hall, Inc. 4 – 64
Model Building
 It is tempting to keep adding variables to a model
to try to increase r2
 The adjusted r2 will decrease if additional
independent variables are not beneficial.
 As the number of variables (k) increases, n-k-1
decreases.
 This causes SSE/(n-k-1) to increase which in turn
decreases the adjusted r2 unless the extra variable
causes a significant decrease in SSE
 The reduction in error (and SSE) must be sufficient
to offset the change in k
© 2009 Prentice-Hall, Inc. 4 – 65
Model Building
 In general, if a new variable increases the adjusted
r2, it should probably be included in the model
 In some cases, variables contain duplicate
information
 When two independent variables are correlated,
they are said to be collinear (e.g., monthly salary
expenses and annual salary expenses)
 When more than two independent variables are
correlated, multicollinearity exists
 When multicollinearity is present, hypothesis
tests for the individual coefficients are not valid
but the model may still be useful
© 2009 Prentice-Hall, Inc. 4 – 66
Nonlinear Regression
 In some situations, variables are not linear
 Transformations may be used to turn a
nonlinear model into a linear model
Two illustrative scatter plots: one showing a linear relationship between the variables, the other a nonlinear relationship.
© 2009 Prentice-Hall, Inc. 4 – 67
Colonel Motors
 The engineers want to use regression analysis to
improve fuel efficiency
 They have been asked to study the impact of
weight on miles per gallon (MPG)
MPG   WEIGHT (1,000 LBS.)   MPG   WEIGHT (1,000 LBS.)
12    4.58                  20    3.18
13    4.66                  23    2.68
15    4.02                  24    2.65
18    2.53                  33    1.70
19    3.09                  36    1.95
19    3.11                  42    1.92
Table 4.6
© 2009 Prentice-Hall, Inc. 4 – 68
Colonel Motors
Figure 4.6A: Scatter plot of MPG against weight (1,000 lb.) with the fitted linear model Ŷ = b0 + b1X1.
© 2009 Prentice-Hall, Inc. 4 – 69
Colonel Motors
Program 4.4
 A useful model with a small F-test for
significance and a good r2 value
© 2009 Prentice-Hall, Inc. 4 – 70
Colonel Motors
Figure 4.6B: Scatter plot of MPG against weight (1,000 lb.) with the fitted nonlinear (quadratic) model
MPG = b0 + b1(weight) + b2(weight)²
© 2009 Prentice-Hall, Inc. 4 – 71
Colonel Motors
 The nonlinear model is a quadratic model
 The easiest way to work with this model is to
develop a new variable
X2 = (weight)²
This gives us a model that can be solved with linear regression software
Ŷ = b0 + b1X1 + b2X2
© 2009 Prentice-Hall, Inc. 4 – 72
Colonel Motors
Program 4.5
 A better model with a smaller F-test for
significance and a larger adjusted r2 value
Ŷ = 79.8 – 30.2X1 + 3.4X2
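A sketch (assuming the Table 4.6 data and the X2 = weight² transformation described above) of fitting the quadratic model by ordinary least squares.

```python
import numpy as np

# Colonel Motors data (Table 4.6)
mpg    = np.array([12, 13, 15, 18, 19, 19, 20, 23, 24, 33, 36, 42], dtype=float)
weight = np.array([4.58, 4.66, 4.02, 2.53, 3.09, 3.11, 3.18, 2.68, 2.65, 1.70, 1.95, 1.92])

# X1 = weight, X2 = weight^2: the quadratic model is linear in b0, b1, b2
X = np.column_stack([np.ones_like(weight), weight, weight ** 2])
b0, b1, b2 = np.linalg.lstsq(X, mpg, rcond=None)[0]

print(round(b0, 1), round(b1, 1), round(b2, 1))   # roughly 79.8, -30.2, 3.4
```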
© 2009 Prentice-Hall, Inc. 4 – 73
Cautions and Pitfalls
 If the assumptions are not met, the statistical
test may not be valid
 Correlation does not necessarily mean
causation
 Your annual salary and the price of cars may be
correlated but one does not cause the other
 Multicollinearity makes interpreting
coefficients problematic, but the model may
still be good
 Using a regression model beyond the range of
X is questionable; the relationship may not
hold outside the sample data
© 2009 Prentice-Hall, Inc. 4 – 74
Cautions and Pitfalls
 t-tests for the intercept (b0) may be ignored
as this point is often outside the range of
the model
 A linear relationship may not be the best
relationship, even if the F-test returns an
acceptable value
 A nonlinear relationship can exist even if a
linear relationship does not
 Just because a relationship is statistically
significant doesn't mean it has any
practical value
Weitere ähnliche Inhalte

Was ist angesagt?

Bbs11 ppt ch13
Bbs11 ppt ch13Bbs11 ppt ch13
Bbs11 ppt ch13Tuul Tuul
 
Simple linear regression project
Simple linear regression projectSimple linear regression project
Simple linear regression projectJAPAN SHAH
 
Linear regression
Linear regression Linear regression
Linear regression Vani011
 
Simple Linier Regression
Simple Linier RegressionSimple Linier Regression
Simple Linier Regressiondessybudiyanti
 
Introduction to Regression Analysis
Introduction to Regression AnalysisIntroduction to Regression Analysis
Introduction to Regression AnalysisMinha Hwang
 
Linear Regression Analysis | Linear Regression in Python | Machine Learning A...
Linear Regression Analysis | Linear Regression in Python | Machine Learning A...Linear Regression Analysis | Linear Regression in Python | Machine Learning A...
Linear Regression Analysis | Linear Regression in Python | Machine Learning A...Simplilearn
 
Linear regression without tears
Linear regression without tearsLinear regression without tears
Linear regression without tearsAnkit Sharma
 
Simple lin regress_inference
Simple lin regress_inferenceSimple lin regress_inference
Simple lin regress_inferenceKemal İnciroğlu
 
Regression: A skin-deep dive
Regression: A skin-deep diveRegression: A skin-deep dive
Regression: A skin-deep diveabulyomon
 
Simple linear regression and correlation
Simple linear regression and correlationSimple linear regression and correlation
Simple linear regression and correlationShakeel Nouman
 
Diagnostic methods for Building the regression model
Diagnostic methods for Building the regression modelDiagnostic methods for Building the regression model
Diagnostic methods for Building the regression modelMehdi Shayegani
 
20 ms-me-amd-06 (simple linear regression)
20 ms-me-amd-06 (simple linear regression)20 ms-me-amd-06 (simple linear regression)
20 ms-me-amd-06 (simple linear regression)HassanShah124
 

Was ist angesagt? (20)

Bbs11 ppt ch13
Bbs11 ppt ch13Bbs11 ppt ch13
Bbs11 ppt ch13
 
Simple linear regression project
Simple linear regression projectSimple linear regression project
Simple linear regression project
 
Simple Regression
Simple RegressionSimple Regression
Simple Regression
 
Linear regression
Linear regressionLinear regression
Linear regression
 
Linear regression
Linear regression Linear regression
Linear regression
 
Simple Linier Regression
Simple Linier RegressionSimple Linier Regression
Simple Linier Regression
 
Linear regression
Linear regressionLinear regression
Linear regression
 
Introduction to Regression Analysis
Introduction to Regression AnalysisIntroduction to Regression Analysis
Introduction to Regression Analysis
 
Linear Regression Analysis | Linear Regression in Python | Machine Learning A...
Linear Regression Analysis | Linear Regression in Python | Machine Learning A...Linear Regression Analysis | Linear Regression in Python | Machine Learning A...
Linear Regression Analysis | Linear Regression in Python | Machine Learning A...
 
Malhotra17
Malhotra17Malhotra17
Malhotra17
 
Linear regression without tears
Linear regression without tearsLinear regression without tears
Linear regression without tears
 
Simple lin regress_inference
Simple lin regress_inferenceSimple lin regress_inference
Simple lin regress_inference
 
Regression: A skin-deep dive
Regression: A skin-deep diveRegression: A skin-deep dive
Regression: A skin-deep dive
 
Simple linear regression and correlation
Simple linear regression and correlationSimple linear regression and correlation
Simple linear regression and correlation
 
Regression Analysis
Regression AnalysisRegression Analysis
Regression Analysis
 
Chap11 simple regression
Chap11 simple regressionChap11 simple regression
Chap11 simple regression
 
Diagnostic methods for Building the regression model
Diagnostic methods for Building the regression modelDiagnostic methods for Building the regression model
Diagnostic methods for Building the regression model
 
Coefficient of correlation
Coefficient of correlationCoefficient of correlation
Coefficient of correlation
 
Regression Analysis
Regression AnalysisRegression Analysis
Regression Analysis
 
20 ms-me-amd-06 (simple linear regression)
20 ms-me-amd-06 (simple linear regression)20 ms-me-amd-06 (simple linear regression)
20 ms-me-amd-06 (simple linear regression)
 

Ähnlich wie 2.1 regression

Chapter 4 power point presentation Regression models
Chapter 4 power point presentation Regression modelsChapter 4 power point presentation Regression models
Chapter 4 power point presentation Regression modelsJustinXerri
 
© 2008 Prentice-Hall, Inc.Regression ModelsChapter 4.docx
© 2008 Prentice-Hall, Inc.Regression ModelsChapter 4.docx© 2008 Prentice-Hall, Inc.Regression ModelsChapter 4.docx
© 2008 Prentice-Hall, Inc.Regression ModelsChapter 4.docxLynellBull52
 
Bba 3274 qm week 6 part 1 regression models
Bba 3274 qm week 6 part 1 regression modelsBba 3274 qm week 6 part 1 regression models
Bba 3274 qm week 6 part 1 regression modelsStephen Ong
 
161783709 chapter-04-answers
161783709 chapter-04-answers161783709 chapter-04-answers
161783709 chapter-04-answersBookStoreLib
 
Quantitative Analysis for ManagementThirteenth EditionLesson.docx
Quantitative Analysis for ManagementThirteenth EditionLesson.docxQuantitative Analysis for ManagementThirteenth EditionLesson.docx
Quantitative Analysis for ManagementThirteenth EditionLesson.docxhildredzr1di
 
Multiple Regression.ppt
Multiple Regression.pptMultiple Regression.ppt
Multiple Regression.pptTanyaWadhwani4
 
Regression analysis
Regression analysisRegression analysis
Regression analysisSrikant001p
 
Quantitative Analysis for Management new ppt
Quantitative Analysis for Management new pptQuantitative Analysis for Management new ppt
Quantitative Analysis for Management new pptkrrish242
 
Regression analysis: Simple Linear Regression Multiple Linear Regression
Regression analysis:  Simple Linear Regression Multiple Linear RegressionRegression analysis:  Simple Linear Regression Multiple Linear Regression
Regression analysis: Simple Linear Regression Multiple Linear RegressionRavindra Nath Shukla
 
Business Quantitative - Lecture 2
Business Quantitative - Lecture 2Business Quantitative - Lecture 2
Business Quantitative - Lecture 2saark
 
Multiple Linear Regression.pptx
Multiple Linear Regression.pptxMultiple Linear Regression.pptx
Multiple Linear Regression.pptxBHUSHANKPATEL
 
Lecture - 8 MLR.pptx
Lecture - 8 MLR.pptxLecture - 8 MLR.pptx
Lecture - 8 MLR.pptxiris765749
 
manecohuhuhuhubasicEstimation-1.pptx
manecohuhuhuhubasicEstimation-1.pptxmanecohuhuhuhubasicEstimation-1.pptx
manecohuhuhuhubasicEstimation-1.pptxasdfg hjkl
 
Get Multiple Regression Assignment Help
Get Multiple Regression Assignment Help Get Multiple Regression Assignment Help
Get Multiple Regression Assignment Help HelpWithAssignment.com
 
An econometric model for Linear Regression using Statistics
An econometric model for Linear Regression using StatisticsAn econometric model for Linear Regression using Statistics
An econometric model for Linear Regression using StatisticsIRJET Journal
 
Data Approximation in Mathematical Modelling Regression Analysis and Curve Fi...
Data Approximation in Mathematical Modelling Regression Analysis and Curve Fi...Data Approximation in Mathematical Modelling Regression Analysis and Curve Fi...
Data Approximation in Mathematical Modelling Regression Analysis and Curve Fi...Dr.Summiya Parveen
 
Data Approximation in Mathematical Modelling Regression Analysis and curve fi...
Data Approximation in Mathematical Modelling Regression Analysis and curve fi...Data Approximation in Mathematical Modelling Regression Analysis and curve fi...
Data Approximation in Mathematical Modelling Regression Analysis and curve fi...Dr.Summiya Parveen
 

Ähnlich wie 2.1 regression (20)

Rsh qam11 ch04 ge
Rsh qam11 ch04 geRsh qam11 ch04 ge
Rsh qam11 ch04 ge
 
Chapter 4 power point presentation Regression models
Chapter 4 power point presentation Regression modelsChapter 4 power point presentation Regression models
Chapter 4 power point presentation Regression models
 
© 2008 Prentice-Hall, Inc.Regression ModelsChapter 4.docx
© 2008 Prentice-Hall, Inc.Regression ModelsChapter 4.docx© 2008 Prentice-Hall, Inc.Regression ModelsChapter 4.docx
© 2008 Prentice-Hall, Inc.Regression ModelsChapter 4.docx
 
Bba 3274 qm week 6 part 1 regression models
Bba 3274 qm week 6 part 1 regression modelsBba 3274 qm week 6 part 1 regression models
Bba 3274 qm week 6 part 1 regression models
 
161783709 chapter-04-answers
161783709 chapter-04-answers161783709 chapter-04-answers
161783709 chapter-04-answers
 
Quantitative Analysis for ManagementThirteenth EditionLesson.docx
Quantitative Analysis for ManagementThirteenth EditionLesson.docxQuantitative Analysis for ManagementThirteenth EditionLesson.docx
Quantitative Analysis for ManagementThirteenth EditionLesson.docx
 
Multiple Regression.ppt
Multiple Regression.pptMultiple Regression.ppt
Multiple Regression.ppt
 
Regression analysis
Regression analysisRegression analysis
Regression analysis
 
Quantitative Analysis for Management new ppt
Quantitative Analysis for Management new pptQuantitative Analysis for Management new ppt
Quantitative Analysis for Management new ppt
 
Regression analysis: Simple Linear Regression Multiple Linear Regression
Regression analysis:  Simple Linear Regression Multiple Linear RegressionRegression analysis:  Simple Linear Regression Multiple Linear Regression
Regression analysis: Simple Linear Regression Multiple Linear Regression
 
Business Quantitative - Lecture 2
Business Quantitative - Lecture 2Business Quantitative - Lecture 2
Business Quantitative - Lecture 2
 
Multiple Linear Regression.pptx
Multiple Linear Regression.pptxMultiple Linear Regression.pptx
Multiple Linear Regression.pptx
 
Lecture - 8 MLR.pptx
Lecture - 8 MLR.pptxLecture - 8 MLR.pptx
Lecture - 8 MLR.pptx
 
Msb12e ppt ch11
Msb12e ppt ch11Msb12e ppt ch11
Msb12e ppt ch11
 
manecohuhuhuhubasicEstimation-1.pptx
manecohuhuhuhubasicEstimation-1.pptxmanecohuhuhuhubasicEstimation-1.pptx
manecohuhuhuhubasicEstimation-1.pptx
 
LINEAR REGRESSION.pptx
LINEAR REGRESSION.pptxLINEAR REGRESSION.pptx
LINEAR REGRESSION.pptx
 
Get Multiple Regression Assignment Help
Get Multiple Regression Assignment Help Get Multiple Regression Assignment Help
Get Multiple Regression Assignment Help
 
An econometric model for Linear Regression using Statistics
An econometric model for Linear Regression using StatisticsAn econometric model for Linear Regression using Statistics
An econometric model for Linear Regression using Statistics
 
Data Approximation in Mathematical Modelling Regression Analysis and Curve Fi...
Data Approximation in Mathematical Modelling Regression Analysis and Curve Fi...Data Approximation in Mathematical Modelling Regression Analysis and Curve Fi...
Data Approximation in Mathematical Modelling Regression Analysis and Curve Fi...
 
Data Approximation in Mathematical Modelling Regression Analysis and curve fi...
Data Approximation in Mathematical Modelling Regression Analysis and curve fi...Data Approximation in Mathematical Modelling Regression Analysis and curve fi...
Data Approximation in Mathematical Modelling Regression Analysis and curve fi...
 

Kürzlich hochgeladen

Zuja dropshipping via API with DroFx.pptx
Zuja dropshipping via API with DroFx.pptxZuja dropshipping via API with DroFx.pptx
Zuja dropshipping via API with DroFx.pptxolyaivanovalion
 
CebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptxCebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptxolyaivanovalion
 
Halmar dropshipping via API with DroFx
Halmar  dropshipping  via API with DroFxHalmar  dropshipping  via API with DroFx
Halmar dropshipping via API with DroFxolyaivanovalion
 
VidaXL dropshipping via API with DroFx.pptx
VidaXL dropshipping via API with DroFx.pptxVidaXL dropshipping via API with DroFx.pptx
VidaXL dropshipping via API with DroFx.pptxolyaivanovalion
 
Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfLars Albertsson
 
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130Suhani Kapoor
 
Log Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptxLog Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptxJohnnyPlasten
 
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdfMarket Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdfRachmat Ramadhan H
 
100-Concepts-of-AI by Anupama Kate .pptx
100-Concepts-of-AI by Anupama Kate .pptx100-Concepts-of-AI by Anupama Kate .pptx
100-Concepts-of-AI by Anupama Kate .pptxAnupama Kate
 
Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al Barsha
Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al BarshaAl Barsha Escorts $#$ O565212860 $#$ Escort Service In Al Barsha
Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al BarshaAroojKhan71
 
Invezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signalsInvezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signalsInvezz1
 
Ravak dropshipping via API with DroFx.pptx
Ravak dropshipping via API with DroFx.pptxRavak dropshipping via API with DroFx.pptx
Ravak dropshipping via API with DroFx.pptxolyaivanovalion
 
Data-Analysis for Chicago Crime Data 2023
Data-Analysis for Chicago Crime Data  2023Data-Analysis for Chicago Crime Data  2023
Data-Analysis for Chicago Crime Data 2023ymrp368
 
BigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptxBigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptxolyaivanovalion
 
Mature dropshipping via API with DroFx.pptx
Mature dropshipping via API with DroFx.pptxMature dropshipping via API with DroFx.pptx
Mature dropshipping via API with DroFx.pptxolyaivanovalion
 
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% SecureCall me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% SecurePooja Nehwal
 
ALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptxALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptxolyaivanovalion
 

Kürzlich hochgeladen (20)

Zuja dropshipping via API with DroFx.pptx
Zuja dropshipping via API with DroFx.pptxZuja dropshipping via API with DroFx.pptx
Zuja dropshipping via API with DroFx.pptx
 
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in  KishangarhDelhi 99530 vip 56974 Genuine Escort Service Call Girls in  Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
 
CebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptxCebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptx
 
Halmar dropshipping via API with DroFx
Halmar  dropshipping  via API with DroFxHalmar  dropshipping  via API with DroFx
Halmar dropshipping via API with DroFx
 
VidaXL dropshipping via API with DroFx.pptx
VidaXL dropshipping via API with DroFx.pptxVidaXL dropshipping via API with DroFx.pptx
VidaXL dropshipping via API with DroFx.pptx
 
Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdf
 
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
VIP Call Girls Service Miyapur Hyderabad Call +91-8250192130
 
Log Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptxLog Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptx
 
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdfMarket Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
 
100-Concepts-of-AI by Anupama Kate .pptx
100-Concepts-of-AI by Anupama Kate .pptx100-Concepts-of-AI by Anupama Kate .pptx
100-Concepts-of-AI by Anupama Kate .pptx
 
Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al Barsha
Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al BarshaAl Barsha Escorts $#$ O565212860 $#$ Escort Service In Al Barsha
Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al Barsha
 
Invezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signalsInvezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signals
 
Ravak dropshipping via API with DroFx.pptx
Ravak dropshipping via API with DroFx.pptxRavak dropshipping via API with DroFx.pptx
Ravak dropshipping via API with DroFx.pptx
 
Data-Analysis for Chicago Crime Data 2023
Data-Analysis for Chicago Crime Data  2023Data-Analysis for Chicago Crime Data  2023
Data-Analysis for Chicago Crime Data 2023
 
BigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptxBigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptx
 
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get CytotecAbortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
 
Mature dropshipping via API with DroFx.pptx
Mature dropshipping via API with DroFx.pptxMature dropshipping via API with DroFx.pptx
Mature dropshipping via API with DroFx.pptx
 
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% SecureCall me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
 
ALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptxALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptx
 

2.1 regression

  • 1. © 2008 Prentice-Hall, Inc. Regression Models
  • 2. © 2009 Prentice-Hall, Inc. 4 – 2 Introduction  Regression analysis is a very valuable tool for data analyst  Regression can be used to  Understand the relationship between variables  Predict the value of one variable based on another variable  Examples  Determining best location for a new store  Studying the effectiveness of advertising dollars in increasing sales volume
  • 3. © 2009 Prentice-Hall, Inc. 4 – 3 Introduction  Regression analysis is a form of predictive modelling technique which investigates the relationship between a dependent (target) and independent variable (s) (predictor). This technique is used for forecasting, time series modelling and finding the causal effect relationship between the variables.
  • 4. © 2009 Prentice-Hall, Inc. 4 – 4 Introduction  The variable to be predicted is called the dependent variable  Sometimes called the response variable  The value of this variable depends on the value of the independent variable  Sometimes called the explanatory or predictor variable Independent variable Dependent variable Independent variable = +
  • 5. © 2009 Prentice-Hall, Inc. 4 – 5 Introduction  Regression models are two types :  Simple regression model and  multiple regression model.  Both are divided into linear and nonlinear models.
  • 6. © 2009 Prentice-Hall, Inc. 4 – 6 Scatter Diagram  Graphing is a helpful way to investigate the relationship between variables  A scatter diagram or scatter plot is often used  The independent variable is normally plotted on the X axis  The dependent variable is normally plotted on the Y axis
  • 7. © 2009 Prentice-Hall, Inc. 4 – 7 Triple A Construction  Triple A Construction renovates old homes  They have found that the dollar volume of renovation work is dependent on the area payroll SALES ($100,000’s) LOCAL PAYROLL ($100,000,000’s) 6 3 8 4 9 6 5 4 4.5 2 9.5 5 Table 4.1
  • 8. © 2009 Prentice-Hall, Inc. 4 – 8 Triple A Construction Figure 4.1 12 – 10 – 8 – 6 – 4 – 2 – 0 – Sales($100,000) Payroll ($100 million) | | | | | | | | 0 1 2 3 4 5 6 7 8 ŷ
  • 9. © 2009 Prentice-Hall, Inc. 4 – 9 Triple A Construction Figure 4.1 12 – 10 – 8 – 6 – 4 – 2 – 0 – Sales($100,000) Payroll ($100 million) | | | | | | | | 0 1 2 3 4 5 6 7 8 Ȳ Ŷ Ŷ - Ȳ Y - Ŷ
  • 10. © 2009 Prentice-Hall, Inc. 4 – 10 Simple Linear Regression where Y = dependent variable (response) X = independent variable (predictor or explanatory) 0 = intercept (value of Y when X = 0) 1 = slope of the regression line e = random error  Regression models are used to test if there is a relationship between variables (predict sales based on payroll)  There is some random error that cannot be predicted e  XY 10
  • 11. © 2009 Prentice-Hall, Inc. 4 – 11 Simple Linear Regression  True values for the slope and intercept are not known so they are estimated using sample data XbbY 10 ˆ where Y = dependent variable (response) X = independent variable (predictor or explanatory) b0 = intercept (value of Y when X = 0) b1 = slope of the regression line ^
  • 12. © 2009 Prentice-Hall, Inc. 4 – 12 Triple A Construction  Triple A Construction is trying to predict sales based on area payroll Y = Sales X = Area payroll  The line chosen in Figure 4.1 is the one that minimizes the errors Error = (Actual value) – (Predicted value) YYe ˆ
  • 13. © 2009 Prentice-Hall, Inc. 4 – 13 Least Squares Regression Errors can be positive or negative so the average error could be zero even though individual errors could be large. Least squares regression minimizes the sum of the squared errors. Payroll Line Fit Plot 0 2 4 6 8 10 0 2 4 6 8 Payroll ($100.000,000's) Sales ($100,000)
  • 14. © 2009 Prentice-Hall, Inc. 4 – 14 Triple A Construction  For the simple linear regression model, the values of the intercept and slope can be calculated using the formulas below XbbY 10 ˆ valuesof(mean)average X n X X   valuesof(mean)average Y n Y Y        21 )( ))(( XX YYXX b XbYb 10 
  • 15. © 2009 Prentice-Hall, Inc. 4 – 15 Triple A Construction Y X (X – X)2 (X – X)(Y – Y) 6 3 (3 – 4)2 = 1 (3 – 4)(6 – 7) = 1 8 4 (4 – 4)2 = 0 (4 – 4)(8 – 7) = 0 9 6 (6 – 4)2 = 4 (6 – 4)(9 – 7) = 4 5 4 (4 – 4)2 = 0 (4 – 4)(5 – 7) = 0 4.5 2 (2 – 4)2 = 4 (2 – 4)(4.5 – 7) = 5 9.5 5 (5 – 4)2 = 1 (5 – 4)(9.5 – 7) = 2.5 ΣY = 42 Y = 42/6 = 7 ΣX = 24 X = 24/6 = 4 Σ(X – X)2 = 10 Σ(X – X)(Y – Y) = 12.5 Table 4.2  Regression calculations
  • 16. © 2009 Prentice-Hall, Inc. 4 – 16 Triple A Construction 4 6 24 6   X X 7 6 42 6  Y Y 251 10 512 21 . . )( ))((       XX YYXX b 24251710  ))(.(XbYb  Regression calculations XY 2512 .ˆ Therefore
  • 17. © 2009 Prentice-Hall, Inc. 4 – 17 Triple A Construction 4 6 24 6   X X 7 6 42 6  Y Y 251 10 512 21 . . )( ))((       XX YYXX b 24251710  ))(.(XbYb  Regression calculations XY 2512 .ˆ Therefore sales = 2 + 1.25(payroll) If the payroll next year is $600 million 000950$or5962512 ,.)(.ˆ Y
  • 18. © 2009 Prentice-Hall, Inc. 4 – 18 Measuring the Fit of the Regression Model  Regression models can be developed for any variables X and Y  How do we know the model is actually helpful in predicting Y based on X?  We could just take the average error, but the positive and negative errors would cancel each other out  Three measures of variability are  SST – Total variability about the mean  SSE – Variability about the regression line  SSR – Total variability that is explained by the model
  • 19. © 2009 Prentice-Hall, Inc. 4 – 19 Measuring the Fit of the Regression Model  Sum of the squares total 2 )(  YYSST  Sum of the squared error    22 )ˆ( YYeSSE  Sum of squares due to regression   2 )ˆ( YYSSR  An important relationship SSESSRSST 
  • 20. © 2009 Prentice-Hall, Inc. 4 – 20 Measuring the Fit of the Regression Model Y X (Y – Y)2 Y (Y – Y)2 (Y – Y)2 6 3 (6 – 7)2 = 1 2 + 1.25(3) = 5.75 0.0625 1.563 8 4 (8 – 7)2 = 1 2 + 1.25(4) = 7.00 1 0 9 6 (9 – 7)2 = 4 2 + 1.25(6) = 9.50 0.25 6.25 5 4 (5 – 7)2 = 4 2 + 1.25(4) = 7.00 4 0 4.5 2 (4.5 – 7)2 = 6.25 2 + 1.25(2) = 4.50 0 6.25 9.5 5 (9.5 – 7)2 = 6.25 2 + 1.25(5) = 8.25 1.5625 1.563 ∑(Y – Y)2 = 22.5 ∑(Y – Y)2 = 6.875 ∑(Y – Y)2 = 15.625 Y = 7 SST = 22.5 SSE = 6.875 SSR = 15.625 ^ ^^ ^^ Table 4.3
  • 21. © 2009 Prentice-Hall, Inc. 4 – 21  Sum of the squares total 2 )(  YYSST  Sum of the squared error    22 )ˆ( YYeSSE  Sum of squares due to regression   2 )ˆ( YYSSR  An important relationship  SSR – explained variability  SSE – unexplained variability SSESSRSST  Measuring the Fit of the Regression Model For Triple A Construction SST = 22.5 SSE = 6.875 SSR = 15.625
  • 22. © 2009 Prentice-Hall, Inc. 4 – 22 Measuring the Fit of the Regression Model Figure 4.2 12 – 10 – 8 – 6 – 4 – 2 – 0 – Sales($100,000) Payroll ($100 million) | | | | | | | | 0 1 2 3 4 5 6 7 8 Y = 2 + 1.25X ^ Y – Y Y – Y ^ YY – Y ^
  • 23. © 2009 Prentice-Hall, Inc. 4 – 23 Coefficient of Determination  The proportion of the variability in Y explained by regression equation is called the coefficient of determination  The coefficient of determination is r2 SST SSE SST SSR r  12  For Triple A Construction 69440 522 625152 . . . r  About 69% of the variability in Y is explained by the equation based on payroll (X)
  • 24. © 2009 Prentice-Hall, Inc. 4 – 24 Correlation Coefficient  The correlation coefficient is an expression of the strength of the linear relationship  It will always be between +1 and –1  The correlation coefficient is r 2 rr   For Triple A Construction 8333069440 .. r
  • 25. © 2009 Prentice-Hall, Inc. 4 – 25 Correlation Coefficient * * * * (a) Perfect Positive Correlation: r = +1 X Y * * * * (c) No Correlation: r = 0 X Y * * * * * * * ** * (d) Perfect Negative Correlation: r = –1 X Y * * * * * * * * * (b) Positive Correlation: 0 < r < 1 X Y * * * * * * * Figure 4.3
  • 26. © 2009 Prentice-Hall, Inc. 4 – 26 Using Computer Software for Regression Program 4.1A
  • 27. © 2009 Prentice-Hall, Inc. 4 – 27 Using Computer Software for Regression Program 4.1B
  • 28. © 2009 Prentice-Hall, Inc. 4 – 28 Using Computer Software for Regression Program 4.1C
  • 29. © 2009 Prentice-Hall, Inc. 4 – 29 Using Computer Software for Regression Program 4.1D
  • 30. © 2009 Prentice-Hall, Inc. 4 – 30 Using Computer Software for Regression Program 4.1D Correlation coefficient is called Multiple R in Excel
  • 31. © 2009 Prentice-Hall, Inc. 4 – 31 Assumptions of the Regression Model 1. Errors are independent 2. Errors are normally distributed 3. Errors have a mean of zero 4. Errors have a constant variance  If we make certain assumptions about the errors in a regression model, we can perform statistical tests to determine if the model is useful  A plot of the residuals (errors) will often highlight any glaring violations of the assumption
  • 32. © 2009 Prentice-Hall, Inc. 4 – 32 Residual Plots  A random plot of residuals Figure 4.4A Error X
  • 33. © 2009 Prentice-Hall, Inc. 4 – 33 Residual Plots  Nonconstant error variance Errors increase as X increases, violating the constant variance assumption Figure 4.4B Error X
  • 34. © 2009 Prentice-Hall, Inc. 4 – 34 Residual Plots  Nonlinear relationship Errors consistently increasing and then consistently decreasing indicate that the model is not linear Figure 4.4C Error X
  • 35. © 2009 Prentice-Hall, Inc. 4 – 35 Estimating the Variance  Errors are assumed to have a constant variance ( 2), but we usually don’t know this  It can be estimated using the mean squared error (MSE), s2 1 2   kn SSE MSEs where n = number of observations in the sample k = number of independent variables
  • 36. © 2009 Prentice-Hall, Inc. 4 – 36 Estimating the Variance  For Triple A Construction 71881 4 87506 116 87506 1 2 . ..      kn SSE MSEs  We can estimate the standard deviation, s  This is also called the standard error of the estimate or the standard deviation of the regression 31171881 ..  MSEs
  • 37. © 2009 Prentice-Hall, Inc. 4 – 37 Testing the Model for Significance  When the sample size is too small, you can get good values for MSE and r2 even if there is no relationship between the variables  Testing the model for significance helps determine if the values are meaningful  We do this by performing a statistical hypothesis test
  • 38. © 2009 Prentice-Hall, Inc. 4 – 38 Testing the Model for Significance  We start with the general linear model e  XY 10  If 1 = 0, the null hypothesis is that there is no relationship between X and Y  The alternate hypothesis is that there is a linear relationship (1 ≠ 0)  If the null hypothesis can be rejected, we have proven there is a relationship  We use the F statistic for this test
  • 39. © 2009 Prentice-Hall, Inc. 4 – 39 Testing the Model for Significance  The F statistic is based on the MSE and MSR k SSR MSR  where k = number of independent variables in the model  The F statistic is MSE MSR F   This describes an F distribution with degrees of freedom for the numerator = df1 = k degrees of freedom for the denominator = df2 = n – k – 1
  • 40. © 2009 Prentice-Hall, Inc. 4 – 40 Testing the Model for Significance  If there is very little error, the MSE would be small and the F-statistic would be large indicating the model is useful  If the F-statistic is large, the significance level (p-value) will be low, indicating it is unlikely this would have occurred by chance  So when the F-value is large, we can reject the null hypothesis and accept that there is a linear relationship between X and Y and the values of the MSE and r2 are meaningful
  • 41. © 2009 Prentice-Hall, Inc. 4 – 41 Steps in a Hypothesis Test 1. Specify null and alternative hypotheses 010 :H 011 :H 2. Select the level of significance (). Common values are 0.01 and 0.05 3. Calculate the value of the test statistic using the formula MSE MSR F 
  • 42. © 2009 Prentice-Hall, Inc. 4 – 42 Steps in a Hypothesis Test 4. Make a decision using one of the following methods a) Reject the null hypothesis if the test statistic is greater than the F-value from the table in Appendix D. Otherwise, do not reject the null hypothesis: 21 ifReject dfdfcalculated FF ,, kdf 1 12  kndf b) Reject the null hypothesis if the observed significance level, or p-value, is less than the level of significance (). Otherwise, do not reject the null hypothesis: )( statistictestcalculatedvalue-  FPp value-ifReject p
  • 43. © 2009 Prentice-Hall, Inc. 4 – 43 Triple A Construction Step 1. H0: 1 = 0 (no linear relationship between X and Y) H1: 1 ≠ 0 (linear relationship exists between X and Y) Step 2. Select  = 0.05 625015 1 625015 . .  k SSR MSR 099 71881 625015 . . .  MSE MSR F Step 3. Calculate the value of the test statistic
  • 44. © 2009 Prentice-Hall, Inc. 4 – 44 Triple A Construction Step 4. Reject the null hypothesis if the test statistic is greater than the F-value in Appendix D df1 = k = 1 df2 = n – k – 1 = 6 – 1 – 1 = 4 The value of F associated with a 5% level of significance and with degrees of freedom 1 and 4 is found in Appendix D F0.05,1,4 = 7.71 Fcalculated = 9.09 Reject H0 because 9.09 > 7.71
  • 45. © 2009 Prentice-Hall, Inc. 4 – 45 F = 7.71 0.05 9.09 Triple A Construction Figure 4.5  We can conclude there is a statistically significant relationship between X and Y  The r2 value of 0.69 means about 69% of the variability in sales (Y) is explained by local payroll (X)
  • 46. © 2009 Prentice-Hall, Inc. 4 – 46 r2 coefficient of determination  The F-test determines whether or not there is a relationship between the variables.  r2 (coefficient of determination) is the best measure of the strength of the prediction relationship between the X and Y variables. • Values closer to 1 indicate a strong prediction relationship. • Good regression models have a low significance level for the F-test and high r2 value.
  • 47. © 2009 Prentice-Hall, Inc. 4 – 47 Coefficient Hypotheses  Statistical tests of significance can be performed on the coefficients.  The null hypothesis is that the coefficient of X (i.e., the slope of the line) is 0 i.e., X is not useful in predicting Y  P values are the observed significance level and can be used to test the null hypothesis.  Values less than 5% negate the null hypothesis and indicate that X is useful in predicting Y  For a simple linear regression, the test of the regression coefficients gives the same information as the F-test.
  • 48. © 2009 Prentice-Hall, Inc. 4 – 48 Analysis of Variance (ANOVA) Table  When software is used to develop a regression model, an ANOVA table is typically created that shows the observed significance level (p-value) for the calculated F value  This can be compared to the level of significance (α) to make a decision
SOURCE | DF | SS | MS | F | SIGNIFICANCE
Regression | k | SSR | MSR = SSR/k | MSR/MSE | P(F > MSR/MSE)
Residual | n – k – 1 | SSE | MSE = SSE/(n – k – 1) | |
Total | n – 1 | SST | | |
Table 4.4
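A minimal sketch of how the rows of Table 4.4 are built from SSR, SSE, n, and k (the function name is illustrative, not from the slides):

def anova_table(SSR, SSE, n, k):
    SST = SSR + SSE
    MSR = SSR / k                  # regression mean square
    MSE = SSE / (n - k - 1)        # residual mean square
    F = MSR / MSE                  # test statistic
    return {
        "Regression": {"DF": k,         "SS": SSR, "MS": MSR, "F": F},
        "Residual":   {"DF": n - k - 1, "SS": SSE, "MS": MSE},
        "Total":      {"DF": n - 1,     "SS": SST},
    }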
  • 49. © 2009 Prentice-Hall, Inc. 4 – 49 ANOVA for Triple A Construction  Because this probability is less than 0.05, we reject the null hypothesis of no linear relationship and conclude there is a linear relationship between X and Y Program 4.1D (partial) P(F > 9.0909) = 0.0394
  • 50. © 2009 Prentice-Hall, Inc. 4 – 50 Multiple Regression Analysis  Multiple regression models are extensions to the simple linear model and allow the creation of models with several independent variables Y = β0 + β1X1 + β2X2 + … + βkXk + e where Y = dependent variable (response variable) Xi = ith independent variable (predictor or explanatory variable) β0 = intercept (value of Y when all Xi = 0) βi = coefficient of the ith independent variable k = number of independent variables e = random error
  • 51. © 2009 Prentice-Hall, Inc. 4 – 51 Multiple Regression Analysis  To estimate these values, a sample is taken and the following equation is developed: Ŷ = b0 + b1X1 + b2X2 + … + bkXk where Ŷ = predicted value of Y b0 = sample intercept (and is an estimate of β0) bi = sample coefficient of the ith variable (and is an estimate of βi)
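A minimal sketch of estimating the sample coefficients by ordinary least squares, assuming numpy is available (the function name is illustrative):

import numpy as np

def fit_multiple_regression(X, y):
    # X: (n, k) array of independent variables, y: length-n response vector
    X_design = np.column_stack([np.ones(len(y)), X])   # prepend an intercept column
    coeffs, *_ = np.linalg.lstsq(X_design, y, rcond=None)
    return coeffs                                      # [b0, b1, ..., bk]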
  • 52. © 2009 Prentice-Hall, Inc. 4 – 52 Jenny Wilson Realty  Jenny Wilson wants to develop a model to determine the suggested listing price for houses based on the size and age of the house: Ŷ = b0 + b1X1 + b2X2 where Ŷ = predicted value of the dependent variable (selling price) b0 = Y intercept X1 and X2 = value of the two independent variables (square footage and age) respectively b1 and b2 = slopes for X1 and X2 respectively  She selects a sample of houses that have sold recently and records the data shown in Table 4.5
  • 53. © 2009 Prentice-Hall, Inc. 4 – 53 Jenny Wilson Realty
SELLING PRICE ($) | SQUARE FOOTAGE | AGE | CONDITION
95,000 | 1,926 | 30 | Good
119,000 | 2,069 | 40 | Excellent
124,800 | 1,720 | 30 | Excellent
135,000 | 1,396 | 15 | Good
142,000 | 1,706 | 32 | Mint
145,000 | 1,847 | 38 | Mint
159,000 | 1,950 | 27 | Mint
165,000 | 2,323 | 30 | Excellent
182,000 | 2,285 | 26 | Mint
183,000 | 3,752 | 35 | Good
200,000 | 2,300 | 18 | Good
211,000 | 2,525 | 17 | Good
215,000 | 3,800 | 40 | Excellent
219,000 | 1,740 | 12 | Mint
Table 4.5
  • 54. © 2009 Prentice-Hall, Inc. 4 – 54 Jenny Wilson Realty Program 4.2 Ŷ = 146,631 + 44X1 – 2,899X2
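Fitting the Table 4.5 data by ordinary least squares should give coefficients close to the rounded equation above; a self-contained sketch, assuming numpy is available:

import numpy as np

price = np.array([95000, 119000, 124800, 135000, 142000, 145000, 159000,
                  165000, 182000, 183000, 200000, 211000, 215000, 219000])
sqft  = np.array([1926, 2069, 1720, 1396, 1706, 1847, 1950,
                  2323, 2285, 3752, 2300, 2525, 3800, 1740])
age   = np.array([30, 40, 30, 15, 32, 38, 27, 30, 26, 35, 18, 17, 40, 12])

X = np.column_stack([np.ones(len(price)), sqft, age])   # [1, X1, X2]
b0, b1, b2 = np.linalg.lstsq(X, price, rcond=None)[0]   # intercept, sqft, age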
  • 55. © 2009 Prentice-Hall, Inc. 4 – 55 Evaluating Multiple Regression Models  Evaluation is similar to simple linear regression models  The p-value for the F-test and r2 are interpreted the same  The hypothesis is different because there is more than one independent variable  The F-test is investigating whether all the coefficients are equal to 0
  • 56. © 2009 Prentice-Hall, Inc. 4 – 56 Evaluating Multiple Regression Models  To determine which independent variables are significant, tests are performed for each variable: H0: βi = 0 and H1: βi ≠ 0  The test statistic is calculated and if the p-value is lower than the level of significance (α), the null hypothesis is rejected
  • 57. © 2009 Prentice-Hall, Inc. 4 – 57 Jenny Wilson Realty  The model is statistically significant  The p-value for the F-test is 0.002  r2 = 0.6719 so the model explains about 67% of the variation in selling price (Y)  But the F-test is for the entire model and we can’t tell if one or both of the independent variables are significant  By calculating the p-value of each variable, we can assess the significance of the individual variables  Since the p-value for X1 (square footage) and X2 (age) are both less than the significance level of 0.05, both null hypotheses can be rejected
  • 58. © 2009 Prentice-Hall, Inc. 4 – 58 Binary or Dummy Variables  Binary (or dummy or indicator) variables are special variables created for qualitative data  A dummy variable is assigned a value of 1 if a particular condition is met and a value of 0 otherwise  The number of dummy variables must equal one less than the number of categories of the qualitative variable
  • 59. © 2009 Prentice-Hall, Inc. 4 – 59 Jenny Wilson Realty  Jenny believes a better model can be developed if she includes information about the condition of the property X3 = 1 if house is in excellent condition = 0 otherwise X4 = 1 if house is in mint condition = 0 otherwise  Two dummy variables are used to describe the three categories of condition  No variable is needed for “good” condition since if both X3 and X4 = 0, the house must be in good condition
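A small sketch of how the two dummy variables could be coded for each house, using the condition categories from Table 4.5 ("good" is the baseline; the function name is illustrative):

def condition_dummies(condition):
    x3 = 1 if condition == "Excellent" else 0   # X3: excellent condition
    x4 = 1 if condition == "Mint" else 0        # X4: mint condition
    return x3, x4                               # a "Good" house gets (0, 0)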
  • 60. © 2009 Prentice-Hall, Inc. 4 – 60 Jenny Wilson Realty Program 4.3
  • 61. © 2009 Prentice-Hall, Inc. 4 – 61 Jenny Wilson Realty Program 4.3 Model explains about 90% of the variation in selling price F-value indicates significance Low p-values indicate each variable is significant Ŷ = 121,658 + 56.43X1 – 3,962X2 + 33,162X3 + 47,369X4
  • 62. © 2009 Prentice-Hall, Inc. 4 – 62 Model Building  The best model is a statistically significant model with a high r2 and few variables  As more variables are added to the model, the r2 value usually increases  For this reason, the adjusted r2 value is often used to determine the usefulness of an additional variable  The adjusted r2 takes into account the number of independent variables in the model  When variables are added to the model, the value of r2 can never decrease; however, the adjusted r2 may decrease.
  • 63. © 2009 Prentice-Hall, Inc. 4 – 63 Model Building  The formula for r2: r2 = SSR / SST = 1 – SSE / SST  The formula for adjusted r2: Adjusted r2 = 1 – [SSE / (n – k – 1)] / [SST / (n – 1)]  As the number of variables increases, the adjusted r2 gets smaller unless the increase due to the new variable is large enough to offset the change in k
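A minimal sketch of both formulas as plain Python functions (names are illustrative):

def r_squared(SSE, SST):
    return 1 - SSE / SST                               # equivalently SSR / SST

def adjusted_r_squared(SSE, SST, n, k):
    return 1 - (SSE / (n - k - 1)) / (SST / (n - 1))   # penalizes extra variables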
  • 64. © 2009 Prentice-Hall, Inc. 4 – 64 Model Building  It is tempting to keep adding variables to a model to try to increase r2  The adjusted r2 will decrease if additional independent variables are not beneficial.  As the number of variables (k) increases, n-k-1 decreases.  This causes SSE/(n-k-1) to increase which in turn decreases the adjusted r2 unless the extra variable causes a significant decrease in SSE  The reduction in error (and SSE) must be sufficient to offset the change in k
  • 65. © 2009 Prentice-Hall, Inc. 4 – 65 Model Building  In general, if a new variable increases the adjusted r2, it should probably be included in the model  In some cases, variables contain duplicate information  When two independent variables are correlated, they are said to be collinear (e.g., monthly salary expenses and annual salary expenses)  When more than two independent variables are correlated, multicollinearity exists  When multicollinearity is present, hypothesis tests for the individual coefficients are not valid but the model may still be useful
  • 66. © 2009 Prentice-Hall, Inc. 4 – 66 Nonlinear Regression  In some situations, variables are not linear  Transformations may be used to turn a nonlinear model into a linear model [Scatter plots contrasting a linear relationship with a nonlinear relationship]
  • 67. © 2009 Prentice-Hall, Inc. 4 – 67 Colonel Motors  The engineers want to use regression analysis to improve fuel efficiency  They have been asked to study the impact of weight on miles per gallon (MPG)
MPG | WEIGHT (1,000 LBS.) | MPG | WEIGHT (1,000 LBS.)
12 | 4.58 | 20 | 3.18
13 | 4.66 | 23 | 2.68
15 | 4.02 | 24 | 2.65
18 | 2.53 | 33 | 1.70
19 | 3.09 | 36 | 1.95
19 | 3.11 | 42 | 1.92
Table 4.6
  • 68. © 2009 Prentice-Hall, Inc. 4 – 68 Colonel Motors Figure 4.6A: Scatter plot of MPG versus weight (1,000 lb.) with a fitted linear model Ŷ = b0 + b1X1
  • 69. © 2009 Prentice-Hall, Inc. 4 – 69 Colonel Motors Program 4.4  A useful model, with a low significance level for the F-test and a good r2 value
  • 70. © 2009 Prentice-Hall, Inc. 4 – 70 Colonel Motors Figure 4.6B: Scatter plot of MPG versus weight (1,000 lb.) with a fitted nonlinear model MPG = b0 + b1(weight) + b2(weight)²
  • 71. © 2009 Prentice-Hall, Inc. 4 – 71 Colonel Motors  The nonlinear model is a quadratic model  The easiest way to work with this model is to develop a new variable X2 = (weight)²  This gives us a model that can be solved with linear regression software: Ŷ = b0 + b1X1 + b2X2
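A self-contained sketch of the quadratic fit on the Table 4.6 data, adding the squared weight as a new column and reusing ordinary least squares (numpy assumed available); the coefficients should land near the rounded equation on the next slide:

import numpy as np

mpg    = np.array([12, 13, 15, 18, 19, 19, 20, 23, 24, 33, 36, 42])
weight = np.array([4.58, 4.66, 4.02, 2.53, 3.09, 3.11,
                   3.18, 2.68, 2.65, 1.70, 1.95, 1.92])

X = np.column_stack([np.ones(len(mpg)), weight, weight**2])  # [1, X1, X2]
b0, b1, b2 = np.linalg.lstsq(X, mpg, rcond=None)[0]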
  • 72. © 2009 Prentice-Hall, Inc. 4 – 72 Colonel Motors Program 4.5  A better model, with a lower significance level for the F-test and a larger adjusted r2 value Ŷ = 79.8 – 30.2X1 + 3.4X2
  • 73. © 2009 Prentice-Hall, Inc. 4 – 73 Cautions and Pitfalls  If the assumptions are not met, the statistical tests may not be valid  Correlation does not necessarily mean causation  Your annual salary and the price of cars may be correlated, but one does not cause the other  Multicollinearity makes interpreting coefficients problematic, but the model may still be good  Using a regression model beyond the range of X is questionable, as the relationship may not hold outside the sample data
  • 74. © 2009 Prentice-Hall, Inc. 4 – 74 Cautions and Pitfalls  t-tests for the intercept (b0) may be ignored as this point is often outside the range of the model  A linear relationship may not be the best relationship, even if the F-test returns an acceptable value  A nonlinear relationship can exist even if a linear relationship does not  Just because a relationship is statistically significant doesn't mean it has any practical value