SOLUTION FOR HOMEWORK 3, STAT 5352
Welcome to your third homework. We finish point estimation; your Exam 1 is next
week and it will be close to HW1-HW3.
Recall that Xn
:= (X1, . . . , Xn) denotes the vector of n observations.
Try to find mistakes (and get extra points) in my solutions. Typically they are silly
arithmetic mistakes (not methodological ones). They allow me to check that you did your
HW on your own. Please do not e-mail me about your findings — just mention them on the
first page of your solution and count extra points.
Now let us look at your problems.
1. Problem 10.51. Let X1, . . . , Xn be iid according to Expon(θ), so
fθ(x) = (1/θ) e^{−x/θ} I(x > 0),   θ ∈ Ω := (0, ∞).
Please note that it is important to write this density with the indicator function showing its
support. In some cases the support may depend on a parameter of interest, and then this
fact is always very important. We shall see such an example in this homework.
For the exponential distribution we know that Eθ(X) = θ (you may check this by a direct
calculation), so we get a simple method of moments estimator
Θ̂MME = X̄.
This is the answer. But I would like to continue a bit. The method of moments estimator
(or a generalized one) allows you to work with any moment (or any function). Let us consider
the second moment and equate the sample second moment to the theoretical one. Recall that
Varθ(X) = θ², and thus
Eθ(X²) = Varθ(X) + (Eθ(X))² = 2θ².
The sample second moment is n⁻¹ Σ_{i=1}^n Xi², and we get another method of moments estimator
Θ̃MME = [(n⁻¹ Σ_{i=1}^n Xi²)/2]^{1/2}.
Note that these MM estimators are different, and this is OK. A statistician should then choose
the better one. Which one do you think is better? You may use the notion of efficiency
to resolve the issue (compare their MSEs (mean squared errors) E(θ̂ − θ)² and choose an
estimator with the smaller MSE). By the way, which estimator is based on the sufficient
statistic?
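As an extra illustration (not required for the solution), here is a minimal Monte Carlo sketch in Python that compares the two MM estimators by their simulated MSEs; the true θ, the sample size, and the number of replications below are arbitrary choices for the experiment. In such an experiment the sample mean, which is also a function of the sufficient statistic Σ_{i=1}^n Xi, typically shows the smaller simulated MSE.

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n, reps = 2.0, 25, 20_000            # arbitrary true value, sample size, replications

    est1 = np.empty(reps)                       # first-moment MME: the sample mean
    est2 = np.empty(reps)                       # second-moment MME: sqrt(mean(X^2)/2)
    for r in range(reps):
        x = rng.exponential(scale=theta, size=n)
        est1[r] = x.mean()
        est2[r] = np.sqrt(np.mean(x**2) / 2.0)

    print("MSE of X-bar:             ", np.mean((est1 - theta) ** 2))
    print("MSE of second-moment MME: ", np.mean((est2 - theta) ** 2))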
2. Problem 10.53. Here X1, . . . , Xn are Poisson(λ). Recall that Eλ(X) = λ and
Varλ(X) = λ.
The MME is easy to get via the first moment, and we have
λ̂MME = X̄.
This is the answer. But again, as an extra example, I can suggest a MME based on the
second moment. Indeed, Eλ(X²) = Varλ(X) + (EλX)² = λ + λ², and this yields that
λ̃MME + λ̃²MME = n⁻¹ Σ_{i=1}^n Xi².
Then you need to solve this equation to get the MME. Obviously it is a more complicated
estimator, but it is yet another MME.
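If you want to see this equation solved explicitly, here is a small sketch; the counts below are hypothetical, and only the positive root is admissible because λ > 0.

    import numpy as np

    x = np.array([2, 0, 3, 1, 1, 4, 2, 2])      # hypothetical Poisson counts
    m2 = np.mean(x**2)                           # sample second moment

    # Solve lam + lam^2 = m2, i.e. lam^2 + lam - m2 = 0, and keep the positive root.
    lam_tilde = (-1.0 + np.sqrt(1.0 + 4.0 * m2)) / 2.0
    print(lam_tilde, x.mean())                   # compare with the first-moment MME (the sample mean)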
3. Problem 10.56. Let X1, . . . , Xn be iid according to the pdf
gθ(x) = θ⁻¹ e^{−(x−δ)/θ} I(x > δ).
Please note that this is a location-exponential family because
X = δ + Z,
where Z is a classical exponential RV with fZ(z) = θ⁻¹ e^{−z/θ} I(z > 0). I can go even further
by saying that we are dealing with a location-scale family because
X = δ + θZ0,
where fZ0(z) = e^{−z} I(z > 0).
So now we know the meaning of parameters δ and θ: the former is the location (shift)
and the latter is the scale (multiplier).
Note that this understanding simplifies all calculations because you can easily figure out
(or verify by a direct calculation) that
Eδ,θ(X) = δ + θ,   Varδ,θ(X) = θ².
These two familiar results yield Eδ,θ(X²) = θ² + (δ + θ)², and we get the following system of
two equations to find the pair of MMEs:
δ̂ + θ̂ = X̄,
2θ̂² + 2δ̂θ̂ + δ̂² = n⁻¹ Σ_{i=1}^n Xi².
To solve this system, we square both sides of the first equality and then subtract the resulting
equality from the second one. We get a new system
δ̂ + θ̂ = X̄,
θ̂² = n⁻¹ Σ_{i=1}^n Xi² − X̄².
This, together with simple algebra, yields the answer
δ̂MME = X̄ − [n⁻¹ Σ_{i=1}^n Xi² − X̄²]^{1/2},   θ̂MME = [n⁻¹ Σ_{i=1}^n Xi² − X̄²]^{1/2}.
Remark: We need to check that n⁻¹ Σ_{i=1}^n Xi² − X̄² ≥ 0 for the estimator to be well
defined. This may be done via the famous Hölder inequality
(Σ_{j=1}^m aj)² ≤ m Σ_{j=1}^m aj².
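A minimal numerical sketch of these two MMEs; the data are simulated with arbitrary δ and θ just to illustrate the formulas, and the quantity under the square root is automatically nonnegative.

    import numpy as np

    rng = np.random.default_rng(1)
    delta, theta, n = 3.0, 2.0, 200              # arbitrary true values for the illustration
    x = delta + rng.exponential(scale=theta, size=n)

    m2_centered = np.mean(x**2) - x.mean()**2    # n^{-1} sum X_i^2 - X-bar^2
    theta_mme = np.sqrt(m2_centered)
    delta_mme = x.mean() - theta_mme
    print(delta_mme, theta_mme)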
4. Problem 10.59. Here X1, . . . , Xn are Poisson(λ), λ ∈ Ω = (0, ∞). Recall that
Eλ(X) = λ and Varλ(X) = λ. Then, by definition of the MLE:
λ̂MLE := arg max_{λ∈Ω} Π_{l=1}^n fλ(Xl) =: arg max_{λ∈Ω} LXn(λ)
= arg max_{λ∈Ω} Σ_{l=1}^n ln(fλ(Xl)) =: arg max_{λ∈Ω} ln LXn(λ).
For the Poisson pdf fλ(x) = e^{−λ} λ^x/x! we get
ln LXn(λ) = −nλ + Σ_{l=1}^n Xl ln(λ) − Σ_{l=1}^n ln(Xl!).
Now we need to find λ̂MLE at which the above loglikelihood attains its maximum over all
λ ∈ Ω. You can do this in the usual way: take the derivative with respect to λ (that is, calculate
∂ ln LXn(λ)/∂λ), equate it to zero, solve with respect to λ, and then check that the solution
indeed maximizes the loglikelihood. Here equating the derivative to zero yields
−n + Σ_{l=1}^n Xl/λ = 0, and we get
λ̂MLE = X̄.
Note that for the Poisson setting the MME and MLE coincide; in general they may be
different.
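A quick numerical check, on simulated data, that the Poisson loglikelihood is indeed maximized at the sample mean; the true intensity and the sample size below are arbitrary.

    import numpy as np
    from scipy.special import gammaln            # ln(x!) = gammaln(x + 1)

    rng = np.random.default_rng(2)
    x = rng.poisson(lam=3.5, size=50)            # arbitrary true intensity for the illustration

    def loglik(lam):
        return -len(x) * lam + np.sum(x) * np.log(lam) - np.sum(gammaln(x + 1))

    grid = np.linspace(0.5, 8.0, 2000)
    lam_hat = grid[np.argmax([loglik(l) for l in grid])]
    print(lam_hat, x.mean())                     # the grid maximizer is (numerically) the sample mean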
5. Problem 10.62. Here X1, . . . , Xn are iid N(µ, σ²) with the mean µ being known and
the parameter of interest being the variance σ². Note that σ² ∈ Ω = (0, ∞). Then we are
interested in the MLE. Write:
σ̂²MLE = arg max_{σ²∈Ω} ln LXn(σ²).
Here
ln LXn(σ²) = Σ_{l=1}^n ln([2πσ²]^{−1/2} e^{−(Xl−µ)²/(2σ²)}) = −(n/2) ln(2πσ²) − (1/(2σ²)) Σ_{l=1}^n (Xl − µ)².
This expression takes on its maximum at
σ̂²MLE = n⁻¹ Σ_{l=1}^n (Xl − µ)².
Note that this is also the MME.
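A one-line numerical sketch; note that with a known mean the MLE centers the observations at µ, not at X̄, and divides by n. The true values below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma = 10.0, 2.0                        # mu is assumed known; sigma^2 is estimated
    x = rng.normal(loc=mu, scale=sigma, size=100)

    sigma2_mle = np.mean((x - mu) ** 2)          # n^{-1} sum (X_l - mu)^2
    print(sigma2_mle)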
6. Problem 10.66. Let X1, . . . , Xn be iid according to the pdf
gθ(x) = θ⁻¹ e^{−(x−δ)/θ} I(x > δ).
Then
LXn(δ, θ) = θ^{−n} e^{−Σ_{l=1}^n (Xl−δ)/θ} I(X(1) > δ).
Recall that X(1) = min(X1, . . . , Xn) is the minimal observation [the first ordered observation].
This is the case that I wrote you about earlier: it is absolutely crucial to take into account
the indicator function (the support) because here the parameter δ defines the support.
By its definition,
(δ̂MLE, θ̂MLE) := arg max_{δ∈(−∞,∞), θ∈(0,∞)} ln(LXn(δ, θ)).
Note that
L(δ, θ) := ln(LXn(δ, θ)) = −n ln(θ) − θ⁻¹ Σ_{l=1}^n (Xl − δ) + ln I(X(1) ≥ δ).
Now the crucial step: you should graph the loglikelihood L as a function of δ and see that it
attains its maximum at δ = X(1) (the sum term grows as δ increases, so L increases in δ until
the indicator cuts it off at X(1)). So we get δ̂MLE = X(1). Then by taking the derivative with
respect to θ we get θ̂MLE = n⁻¹ Σ_{l=1}^n (Xl − X(1)).
Answer: (δ̂MLE, θ̂MLE) = (X(1), n⁻¹ Σ_{l=1}^n (Xl − X(1))). Please note that δ̂MLE is a biased
estimator; this is a rather typical outcome.
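A short simulation sketch (arbitrary true values) that computes the MLE pair and illustrates the bias of δ̂MLE = X(1); for this model E X(1) − δ = θ/n > 0, and θ̂MLE is slightly biased downward.

    import numpy as np

    rng = np.random.default_rng(4)
    delta, theta, n, reps = 1.0, 2.0, 20, 10_000

    delta_hats = np.empty(reps)
    theta_hats = np.empty(reps)
    for r in range(reps):
        x = delta + rng.exponential(scale=theta, size=n)
        delta_hats[r] = x.min()                  # delta-hat_MLE = X_(1)
        theta_hats[r] = np.mean(x - x.min())     # theta-hat_MLE = n^{-1} sum (X_l - X_(1))

    print(np.mean(delta_hats) - delta)           # positive: simulated bias of delta-hat, about theta/n
    print(np.mean(theta_hats) - theta)           # negative: theta-hat has expectation theta(1 - 1/n)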
7. Problem 10.73. Consider iid uniform observations X1, . . . , Xn with the parametric pdf
fθ(x) = I(θ − 1/2 < x < θ + 1/2).
Whenever the parameter appears in the indicator function you should be very cautious: typically
a graph, not differentiation, will help you to find the MLE. Also, it is very
helpful to figure out the nature of the parameter. Here it is obviously a location parameter,
and you can write
X = θ + Z, Z ∼ Uniform(−1/2, 1/2).
The latter helps you to guess a correct estimator, to check a suggested one and, if
necessary, to simplify the calculation of descriptive characteristics (mean, variance, etc.).
Well, now we need to write down the likelihood function (recall that this is just the joint
density considered as a function of the parameter given a vector of observations):
LXn(θ) = Π_{l=1}^n I(θ − 1/2 < Xl < θ + 1/2) = I(θ − 1/2 < X(1) ≤ X(n) < θ + 1/2).
Note that the latter expression implies that (X(1), X(n)) is a sufficient statistic (due to the
Factorization Theorem). As a result, any good estimator, and the MLE in particular, must
be a function of only these two statistics. Another remark: it is possible to show (by a
technique that is beyond the objectives of this class) that this pair of
extreme observations is also the minimal sufficient statistic. Please look at the situation: we
have one parameter and need two univariate statistics (X(1), X(n)) to obtain a sufficient
statistic; this is the limit of data reduction here. Nonetheless, this is a huge data reduction whenever
n is large. Just think about this: to estimate θ you do not need any observation which is
between the two extreme ones! This is not a trivial assertion.
Well, now let us return to the problem at hand. If you look at the graph of the likelihood
function as a function of θ, then you may conclude that it attains its maximum at all θ such
that
X(n) − 1/2 < θ < X(1) + 1/2. (1)
As a result, we get a very curious MLE: any point within this interval can be declared as
the MLE (the MLE is not unique!).
Now we can consider the particular questions at hand.
(a). Let Θ̂1 = (1/2)(X(1) + X(n)). We need to check that this estimator satisfies (1). We
just plug this estimator into (1) and get
X(n) − 1/2 < (1/2)(X(1) + X(n)) < X(1) + 1/2.
The latter relation is true because it is equivalent to the following valid inequality:
X(n) − X(1) < 1.
(b) Let Θ̂2 = (1/3)(X(1) + 2X(n)) be another candidate for the MLE. Then it should
satisfy (1). In particular, if this is the MLE then
(1/3)(X(1) + 2X(n)) < X(1) + 1/2
should hold. The latter inequality is equivalent to
X(n) − X(1) < 3/4
which may fail to hold (the range X(n) − X(1) can be as large as 1). This contradiction shows
that this estimator, despite being a function of the sufficient statistic, need not be the MLE.
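A small simulation sketch of part (b): it checks how often Θ̂2 falls outside the MLE interval (1); Θ̂1 from part (a) always stays inside. The true θ and the sample size below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(5)
    theta, n, reps = 0.0, 10, 10_000             # arbitrary true value and sample size

    outside = 0
    for r in range(reps):
        x = rng.uniform(theta - 0.5, theta + 0.5, size=n)
        lo, hi = x.max() - 0.5, x.min() + 0.5    # the MLE interval (1)
        t2 = (x.min() + 2 * x.max()) / 3.0
        outside += not (lo < t2 < hi)

    print(outside / reps)                        # a positive fraction: Theta-hat_2 need not be an MLE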
8. Problem 10.74. Here we are exploring the Bayesian approach, where the parameter of
interest is considered as a realization of a random variable. For the problem at hand
X ∼ Binom(n, θ) and θ is a realization (which we do
not directly observe) of a beta RV Θ ∼ Beta(α, β).
[Please note that here your knowledge of basic/classical distributions becomes absolutely
crucial: you cannot solve any problem without knowing formulae for pmf/pdf; so it is time
to refresh them.]
In other words, here we are observing a binomial random variable whose parameter
(the probability of success) has a beta prior.
To find a Bayesian estimator, we need to find a posterior distribution of the parameter
of interest and then calculate its mean. [Please note that your knowledge of the means of
classical distributions becomes very handy here: as soon as you recognize the underlying
posterior distribution, you can use a formula for calculating its mean.]
Given this information, the posterior distribution of Θ given the observation X is
fΘ|X(θ|x) = fΘ(θ) fX|Θ(x|θ)/fX(x) = [Γ(n + α + β)/(Γ(x + α)Γ(n − x + β))] θ^{x+α−1} (1 − θ)^{(n−x+β)−1}.
The algebra leading to the last equality is explained on page 345.
Now you can realize that the posterior distribution is again Beta(x+α, n−x+β). There
are two consequences of this fact. First, by definition, if the prior density and the posterior
density are from the same family of distributions, then the prior is called conjugate. This
is the case that Bayesian statisticians like a lot because it methodologically supports the
Bayesian approach and also simplifies formulae. Second, we know the formula for the mean of
a beta RV, and using it we get the Bayesian estimator
Θ̂B = E(Θ|X) = (X + α)/[(α + X) + (n − X + β)] = (X + α)/(α + n + β).
Now we can actually consider the exercise at hand. A general remark: a Bayesian estimator
is typically a linear combination of the prior mean and the MLE, with weights
depending on the variances of these two estimates. In general, as n → ∞, the Bayesian
estimator approaches the MLE.
Let us check that this is the case for the problem at hand. Write,
Θ̂B = (X/n) · n/(α + β + n) + [α/(α + β)] · (α + β)/(α + β + n).
Now, if we denote
w := n/(α + β + n),
we get the desired representation
Θ̂B = wX̄ + (1 − w)θ0,
where θ0 = E(Θ) = α/(α + β) is the prior mean of Θ.
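A tiny sketch verifying this weighted-average representation numerically; the values of α, β, n, and X below are hypothetical and chosen only to check the algebra.

    # Hypothetical prior parameters and data.
    alpha, beta, n, x = 2.0, 3.0, 20, 7

    theta0 = alpha / (alpha + beta)              # prior mean
    w = n / (alpha + beta + n)                   # weight on the sample proportion

    posterior_mean = (x + alpha) / (alpha + beta + n)
    weighted_form = w * (x / n) + (1 - w) * theta0
    print(posterior_mean, weighted_form)         # the two numbers coincide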
Now, the problem at hand asks us to work a bit further on the weight. The variance of
the beta RV Θ is
Var(Θ) := σ0² = αβ/[(α + β)²(α + β + 1)].
Well, it is plain to see that
θ0(1 − θ0) = αβ/(α + β)².
Then simple algebra yields
σ0² = θ0(1 − θ0)/(α + β + 1),
which in its turn yields
α + β = θ0(1 − θ0)/σ0² − 1.
Using this we get the desired expression
w = n/[n + θ0(1 − θ0)σ0⁻² − 1].
Problem is solved.
9. Problem 10.76. Here X ∼ N(µ, σ²) with σ² being known. A sample of size n is given.
The parameter of interest is the population mean µ, and a Bayesian approach is considered
with the normal prior M ∼ N(µ0, σ0²). In other words, the Bayesian approach suggests
thinking about the estimated µ as a realization of a random variable M which has a normal
distribution with the given mean and variance.
As a result, we know that the Bayesian estimator is the mean of the posterior distribution.
The posterior distribution is calculated in Th. 10.6, and it is again normal N(µ1, σ1²), where
µ1 = X̄ · nσ0²/(nσ0² + σ²) + µ0 · σ²/(nσ0² + σ²);   1/σ1² = n/σ² + 1/σ0².
Note that this theorem implies that the normal distribution is the conjugate prior: the
prior is normal and the posterior is normal as well.
We can conclude that the Bayesian estimator is
M̂B = E(M|X̄) = wX̄ + (1 − w)µ0,
that is, the Bayesian estimator is a linear combination of the MLE (here X̄) and
the prior mean (the pure Bayesian estimate when no observations are available). Recall that
this is a rather typical outcome, and the Bayesian estimator approaches the MLE as n → ∞.
A direct (simple) calculation shows that
w = n/[n + σ²/σ0²].
Problem is solved.
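A short sketch of these formulas with hypothetical numbers, computing the weight w and the posterior mean and variance.

    # Hypothetical prior and data for the illustration.
    mu0, sigma0_sq = 65.0, 2.0                   # prior mean and variance of M
    sigma_sq, n = 50.0, 30                       # known data variance and sample size
    xbar = 70.0                                  # observed sample mean

    w = n / (n + sigma_sq / sigma0_sq)
    mu1 = w * xbar + (1 - w) * mu0               # posterior mean = Bayesian estimator
    sigma1_sq = 1.0 / (n / sigma_sq + 1.0 / sigma0_sq)
    print(mu1, sigma1_sq, w)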
10. Problem 10.77. Here a Poisson RV X with an unknown intensity λ is observed. The
problem is to estimate λ. A Bayesian approach is suggested with the prior distribution for
the intensity Λ being Gamma(α, β). In other words, X ∼ Poiss(Λ) and Λ ∼ Gamma(α, β).
To find a Bayesian estimator, we need to evaluate the posterior distribution of Λ given X
and then calculate its mean; that mean will be the Bayesian estimator. We do this in two
steps.
(a) To find the posterior distribution we begin with the joint pdf
fΛ,X(λ, x) = fΛ(λ) fX|Λ(x|λ) = [1/(Γ(α)β^α)] λ^{α−1} e^{−λ/β} · e^{−λ} λ^x [x!]⁻¹ I(λ > 0) I(x ∈ {0, 1, . . .}).
Then the posterior pdf is
fΛ|X(λ|x) = fΛ,X(λ, x)/fX(x) = [λ^{(α+x)−1} e^{−λ(1+1/β)}/(Γ(α)β^α fX(x) x!)] I(λ > 0). (2)
Now let me explain what smart Bayesian statisticians do. They do not calculate fX(x) or
try to simplify (2); instead they look at (2) as a density in λ and try to guess which family it
comes from. Here it is plain to realize that the posterior pdf is again Gamma; more exactly it is
Gamma(α + x, β/(1 + β)). Note that the Gamma prior for the Poisson intensity parameter
is the conjugate prior because the posterior is from the same family.
As soon as you recognize the posterior distribution, you know what the Bayesian estimator
is: it is the expected value of this Gamma RV, namely
Λ̂B = E(Λ|X) = (α + X)[β/(1 + β)] = β(α + X)/(1 + β).
The problem is solved.
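A brief sketch with hypothetical α, β, and an observed count x; it cross-checks the closed form against the general rule that the mean of a Gamma RV equals shape times scale.

    # Hypothetical prior parameters and observation.
    alpha, beta, x = 2.0, 1.5, 4                 # Gamma(alpha, beta) prior, observed Poisson count x

    post_shape = alpha + x                       # posterior is Gamma(alpha + x, beta/(1 + beta))
    post_scale = beta / (1.0 + beta)

    lambda_bayes = post_shape * post_scale       # mean of a Gamma RV = shape * scale
    print(lambda_bayes, beta * (alpha + x) / (1.0 + beta))   # the same number, as in the closed form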
11. Problem 10.94. This is a curious problem on the application and analysis of the Bayesian
approach. It is given that the observation X is a binomial RV Binom(n = 30, θ) and
someone believes that the probability of success θ is a realization of a Beta random variable
Θ ∼ Beta(α, β). Parameters α and β are not given; instead it is given that EΘ = θ0 = .74
and Var(Θ) = σ0² = 3² = 9. [Do you think that this information is enough to find the
parameters of the underlying beta distribution? If “yes”, then what are they?]
Now we are in a position to answer the questions.
(a). Using only the prior information (that is, no observation is available), the best MSE
estimate is the prior mean
Θ̂prior = EΘ = .74.
(b) Based on the direct information, the MLE and the MME estimators are the same
and they are
Θ̂MLE = Θ̂MME = X̄ = X/n = 18/30.
[Please compare the answers in parts (a) and (b). Are they far apart?]
(c) The Bayesian estimator with Θ ∼ Beta(α, β) is (see p.345)
Θ̂B = (X + α)/(α + β + n).
Now, we can either find α and β from the mean and variance information, or use results of
our homework problem 10.74 and get
Θ̂B = wX̄ + (1 − w)E(Θ),
where
w = n/[n + θ0(1 − θ0)/σ0² − 1] = 30/[30 + (.74)(.26)/9 − 1].
12. Problem 10.96. Let X be a grade, and assume that X ∼ N(µ, σ²) with σ² = (7.4)².
Then there is the professor's belief, based on prior knowledge, that the mean M ∼ N(µ0 =
65.2, σ0² = (1.5)²). After the exam, the observation is X̄ = 72.9.
(a) Denote by Z the standard normal random variable. Then using z-scoring yields
P(63.0 < M < 68.0) = P((63.0 − µ0)/σ0 < (M − µ0)/σ0 < (68.0 − µ0)/σ0)
= P((63 − 65.2)/1.5 < Z < (68 − 65.2)/1.5) = P(−2.2/1.5 < Z < 2.8/1.5).
Then you use the Table; I skip this step here.
(b) As we know from Theorem 10.6, M|X̄ is normally distributed with
µ1 = (nX̄σ0² + µ0σ²)/(nσ0² + σ²),   σ1² = σ²σ0²/(σ² + nσ0²).
Here: n = 40, X̄ = 72.9, σ0² = (1.5)², σ² = (7.4)², µ0 = 65.2. Plug in these numbers and
then
P(63 < M < 68 | X̄ = 72.9) = P((63 − µ1)/σ1 < Z < (68 − µ1)/σ1).
Find the numbers and use the Table.
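To finish the numerical work, here is a sketch that plugs in the stated numbers and uses the normal cdf (scipy's norm.cdf) in place of the Table.

    from scipy.stats import norm

    # Numbers as stated in the problem.
    mu0, sigma0_sq = 65.2, 1.5**2
    sigma_sq, n, xbar = 7.4**2, 40, 72.9

    # (a) Prior probability that 63 < M < 68.
    sigma0 = sigma0_sq ** 0.5
    p_prior = norm.cdf((68 - mu0) / sigma0) - norm.cdf((63 - mu0) / sigma0)

    # (b) Posterior parameters (Theorem 10.6) and the posterior probability of the same interval.
    mu1 = (n * xbar * sigma0_sq + mu0 * sigma_sq) / (n * sigma0_sq + sigma_sq)
    sigma1 = (sigma_sq * sigma0_sq / (sigma_sq + n * sigma0_sq)) ** 0.5
    p_post = norm.cdf((68 - mu1) / sigma1) - norm.cdf((63 - mu1) / sigma1)

    print(p_prior, mu1, sigma1, p_post)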