Let X be a random variable. Give the definition of the expected value or expectation 𝔼(X) when
X is a discrete random variable taking on values x1, x2, . . . .
- Definitions
- 𝔼(X) = expected value, also written µ, a.k.a. the long-run average: when the experiment is repeated many times, the average of the outcomes approaches this value
- Answer
- 𝔼(X) = µ = ∑ᵢ xᵢ · P(X = xᵢ): sum the products of each value of X and its probability
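A minimal numerical sketch of this definition; the values and probabilities below are made-up illustration data, not from the worksheet:

```python
# Expectation of a discrete random variable as sum(x * P(x)).
values = [1, 2, 3, 4]
probs = [0.1, 0.2, 0.3, 0.4]

assert abs(sum(probs) - 1.0) < 1e-12  # probabilities must sum to 1

expectation = sum(x * p for x, p in zip(values, probs))
print(expectation)  # 0.1*1 + 0.2*2 + 0.3*3 + 0.4*4 = 3.0
```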
X is a continuous random variable with density function f(x) taking on values in ℝ.
- Definitions
- continuous random variable: a random variable that can take on any value within a certain range or interval
- density function: also known as the probability density function (PDF), a mathematical function that describes the likelihood of a continuous random variable falling within a particular range of values.
- f(x) ≥ 0 for all x in range of X
- P(a ≤ X ≤ b) = ∫ₐᵇ f(x) dx
- Answer
- µ = µ_X = 𝔼(X) = ∫ x · f(x) dx, integrating over all of ℝ
see WS23 Exercise 1.1
see WS23 Exercise 1.2
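As a quick numerical illustration of the continuous case, the sketch below integrates x · f(x) with scipy; the Exp(1) density is chosen only as an example, not taken from the worksheet:

```python
# 𝔼(X) = ∫ x f(x) dx evaluated numerically for an example density.
from scipy.integrate import quad
import math

f = lambda x: math.exp(-x)          # density of Exp(1) on [0, ∞)

total, _ = quad(f, 0, math.inf)     # should be 1 (a density integrates to 1)
mean, _ = quad(lambda x: x * f(x), 0, math.inf)

print(total)  # ≈ 1.0
print(mean)   # ≈ 1.0  (𝔼(X) = 1 for Exp(1))
```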
Let X be uniformly distributed on [0,1], that is, X has density function f(x) = 1 for 0 ≤ x ≤ 1 and f(x) = 0 otherwise.
(We write X ~ unif[0,1].) Calculate 𝔼(X) and Var(X).
Uniformly distributed, therefore this formula:
𝔼(X) = ∫₀¹ x · f(x) dx
- = ∫₀¹ x · 1 dx
- = [x²/2]₀¹
- = 1/2 - 0
- = 1/2
𝔼(X) = 1/2
Var(X) = 𝔼(X²) - 𝔼(X)²
- Calculation for 𝔼(X²)
- 𝔼(X²) = ∫₀¹ x² · f(x) dx
- = ∫₀¹ x² dx
- = [x³/3]₀¹
- = 1/3 - 0
- = 1/3
Var(X) = 1/3 - (1/2)²
- = 1/3 - 1/4 = 1/12 (a quick simulation check follows below)
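A quick Monte Carlo check of these two values (sample mean ≈ 1/2, sample variance ≈ 1/12 ≈ 0.0833), using numpy with an arbitrary seed:

```python
# Monte Carlo sanity check of 𝔼(X) and Var(X) for X ~ unif[0,1].
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1_000_000)

print(x.mean())   # ≈ 0.5
print(x.var())    # ≈ 0.0833
```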
For X ~ N(µ, σ²), specify the density as well as 𝔼(X) and Var(X). (You do not need to calculate expectation and variance.)
N(µ, σ²) = normally distributed random variable X with mean µ and variance σ²
density function (PDF, given by the normal distribution formula): f(x) = 1/(σ·√(2π)) · exp(-(x - µ)² / (2σ²)), for x ∈ ℝ
𝔼(X) = µ
Var(X) = σ²
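A small sketch that cross-checks the density formula against scipy's implementation and the stated moments by simulation; µ = 2 and σ = 3 are arbitrary illustration values:

```python
# Cross-check the N(µ, σ²) density formula and the stated moments.
import numpy as np
from scipy.stats import norm

mu, sigma = 2.0, 3.0

# density formula written out by hand vs. the library implementation
x = 1.5
pdf_manual = np.exp(-(x - mu) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
print(pdf_manual, norm.pdf(x, loc=mu, scale=sigma))  # the two values should match

# 𝔼(X) = µ and Var(X) = σ² via Monte Carlo
rng = np.random.default_rng(1)
samples = rng.normal(mu, sigma, size=1_000_000)
print(samples.mean(), samples.var())  # ≈ 2.0 and ≈ 9.0
```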
Let X1, ..., Xn be independent and identically distributed random variables with 𝔼(Xi) = µ.
What is the "difference" between µ and i?
The difference between µ and lies in their interpretation and their roles in statistics:
- µ is a population parameter, representing the true average value of the entire population
- X̄ is a sample statistic, providing an estimate of µ based on a sample of observations.
Show that X̄ is an unbiased estimator for µ.
unbiased estimator = an estimator whose expected value equals the population parameter it estimates, i.e. it does not systematically over- or underestimate that parameter.
We know 𝔼(Xᵢ) = µ for every i
- => if 𝔼(X̄) = µ, then X̄ is an unbiased estimator
𝔼(X̄) = 𝔼((1/n) · ∑ᵢ Xᵢ)
- using the linearity of expectation, the constant 1/n can be pulled out and the expectation of the sum is the sum of the expectations
- = (1/n) · ∑ᵢ 𝔼(Xᵢ)
- since all Xᵢ are identically distributed, 𝔼(Xᵢ) = µ for every i
- = (1/n) · ∑ᵢ µ
- = (1/n) · n · µ
- = µ
Therefore X̄ is an unbiased estimator for µ, as its expected value equals µ (a simulation check follows below).
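The sketch below checks 𝔼(X̄) = µ by simulation; the exponential distribution with µ = 2 and the sample size n = 10 are arbitrary illustration choices:

```python
# Average of the sample mean X̄ over many independent samples should be ≈ µ.
import numpy as np

rng = np.random.default_rng(2)
mu, n, reps = 2.0, 10, 100_000

samples = rng.exponential(scale=mu, size=(reps, n))  # reps independent samples of size n
sample_means = samples.mean(axis=1)                  # X̄ for each sample

print(sample_means.mean())  # ≈ 2.0, i.e. the average of X̄ across samples is ≈ µ
```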
The sample covariance Sxy of numbers x1, ..., xn and y1, ..., yn is defined by Sxy = 1/(n-1) · ∑ᵢ (xᵢ - x̄)(yᵢ - ȳ), where x̄ and ȳ are the sample means.
Show that ∑ᵢ (xᵢ - x̄)(yᵢ - ȳ) = ∑ᵢ (xᵢ - x̄) · yᵢ (Formula 1) = ∑ᵢ xᵢ · (yᵢ - ȳ) (Formula 2) = ∑ᵢ xᵢyᵢ - n · x̄ · ȳ (Formula 3); dividing by n - 1 then gives the corresponding expressions for Sxy.
- Auxiliary Calculation 1: ∑ᵢ (xᵢ - x̄) = 0
  - = ∑ᵢ xᵢ - n · x̄ (linearity of sums)
  - = n · x̄ - n · x̄ = 0 (because x̄ = (1/n) · ∑ᵢ xᵢ, i.e. ∑ᵢ xᵢ = n · x̄)
- Auxiliary Calculation 2: ∑ᵢ (yᵢ - ȳ) = 0
  - same argument as Calculation 1, with ȳ = (1/n) · ∑ᵢ yᵢ
- Auxiliary Calculation 3: ∑ᵢ x̄ · yᵢ = x̄ · ∑ᵢ yᵢ = x̄ · n · ȳ = n · x̄ · ȳ
  - likewise ∑ᵢ xᵢ · ȳ = n · x̄ · ȳ and ∑ᵢ x̄ · ȳ = n · x̄ · ȳ
- Formula 1: ∑ᵢ (xᵢ - x̄)(yᵢ - ȳ) = ∑ᵢ (xᵢ - x̄) · yᵢ
  - ∑ᵢ (xᵢ - x̄)(yᵢ - ȳ) = ∑ᵢ (xᵢ - x̄) · yᵢ - ∑ᵢ (xᵢ - x̄) · ȳ (expanding the second factor, linearity of sums)
  - = ∑ᵢ (xᵢ - x̄) · yᵢ - ȳ · ∑ᵢ (xᵢ - x̄) (factoring out the common factor ȳ)
  - = ∑ᵢ (xᵢ - x̄) · yᵢ - ȳ · 0 (Calculation 1)
  - = ∑ᵢ (xᵢ - x̄) · yᵢ
  - q.e.d. Formula 1
- Formula 2: ∑ᵢ (xᵢ - x̄)(yᵢ - ȳ) = ∑ᵢ xᵢ · (yᵢ - ȳ)
  - proof as for Formula 1, expanding the first factor instead:
  - ∑ᵢ (xᵢ - x̄)(yᵢ - ȳ) = ∑ᵢ xᵢ · (yᵢ - ȳ) - x̄ · ∑ᵢ (yᵢ - ȳ) (linearity of sums, factoring out the common factor x̄)
  - = ∑ᵢ xᵢ · (yᵢ - ȳ) - x̄ · 0 (Calculation 2)
  - = ∑ᵢ xᵢ · (yᵢ - ȳ)
  - q.e.d. Formula 2
- Formula 3: ∑ᵢ (xᵢ - x̄)(yᵢ - ȳ) = ∑ᵢ xᵢyᵢ - n · x̄ · ȳ
  - ∑ᵢ (xᵢ - x̄)(yᵢ - ȳ) = ∑ᵢ (xᵢyᵢ - x̄ · yᵢ - xᵢ · ȳ + x̄ · ȳ) (expanding the product)
  - = ∑ᵢ xᵢyᵢ - ∑ᵢ x̄ · yᵢ - ∑ᵢ xᵢ · ȳ + ∑ᵢ x̄ · ȳ (linearity of sums)
  - = ∑ᵢ xᵢyᵢ - n · x̄ · ȳ - n · x̄ · ȳ + n · x̄ · ȳ (Calculation 3)
  - = ∑ᵢ xᵢyᵢ - n · x̄ · ȳ
  - q.e.d. Formula 3 (a numerical check of all three formulas follows below)
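A short numerical check, with random illustration data, that the definition and the three formulas agree:

```python
# Verify that the three rewritten forms of S_xy match the definition.
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = rng.normal(size=n)
y = rng.normal(size=n)
xbar, ybar = x.mean(), y.mean()

s_def = np.sum((x - xbar) * (y - ybar)) / (n - 1)   # definition
s_f1 = np.sum((x - xbar) * y) / (n - 1)             # Formula 1
s_f2 = np.sum(x * (y - ybar)) / (n - 1)             # Formula 2
s_f3 = (np.sum(x * y) - n * xbar * ybar) / (n - 1)  # Formula 3

print(s_def, s_f1, s_f2, s_f3)          # all four agree
print(np.cov(x, y, ddof=1)[0, 1])       # numpy's sample covariance as cross-check
```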
Find concrete matrices A and B such that AB ≠ BA. With these matrices, show that (AB)' = B'A'.
AB ≠ BA: for example, take
A = [1 2; 3 4], B = [0 1; 1 0]
A*B = [2 1; 4 3]
B*A = [3 4; 1 2]
AB ≠ BA: [2 1; 4 3] ≠ [3 4; 1 2]
(AB)' = B'A':
(AB)' = transpose of the matrix AB
- = [2 1; 4 3]' = [2 4; 1 3]
A' = [1 2; 3 4]' = [1 3; 2 4]
B' = [0 1; 1 0]' = [0 1; 1 0]
B'A' = [0 1; 1 0] * [1 3; 2 4]
- = [0·1+1·2  0·3+1·4; 1·1+0·2  1·3+0·4]
- = [2 4; 1 3]
(AB)' = B'A'
= [2 4; 1 3]
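The same example matrices, verified with numpy as a quick sketch:

```python
# Check AB != BA and (AB)' == B'A' for the example matrices above.
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

print(A @ B)                                 # [[2 1] [4 3]]
print(B @ A)                                 # [[3 4] [1 2]]
print(np.array_equal(A @ B, B @ A))          # False -> AB != BA

print((A @ B).T)                             # [[2 4] [1 3]]
print(B.T @ A.T)                             # [[2 4] [1 3]]
print(np.array_equal((A @ B).T, B.T @ A.T))  # True -> (AB)' == B'A'
```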
Let
A = [1 0; 2 1]
and compute A', A⁻¹, (A')⁻¹, (A⁻¹)'.
A' = transpose of the matrix
= [1 2; 0 1]
A⁻¹ = inverse of the matrix
→ with the elementary transformation method: row-reduce [A | I] to [I | A⁻¹]
= [1 0 | 1 0; 2 1 | 0 1] / second row - (2 * first row)
= [1 0 | 1 0; 0 1 | -2 1]
A⁻¹ = [1 0; -2 1]
(A')⁻¹ = [1 2 | 1 0; 0 1 | 0 1] / first row - (2 * second row)
= [1 0 | 1 -2; 0 1 | 0 1]
(A')⁻¹ = [1 -2; 0 1]
(A⁻¹)' = [1 0; -2 1]'
= [1 -2; 0 1]
(so (A')⁻¹ = (A⁻¹)', as expected)
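A numpy sketch checking these results for the 2×2 matrix A worked above (A = [[1, 0], [2, 1]] is the matrix assumed in that computation), including the identity (A')⁻¹ = (A⁻¹)':

```python
# Check A', A^{-1}, (A')^{-1}, (A^{-1})' for the assumed matrix A.
import numpy as np

A = np.array([[1.0, 0.0], [2.0, 1.0]])

A_T = A.T
A_inv = np.linalg.inv(A)

print(A_T)                 # [[1 2] [0 1]]
print(A_inv)               # [[ 1  0] [-2  1]]
print(np.linalg.inv(A_T))  # [[ 1 -2] [ 0  1]]
print(A_inv.T)             # [[ 1 -2] [ 0  1]]  -> (A')^{-1} == (A^{-1})'
```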
Let
A = (a₁ a₂ a₃), written in terms of its columns a₁, a₂, a₃ (for the concrete matrix see the linked exercise sheet), b = (3, 0, 1)'
and compute Ab. Demonstrate that the result is a linear combination of the columns of A with the coefficients being the components of b.
A * b = (a₁ a₂ a₃) * (3, 0, 1)'
= 3 * a₁ + 0 * a₂ + 1 * a₃
= 3·a₁ + a₃
i.e., Ab is a linear combination of the columns of A with coefficients 3, 0, 1, which are exactly the components of b (see the numerical sketch below).
see https://vowi.fsinf.at/images/8/80/TU_Wien-Econometrics_for_Business_Informatics_VU_%28Schneider%29_-_Exercise_1.7_2024S.pdf
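A sketch of this column view of matrix-vector multiplication; the matrix A below is made up for illustration (the exercise's concrete A is in the linked sheet), while b = (3, 0, 1)' as above:

```python
# Ab equals the linear combination of the columns of A with the components of b.
import numpy as np

A = np.array([[1, 4, 7],     # hypothetical example matrix, not from the exercise
              [2, 5, 8],
              [3, 6, 9]])
b = np.array([3, 0, 1])

product = A @ b                                  # matrix-vector product
combo = 3 * A[:, 0] + 0 * A[:, 1] + 1 * A[:, 2]  # same result, column by column

print(product)                         # [10 14 18]
print(combo)                           # [10 14 18]
print(np.array_equal(product, combo))  # True
```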
Let x1, ..., xn ∈ ℝ. If Sxx = 0, what can you deduce for the xᵢ's (i = 1, ..., n)? What does Sxx = 0 imply for data points (x1, y1), ..., (xn, yn), i.e., what would a plot of these data points look like?
Sxx = 1/(n-1) · ∑ᵢ (xᵢ - x̄)² is a sum of squares, so Sxx = 0 forces every term (xᵢ - x̄)² to be 0. Hence all xᵢ are equal to x̄. The data points (xᵢ, yᵢ) then all lie on one line parallel to the y-axis, namely the vertical line x = x̄. (Tip: draw it by putting little dots starting on the x-axis at x̄ and going straight up.)
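A tiny numerical illustration with made-up data: if all xᵢ are equal, Sxx = 0 and the points (xᵢ, yᵢ) stack on a vertical line.

```python
# S_xx = 0 when all x_i coincide; the points then lie on the vertical line x = x̄.
import numpy as np

x = np.full(5, 2.0)                       # all x_i equal to 2.0, so x̄ = 2.0
y = np.array([1.0, 3.0, 0.5, 2.2, 4.1])   # arbitrary y-values

s_xx = np.sum((x - x.mean()) ** 2) / (len(x) - 1)
print(s_xx)                               # 0.0 -> all points lie on the line x = 2
```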