Let X be a random variable. Give the definition of the expected value or expectation 𝔼(X) when
X is a discrete random variable taking on values x1, x2, . . . .
- Definitions
- 𝔼(X) = expected value, also known as the long-run average, denoted µ: repeating an experiment many times, the observed outcomes average out to 𝔼(X)
- Answer
- 𝔼(X) = µ = ∑_i x_i * P(X = x_i): sum the products of each value of X and its probability
X is a continuous random variable with density function f(x) taking on values in ℝ.
- Definitions
- continuous random variable: a random variable that can take on any value within a certain range or interval
- density function: also known as probability density function (PDF); a mathematical function that describes the likelihood of a continuous random variable falling within a particular range of values. It satisfies:
- f(x) ≥ 0 for all x in range of X
- ∫_{-∞}^{∞} f(x) dx = 1
- P(a ≤ X ≤ b) = ∫_{a}^{b} f(x) dx
- Answer
- µ = µ_X = 𝔼(X) = ∫_{-∞}^{∞} x * f(x) dx
see WS23 Exercise 1.1
see WS23 Exercise 1.2
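As a quick numerical cross-check of both definitions, here is a minimal Python sketch; the fair die and the unif[0,1] density are illustrative assumptions, not part of the question.

```python
import numpy as np
from scipy.integrate import quad

# Discrete case: E(X) = sum of x_i * P(X = x_i), e.g. a fair six-sided die.
values = np.arange(1, 7)
probs = np.full(6, 1 / 6)
print(np.sum(values * probs))  # 3.5

# Continuous case: E(X) = integral of x * f(x) dx, e.g. f(x) = 1 on [0, 1].
expectation, _ = quad(lambda x: x * 1.0, 0, 1)
print(expectation)  # 0.5
```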
Let X be uniformly distributed on [0,1], that is, X has density function
f(x) = 1 for x ∈ [0,1], and f(x) = 0 otherwise.
(We write X ~ unif[0,1].) Calculate 𝔼(X) and Var(X).
X is uniformly distributed, therefore its density is 0 outside [0,1] and 1 on [0,1]:
𝔼(X) = ∫_{-∞}^{∞} x * f(x) dx
- = ∫_{-∞}^{0} x * 0 dx + ∫_{0}^{1} x * 1 dx + ∫_{1}^{∞} x * 0 dx
- = 0 + ∫_{0}^{1} x dx + 0
- = [x²/2]_{0}^{1}
- = 1/2 − 0
𝔼(X) = 1/2
Var(X) = 𝔼(X²) − (𝔼(X))²
- Calculation for 𝔼(X²)
- 𝔼(X²) = ∫_{-∞}^{∞} x² * f(x) dx
- = ∫_{0}^{1} x² * 1 dx
- = [x³/3]_{0}^{1}
- = 1/3 − 0
- = 1/3
Var(X) = 𝔼(X²) − (𝔼(X))²
- = 1/3 − (1/2)²
- = 1/3 − 1/4 = 1/12
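A Monte Carlo sketch to sanity-check 𝔼(X) = 1/2 and Var(X) = 1/12; sample size and seed are arbitrary choices.

```python
import numpy as np

# Draw many unif[0,1] samples; mean and variance should approach 1/2 and 1/12.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=1_000_000)
print(x.mean())  # ~0.5
print(x.var())   # ~0.0833... = 1/12
```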
For X ~ N(µ, σ²), specify the density as well as 𝔼(X) and Var(X). (You do not need to calculate expectation and variance.)
N(µ, σ²) = normally distributed random variable X with mean µ and variance σ²
density function (PDF, given by the normal distribution formula):
f(x) = (1 / (σ * √(2π))) * e^(−(x − µ)² / (2σ²))
𝔼(X) = µ
Var(X) = σ²
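A short check with scipy.stats; note that scipy parametrizes the normal distribution by the standard deviation σ (scale), not the variance σ². The values µ = 2 and σ = 3 are arbitrary examples.

```python
from scipy.stats import norm

mu, sigma = 2.0, 3.0           # arbitrary example parameters
X = norm(loc=mu, scale=sigma)  # scale is sigma, NOT sigma^2
print(X.mean())   # 2.0 -> E(X) = mu
print(X.var())    # 9.0 -> Var(X) = sigma^2
print(X.pdf(mu))  # density at the peak x = mu: 1 / (sigma * sqrt(2*pi))
```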
Let X_1, ..., X_n be independent and identically distributed random variables with 𝔼(X_i) = µ.
What is the "difference" between µ and x̄ = (1/n) ∑_{i=1}^{n} X_i?
The difference between µ and x̄ lies in their interpretation and their roles in statistics:
- µ is a population parameter, representing the true average value of the entire population
- x̄ is a sample statistic, providing an estimate of µ based on a sample of observations.
Show that x̄ = (1/n) ∑_{i=1}^{n} X_i is an unbiased estimator for µ.
unbiased estimator = an estimator whose expected value equals the population parameter it estimates, i.e., it neither systematically under- nor overestimates.
We know µ = 𝔼(X_i), the population mean
- => if 𝔼(x̄) = µ, then x̄ is an unbiased estimator
𝔼(x̄) = 𝔼((1/n) ∑_{i=1}^{n} X_i)
- by linearity of expectation, the constant factor 1/n can be pulled out and 𝔼 and ∑ can be swapped (independence is not even needed for this step)
- = (1/n) ∑_{i=1}^{n} 𝔼(X_i)
- since all X_i are identically distributed, 𝔼(X_i) = µ for every i
- = (1/n) ∑_{i=1}^{n} µ
- = (1/n) * n * µ
- = µ
Therefore x̄ is an unbiased estimator for µ, as its expected value equals µ.
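A simulation sketch of unbiasedness: averaged over many repetitions, the sample mean reproduces µ. The normal distribution, µ = 5, n = 10, and the repetition count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n, reps = 5.0, 10, 100_000
# One sample mean per row of a (reps x n) matrix of i.i.d. draws.
sample_means = rng.normal(mu, 2.0, size=(reps, n)).mean(axis=1)
print(sample_means.mean())  # ~5.0: no systematic over- or underestimation
```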
The sample covariance S_xy of numbers x_1, ..., x_n and y_1, ..., y_n is defined by
S_xy = (1/n) ∑_{i=1}^{n} (x_i − x̄)(y_i − ȳ).
Show that
S_xy = (1/n) ∑_{i=1}^{n} x_i * (y_i − ȳ) = (1/n) ∑_{i=1}^{n} (x_i − x̄) * y_i = (1/n) ∑_{i=1}^{n} x_i * y_i − x̄ * ȳ.
S_xy = (1/n) ∑_{i=1}^{n} (x_i − x̄)(y_i − ȳ) = (1/n) ∑_{i=1}^{n} x_i * (y_i − ȳ)
- = (1/n) ∑_{i=1}^{n} (x_i * y_i − x_i * ȳ − x̄ * y_i + x̄ * ȳ) (expanding the product)
- Using linearity of sums
- = (1/n) ∑_{i=1}^{n} x_i * y_i − (1/n) ∑_{i=1}^{n} x_i * ȳ − (1/n) ∑_{i=1}^{n} x̄ * y_i + (1/n) ∑_{i=1}^{n} x̄ * ȳ
- Calculation1 for (1/n) ∑_{i=1}^{n} x̄ * y_i
- Because x̄ is a constant, it can be pulled out of the sum
- = x̄ * (1/n) ∑_{i=1}^{n} y_i
- Because (1/n) ∑_{i=1}^{n} y_i = ȳ
- = x̄ * ȳ
- Calculation2 for (1/n) ∑_{i=1}^{n} x̄ * ȳ
- see Calculation1 step 1; additionally (1/n) ∑_{i=1}^{n} x̄ = x̄
- = x̄ * ȳ
- = (1/n) ∑_{i=1}^{n} x_i * y_i − (1/n) ∑_{i=1}^{n} x_i * ȳ − x̄ * ȳ + x̄ * ȳ
- = (1/n) ∑_{i=1}^{n} x_i * y_i − (1/n) ∑_{i=1}^{n} x_i * ȳ (the last two terms cancel)
- Using linearity of sums
- = (1/n) ∑_{i=1}^{n} (x_i * y_i − x_i * ȳ)
- Using factoring out the common factor x_i
- = (1/n) ∑_{i=1}^{n} x_i * (y_i − ȳ)
- q.e.d. Formula 1
S_xy = (1/n) ∑_{i=1}^{n} (x_i − x̄)(y_i − ȳ) = (1/n) ∑_{i=1}^{n} x_i * y_i − x̄ * ȳ
- proof as in 1 until
- = (1/n) ∑_{i=1}^{n} x_i * y_i − (1/n) ∑_{i=1}^{n} x_i * ȳ − x̄ * ȳ + x̄ * ȳ
- Calculation3 for (1/n) ∑_{i=1}^{n} x_i * ȳ
- see Calculation2: ȳ is a constant, so (1/n) ∑_{i=1}^{n} x_i * ȳ = ȳ * (1/n) ∑_{i=1}^{n} x_i
- = x̄ * ȳ
- = (1/n) ∑_{i=1}^{n} x_i * y_i − x̄ * ȳ − x̄ * ȳ + x̄ * ȳ
- = (1/n) ∑_{i=1}^{n} x_i * y_i − x̄ * ȳ
- q.e.d. Formula 3
S_xy = (1/n) ∑_{i=1}^{n} (x_i − x̄)(y_i − ȳ) = (1/n) ∑_{i=1}^{n} (x_i − x̄) * y_i
- proof as in 1 until
- = (1/n) ∑_{i=1}^{n} x_i * y_i − (1/n) ∑_{i=1}^{n} x_i * ȳ − (1/n) ∑_{i=1}^{n} x̄ * y_i + (1/n) ∑_{i=1}^{n} x̄ * ȳ
- using Calculation2 and Calculation3
- = (1/n) ∑_{i=1}^{n} x_i * y_i − x̄ * ȳ − (1/n) ∑_{i=1}^{n} x̄ * y_i + x̄ * ȳ
- = (1/n) ∑_{i=1}^{n} x_i * y_i − (1/n) ∑_{i=1}^{n} x̄ * y_i (the terms x̄ * ȳ cancel)
- using linearity of sums
- = (1/n) ∑_{i=1}^{n} (x_i * y_i − x̄ * y_i)
- using factoring out the common factor y_i
- = (1/n) ∑_{i=1}^{n} (x_i − x̄) * y_i
- q.e.d. Formula 2
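All three identities can also be verified numerically; a minimal sketch with random data (sizes and seed are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.normal(size=50), rng.normal(size=50)
xb, yb = x.mean(), y.mean()

s_def = np.mean((x - xb) * (y - yb))  # defining formula
s_f1 = np.mean(x * (y - yb))          # Formula 1
s_f2 = np.mean((x - xb) * y)          # Formula 2
s_f3 = np.mean(x * y) - xb * yb       # Formula 3
print(np.allclose([s_def] * 3, [s_f1, s_f2, s_f3]))  # True
```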
Find concrete matrices A and B such that AB ≠ BA. With these matrices, show that (AB)' = B'A'.
AB ≠ BA:
A = (1 2; 3 4), B = (5 6; 7 8)
A*B = (1*5+2*7 1*6+2*8; 3*5+4*7 3*6+4*8) = (19 22; 43 50)
B*A = (5*1+6*3 5*2+6*4; 7*1+8*3 7*2+8*4) = (23 34; 31 46)
AB ≠ BA: (19 22; 43 50) ≠ (23 34; 31 46)
(AB)' = B'A':
(AB)' = transpose of the matrix AB
- = (19 22; 43 50)'
- = (19 43; 22 50)
A' = (1 2; 3 4)' = (1 3; 2 4)
B' = (5 6; 7 8)' = (5 7; 6 8)
B'A' = (5 7; 6 8) * (1 3; 2 4)
- = (5*1+7*2 5*3+7*4; 6*1+8*2 6*3+8*4)
- = (19 43; 22 50)
(AB)' = B'A'
= (19 43; 22 50)
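The same calculations in NumPy, as a check:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A @ B)  # [[19 22] [43 50]]
print(B @ A)  # [[23 34] [31 46]] -> AB != BA
print(np.array_equal((A @ B).T, B.T @ A.T))  # True -> (AB)' = B'A'
```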
Let
A = (1 0; 2 1)
and compute A', A⁻¹, (A')⁻¹, (A⁻¹)'.
A' = transpose of a matrix
= (1 0; 2 1)' = (1 2; 0 1)
A⁻¹ = inverse of a matrix
→ with the Elementary Transformation Method
= (1 0 | 1 0; 2 1 | 0 1) / second line − (2 * first line)
= (1 0 | 1 0; 0 1 | −2 1)
A⁻¹ = (1 0; −2 1)
(A')⁻¹ = (1 2 | 1 0; 0 1 | 0 1) / first line − (2 * second line)
= (1 0 | 1 −2; 0 1 | 0 1)
(A')⁻¹ = (1 −2; 0 1)
(A⁻¹)' = (1 0; −2 1)'
= (1 −2; 0 1)
Note: (A')⁻¹ = (A⁻¹)'.
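A NumPy check of the results above, in particular that (A')⁻¹ = (A⁻¹)':

```python
import numpy as np

A = np.array([[1, 0], [2, 1]])
print(np.linalg.inv(A))    # [[ 1.  0.] [-2.  1.]]  -> A^-1
print(np.linalg.inv(A.T))  # [[ 1. -2.] [ 0.  1.]]  -> (A')^-1
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))  # True
```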
Let
A be the given matrix with columns a_1, a_2, a_3, b = (3, 0, 1)'
and compute Ab. Demonstrate that the result is a linear combination of the columns of A with the coefficients being the components of b.
A * b = 3 * a_1 + 0 * a_2 + 1 * a_3
The components 3, 0, 1 of b are exactly the coefficients of the columns a_1, a_2, a_3 of A in this linear combination.
see https://vowi.fsinf.at/images/8/80/TU_Wien-Econometrics_for_Business_Informatics_VU_%28Schneider%29_-_Exercise_1.7_2024S.pdf
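A NumPy illustration of the column-combination view. The matrix below is a made-up stand-in (the exercise's actual A is in the linked PDF); only b = (3, 0, 1)' is taken from the exercise.

```python
import numpy as np

A = np.array([[1, 4, 7],
              [2, 5, 8],
              [3, 6, 9]])  # hypothetical entries, for demonstration only
b = np.array([3, 0, 1])

direct = A @ b                                   # matrix-vector product
combo = 3 * A[:, 0] + 0 * A[:, 1] + 1 * A[:, 2]  # columns weighted by b
print(np.array_equal(direct, combo))  # True
```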
Let x_1, ..., x_n ∈ ℝ. If S_xx = 0, what can you deduce for the x_i (i = 1, ..., n)? What does S_xx = 0 imply for data points (x_1, y_1), ..., (x_n, y_n), i.e., what would a plot of these data points look like?
S_xx = (1/n) ∑_{i=1}^{n} (x_i − x̄)² is a sum of squares, so S_xx = 0 forces every term to be 0: all x_i are equal to x̄. For the data points this means they all lie on one line parallel to the y-axis. (Tip: draw it by making little dots starting on the x-axis at x̄ and going straight up.)
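A tiny numerical illustration: constant x-values give S_xx = 0, and a plot of (x_i, y_i) would show dots stacked vertically. The concrete values are arbitrary.

```python
import numpy as np

x = np.full(5, 2.0)  # all x_i equal, so x_bar = 2
y = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
s_xx = np.mean((x - x.mean()) ** 2)
print(s_xx)  # 0.0 -> points (2, y_i) lie on the vertical line x = 2
```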