TU Wien:Advanced Methods for Regression and Classification VU (Filzmoser)/Oral exam 2020-02-27
27.02.20 - 11:00
Participant 1:
- Smoothing splines: What are splines, what is the criterion to optimize, and what is the solution (formulas)? How do you determine lambda / the degrees of freedom? What do the degrees of freedom mean in this context, and why do we care about them? (A sketch of the formulas follows after this list.)
- SVM: Write down the optimization criterion with its constraints, how we get the solution (Lagrange primal, dual function) and the KKT conditions, for both the linearly separable and the non-separable case. Why do we need the KKT conditions? (Sketched below the list.)
- What is the kernel trick? <- For this question I started with basis expansions: we put the basis functions into a vector h(x) = (h_1(x), ..., h_M(x)) and define our hyperplane with that. Then I explained that h(x) only enters the solution through inner products, so we don't need to define the basis functions explicitly if we have a kernel function K(x, x') that returns this inner product as a real number. That seemed to be what Prof. Filzmoser wanted to hear. (A small example is sketched below.)
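Sketch for the smoothing-spline question (notation as in Hastie et al., The Elements of Statistical Learning; the lecture notation may differ slightly): among all functions f with two continuous derivatives, minimize the penalized residual sum of squares

<math>\mathrm{RSS}(f,\lambda)=\sum_{i=1}^{n}\{y_i-f(x_i)\}^2+\lambda\int\{f''(t)\}^2\,dt.</math>

The minimizer is a natural cubic spline with knots at the unique x_i. Writing f(x) = sum_j N_j(x) theta_j with basis matrix N and penalty matrix Omega, where Omega_jk = int N_j''(t) N_k''(t) dt, the criterion becomes (y - N theta)^T (y - N theta) + lambda theta^T Omega theta, with solution

<math>\hat\theta=(\mathbf{N}^\top\mathbf{N}+\lambda\boldsymbol\Omega)^{-1}\mathbf{N}^\top\mathbf{y}.</math>

The fit is therefore linear in y, with smoother matrix S_lambda = N (N^T N + lambda Omega)^{-1} N^T. The effective degrees of freedom are df_lambda = trace(S_lambda); they translate the abstract lambda into a model complexity comparable to the number of parameters in a linear model, so one can either fix df_lambda (say, 5) and solve for lambda, or choose lambda by cross-validation.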
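Sketch for the SVM question (standard textbook form; signs and scaling may differ from the slides). Linearly separable case:

<math>\min_{\beta,\beta_0}\ \tfrac{1}{2}\|\beta\|^2 \quad\text{subject to}\quad y_i(x_i^\top\beta+\beta_0)\ge 1,\ i=1,\dots,n.</math>

Non-separable case, with slack variables xi_i and cost parameter C:

<math>\min_{\beta,\beta_0,\xi}\ \tfrac{1}{2}\|\beta\|^2 + C\sum_{i=1}^{n}\xi_i \quad\text{subject to}\quad \xi_i\ge 0,\ y_i(x_i^\top\beta+\beta_0)\ge 1-\xi_i.</math>

Lagrange primal function for the non-separable case:

<math>L_P=\tfrac{1}{2}\|\beta\|^2 + C\sum_i\xi_i - \sum_i\alpha_i\{y_i(x_i^\top\beta+\beta_0)-(1-\xi_i)\} - \sum_i\mu_i\xi_i.</math>

Setting the derivatives with respect to beta, beta_0 and xi_i to zero gives beta = sum_i alpha_i y_i x_i, sum_i alpha_i y_i = 0 and alpha_i = C - mu_i; substituting back yields the dual

<math>L_D=\sum_i\alpha_i-\tfrac{1}{2}\sum_{i}\sum_{i'}\alpha_i\alpha_{i'}y_iy_{i'}x_i^\top x_{i'},\qquad 0\le\alpha_i\le C,\ \sum_i\alpha_i y_i=0.</math>

The KKT conditions add complementary slackness, alpha_i {y_i(x_i^T beta + beta_0) - (1 - xi_i)} = 0 and mu_i xi_i = 0, plus feasibility. We need them because they characterize the solution of this convex problem and show that only observations with alpha_i > 0 (the support vectors, on or inside the margin) determine beta. The separable case is the same derivation with the slack variables and C dropped.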
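For the kernel trick, a minimal concrete example: for x, x' in R^2, the polynomial kernel of degree 2 is an inner product in a 3-dimensional feature space,

<math>K(x,x')=(x^\top x')^2=h(x)^\top h(x'),\qquad h(x)=\left(x_1^2,\ \sqrt{2}\,x_1x_2,\ x_2^2\right)^\top.</math>

Since the dual and the decision function f(x) = sum_i alpha_i y_i h(x)^T h(x_i) + beta_0 involve h only through such inner products, evaluating K directly lets us fit the hyperplane in the feature space without ever writing down the basis functions.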
Participant 2:
- Regression trees: splitting criterion (formula), how to grow the tree and for how long? Quality measure and cost-complexity criterion (formula). (Sketched below.)
- PCR: the model, how to get the principal components (formulas), and the solution for the estimated coefficients beta_hat (formula). Why do we need PCR, and in which use cases might it be helpful (singularity of X^TX)? (Sketched below.)
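Sketch for the tree question (CART notation as in Hastie et al.; the lecture notation may differ). Greedy splitting: for splitting variable j and split point s, define the half-planes R_1(j,s) = {x : x_j <= s} and R_2(j,s) = {x : x_j > s}, and solve

<math>\min_{j,s}\left[\min_{c_1}\sum_{x_i\in R_1(j,s)}(y_i-c_1)^2+\min_{c_2}\sum_{x_i\in R_2(j,s)}(y_i-c_2)^2\right],</math>

where the inner minima are attained at the region means. The tree is grown greedily, split by split, typically until a minimum node size is reached, and then pruned. With the per-node quality measure Q_m(T) = (1/N_m) sum_{x_i in R_m} (y_i - c_hat_m)^2, the cost-complexity criterion is

<math>C_\alpha(T)=\sum_{m=1}^{|T|}N_m Q_m(T)+\alpha|T|,</math>

where |T| is the number of terminal nodes and alpha >= 0 trades off fit against tree size; alpha is chosen e.g. by cross-validation.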
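Sketch for the PCR question (assuming centered, standardized predictors; details as in the lecture). The loadings v_j are the eigenvectors of X^TX, and the principal components (scores) are

<math>\mathbf{X}^\top\mathbf{X}\,v_j=\lambda_j v_j,\qquad z_j=\mathbf{X}v_j.</math>

Regressing y on the first q components gives, because the z_j are orthogonal,

<math>\hat{y}=\bar{y}\mathbf{1}+\sum_{j=1}^{q}\hat\gamma_j z_j,\qquad \hat\gamma_j=\frac{z_j^\top y}{z_j^\top z_j},</math>

and back in terms of the original predictors

<math>\hat\beta^{\mathrm{PCR}}(q)=\sum_{j=1}^{q}\hat\gamma_j v_j.</math>

Using only q < p components avoids inverting a (near-)singular X^TX, which is exactly the use case: highly collinear predictors or p > n, where ordinary least squares is unstable or not defined.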