TU Wien:Machine Learning VU (Musliu)/Exam 2021-01-25

True/False questions

  • SVM with gradient descent always finds the optimal hyperplane - False
  • Gradient descent always finds the global optimum - False
  • An RNN can be unrolled into an infinite fully connected network - True
  • Pooling and convolution are operations related to RNNs - False
  • Learning the structure of a Bayesian network is less complex than learning the probabilities - False
  • SVMs with a linear kernel are especially good for high-dimensional data - True
  • Random forest is a homogeneous ensemble method - True
  • If you use several weak learners h with boosting to get a classifier H and all h are linear classifiers, then H is also a linear classifier - False (see the sketch after this list)
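
The last point can be illustrated with a minimal sketch (assuming scikit-learn and NumPy are available; the data and settings are made up for illustration): AdaBoost's default weak learner is a depth-1 decision tree, a single-threshold linear classifier, yet the boosted ensemble separates a class that no single linear model can.

    # Illustrative sketch only (assumes scikit-learn and NumPy): boosting
    # linear threshold classifiers (decision stumps) gives a non-linear rule.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(500, 1))
    y = (np.abs(X[:, 0]) < 0.5).astype(int)  # class 1 on an interval: not linearly separable

    # The default weak learner of AdaBoostClassifier is a depth-1 decision
    # tree (a stump), i.e. a single axis-aligned threshold.
    boosted = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
    linear = LogisticRegression().fit(X, y)

    print("boosted stumps, training accuracy:", boosted.score(X, y))  # close to 1.0
    print("single linear model, accuracy:    ", linear.score(X, y))   # around chance level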

Free text questions

  • Explain the difference between L2 and L1 regularization (see the sketch after this list)
  • Explain the implications of the no-free-lunch theorem
  • Decision tree stump example with boosting (compare the boosting sketch after the true/false list above)
  • Depth of a decision tree with 1000 samples and at most 300 samples per leaf (see the worked calculation after this list)
  • Explain polynomial regression and name its advantages and disadvantages compared to linear regression (see the sketch after this list)
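
For the L1/L2 question, a minimal sketch (assuming scikit-learn and NumPy; the data are synthetic and purely illustrative): L2 (Ridge) adds a penalty proportional to the squared weights and shrinks all coefficients, while L1 (Lasso) adds a penalty proportional to the absolute weights and tends to set irrelevant coefficients exactly to zero.

    # Illustrative sketch (assumes scikit-learn and NumPy): L2 penalizes
    # lambda * ||w||_2^2, L1 penalizes lambda * ||w||_1.
    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    w_true = np.array([3.0, -2.0, 0, 0, 0, 0, 0, 0, 0, 0])  # only 2 informative features
    y = X @ w_true + 0.5 * rng.normal(size=200)

    ridge = Ridge(alpha=1.0).fit(X, y)
    lasso = Lasso(alpha=0.1).fit(X, y)

    print("ridge coefficients:", np.round(ridge.coef_, 3))  # shrunk, but typically all non-zero
    print("lasso coefficients:", np.round(lasso.coef_, 3))  # irrelevant ones driven exactly to zero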
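
For the depth question, a hedged reading (the exact wording was not recorded, so this assumes it asks for the minimum depth of a binary tree in which every leaf holds at most 300 of the 1000 samples): at least ceil(1000/300) = 4 leaves are needed, and a binary tree of depth d has at most 2^d leaves, so the depth must be at least 2.

    # Counting argument as a sketch (assumptions: binary splits, at most 300
    # samples per leaf, and the minimum depth is asked).
    import math

    n_samples, max_per_leaf = 1000, 300
    min_leaves = math.ceil(n_samples / max_per_leaf)  # 4 leaves are needed
    min_depth = math.ceil(math.log2(min_leaves))      # 2**d >= 4  =>  d >= 2
    print(min_leaves, min_depth)                      # prints: 4 2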
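
For the polynomial regression question, a minimal sketch (assuming scikit-learn and NumPy; synthetic data): the model is still linear in its parameters, but the input is expanded into powers x, x^2, ..., x^d, which captures non-linear trends at the cost of more parameters and a higher risk of overfitting for large degrees.

    # Illustrative sketch (assumes scikit-learn and NumPy): polynomial
    # regression = linear regression on expanded features 1, x, x^2, x^3.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, size=(80, 1))
    y = 0.5 * x[:, 0] ** 3 - x[:, 0] + rng.normal(scale=1.0, size=80)

    lin = LinearRegression().fit(x, y)
    poly = make_pipeline(PolynomialFeatures(degree=3), LinearRegression()).fit(x, y)

    print("plain linear R^2:       ", round(lin.score(x, y), 3))   # underfits the cubic trend
    print("degree-3 polynomial R^2:", round(poly.score(x, y), 3))  # fits much better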