TU Wien:Machine Learning VU (Musliu)/Exam 2020-09-09

From VoWi

Working time was 90 minutes.

1-12 True/False questions

Each correctly answered question was worth 2 points, each wrongly answered question -1 point, and unanswered questions 0 points.

  • Softmax as an activation function is used to scale the output to a range of 0..1
  • Kernels can only be used with SVMs
  • Boosting is easy to parallelize
  • A good machine learning model has low bias and low variance.
  • Freezing layers means that their weights are only updated during fine-tuning.
  • Leave-p-out cross validation is computationally expensive on large data sets.
  • There exists no Bayesian network where all instantiated nodes are not d-separated after one node is instantiated.
  • Lasso cannot be used for feature selection.
  • Something with "Off-the-shelf" model
  • The F1 score can be used with regression
  • Naive Bayes: a probability density function is used when only nominal attributes are present
  • Convolutions and max-pooling layers are important for Recurrent Neural Networks
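For the first statement, a minimal NumPy sketch (illustrative only, not from the exam) shows what softmax actually does: it maps arbitrary real scores into values in (0, 1) that sum to 1.

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; the result is unchanged.
    e = np.exp(z - np.max(z))
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs)        # each entry lies in (0, 1)
print(probs.sum())  # entries sum to 1
```

Note that softmax normalizes the whole output vector into a probability distribution, which is more than merely scaling each value into 0..1 (the sigmoid does that element-wise).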

Bayesian Networks

Describe a local-search-based algorithm for creating Bayesian networks.
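One common answer is greedy hill climbing over network structures: start from an empty graph and repeatedly apply the single edge addition, deletion, or reversal that keeps the graph acyclic and most improves a scoring function (e.g. BIC or BDeu). The sketch below is a toy illustration of that search loop; the score function here is a hypothetical stand-in, not a real Bayesian score.

```python
import itertools

def is_acyclic(nodes, edges):
    # Kahn's algorithm: the graph is a DAG iff all nodes can be peeled off.
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)

def neighbours(nodes, edges):
    # All structures one edge operation away: add, delete, or reverse an edge.
    for u, v in itertools.permutations(nodes, 2):
        if (u, v) in edges:
            yield edges - {(u, v)}               # delete
            yield (edges - {(u, v)}) | {(v, u)}  # reverse
        else:
            yield edges | {(u, v)}               # add

def hill_climb(nodes, score, max_iter=100):
    edges = frozenset()
    best = score(edges)
    for _ in range(max_iter):
        improved = False
        for cand in neighbours(nodes, edges):
            cand = frozenset(cand)
            if is_acyclic(nodes, cand) and score(cand) > best:
                edges, best, improved = cand, score(cand), True
                break
        if not improved:  # local optimum reached
            break
    return edges

# Toy score standing in for BIC/BDeu: reward edges of a "target" structure.
target = {("A", "B"), ("B", "C")}
score = lambda e: len(e & target) - len(e - target)
print(hill_climb(["A", "B", "C"], score))  # recovers the target edges
```

In a real implementation the score decomposes over families (node plus parents), so only the scores of nodes touched by an edge operation need recomputing; random restarts or tabu lists help escape local optima.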

No free lunch theorem

What are the implications of the no free lunch theorem?

Decision Trees

Given was a plot of some 2D data and 4 decision boundaries. We had to tick which decision boundary was produced by a decision tree.

KNN

Given was a plot of some 2D data and 3 decision boundaries made with KNN for k1, k2 and k3. We should decide which order holds for the k values.

  • k1 > k2 > k3
  • k2 < k3 < k1
  • k1 = k3 = k2
  • None of the above
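The key fact behind this question is that small k gives jagged boundaries that follow individual points, while large k gives smoother boundaries. A tiny hand-rolled KNN (toy data, illustrative only) shows k = 1 memorising an outlier that k = 5 votes away:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k):
    # Classify x by majority vote among its k nearest training points.
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

# Toy 2-D data: one "outlier" of class 1 sits inside the class-0 region.
X = np.array([[0, 0], [0, 1], [1, 0], [0.1, 0.1],
              [5, 5], [5, 6], [6, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1, 1])

print(knn_predict(X, y, np.array([0.1, 0.1]), k=1))  # 1: fits the outlier
print(knn_predict(X, y, np.array([0.1, 0.1]), k=5))  # 0: smoother boundary
```

So in the exam plot, the most fragmented boundary belongs to the smallest k and the smoothest to the largest.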

Polynomial Regression

Explain polynomial regression. What are the advantages/disadvantages?
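A point worth making in the answer: polynomial regression fits nonlinear relationships yet remains *linear* in the parameters, so ordinary least squares still applies; the main disadvantage is that high degrees overfit badly and extrapolate poorly. A minimal NumPy sketch (synthetic data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 40)
# True relationship: y = 1 - 2x + 0.5x^2, plus a little noise.
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.1, x.size)

# Build a design matrix of powers of x and solve ordinary least squares.
degree = 2
X = np.vander(x, degree + 1, increasing=True)  # columns: 1, x, x^2
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # close to the true coefficients [1.0, -2.0, 0.5]
```

Raising `degree` far above the true order makes the fit chase the noise, which is the overfitting disadvantage the question asks about.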

18 Max Pooling

Given was a 7x7 input tensor and a 3x3 pooling window. We had to tick the right output after a max-pooling operation with stride 2.
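The output side length follows from (input - window) // stride + 1 = (7 - 3) // 2 + 1 = 3, so the result is 3x3. A minimal sketch of the operation (illustrative, no padding):

```python
import numpy as np

def max_pool2d(x, size=3, stride=2):
    # Slide a size x size window with the given stride, keeping the max.
    h = (x.shape[0] - size) // stride + 1
    w = (x.shape[1] - size) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = x[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out

x = np.arange(49).reshape(7, 7)
print(max_pool2d(x).shape)  # (3, 3)
print(max_pool2d(x))
```

With monotonically increasing input values like `arange`, each window's maximum is its bottom-right entry, which makes the result easy to check by hand.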

19 Convolution

Given was a 7x7 input tensor and a 3x3 filter. We had to tick the right output after a convolution with stride 2.
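The same size formula applies: (7 - 3) // 2 + 1 = 3, so the output is 3x3. A sketch of the valid, stride-2 case (deep-learning "convolution" layers actually compute cross-correlation, i.e. the filter is not flipped):

```python
import numpy as np

def conv2d(x, k, stride=2):
    # Valid cross-correlation: elementwise product of window and filter, summed.
    kh, kw = k.shape
    h = (x.shape[0] - kh) // stride + 1
    w = (x.shape[1] - kw) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k).sum()
    return out

x = np.ones((7, 7))
k = np.ones((3, 3))
print(conv2d(x, k).shape)  # (3, 3)
print(conv2d(x, k))        # every entry is 9: the sum over a 3x3 window of ones
```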

20 Naive Bayes

Given was some training and test data. We should calculate the recall of a Naive Bayes classifier with Laplace correction.
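The two ingredients of the calculation: Laplace correction replaces each conditional estimate count(v, c) / count(c) with (count(v, c) + 1) / (count(c) + |values|) so unseen attribute values do not zero out the product, and recall for the positive class is TP / (TP + FN). The sketch below runs both steps on a tiny hypothetical dataset (not the exam's table):

```python
from collections import Counter

def train_nb(rows, labels):
    # Laplace-corrected Naive Bayes for nominal attributes:
    # P(a_i = v | c) = (count(v, c) + 1) / (count(c) + |values of a_i|)
    n_attrs = len(rows[0])
    values = [sorted({r[i] for r in rows}) for i in range(n_attrs)]
    classes = sorted(set(labels))
    prior = {c: labels.count(c) / len(labels) for c in classes}
    cond = {}
    for c in classes:
        in_c = [r for r, l in zip(rows, labels) if l == c]
        for i in range(n_attrs):
            counts = Counter(r[i] for r in in_c)
            for v in values[i]:
                cond[(i, v, c)] = (counts[v] + 1) / (len(in_c) + len(values[i]))
    return prior, cond, classes

def predict(prior, cond, classes, row):
    def score(c):
        p = prior[c]
        for i, v in enumerate(row):
            p *= cond.get((i, v, c), 1.0)  # unseen value: ignored (assumption)
        return p
    return max(classes, key=score)

# Hypothetical weather-style data, standing in for the exam's table.
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
y = ["no", "no", "yes", "yes"]
prior, cond, classes = train_nb(X, y)

test_X = [("rain", "mild"), ("sunny", "hot"), ("rain", "hot")]
test_y = ["yes", "no", "yes"]
preds = [predict(prior, cond, classes, r) for r in test_X]

# Recall for the positive class "yes": TP / (TP + FN)
tp = sum(1 for p, t in zip(preds, test_y) if p == t == "yes")
fn = sum(1 for p, t in zip(preds, test_y) if p != t and t == "yes")
print(tp / (tp + fn))
```

In the exam the counts would come from the given table; only the two formulas above are needed.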