TU Wien:Einführung in Artificial Intelligence VU (Eiter)/Ausarbeitung aller Theoriefragen (2023/24)

The same questions as in the Anki deck Theoriefragen (2023/24). This page collects all pure theory questions (multiple choice and open questions) from the tests of this course. The page TU Wien:Einführung in die Künstliche Intelligenz VU (Eiter, Tompits) has many more old exams (<= 2022), partly with the same questions as here, but also with other ones; the latter are not included here.

What are the 4 components of the PEAS description of an agent? Provide an example for each.

PEAS stands for:

  • Performance Measure – Criteria for success (e.g., score in a game)
  • Environment – Surroundings in which the agent operates (e.g., chessboard)
  • Actuators – What the agent uses to act (e.g., robotic arms, mouse clicks)
  • Sensors – What the agent uses to perceive (e.g., camera, input API)

Example (self-driving car):

  • Performance Measure: safety, time, fuel efficiency
  • Environment: roads, traffic, pedestrians
  • Actuators: steering, accelerator, brake
  • Sensors: cameras, GPS, lidar

Explain the concept of a utility-based agent

  • assess goals with a utility function
  • resolve conflicting goals
  • use expected utility for decisions (see the sketch below)
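
As an illustration (not from the lecture slides), a minimal sketch of how a utility-based agent could pick actions by maximizing expected utility; the helper names `actions`, `transition_model`, and `utility` are hypothetical placeholders:

```python
# Minimal sketch of expected-utility action selection (hypothetical interfaces).
def expected_utility(action, state, transition_model, utility):
    """Utility of each possible outcome, weighted by its probability."""
    return sum(prob * utility(outcome)
               for outcome, prob in transition_model(state, action))

def choose_action(state, actions, transition_model, utility):
    """A utility-based agent picks the action with maximum expected utility."""
    return max(actions(state),
               key=lambda a: expected_utility(a, state, transition_model, utility))
```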

Draw the structure of a utility-based agent

See image in Anki/slides

T/F: A Recurrent Neural Network has no internal state.

False — A Recurrent Neural Network (RNN) has an internal state (comparable to flip-flops) that acts as short-term memory. RNNs are used, for example, in NLP.

T/F: Decision trees can express any function of the input attributes.

True — Decision trees can express any function of the input attributes. E.g., for Boolean functions, each truth-table row corresponds to one path from the root to a leaf. However, the tree may not be the most compact representation of the function.
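
For illustration (not part of the exam answer), the Boolean XOR function written as a decision tree, i.e., nested tests on the input attributes, where each truth-table row corresponds to one root-to-leaf path:

```python
def xor_tree(a: bool, b: bool) -> bool:
    # Root node tests attribute a; each branch then tests attribute b.
    if a:
        return not b   # leaves for rows (a=1, b=0) -> 1 and (a=1, b=1) -> 0
    else:
        return b       # leaves for rows (a=0, b=1) -> 1 and (a=0, b=0) -> 0
```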

T/F: The logical equivalence function can be realised with a single-layer neural network.

False — The logical equivalence function (XNOR) is not linearly separable. If you plot the four input combinations, the two inputs mapped to 1 lie on opposite corners of the unit square, so no single line can separate the two classes; hence a single-layer neural network (perceptron) cannot represent the function. A multi-layer network is needed.
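
A minimal sketch of why one extra layer suffices: with hand-chosen weights (chosen here for illustration, not taken from the lecture), two threshold units feeding a third compute XNOR (logical equivalence):

```python
def step(x):
    """Threshold (step) activation function."""
    return 1 if x >= 0 else 0

def xnor(a, b):
    # Hidden layer: h1 fires for input (0,0), h2 fires for input (1,1).
    h1 = step(-a - b + 0.5)   # NOT a AND NOT b
    h2 = step(a + b - 1.5)    # a AND b
    # Output layer: OR of the two hidden units.
    return step(h1 + h2 - 0.5)

assert [xnor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 0, 0, 1]
```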

T/F: It is assumed that rational agents are omniscient.

False — percepts may lack relevant information. Rational agents are not omniscient; they make decisions based on the available information and their performance measure, but they do not know everything.

Explain completeness, optimality, time complexity, and space complexity in the context of search algorithms.

  • Completeness: does the algorithm always find a solution if one exists (in the search space)?
  • Optimality: does it always find a least-cost (i.e. best) solution?
  • Time Complexity: number of nodes generated/expanded.
  • Space Complexity: maximum number of nodes in memory

T/F: Depth-first search is optimal if the depth of the search tree is finite.

False — Depth-first search is not optimal even if the tree is finite: the first solution it finds along a deep branch may be more costly than a shallower one. (It is complete only if the search space is finite.)

T/F: Random-restart hill climbing will always find the global optimum given enough time

True — though the statement is somewhat ambiguous. Random-restart hill climbing is probabilistically complete: as the number of restarts approaches infinity, the probability of finding the global optimum approaches 1. So the statement is only correct if "enough time" is not bounded by a finite limit.
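
A minimal sketch of random-restart hill climbing (illustrative only; `random_state`, `neighbors`, and `value` are hypothetical problem-specific helpers):

```python
def hill_climb(state, neighbors, value):
    """Greedy ascent: move to the best neighbor until no neighbor improves the value."""
    while True:
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):
            return state          # local maximum (or plateau)
        state = best

def random_restart_hill_climb(random_state, neighbors, value, restarts=100):
    """Run hill climbing from many random start states and keep the best result found."""
    return max((hill_climb(random_state(), neighbors, value) for _ in range(restarts)),
               key=value)
```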

T/F: If the costs of each edge are identical, BFS is optimal

True — with uniform (identical) edge costs, BFS finds the shallowest (fewest-step) solution, which is also the lowest-cost one.

T/F: Each consistent assignment of values to a set of variables of a CSP is a solution of that CSP

False — A Constraint Satisfaction Problem (CSP) consists of:

  • Variables
  • A domain for each variable
  • Constraints restricting which combinations of values are allowed

A solution is a complete assignment (every variable gets a value from its domain) that satisfies all constraints. Thus, although a consistent assignment to some subset of the variables doesn't violate any constraints, it isn't a full "solution" unless all variables are assigned.

T/F: CSPs with nonlinear constraints are always decidable

False — If your CSP allows arbitrary polynomial equations over the integers, deciding solvability is undecidable.

For a CSP with n variables, each with d possible values, what is the total number of complete assignments?

Each of the $n$ variables can take $d$ possible values, so the total number of complete assignments is $d^n$, which is $O(d^n)$.
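
A quick way to check the count (illustrative example with n = 3 and d = 2):

```python
from itertools import product

n, d = 3, 2                                   # 3 variables, 2 values each
assignments = list(product(range(d), repeat=n))
print(len(assignments))                       # 8 == d**n
```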

T/F: The Action Description Language (ADL) allows only positive literals in states

False — ADL extends STRIPS by allowing both positive and negative literals in states and preconditions, enabling more expressive representations.

T/F: If a literal does not occur in a state in ADL, then it is assumed to be false

False — unmentioned literals in ADL are unknown (Open-World Assumption). In STRIPS however, they are assumed false (Closed-World Assumption).

T/F: The POP algorithm only produces consistent plans

True — Partial-Order Planning (POP) maintains consistency by resolving threats and only committing to ordering constraints that preserve plan validity.

T/F: In STRIPS it is assumed that literals not mentioned in the effect remain unchanged

True — In STRIPS, the definition of its semantics incorporates what is known as the STRIPS assumption: every literal not mentioned in the effect remains unchanged. This is how STRIPS addresses the frame problem.
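
A minimal sketch of the STRIPS assumption, representing a state as a set of ground atoms (the atoms and action used below are made up for illustration):

```python
def applicable(state, precond):
    """A STRIPS action is applicable if all of its preconditions hold in the state."""
    return precond <= state

def apply_action(state, add, delete):
    """STRIPS assumption: literals not mentioned in the effect remain unchanged."""
    return (state - delete) | add

s0 = {"At(Home)", "Have(Keys)"}
s1 = apply_action(s0, add={"At(Work)"}, delete={"At(Home)"})
print(s1)   # contains 'At(Work)' and 'Have(Keys)' -- the latter carried over unchanged
```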

What role does the activation function play in a neural network?

The activation function determines the output of a neuron based on its input. Without activation functions, the neural network would just compute a linear combination of its inputs (i.e., behave like a single-layer linear model).

Name 3 activation functions from the lecture and draw their corresponding graphs.

See image in Anki/slides
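
Since the graphs are not reproduced here, a small sketch of three commonly used activation functions (sigmoid, tanh, ReLU; the lecture's exact selection may differ):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))    # S-shaped curve from 0 to 1

def tanh(x):
    return math.tanh(x)                   # S-shaped curve from -1 to 1

def relu(x):
    return max(0.0, x)                    # 0 for negative input, identity otherwise
```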

T/F: Deep neural networks are preferred over wide neural networks.

True — Experiments have shown that when comparing networks of similar size, deeper and narrower architectures often perform better than shallow, wide ones.

T/F: An agent that senses only partial information about the state cannot be rational.

False — Rationality means choosing the best action given the available information, even if incomplete. Rational ≠ omniscient.

T/F: Decision Trees are an example of Supervised learning.

True — Decision trees are a type of supervised learning where a function is learned from labeled examples.

T/F: A rational poker playing agent never loses.

False — Rational agents can still lose in stochastic environments like poker; they aim to maximize expected success, not guarantee a win.

Explain the notion: Time complexity

Time complexity refers to the number of nodes generated or expanded by a search strategy, often expressed in terms of problem parameters like branching factor and depth.

Explain the notion: Overfitting

Overfitting is when a hypothesis fits the training data too closely, including noise or irrelevant features, causing it to perform poorly on new, unseen examples.

Explain the notion: Hidden unit

A hidden unit is a processing node in a neural network that is neither an input nor output unit; it helps extract and transform features through weighted connections.

Explain the notion: Hopfield network

A Hopfield network is a type of recurrent neural network where all units are both inputs and outputs, connections are bidirectional and symmetric (i.e., $w_{ij} = w_{ji}$), and the activation function is the sign function: $a_i \leftarrow \operatorname{sign}\big(\sum_j w_{ij}\, a_j\big)$.

It supports associative memory through distributed representations.
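
A minimal sketch of a Hopfield update under these conventions (bipolar ±1 states and the Hebbian outer-product rule are assumptions for the example):

```python
import numpy as np

def hopfield_step(W, a):
    """One synchronous update: a_i <- sign(sum_j w_ij * a_j), states in {-1, +1}."""
    return np.where(W @ a >= 0, 1, -1)

pattern = np.array([1, -1, 1, -1])
W = np.outer(pattern, pattern)          # Hebbian (outer-product) weights, symmetric
np.fill_diagonal(W, 0)                  # no self-connections
noisy = np.array([1, -1, 1, 1])         # stored pattern with one flipped bit
print(hopfield_step(W, noisy))          # recalls [ 1 -1  1 -1]
```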

T/F: The path returned by Uniform Cost Search may change when a positive constant is added to every step cost.

True — The path returned by Uniform Cost Search may change when a positive constant is added to every step cost, because longer paths are penalized more heavily than shorter ones.

Example: starting point SIBIU, goal BUCHAREST. Problem: SIBIU → BUCHAREST = 10, SIBIU → FAGARAS → BUCHAREST = 9. The latter is preferred, as UCS expands the cheaper path (9) first. If we add 2 to every step cost, the first (direct) path becomes 12, while the second path becomes 13, so the first path is now preferred.

T/F: In a finite search space containing no goal state, the A* algorithm will (using graph search) always explore all reachable states

True — In the absence of a goal, A* with graph search continues expanding nodes in order of lowest f-cost until the frontier is empty, meaning it will eventually explore all reachable states.

T/F: Simulated Annealing cannot escape local maxima

False — Simulated Annealing allows some "bad" moves to escape local maxima, but gradually decreases their size and frequency by lowering the temperature over time.
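
A minimal sketch of simulated annealing for a maximization problem (the geometric cooling schedule and the helpers `random_neighbor` and `value` are assumptions for illustration):

```python
import math
import random

def simulated_annealing(state, random_neighbor, value, t0=1.0, cooling=0.995, t_min=1e-3):
    """Accept worse moves with probability exp(delta / T); T shrinks over time."""
    t = t0
    while t > t_min:
        nxt = random_neighbor(state)
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt               # improvements always accepted, "bad" moves sometimes
        t *= cooling                  # lower the temperature
    return state
```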

T/F: The Local Beam Search algorithm keeps track of k states rather than just one.

True — Local Beam Search keeps k states, selects the top k successors from among all their children, and shares information between the parallel searches, unlike running k independent searches.
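
A minimal sketch of local beam search (illustrative only; `successors` and `value` are hypothetical problem-specific helpers):

```python
import heapq

def local_beam_search(start_states, successors, value, steps=100):
    """Keep k states; in each step, keep the k best among all their successors."""
    beam = list(start_states)
    k = len(beam)
    for _ in range(steps):
        candidates = [s for state in beam for s in successors(state)]
        if not candidates:
            break
        beam = heapq.nlargest(k, candidates, key=value)
    return max(beam, key=value)
```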

T/F: Linear constraints can be solved with linear programming methods in polynomial time.

True — linear programming is solvable in polynomial time (e.g., via interior-point methods).

Which search algorithms can be used to solve CSPs?

Any standard search algorithm can be used; in practice, backtracking search (depth-first search that assigns one variable per step), usually combined with constraint propagation, or local search such as min-conflicts is used.

T/F: Determining whether a CSP with non-linear constraints has a solution is a decidable problem

False — In general, determining whether a CSP with non-linear (e.g., polynomial) constraints has a solution is undecidable.

T/F: Literals in STRIPS are ground and may contain functions

False — In STRIPS, literals are ground (i.e., contain no variables), but they must not contain function symbols.

Unlike STRIPS, the Action Description Language (ADL) supports equality

True — ADL extends STRIPS and supports equality tests in preconditions, which STRIPS does not allow.

T/F: Classical planning environments are not fully observable

False — Classical planning assumes environments that are fully observable, deterministic, finite, and static (changes occur only due to the agent's actions).

T/F: In the POP algorithm, the relation ≺ is empty for the initial plan

False — The initial plan in POP includes Start and Finish, the ordering constraint Start ≺ Finish, no causal links, and all preconditions in Finish marked as open.

What is the nonnegativity property of the Value of Perfect Information (VPI)?

The expected value of information is always nonnegative.

What is the nonadditivity property of the Value of Perfect Information (VPI)?

VPI is nonadditive: the combined value of two pieces of evidence is not necessarily the sum of their individual values.

What is the order independence property of the Value of Perfect Information (VPI)?

VPI is order independent: the order in which evidence is considered does not change the value.

Name the 3 properties of the Value of Perfect Information (VPI).

  • Nonnegativity
  • Nonadditivity
  • Order Independence
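
Written out in the usual notation (as in Russell & Norvig; the lecture's notation may differ slightly), with E the evidence already observed:

  • Nonnegativity: $\forall j, E:\ VPI_E(E_j) \ge 0$
  • Nonadditivity: in general, $VPI_E(E_j, E_k) \ne VPI_E(E_j) + VPI_E(E_k)$
  • Order independence: $VPI_E(E_j, E_k) = VPI_E(E_j) + VPI_{E, E_j}(E_k) = VPI_E(E_k) + VPI_{E, E_k}(E_j)$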

T/F: A game of chess is a continuous environment

False — Chess is a discrete environment because it has a finite number of distinct states and actions. A continuous environment involves variables that can take on a range of real values without fixed steps — for example, robot navigation in physical space.

T/F: Reinforcement learning is a mix of unsupervised learning and supervised learning

False — Reinforcement learning is a distinct paradigm where an agent learns by interacting with an environment and receiving rewards, not by using labeled examples (supervised) or discovering patterns in data without labels (unsupervised).

What is the goal of inductive learning?

Inductive learning tries to approximate a target function $f$ with a hypothesis $h$ that is induced from training data, such that $h \approx f$.

Briefly describe Uniform-Cost Search (UCS) and give its worst-case time and space complexity.

UCS expands the node with the least cost until a goal state is reached. It's like Dijkstra’s algorithm but stops when the goal is found.

Worst-case time: $O(b^{1 + \lfloor C^*/\epsilon \rfloor})$

Worst-case space: $O(b^{1 + \lfloor C^*/\epsilon \rfloor})$

Where:

  • $b$ = max branching factor
  • $\epsilon$ = min edge cost
  • $C^*$ = cost of optimal solution
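
A minimal sketch of UCS (illustrative; `successors` is assumed to yield (next_state, step_cost) pairs and `is_goal` is a hypothetical goal test):

```python
import heapq
from itertools import count

def uniform_cost_search(start, successors, is_goal):
    """Always expand the cheapest frontier node first."""
    tie = count()                                  # tie-breaker so states are never compared
    frontier = [(0, next(tie), start, [start])]    # (path cost, tie, state, path)
    explored = set()
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return cost, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, step_cost in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step_cost, next(tie), nxt, path + [nxt]))
    return None
```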

Briefly describe Depth-Limited Search (DLS) and give its worst-case time and space complexity.

DLS performs depth-first search up to a predefined depth limit $l$. Nodes deeper than $l$ are not added to the frontier.

Worst-case time: $O(b^{l})$

Worst-case space: $O(b \cdot l)$

Where:

  • $b$ = max branching factor
  • $l$ = depth limit
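
A minimal recursive sketch of DLS (illustrative; `successors` and `is_goal` are hypothetical helpers):

```python
def depth_limited_search(state, successors, is_goal, limit):
    """Depth-first search that does not expand nodes below the depth limit."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return None                   # cutoff: do not descend any further
    for nxt in successors(state):
        result = depth_limited_search(nxt, successors, is_goal, limit - 1)
        if result is not None:
            return [state] + result
    return None
```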

Random restart hill climbing can overcome local maxima

True — By restarting from random initial states, it increases the chance of escaping local maxima and eventually finding the global maximum.

Iterative Deepening Search is optimal for arbitrary (non-negative) costs.

False — IDS is optimal only when all step costs are equal; with arbitrary positive costs, Uniform Cost Search should be used instead.

Local Search methods cannot find optimal solutions.

False — Local search methods are not guaranteed to find optimal solutions, but in some cases they can. Techniques like simulated annealing and random restarts improve the chance of reaching the global optimum.

Multi-layer perceptron networks are tailored for image processing.

False — MLPs can be used for image processing but are not well-suited for it. Convolutional Neural Networks (CNNs) are specifically designed for image data as they focus on adjacency (e.g., in images) and aim at feature detection.

A CSP can only contain binary constraints

False — While many CSPs use binary constraints (between two variables), general CSPs can also include unary and higher-order (n-ary) constraints involving more than two variables.

No CSP can be solved in polynomial time.

False — Some CSPs can be solved in polynomial time, such as linear constraints, tree-structured CSPs and CSPs with bounded treewidth. These allow efficient algorithms due to their restricted structure.

Deciding whether a Boolean CSP has a solution is NP-hard.

True — Boolean CSPs (where variables have only two values) generalize SAT problems, and deciding whether a solution exists is NP-complete, hence NP-hard.

What is a lottery in decision theory?

A lottery $L$ is a set of outcomes $S_1, \dots, S_n$ that occur with probabilities $p_1, \dots, p_n$.

It can be written as: $L = [p_1, S_1;\; p_2, S_2;\; \dots;\; p_n, S_n]$

Each $S_i$ is either an atomic outcome or another lottery.
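
A minimal sketch of evaluating a lottery by its expected utility, with a lottery represented as a list of (probability, outcome) pairs and a made-up utility function:

```python
def expected_utility(lottery, U):
    """EU of [(p1, S1), ..., (pn, Sn)]; an outcome may itself be a (nested) lottery."""
    return sum(p * (expected_utility(s, U) if isinstance(s, list) else U(s))
               for p, s in lottery)

U = {"win": 100, "lose": 0}.get                    # hypothetical utility function
nested = [(0.3, "win"), (0.7, [(0.5, "win"), (0.5, "lose")])]
print(expected_utility(nested, U))                 # 0.3*100 + 0.7*50 = 65.0
```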

What is the axiom of substitutability in decision theory?

If an agent is indifferent between outcomes A and B, then it is also indifferent between any two lotteries that differ only by substituting A for B.

Formally: $A \sim B \;\Rightarrow\; [p, A;\; 1-p, C] \sim [p, B;\; 1-p, C]$

Classical planning environments are deterministic.

True — Classical planning assumes deterministic environments where each action has a single predictable outcome.

A partial-order plan is created through linearization of a total-order plan.

False — A total-order plan is created by linearizing a partial-order plan, not the other way around.

Describe two methods of planning via state space search.

Progression Planning: Forward state space search from the initial state to the goal.

  • Start with the problem's initial state and consider a sequence of actions until a goal state is reached.
  • An action is only available in a state if all its preconditions are satisfied.

Regression Planning: Backward state space search from the goal to the initial state.

  • Allows consideration of only relevant actions.
  • Actions are relevant if they help achieve the goal state.

What is a conflict between an action C and a causal link A →ₚ B in partial-order planning?

A conflict occurs if:

  • action C has the effect ¬p, and
  • C could be executed after A and before B.

This would threaten the causal link from A to B.
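
A minimal sketch of such a threat test (illustrative; literals are plain strings, `effects` returns an action's effect literals, and `must_precede` encodes the current ordering constraints):

```python
def negation(p):
    """Complementary literal, using a simple string encoding for illustration."""
    return p[1:] if p.startswith("-") else "-" + p

def threatens(C, A, B, p, effects, must_precede):
    """C threatens the causal link A --p--> B if it can delete p between A and B."""
    return (negation(p) in effects(C)
            and not must_precede(C, A)    # C is not already forced before A
            and not must_precede(B, C))   # and not already forced after B
```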