Computer Science Study Notes

Artificial Intelligence

1.1 Agent Design

There are two main types of agents.

  1. Reflex Agents: Choose actions based on the current percept (and maybe memory); may have memory or a model of the world's current state; do not consider the future consequences of their actions.

  2. Planning Agents: Make decisions based on (hypothesized) consequences of actions; must have a model of how the world evolves in response to actions; and must formulate a goal (test).

1.2 Search Problems and Algorithms

Search Problems

  • A state space.

  • A successor function (with actions, costs).

  • A start state and a goal test.

  • A solution is a sequence of actions (a plan) which transforms the start state to a goal state.
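The components above can be sketched as a small interface; the route-finding graph, state names, and costs below are made up for illustration:

```python
# A minimal search-problem interface: state space, successor function
# (with actions and costs), start state, and goal test.
class SearchProblem:
    def __init__(self, graph, start, goals):
        self.graph = graph      # state -> list of (action, next_state, cost)
        self.start = start
        self.goals = goals

    def successors(self, state):
        """Successor function: (action, next_state, cost) triples."""
        return self.graph.get(state, [])

    def is_goal(self, state):
        """Goal test."""
        return state in self.goals

# Hypothetical example instance.
problem = SearchProblem(
    graph={"S": [("go-A", "A", 1), ("go-B", "B", 4)],
           "A": [("go-G", "G", 5)],
           "B": [("go-G", "G", 1)]},
    start="S",
    goals={"G"},
)
```

A solution for this instance would be a plan such as [go-A, go-G], a sequence of actions from the start state to a goal state.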

State Space Graphs vs. Search Trees

Search Algorithms

  • Uninformed search algorithms (blind search)

    • Depth-First Search

    • Breadth-First Search

    • Uniform-Cost Search

  • Informed search algorithms (heuristic search)

    • A* Search

Depth-First Search

  • Completeness: the search tree could be infinite, so DFS is complete only if we prevent cycles.

  • Optimality: No, it finds the "leftmost" solution, regardless of depth or cost.

Breadth-First Search

  • Completeness: the shallowest solution is at a finite depth if a solution exists, so yes!

  • Optimality: Only if costs are all 1.
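A minimal BFS sketch over an assumed adjacency-list graph (unit step costs, the case in which BFS is optimal):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """BFS over an adjacency dict {state: [next_state, ...]}.
    Returns the shallowest path as a list of states, or None."""
    frontier = deque([[start]])  # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```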

Iterative Deepening: get DFS's space advantage with BFS's time / shallow-solution advantages: run a DFS with depth limit 1, 2, 3, ...
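The depth-limited loop can be sketched as follows (adjacency-list graph representation assumed):

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS that gives up below a fixed depth limit, avoiding cycles
    along the current path."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for nxt in graph.get(node, []):
        if nxt not in path:
            result = depth_limited_search(graph, nxt, goal, limit - 1, path + [nxt])
            if result is not None:
                return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run DFS with depth limit 1, 2, 3, ... up to max_depth."""
    for limit in range(1, max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result
    return None
```

Each pass re-explores the shallow levels, but since the tree grows geometrically with depth, the repeated work is a small constant factor.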

Uniform Cost Search: Expand the node with the lowest path cost. Initialize a priority queue with the start node, then repeat:

  1. Dequeue the node with the lowest path cost (current_node).

  2. If current_node is the goal state, reconstruct and return the path.

  3. Expand current_node: For each successor (neighbor) of current_node,

    • If the successor is not in the explored set and not already in the priority queue, create a new node for the successor with the calculated cost and parent set to current_node, and insert the successor node into the priority queue.

    • Else if the successor is already in the priority queue with a higher cost, update the successor's cost in the priority queue and its parent to current_node.

import java.util.*;

class Node implements Comparable<Node> {
    String state;
    Node parent;
    int cost;

    public Node(String state, Node parent, int cost) {
        this.state = state;
        this.parent = parent;
        this.cost = cost;
    }

    @Override
    public int compareTo(Node other) {
        return Integer.compare(this.cost, other.cost);
    }
}

public class UCS {
    public static List<String> uniformCostSearch(Map<String, Map<String, Integer>> graph, String start, String goal) {
        PriorityQueue<Node> priorityQueue = new PriorityQueue<>();
        priorityQueue.offer(new Node(start, null, 0));
        Set<String> explored = new HashSet<>();
        while (!priorityQueue.isEmpty()) {
            Node current = priorityQueue.poll();
            if (current.state.equals(goal)) {
                // Reconstruct the path by following parent links
                List<String> path = new ArrayList<>();
                while (current != null) {
                    path.add(current.state);
                    current = current.parent;
                }
                Collections.reverse(path);
                return path;
            }
            explored.add(current.state);
            Map<String, Integer> successors = graph.getOrDefault(current.state, Collections.emptyMap());
            for (Map.Entry<String, Integer> entry : successors.entrySet()) {
                String successor = entry.getKey();
                int cost = entry.getValue();
                Node successorNode = new Node(successor, current, current.cost + cost);
                if (!explored.contains(successor)) {
                    boolean inQueue = false;
                    for (Node n : priorityQueue) {
                        if (n.state.equals(successor)) {
                            inQueue = true;
                            if (n.cost > successorNode.cost) {
                                priorityQueue.remove(n);           // Remove higher-cost node
                                priorityQueue.offer(successorNode); // Add lower-cost node
                            }
                            break; // Exit loop once the queued entry is found
                        }
                    }
                    if (!inQueue) {
                        priorityQueue.offer(successorNode);
                    }
                }
            }
        }
        return null; // No path found
    }
}
#include <algorithm>
#include <memory>
#include <queue>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct Node {
    int state;
    Node* parent;
    int cost;
    explicit Node(const int s, Node* p = nullptr, const int c = 0)
        : state(s), parent(p), cost(c) {}
};

// Min-heap comparison on node pointers, ordered by path cost.
struct CompareCost {
    bool operator()(const Node* a, const Node* b) const { return a->cost > b->cost; }
};

std::vector<int> uniform_cost_search(
        const std::unordered_map<int, std::unordered_map<int, int>>& graph,
        int start, int goal) {
    // Own every node here so parent pointers stay valid for path
    // reconstruction (storing a pointer to a stack-local would dangle).
    std::vector<std::unique_ptr<Node>> nodes;
    std::priority_queue<Node*, std::vector<Node*>, CompareCost> frontier;
    std::unordered_map<int, int> best_cost; // cheapest cost seen per state

    nodes.push_back(std::make_unique<Node>(start));
    frontier.push(nodes.back().get());
    best_cost[start] = 0;
    std::unordered_set<int> explored;

    while (!frontier.empty()) {
        Node* current = frontier.top();
        frontier.pop();
        if (current->state == goal) {
            std::vector<int> path;
            for (const Node* n = current; n != nullptr; n = n->parent)
                path.push_back(n->state);
            std::reverse(path.begin(), path.end());
            return path;
        }
        if (!explored.insert(current->state).second) continue; // stale entry
        auto it = graph.find(current->state);
        if (it == graph.end()) continue;
        for (const auto& [successor, cost] : it->second) {
            if (explored.count(successor)) continue;
            const int new_cost = current->cost + cost;
            auto best = best_cost.find(successor);
            if (best == best_cost.end() || new_cost < best->second) {
                best_cost[successor] = new_cost;
                nodes.push_back(std::make_unique<Node>(successor, current, new_cost));
                frontier.push(nodes.back().get()); // outdated entries skipped above
            }
        }
    }
    return {}; // No path found
}
import heapq

class Node:
    def __init__(self, state, parent=None, cost=0):
        self.state = state
        self.parent = parent
        self.cost = cost

    def __lt__(self, other):
        return self.cost < other.cost

def uniform_cost_search(graph, start, goal):
    priority_queue = []
    heapq.heappush(priority_queue, Node(start))
    explored = set()
    while priority_queue:
        current_node = heapq.heappop(priority_queue)
        if current_node.state == goal:
            # Reconstruct the path by following parent links
            path = []
            while current_node:
                path.append(current_node.state)
                current_node = current_node.parent
            return path[::-1]
        explored.add(current_node.state)
        # Directly access successors and costs
        for successor, cost in graph.get(current_node.state, {}).items():
            successor_node = Node(successor, current_node, current_node.cost + cost)
            if successor not in explored:
                in_queue = False
                for i, node in enumerate(priority_queue):
                    if node.state == successor:
                        in_queue = True
                        if node.cost > successor_node.cost:
                            priority_queue[i] = successor_node
                            heapq.heapify(priority_queue)
                        break
                if not in_queue:
                    heapq.heappush(priority_queue, successor_node)
    return None
  • If that solution costs C* and arcs cost at least ε, then the "effective depth" is roughly C*/ε.

  • Time complexity: O(b^(C*/ε)), where b is the branching factor.

  • Space complexity: O(b^(C*/ε)).

  • Completeness: Assuming best solution has a finite cost and minimum arc cost is positive, yes!

  • Optimality: Yes!

1.2.2 Heuristics (A* Search)

In AI, a heuristic is a function that estimates how close a state is to a goal in a search problem.

Greedy Search: Strategy: Expand the node that you think is closest to a goal state, as measured by the heuristic.

Combining UCS & Greedy Search

  • Uniform cost orders by path cost, or backward cost g(n).

  • Greedy orders by goal proximity, or forward cost h(n).

  • A* Search orders by the sum: f(n) = g(n) + h(n).
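A compact sketch of A*, ordering the fringe by f(n) = g(n) + h(n); the graph and heuristic table used below are hypothetical, and this version allows re-expansion when a cheaper path turns up rather than using a strict closed set:

```python
import heapq

def a_star_search(graph, h, start, goal):
    """A* search. graph: {state: {successor: step_cost}};
    h: {state: heuristic value} (must cover every reachable state)."""
    fringe = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {}
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        if state in best_g and best_g[state] <= g:
            continue  # stale entry: a cheaper path to state was expanded
        best_g[state] = g
        for nxt, cost in graph.get(state, {}).items():
            heapq.heappush(fringe, (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None, float("inf")
```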

Admissibility

  • Inadmissible (pessimistic) heuristics can break optimality by overestimating the cost to reach the goal, so A* may not find the actual optimal path.

  • Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs.

A heuristic h is admissible (optimistic) if, for all n, 0 ≤ h(n) ≤ h*(n),

where h(n) is the cost indicated by h to reach a goal from n, and h*(n) is the optimal cost to reach a goal from n.
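On a small graph one can sanity-check admissibility directly by computing h*(n) exactly and comparing; the helper below (a hypothetical name) does this with Dijkstra's algorithm on the reversed graph:

```python
import heapq

def true_costs_to_goal(graph, goal):
    """h*(n): optimal cost from each state to the goal, via Dijkstra
    on the reversed graph. graph: {state: {successor: cost}}."""
    reversed_graph = {}
    for u, nbrs in graph.items():
        for v, c in nbrs.items():
            reversed_graph.setdefault(v, {})[u] = c
    dist = {goal: 0}
    heap = [(0, goal)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in reversed_graph.get(u, {}).items():
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(heap, (d + c, v))
    return dist

def is_admissible(h, graph, goal):
    """Check 0 <= h(n) <= h*(n) for every state that can reach the goal."""
    h_star = true_costs_to_goal(graph, goal)
    return all(0 <= h[s] <= h_star[s] for s in h_star)
```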

Optimality of A* Search

Assume A is an optimal goal node, B is a suboptimal goal node, and h is admissible. Prove that A will exit the fringe before B.

Proof

Imagine B is on the fringe. Some ancestor n of A is on the fringe, too (maybe A itself!).

Claim: n will be expanded before B.

  1. f(n) is less than or equal to f(A): f(n) = g(n) + h(n) ≤ g(A) = f(A), because h is admissible and h(A) = 0 at a goal.

  2. f(A) is less than f(B): f(A) = g(A) < g(B) = f(B), because B is suboptimal and h = 0 at both goals.

  3. n expands before B, since f(n) ≤ f(A) < f(B).

All ancestors of A expand before B, so A expands before B, and A* search is optimal.

Semi-Lattice of Heuristics

  • Dominance: h_a dominates h_c if for every n: h_a(n) ≥ h_c(n).

  • The max of admissible heuristics is admissible. We want the best heuristic possible, one that's as close to the true cost as you can get while still being admissible.

  • Top of lattice is the exact heuristic, which would be the exact cost.
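Taking the pointwise max of two (assumed admissible) heuristics can be sketched as follows; the per-state values are invented:

```python
def max_heuristic(*heuristics):
    """Combine admissible heuristics: h(n) = max_i h_i(n) is still
    admissible, and it dominates each individual h_i."""
    return lambda n: max(h(n) for h in heuristics)

# Hypothetical per-state estimates for illustration.
h1 = {"S": 3, "A": 2, "G": 0}.get
h2 = {"S": 4, "A": 1, "G": 0}.get
h = max_heuristic(h1, h2)
```

Note that neither h1 nor h2 dominates the other here (h1 is larger at A, h2 at S), but their max dominates both.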

2 Constraint Satisfaction Problems

Constraint Satisfaction Problems (CSPs): A special subset of search problems, in which a state is defined by variables X_i with values from a domain D (sometimes D depends on i), and the goal test is a set of constraints specifying allowable combinations of values for subsets of variables.

Binary CSP: Each constraint relates (at most) two variables.

Binary constraint graph: Nodes are variables, arcs show constraints.

Binary Constraint Graph

Varieties of CSPs

  • Discrete Variables

    • Finite domains: A domain of size d means O(d^n) complete assignments, e.g., Map Coloring, N-Queens, Boolean CSPs, including Boolean satisfiability (NP-complete).

    • Infinite domains (integers, strings, etc.): E.g., job scheduling, variables are start/end times for each job; linear constraints solvable, nonlinear undecidable.

  • Continuous Variables: E.g., start/end times for Hubble Telescope observations, linear constraints solvable in polynomial time by linear programming methods.

Varieties of Constraints

  • Unary constraints: Involve a single variable (equivalent to reducing domains), e.g.: SA ≠ green.

  • Binary constraints: Involve pairs of variables, e.g.: SA ≠ WA.

  • Higher-order constraints: Involve 3 or more variables, e.g., cryptarithmetic column constraints.

Backtracking Search: DFS with variable ordering (i.e., [WA = red then NT = green] is the same as [NT = green then WA = red]) and constraint checking as you go (i.e., consider only values which do not conflict with previous assignments).

Filtering

  1. Forward checking propagates information from assigned to unassigned variables, but doesn't provide early detection for all failures.

    For example, in the Australia map-coloring example above, if WA is red, then NT and SA cannot be red; if we then choose Q to be green, there is a problem: NT and SA are both forced to be blue, yet they are adjacent and cannot both be blue, and forward checking didn't detect this!

  2. Consistency of A Single Arc: An arc X -> Y is consistent iff for every x in the tail there is some y in the head which could be assigned without violating a constraint.

    Consistency of A Single Arc

    In this example, Arc SA to NSW is consistent: for every x in the tail there is some y in the head which could be assigned without violating a constraint.
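The single-arc check can be sketched as a `revise` step (assuming not-equal constraints and set-valued domains); this is the building block that AC-style filtering repeats over all arcs:

```python
def revise(domains, constraints, x, y):
    """Make arc x -> y consistent: remove each value from the tail x for
    which no value in the head y could be assigned without violating the
    not-equal constraint between x and y. Returns True if x's domain shrank."""
    if (x, y) not in constraints and (y, x) not in constraints:
        return False  # no constraint between x and y: trivially consistent
    revised = False
    for vx in list(domains[x]):
        if not any(vy != vx for vy in domains[y]):
            domains[x].remove(vx)
            revised = True
    return revised
```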

Last modified: 06 December 2024