Artificial Intelligence

Study Guide #1

Chapter 1: Introduction

Know what the Turing Test is.  Know the difference between an algorithm and a heuristic.

Chapter 2: Intelligent Agents

Know the definitions for the following terms: agent, environment, sensors, percepts, actuators, agent function, agent program, *rational agent*.  Know and understand the two-room vacuum-cleaner world example (percepts, actions, type of agent, etc.).  Know the four specifications that describe the nature of a task environment (PEAS: Performance measure, Environment, Actuators, Sensors).  Know the properties of task environments (deterministic vs. stochastic, fully vs. partially observable, etc.).  Given an environment, be able to describe its specifications and properties (study the book examples and questions in Chapter 2).  Be able to describe the architecture of each of the agent designs: random agent, table-driven agent, simple reflex agent, reflex agent with state (model-based reflex agent), goal-based agent, utility-based agent, and learning agent.
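As a study aid, the simple reflex agent for the two-room vacuum world can be sketched in Python.  The percept format (location, status) and the room labels "A" and "B" follow the book's example; the function name itself is just a convenience.

```python
# Simple reflex agent for the two-room vacuum-cleaner world: the action
# depends only on the current percept, with no internal state or history.

def reflex_vacuum_agent(percept):
    """Map the current percept (location, status) directly to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

# The agent's response in each of the four possible percept situations:
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
    print(percept, "->", reflex_vacuum_agent(percept))
```

Note that this agent function is exactly a condition-action table; the table-driven design would enumerate the same mapping explicitly.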

Chapter 3: Solving Problems by Searching

Know the definitions for the following terms: problem-solving agent, state, node, uninformed search, state space, well-defined problem.  Know the assumptions that goal-based agents make about the problem environment (static, observable, discrete, deterministic).  Given a problem, be able to define it by its four components (initial state, successor function, goal test, path cost); study the book examples and questions in Chapter 3.  Know the definition of an uninformed search strategy and the algorithms for the five techniques discussed in class: breadth-first search, uniform-cost search, depth-first search, depth-limited search, and iterative deepening search.  Know how each is evaluated in terms of performance (completeness, optimality, time complexity, space complexity).
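The four-component problem formulation maps directly onto an uninformed search routine.  The sketch below shows breadth-first search written against an initial state, a successor function, and a goal test; the toy state space at the bottom is a made-up example, not one from the textbook.

```python
from collections import deque

def breadth_first_search(initial, successors, is_goal):
    """Uninformed BFS over a problem given by the book's components:
    initial state, successor function, and goal test (unit step costs,
    so the path cost of a solution is simply its length)."""
    frontier = deque([(initial, [initial])])  # FIFO queue of (state, path)
    explored = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for successor in successors(state):
            if successor not in explored:
                explored.add(successor)
                frontier.append((successor, path + [successor]))
    return None  # no solution exists

# Hypothetical toy state space for illustration:
graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}
path = breadth_first_search("S", lambda s: graph[s], lambda s: s == "G")
print(path)  # shortest solution in number of steps: ['S', 'A', 'G']
```

Swapping the FIFO queue for a stack gives depth-first search, and adding a priority queue ordered by path cost gives uniform-cost search, which is a useful way to remember how the five techniques relate.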

Chapter 4: Informed Search and Exploration

Know the definition of informed search.  Know the definitions of g(n), h(n), and f(n).  Know the algorithm for greedy best-first search (best-first search with f(n) = h(n)) and its properties (complete, time, space, optimal).  Know the algorithm for A* search and its properties (complete, time, space, optimal).  Know the definition of an **admissible heuristic**.  Understand how the solutions to the route-planning problem differ when using greedy best-first search versus A* search.  Know the techniques of iterative-deepening A* (IDA*) search and simple memory-bounded A* (SMA*) search.  Know the definition of dominance as it relates to heuristic functions.  Know the definition of a relaxed problem and why the cost of its optimal solution never exceeds the cost of an optimal solution to the real problem.  Understand how heuristics can be learned (derived) from experience.  Know the algorithmic and heuristic techniques discussed in class for solving optimization problems (direct methods, hill climbing, simulated annealing, and genetic algorithms).
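A* can be sketched by ordering the frontier on f(n) = g(n) + h(n).  The weighted graph and heuristic values below are made up for illustration; the heuristic is chosen to be admissible (it never overestimates the true cost to the goal), which is what guarantees that the first goal node expanded is optimal.

```python
import heapq

def a_star(initial, successors, is_goal, h):
    """A* search: expand nodes in increasing order of f(n) = g(n) + h(n).
    With an admissible h, the first goal popped from the frontier
    carries an optimal path cost."""
    frontier = [(h(initial), 0, initial, [initial])]  # (f, g, state, path)
    best_g = {initial: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path
        for nxt, step_cost in successors(state):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):  # keep only cheapest route
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Hypothetical weighted state space and heuristic for illustration:
edges = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
est = {"S": 4, "A": 2, "B": 1, "G": 0}  # admissible: est[n] <= true cost to G
cost, path = a_star("S", lambda s: edges[s], lambda s: s == "G", lambda s: est[s])
print(cost, path)  # 4 ['S', 'A', 'B', 'G']
```

Setting g(n) to zero in the ordering turns this into greedy best-first search, which on this graph would chase the low-h successor without regard to accumulated cost; comparing the two behaviors is exactly the route-planning contrast the guide asks about.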

Chapter 6: Adversarial Search

Know the AI properties of simple games (deterministic, turn-taking, fully observable, two-player, zero-sum, perfect information).  Know what a game tree is.  Know the minimax algorithm and its properties (complete, time, space, optimal).  Be able to describe the alpha-beta pruning technique for improving minimax performance.  Know how games are described in terms of an initial state, successor function, terminal test, and utility function, and be able to define a game using these four components (study the examples in the textbook and questions in Chapter 6).  Know the cut-off techniques for minimax search and alpha-beta pruning (cutting off search at a depth limit and applying an evaluation function in place of the utility function).