Local Search

Before we jump into search methods, let's look at how we might use a simple, non-search approach to solve a problem.

Consider the 8-puzzle problem from the previous post. Suppose we start with the following configuration:

[Figure: 8-Puzzle, 1 move away from goal]

The blank space is next to a tile that clearly belongs in that spot. So the action that leads to a goal state is immediately clear: $Up$, which slides tile 6 into the blank space. This results in the desired configuration:

[Figure: 8-Puzzle, goal configuration]

By glancing at the initial state, the action that led to a goal state was obvious. Here's a starting configuration that's a bit more challenging:

[Figure: 8-Puzzle, 3 moves away from goal]

After staring at this puzzle for a while, you'll notice that these moves, $Down$, $Left$, $Up$—performed in order—take you to a goal state. So the solution to the problem is this fixed action sequence. Let's consider an even more difficult starting point:

[Figure: 8-Puzzle, many moves away from goal]

Ok... now the solution is no longer self-evident. The tiles are too scattered to see a clear set of moves to the goal just by looking at the puzzle. So a procedural approach of following a fixed sequence of actions becomes insufficient, because we can't glean which actions to take. This is where search algorithms become essential; they give us a systematic way to explore the state space and find a path of actions that leads to a goal.

Random Walk

Random walk is the most naive search algorithm, which is why it's rarely used in practice. Still, it offers a useful lens for understanding how to use search to solve problems. To see random walk in action, let's consider the 8-puzzle again, formulated as a search problem (taken from this post); a code sketch of this formulation follows the list:

  • States: Each state in $S$ corresponds to a specific arrangement of the tiles and the empty space in the grid.
  • Initial State: Some initial state $s_0 \in S$.
  • Goal States: The set of states $G$ where the tiles are arranged in numerical order.
  • Actions: In each state $s$, your available actions are to move the empty space left, right, up, or down, some of which may be invalid. So $\text{ACTIONS}(s) \subseteq \{Left, Right, Up, Down\}$.
  • Transition Model: Maps a state (i.e. a tile configuration) and action (i.e. moving the empty space to an adjacent space) to a resulting state.
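
To make this formulation concrete, here's a minimal Python sketch. The state encoding (a tuple of nine values read row by row, with 0 standing in for the blank) and the helper names actions, result, and is_goal are illustrative assumptions, not part of the formal problem statement:

```python
# A state: a tuple of 9 tiles read row by row; 0 marks the empty space.
# Assumed goal configuration: tiles in numerical order, blank in the last cell.
GOAL = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)

# Each action moves the empty space; values are index offsets within the tuple.
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def actions(s):
    """ACTIONS(s): the subset of {Left, Right, Up, Down} valid in state s."""
    row, col = divmod(s.index(0), 3)  # locate the empty space in the 3x3 grid
    valid = []
    if col > 0: valid.append("Left")
    if col < 2: valid.append("Right")
    if row > 0: valid.append("Up")
    if row < 2: valid.append("Down")
    return valid

def result(s, a):
    """RESULT(s, a): the transition model; swaps the empty space with a neighbor."""
    i = s.index(0)
    j = i + MOVES[a]
    tiles = list(s)
    tiles[i], tiles[j] = tiles[j], tiles[i]
    return tuple(tiles)

def is_goal(s):
    """Membership test for the goal set G."""
    return s == GOAL
```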

So given this problem statement, how do we apply search? The basic intuition is this: starting from the initial state, we explore which actions are possible from that state and, by extension, which states can be reached by choosing one of those actions. We pick one of these actions, execute it, and arrive at a new state. If this new state is a goal state, we're done. Otherwise, we repeat the process: we explore which actions are possible from the new state, pick one, execute it, and so on.

This is the fundamental idea behind every search algorithm. Where these approaches differ is in their strategy for selecting an action from a given state. With random walk, we select an action at—you guessed it—random. The other algorithms we cover evaluate and select actions with more sophistication.

Returning to the 8-puzzle, suppose our current state $s$ looks like this:

[Figure: 8-Puzzle, example initial state for random walk]

So $\text{ACTIONS}(s) = \{Left, Right, Down\}$. We pluck one of these actions at random—let's say it's $Left$. Then our new state, $\text{RESULT}(s, Left)$, is:

[Figure: 8-Puzzle, example move with random walk]

From this updated state, we rinse and repeat until our configuration matches a goal state.
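
Using the helpers sketched earlier, we can replay this step in code. The concrete tile arrangement below is an assumption, chosen only so that the blank sits in the top-middle cell and yields the same action set as in the text:

```python
# Hypothetical state with the blank (0) in the top-middle cell, so Up is
# invalid; the other tiles are placed arbitrarily for illustration.
s = (2, 0, 3,
     1, 4, 6,
     7, 5, 8)

print(actions(s))         # ['Left', 'Right', 'Down']

# Suppose the random draw is Left: the blank swaps with the tile to its left.
print(result(s, "Left"))  # (0, 2, 3, 1, 4, 6, 7, 5, 8)
```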

This doesn't seem awfully systematic, and it's not. There will be actions that lead us closer to a goal, actions that lead us further away, and actions that lead us back to a state we've already visited. So we can expect this process of trial and error to take a long time. But random walk captures the essence of search. Here are the steps written out as an algorithm:

$$
\begin{aligned}
&\textbf{function}\ \text{RANDOM-WALK}(S, s_0, G, \text{ACTIONS}, \text{RESULT}): \\
&\quad s \gets s_0 \\
&\quad \textbf{while}\ \text{true}\ \textbf{do} \\
&\quad\quad \textbf{if}\ s \in G\ \textbf{then} \\
&\quad\quad\quad \textbf{return}\ s \\
&\quad\quad s \gets \text{RESULT}(s, \text{RANDOM}(\text{ACTIONS}(s))) \\
&\quad \textbf{end while}
\end{aligned}
$$
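
Translated into Python, using the assumed helpers from the sketch above, random walk is just a few lines. Like the pseudocode, it keeps no memory of visited states:

```python
import random

def random_walk(s0):
    """Follow uniformly random actions from s0 until a goal state is reached.
    Mirrors the pseudocode: no visited set, no termination guarantee."""
    s = s0
    while True:
        if is_goal(s):
            return s
        # Select an action at random and move to the resulting state.
        s = result(s, random.choice(actions(s)))
```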

Given a reachable goal state, this algorithm will eventually find it. But in addition to being wildly inefficient, it may never terminate when no solution exists; it will loop endlessly, exploring states at random.

Random walk is an example of local search, which is a family of search algorithms that have the same basic structure as random walk for exploring the state space, but use different criteria for choosing actions.

$$
\begin{aligned}
&\textbf{function}\ \text{LOCAL-SEARCH}(S, s_0, G, \text{ACTIONS}, \text{RESULT}, \text{SELECTOR}): \\
&\quad s \gets s_0 \\
&\quad \textbf{while}\ \text{true}\ \textbf{do} \\
&\quad\quad \textbf{if}\ s \in G\ \textbf{then} \\
&\quad\quad\quad \textbf{return}\ s \\
&\quad\quad s \gets \text{RESULT}(s, \text{SELECTOR}(\text{ACTIONS}(s))) \\
&\quad \textbf{end while}
\end{aligned}
$$

(We can recover random walk from this algorithm by setting $\text{SELECTOR} = \text{RANDOM}$.)
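
In code, the generalization is the same loop with the selection strategy passed in as a function (again a sketch, reusing the assumed helpers):

```python
import random

def local_search(s0, selector):
    """Generic local search: selector picks one action from the valid ones."""
    s = s0
    while True:
        if is_goal(s):
            return s
        s = result(s, selector(actions(s)))

# Plugging in random selection recovers random walk
# (this may run for a very long time on a scrambled puzzle):
# goal = local_search(start_state, random.choice)
```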

Local search algorithms operate by moving from a state to its neighboring states, without keeping track of the path taken or the states already visited. So they don't have to remember much information (i.e. they're memory-efficient). But like random walk, they are not systematic—they might never explore the full search space (potentially overlooking goal states) or never terminate (potentially revisiting the same states over and over).