Published on 3/22/2015

The term "genetic algorithm" has an aura of mystery surrounding it, so I decided to dive in and see what all the fuss is about.
When you boil them down and look at the essentials, genetic algorithms aren't *that* complicated - and they can help you find near-optimal solutions to complex combinatorial problems.

The big idea at the heart of GAs is this: randomly create a "population" of possible solutions to your problem, combine and modify those solutions in different ways to create *new* solutions, evaluate the new solutions to see how effective they are, and then rinse and repeat.

Genetic algorithms have five main parts:

- Encoding
- Evaluation
- Selection
- Crossover
- Mutation

GAs manipulate possible solutions to our problem, which means that we need to devise a way to format our solutions in such a way that an algorithm can process them. Throughout this post, I'm going to use the Traveling Salesman Problem (TSP) as an example. Suppose we have a salesman that needs to visit 10 different cities. We know the distances between each of the cities, and we want to find the shortest round-trip route for the salesman that will take him to each of the cities only once.

In this example, the solution to our problem will take the form of a path - an ordered list of cities that the salesman will visit. We can represent each city with an integer, and use a vector to make our list of stops. Assuming that we have 10 cities to visit, a possible solution would look like this:

[3 5 4 6 2 7 8 1 9 0]

The solution above means the salesman would start in city 3, travel to city 5, then city 4, etc (you get the idea) and finally return to city 3.

In my opinion, encoding is the most difficult part of creating a GA. While it may seem obvious for the TSP, the method of encoding is going to be different for most problem domains and you may have to get pretty creative.

In Clojure, creating a random population is pretty easy:
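Something along these lines does the job (the population size of 100 is an arbitrary choice, and I'm assuming the 10-city example from above):

```clojure
;; A candidate solution is a random ordering of the city indices 0-9.
;; num-cities matches the example above; population-size is arbitrary.
(def num-cities 10)
(def population-size 100)

(defn random-solution []
  (vec (shuffle (range num-cities))))

(defn random-population []
  (vec (repeatedly population-size random-solution)))
```

Every solution produced this way is automatically valid - it visits each city exactly once.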

The end goal of our GA is to find the optimal solution to our problem, so we need to give our GA the ability to determine *how effective* a given solution is. This measure of a solution is known as its fitness score.

For our example, we want to find the shortest path that the salesman can take - that means that our fitness score would simply be the total distance the salesman must travel. Let's define our data and a helper function to get the distance between two cities.
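Sketched out, the data and helper might look like this (only the handful of distances shown in the table below come from the example; the rest are made-up values, kept symmetric):

```clojure
;; Distances between each pair of cities, as a 2d vector:
;; (get-in distances [i j]) is the distance from city i to city j.
;; Values beyond the few shown in the table are invented.
(def distances
  [[  0  28  57  72  81  85  80 113  89  80]
   [ 28   0  28  45  54  57  63  85  63  63]
   [ 57  28   0  20  30  28  57  57  40  57]
   [ 72  45  20   0  10  20  72  45  20  45]
   [ 81  54  30  10   0  22  81  41  10  41]
   [ 85  57  28  20  22   0  63  28  28  63]
   [ 80  63  57  72  81  63   0  80  89 113]
   [113  85  57  45  41  28  80   0  40  80]
   [ 89  63  40  20  10  28  89  40   0  40]
   [ 80  63  57  45  41  63 113  80  40   0]])

(defn distance-between [city-a city-b]
  (get-in distances [city-a city-b]))
```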

Our distance data is contained in a 2d vector, but you can think of it like a table:

| | City 1 | City 2 | City 3 | ... |
|---|---|---|---|---|
| City 1 | 0 | 28 | 57 | ... |
| City 2 | 28 | 0 | 28 | ... |
| City 3 | 57 | 28 | 0 | ... |
| ... | ... | ... | ... | ... |

With our data in this format, we can define a function that will calculate the fitness score for a given solution.
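For example (I'm passing the distance vector in as an argument to keep this sketch self-contained; the real thing can just close over `distances`):

```clojure
;; Fitness = total round-trip distance. We sum the legs between
;; consecutive cities, plus the final leg back to the starting city.
;; Lower scores are better.
(defn fitness [distances solution]
  (let [legs (conj (vec (partition 2 1 solution))
                   [(peek solution) (first solution)])]
    (reduce + (map (fn [[a b]] (get-in distances [a b])) legs))))
```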

The meat of a GA is in the crossover function, but before we get to that, we need to determine *which solutions* in our population will be paired up in the crossover. This process is often called selection, and there are many different approaches to explore. I'm going to cover two possible selection processes - tournament selection and roulette selection. The outcome of these selection processes is going to be a vector that places mates next to each other:

[a1 a2, b1 b2, c1 c2]

**Tournament Selection**

In tournament selection, we randomly select n participants from our population, and the "winner" of the tournament is the participant with the best fitness score. Remember that the *lower* the score, the better (at least in our example).
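A tournament selector can be sketched like so (`fitness-fn` maps a solution to its score, and `n` is the tournament size - both names are my own here):

```clojure
;; Run one tournament: grab n random solutions and keep the one with
;; the lowest (best) fitness score.
(defn tournament-select [fitness-fn n population]
  (apply min-key fitness-fn (repeatedly n #(rand-nth population))))

;; Fill the mating pool by running one tournament per slot, so mates
;; end up next to each other in the resulting vector.
(defn select-tournament [fitness-fn n population]
  (vec (repeatedly (count population)
                   #(tournament-select fitness-fn n population))))
```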

**Roulette Selection**

Roulette selection gives solutions with a better fitness score a higher chance of being selected. My implementation is a bit...odd. Here's the basic idea: we want to create a "roulette wheel" that we can spin to find a random solution. The better the solution's fitness score, the bigger its slice on the roulette wheel.

To create the roulette wheel, I create a vector of *cumulative* fitness scores...sort of. Because the *lower* fitness scores are better, we have to transform them a bit. I picked a number (700) that is larger than the fitness score of any solution (I just guessed here). Then, we can subtract our actual fitness score from 700, and now our *bigger* scores are better. Using the bigger-is-better scores, I create a vector of cumulative scores, and that's the roulette wheel.

Now that we have our wheel, we need to spin it once for every solution of the population. To do so, we'll choose a random number between 0 and the last element of the roulette wheel vector. We then find the first element in the vector that is greater than our random value, and the index of that value is the index of the solution in the population vector that we picked.

You know what? Let's just step through it. Assume we have this population:

[[0 1 2 3] [2 3 1 0] [4 2 3 1]]

with fitness scores of:

[200 400 500]

To build our roulette wheel, we want to subtract each score from 700. Our scores are now 500, 300, and 200 respectively. Then we compute the cumulative sums:

0 + 500 = 500
500 + 300 = 800
800 + 200 = 1000

which results in:

[500 800 1000]

Hopefully you can see how we're assigning a bigger slice of the roulette wheel to the solutions with better fitness scores. My implementation of this process in Clojure is below.

*(I also added a 'fitness multiplier' to help tune the results of roulette selection)*
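In sketch form, the whole roulette process looks something like this (`ceiling` is the guessed 700, `multiplier` is that fitness-multiplier knob, and `fitness-fn` maps a solution to its score - the names are illustrative):

```clojure
;; Build the wheel: flip each score so bigger-is-better, scale it by
;; the multiplier, then take cumulative sums.
(defn roulette-wheel [scores ceiling multiplier]
  (vec (reductions + (map #(* multiplier (- ceiling %)) scores))))

;; Spin once: pick a random point on the wheel and return the solution
;; whose slice contains it (the first cumulative score above the point).
(defn spin [wheel population]
  (let [r (rand (peek wheel))]
    (nth population (count (take-while #(<= % r) wheel)))))

;; Spin once per slot in the population to build the mating pool.
(defn select-roulette [fitness-fn ceiling multiplier population]
  (let [wheel (roulette-wheel (map fitness-fn population) ceiling multiplier)]
    (vec (repeatedly (count population) #(spin wheel population)))))
```

With the example scores and a multiplier of 1, `(roulette-wheel [200 400 500] 700 1)` gives `[500 800 1000]`, just like the walkthrough above.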

Note: I gloss over a lot of the details in this section, and I might write a follow-up post that goes into more detail.

Now that we've selected which solutions in our population will be crossed-over (bred), we can finally get into the good stuff. Just like selection, there are a number of different methods to perform crossover. The method with which you encoded your solutions may dictate which method of crossover works best. In our example, I'm going to use cycle crossover. Cycle crossover will prevent the creation of invalid solutions. Remember that we must visit *every city*, and we can only visit each city *once*. Therefore, the solution below is invalid because we visit city 1 twice, and because we don't visit city 9:

[7 6 5 1 8 3 2 4 1 0]

The cycle crossover finds *subsets of values* in two solutions that occupy the same set of indices. These subsets are called cycles. Let's look at the example below.

[0 4 3 6 2 9 5 7 1 8]

[8 2 3 4 5 6 1 7 9 0]

Notice the first and last cells - the subset of values (8 and 0) occupies the same subset of indices (0 and 9) in both solutions. This means they can safely be switched between the solutions, and the resulting solutions will *still be valid!*

[8 4 3 6 2 9 5 7 1 0]

[0 2 3 4 5 6 1 7 9 8]

Awesome! But...given a pair of solutions, how do we go about finding a cycle? My implementation is below, but there's probably a better way!
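Here's one way to sketch it - starting from index 0, follow each value from the second parent back to its index in the first parent until we loop back around:

```clojure
;; Pure-Clojure index lookup (vectors also support .indexOf via Java).
(defn index-of [v x]
  (first (keep-indexed (fn [i y] (when (= x y) i)) v)))

;; Collect the indices of one cycle between two parents, starting at
;; start-idx: the value at the current index in parent-b tells us which
;; index to visit next in parent-a.
(defn find-cycle [parent-a parent-b start-idx]
  (loop [idx start-idx
         cycle-idxs []]
    (let [next-idx (index-of parent-a (nth parent-b idx))]
      (if (= next-idx start-idx)
        (conj cycle-idxs idx)
        (recur next-idx (conj cycle-idxs idx))))))

;; Swap the values at the cycle's indices to produce two children.
(defn crossover [parent-a parent-b]
  (let [idxs (find-cycle parent-a parent-b 0)]
    [(reduce #(assoc %1 %2 (nth parent-b %2)) parent-a idxs)
     (reduce #(assoc %1 %2 (nth parent-a %2)) parent-b idxs)]))
```

Running `(crossover [0 4 3 6 2 9 5 7 1 8] [8 2 3 4 5 6 1 7 9 0])` finds the cycle at indices 0 and 9 and produces exactly the two children shown above.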

We can then apply the crossover to the entire population with the functions below:
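Something like this does it, assuming a crossover function that takes two parents and returns their two children, plus the mates-next-to-each-other layout produced by selection:

```clojure
;; Pair up adjacent mates ([a1 a2 b1 b2 ...]) and replace each pair
;; with its two children. An odd solution out would be dropped, so the
;; population size should stay even.
(defn crossover-population [crossover-fn population]
  (vec (mapcat (fn [[a b]] (crossover-fn a b))
               (partition 2 population))))
```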

Mutation causes the values at random indices in a given solution to switch places. **Mutation should only occur in a small percentage of the population.** This helps keep the population from settling into a local optimum. To perform a mutation, we simply randomly choose two indices and swap their values. In the example below, the random indices that were chosen are 0 and 6.

[8 4 3 6 2 9 5 7 1 0]

Mutates into:

[5 4 3 6 2 9 8 7 1 0]

In my implementation below, 15% of the population will undergo mutation, and we actually swap two pairs of indices.
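Sketched out (the 15% rate is hard-coded here for simplicity):

```clojure
;; Swap the values at two random indices of a solution. If the two
;; indices happen to match, the solution is unchanged.
(defn swap-random [solution]
  (let [i (rand-int (count solution))
        j (rand-int (count solution))]
    (assoc solution i (nth solution j) j (nth solution i))))

;; Each solution has a 15% chance of mutating; a mutation performs the
;; random swap twice (two pairs of indices).
(defn mutate-population [population]
  (mapv #(if (< (rand) 0.15)
           (-> % swap-random swap-random)
           %)
        population))
```

Because a swap only rearranges cities, mutated solutions stay valid - every city is still visited exactly once.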

Now that we have all of the steps in the process of the algorithm complete, we can piece everything together.
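As a skeleton, the main loop just threads the population through selection, crossover, and mutation once per generation, then returns the best solution found. I'm passing the component functions in as a map so the sketch stands on its own - wire in whichever selection, crossover, and mutation pieces you like:

```clojure
;; One full run of the GA: evolve the population for a number of
;; generations, then return the fittest (lowest-scoring) solution.
(defn evolve
  [{:keys [fitness-fn select-fn crossover-fn mutate-fn] :as fns}
   population generations]
  (if (zero? generations)
    (apply min-key fitness-fn population)
    (recur fns
           (-> population select-fn crossover-fn mutate-fn)
           (dec generations))))
```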

In the full version of this experiment, you'll notice references to 'draw-chan' - I've included a visualization component using Quil.

You can find the full code on GitHub here.

Thanks for stopping by!