MOO can best be described in mathematical terms. MOO is used to optimize a Multi-Objective Problem (MOP). A MOP consists of a set (X1, X2, …, Xn) of n decision (explanatory) variables, a set (Y1, Y2, …, Yk) of k objective (response) variables, and either a set of objective evaluation functions (f1(X1, X2, …, Xn), f2(X1, X2, …, Xn), …, fk(X1, X2, …, Xn)) or an evaluation model, either of which assigns a value to each of the k objectives. Each decision variable has its own lower and upper bounds, and in the case of constrained MOPs, the set of decisions as a whole is further constrained. Each objective indicates which direction (minimizing or maximizing) is optimal. Typically, the optimization direction is left implicit and an assumption is made, but we state it explicitly here for generality.
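The MOP just described can be sketched in code. The class and attribute names below are illustrative only, not the actual JMOO API; the example objective functions are a toy pair chosen for demonstration.

```python
# A minimal sketch of an MOP: n bounded decisions, k directed objectives,
# and an evaluation function mapping decisions to objective values.
# (Decision, Objective and MOP are hypothetical names, not the JMOO API.)

class Decision:
    """A decision variable with lower and upper bounds."""
    def __init__(self, name, low, up):
        self.name, self.low, self.up = name, low, up

class Objective:
    """An objective with an explicit optimization direction."""
    def __init__(self, name, do_minimize=True):
        self.name, self.do_minimize = name, do_minimize

class MOP:
    """n decision variables, k objectives, and an evaluator for them."""
    def __init__(self, decisions, objectives, evaluate):
        self.decisions = decisions    # [X1, ..., Xn]
        self.objectives = objectives  # [Y1, ..., Yk]
        self.evaluate = evaluate      # maps decision values -> k objective values

# Toy instance: one decision, two conflicting objectives to minimize.
mop = MOP(
    decisions=[Decision("x", -10.0, 10.0)],
    objectives=[Objective("f1"), Objective("f2")],
    evaluate=lambda xs: [xs[0] ** 2, (xs[0] - 2) ** 2],
)
print(mop.evaluate([1.0]))  # [1.0, 1.0]
```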
As a broad overview, MOO comprises both: 1) an MOP and 2) an algorithm to solve the MOP. Having just described the MOP, we now discuss algorithmic approaches. The most widely used is the MOEA (Multi-Objective Evolutionary Algorithm). Two state-of-the-art MOEAs are NSGA-II and SPEA2. The MOEA is the standard type of algorithm that we build upon when presenting JMOEA (Joe's MOEA):
1. Initialize Parameters
2. Load Initial Population
3. Collect Initial Stats
4. Generational Evolution:
   4a. Selector
   4b. Adjustor
   4c. Recombiner
   4d. Collect New Stats
   4e. Evaluate Stopping Criteria
5. Generational Representative Search
The outline above constitutes our JMOEA. First, parameters are initialized, including MU, which defines the number of individuals in a population. Each individual is a listing of decisions and, once evaluated, objective fitness. In the second step, an initial population is loaded, not generated, so that when two algorithms optimize the same problem, they both begin with the same initial set of individuals. Thirdly, stats are collected for the initial population. The initial stats collection sets the reference point at the median of the population; this median is then used to calculate our measures of the quality and spread of the individuals in the population. These measures help us determine how well the algorithm is performing. Other stats include the medians of each objective and the number of evaluations accumulated thus far.
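The median-based reference point of the initial stats collection can be sketched as follows. The function and key names are illustrative, and each individual's fitness is assumed to be a list of k objective scores:

```python
import statistics

# Sketch of step 3: set the reference point at the per-objective median
# of the population's fitnesses. (Names here are hypothetical, not JMOO's.)

def initial_stats(fitnesses):
    """Reference point = median of each objective over the population."""
    k = len(fitnesses[0])
    reference = [statistics.median(f[i] for f in fitnesses) for i in range(k)]
    return {"reference": reference, "evaluations": len(fitnesses)}

# Three individuals, two objectives each.
fitnesses = [[1.0, 9.0], [3.0, 5.0], [2.0, 7.0]]
stats = initial_stats(fitnesses)
print(stats["reference"])  # [2.0, 7.0]
```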
The core of our JMOEA is step four, where evolution starts. Evolution is carried out for a number of generations, or until the stopping criteria of step 4e say to stop. The methods of 4a, 4b and 4c are the steps that define the algorithm. The selector first identifies a number of individuals from the population to use as "selectees", or offspring. The adjustor in step 4b then modifies these selectees (usually at some chance rate). Finally, in 4c, the selectees and population are recombined, either pruning the size back down to MU-many individuals or growing it back up to MU-many individuals.
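The generational loop just described can be sketched as a generic skeleton, with the selector, adjustor and recombiner passed in as parameters. All names and the toy single-value fitness are illustrative stand-ins, not the actual JMOO implementation:

```python
import random

# Sketch of steps 2-5: a generic evolutionary loop parameterized by the
# three defining methods of steps 4a-4c. (Hypothetical names, not JMOO's.)

def collect_stats(population):
    """Per-generation stat: here, simply the best (lowest) fitness."""
    return min(population)

def jmoea(initial_population, selector, adjustor, recombiner,
          stopping, max_gens=30):
    population = list(initial_population)    # step 2: load, don't generate
    stats = [collect_stats(population)]      # step 3: initial stats
    for gen in range(max_gens):              # step 4: generational evolution
        selectees = selector(population)               # 4a: pick offspring
        selectees = adjustor(selectees)                # 4b: modify them
        population = recombiner(population, selectees) # 4c: back to MU-many
        stats.append(collect_stats(population))        # 4d: new stats
        if stopping(stats):                            # 4e: early exit
            break
    return min(stats)                        # step 5: best recorded result

# Toy instance: individuals are numbers; smaller is better.
random.seed(1)
MU = 10
result = jmoea(
    initial_population=[random.uniform(0, 100) for _ in range(MU)],
    selector=lambda pop: random.sample(pop, 4),
    adjustor=lambda sel: [x * random.uniform(0.5, 1.0) for x in sel],
    recombiner=lambda pop, sel: sorted(pop + sel)[:MU],  # prune to MU-many
    stopping=lambda stats: stats[-1] < 1e-3,
)
print(result)
```

The elitist `recombiner` here prunes the pooled population back down to MU-many individuals; a growing recombiner would instead pad back up to MU.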
An algorithm is then defined by assigning specific methods to each of 4a, 4b and 4c. For instance, NSGA-II is defined when the selector is tournament selection, the adjustor is crossover and mutation, and the recombiner is the NSGA-II elite method (as described in its original paper). SPEA2 is defined similarly, except that its recombiner is the SPEA2 elitist method.
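Under this view, an algorithm is just an assignment of methods to the three steps. A sketch, with placeholder method names standing in for the real implementations:

```python
# An algorithm = an assignment of concrete methods to steps 4a-4c.
# The method names below are placeholders, not actual JMOO functions.

NSGA2 = {
    "selector":   "tournament_selection",    # 4a
    "adjustor":   "crossover_and_mutation",  # 4b
    "recombiner": "nsga2_elite",             # 4c
}

# SPEA2 differs only in its recombiner.
SPEA2 = dict(NSGA2, recombiner="spea2_elitist")

print(SPEA2["recombiner"])  # spea2_elitist
```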
Finally, in step 5, we search through all the generations of the evolutionary process for a representative generation; ideally one of the better generations, which we can use to report how well the algorithm worked.
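This search can be sketched as a scan over the per-generation stats. The quality score below is an assumed stand-in for whatever measure the stats collection recorded:

```python
# Sketch of step 5: pick a representative generation, here the one with
# the best (lowest) recorded quality score. (Illustrative, not JMOO's code.)

def representative_generation(quality_per_gen):
    """Return (generation index, quality) of the best generation."""
    best = min(range(len(quality_per_gen)), key=quality_per_gen.__getitem__)
    return best, quality_per_gen[best]

gen, quality = representative_generation([9.0, 4.0, 2.5, 3.1])
print(gen, quality)  # 2 2.5
```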
The code that we use to develop JMOEA is contained in our JMOO package (Joe's MOO). JMOO allows us to define decisions, objectives, problems, algorithms, individuals and the JMOEA itself. It is very versatile in that new problems, as well as new algorithms, can be defined very easily. For fun: JMOO is our purple cow.
The code repository (Python) for jmoo: http://unbox.org/things/tags/jmoo/
The rrsl method in jmoo_algorithms.py requires parts of http://unbox.org/things/tags/rrsl/
The POM3 problem is at: http://unbox.org/things/tags/pom3/