Thursday, January 17, 2013

# 3: Multiobjective Optimization (MOO)

In search based software engineering (SBSE), we come across the problem of multiobjective optimization (MOO).  The problem is best explained in terms of math functions.  Given an input vector X = [x1, x2, ..., xi] and a set of objective functions Y = [f1(X), f2(X), ..., fj(X)], we'd like to know which X values optimize the Y values.

An example is my own dumb "moo problem":
 - We start with three input variables: X = [x1, x2, x3], each xi bounded between [-2, 2]
 - And define two output Y variables as follows (a Python sketch follows this list):
   - f1(X) = -3*x1 + 2*x2
   - f2(X) = 3*x3 - 2*x2
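To make the toy problem concrete, here is a minimal Python sketch of it.  All the names here (f1, f2, random_decisions, LO, HI) are my own, not from any library:

```python
import random

LO, HI = -2, 2  # each decision variable is bounded in [-2, 2]

def f1(x):
    # f1(X) = -3*x1 + 2*x2
    return -3 * x[0] + 2 * x[1]

def f2(x):
    # f2(X) = 3*x3 - 2*x2
    return 3 * x[2] - 2 * x[1]

def random_decisions():
    # one candidate X = [x1, x2, x3]
    return [random.uniform(LO, HI) for _ in range(3)]
```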

Imagine generating thousands of random sample inputs for X, then evaluating the objective functions f1 and f2 on each.  Our goal is to minimize both f1 and f2.  For which input vectors X are f1 and f2 the smallest?  Usually no single X minimizes both objectives at once, so the practical answer is the set of trade-off solutions that no other sample beats on every objective.  In SBSE, the input vector X is a set of "decisions", and the output vector Y is a set of "objectives" that we want to minimize.
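Continuing the sketch above, here is one way to implement the sample-and-evaluate idea: draw a couple thousand random X vectors, score each on (f1, f2), and keep the nondominated ones.  The dominates helper is the standard Pareto test; the rest is illustrative code of my own:

```python
def dominates(a, b):
    # objective vector a dominates b when it is no worse on every
    # objective and strictly better on at least one
    return (all(ai <= bi for ai, bi in zip(a, b)) and
            any(ai < bi for ai, bi in zip(a, b)))

samples = [random_decisions() for _ in range(2000)]
scores  = {id(x): (f1(x), f2(x)) for x in samples}

# naive O(n^2) nondominated filter -- fine at this scale
front = [x for x in samples
         if not any(dominates(scores[id(y)], scores[id(x)])
                    for y in samples if y is not x)]

print(len(front), "nondominated samples out of", len(samples))
```

Note that f1 and f2 pull x2 in opposite directions (f1 wants x2 small, f2 wants it large), so the survivors trace out a genuine trade-off curve rather than a single best point.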

The example I used here is a very simple made-up problem.  In reality there are dozens of standard benchmark models such as Fonseca, Srinivas, ConstrEx, ZDT1, Golinski, and many more.  Note that some of these models are constrained and some aren't: a constraint is an additional condition a solution must satisfy, beyond the simple bounds on the input X variables.
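As a sketch of what a constraint means mechanically (the constraint g below is made up for illustration, not taken from any of the models named above), one simple treatment is to discard infeasible samples before looking for the front; real solvers often use penalty terms or constrained dominance instead:

```python
def g(x):
    # hypothetical constraint: a sample is feasible only when x1 + x2 + x3 <= 1
    return x[0] + x[1] + x[2] - 1  # feasible iff g(x) <= 0

# filter for feasibility before the Pareto filtering step
feasible = [x for x in samples if g(x) <= 0]
```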

There are algorithms that can optimize problems like this for us automatically, most notably NSGA-II and SPEA2.  In my own research, we are experimenting with a newer technique known as RRSL (Rapid Recursive Spectral Learning).  Algorithms of this kind are commonly known as MOEAs (multiobjective evolutionary algorithms).
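To give a flavor of the evolutionary part, here is a toy generational loop in the MOEA spirit.  It is emphatically not NSGA-II or SPEA2 (those add nondominated sorting ranks and density/crowding measures); it just mutates survivors and keeps whatever is nondominated each generation, reusing the helpers defined earlier:

```python
def mutate(x, step=0.1):
    # perturb one randomly chosen decision, clamped back into [LO, HI]
    i = random.randrange(len(x))
    y = list(x)
    y[i] = max(LO, min(HI, y[i] + random.gauss(0, step)))
    return y

POP = 100
pop = [random_decisions() for _ in range(POP)]
for gen in range(50):
    children = [mutate(random.choice(pop)) for _ in range(POP)]
    merged = pop + children
    scores = {id(x): (f1(x), f2(x)) for x in merged}
    # survivors: the nondominated members of parents + offspring
    front = [x for x in merged
             if not any(dominates(scores[id(y)], scores[id(x)])
                        for y in merged if y is not x)]
    # naive truncation/refill keeps the population size fixed; real MOEAs
    # use a diversity measure here instead of arbitrary cutting
    pop = (front + [random_decisions() for _ in range(POP)])[:POP]
```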

In a simpler world, where there is only one objective to optimize, we can employ simpler methods such as KEYS or KEYS2.  But more often than not, we're interested in more than just a single objective.
