Saturday, March 23, 2013

#24 - The Art of Game Design - Chapters 18,19,20

Chapter 18 - Worlds Contain Characters


The breakdown of different character types adds some depth to my own understanding of a game, based on the three main types of media - books, movies and games.  It's important that a video game character can be of any type - they should all work.  For instance, the dimensions of character complexity can be a parameter to a game.  Another point to emphasize is that characters should be memorable, and one thing that adds to this is the distinctiveness of the character's name itself.  Names like Samus Aran come to mind - these types of names you don't see in the real world.  Because of their uniqueness, they tend to stick out and make it easier to form an attachment to the character and the overall experience of being or meeting them.

An excellent example related to character traits is Chrono Cross.  The lead programmer for that game wrote a script that "translates" the style of speech for the dialogue depending on which character accompanies you through the main events of the game.  The story told is the same regardless of which character you take.  But if you bring along, say, a pirate-type character or a French dame, the language is converted to reflect that character's manner of speaking.

Lens #75: The Lens of the Avatar
Lens #76: The Lens of Character Function
Lens #77: The Lens of Character Traits
Lens #78: The Lens of the Interpersonal Circumplex
Lens #79: The Lens of the Character Web
Lens #80: The Lens of Status
Lens #81: The Lens of Character Transformation

Chapter 19 - Worlds Contain Spaces


The space of the world is the physical design space in which the characters can roam.  I would add to this the manner in which the characters move through that world.  For instance, many games emphasize continuous, smooth movement, while others focus on grid-based cell movement (like chess).

Lens #82: The Lens of Inner Contradiction
Lens #83: The Lens of The Nameless Quality

Chapter 20 - The Look and Feel of a World Is Defined by Its Aesthetics


This chapter feels like a recap of many previous Lenses.  In summary, aesthetics can help bring your game to life and add to the memorable experience.  It takes a real graphical eye to judge whether the aesthetics are right, and it depends on the kind of game you build.  For example, a non-photorealistic approach such as a cel-shaded cartoon style may be more appropriate than photorealism.  Another detail: sometimes people simply don't notice details, so the level of detail worth investing in may depend on the environment.  That seems funny to say.

Sunday, March 17, 2013

#23 - JMOO Source Code Release

Search Based Software Engineering brings a need to simulate real-world problems and experiment with optimizing their objectives.  For this, we need a way to model the simulations and the algorithms used to optimize the objectives.  Formally, this is the field of Multi-Objective Optimization (MOO).  When there is only a single objective to optimize, it is called Single-Objective Optimization (SOO) instead.  Historically, SOO was the more-studied, simplified case before it matured and was generalized into MOO.

MOO can best be described in mathematical terms.  MOO is used to optimize a Multi-Objective Problem (MOP).  A MOP consists of a set (X1, X2, …, Xn) of n decision (explanatory) variables, a set (Y1, Y2, …, Yk) of k objective (response) variables, and either a set of objective evaluation functions (f1(X1, …, Xn), f2(X1, …, Xn), …, fk(X1, …, Xn)) or an evaluation model, either of which assigns values to the k objectives.  Each decision variable has its own lower and upper bounds, and in the case of constrained MOPs the set of decisions as a whole must also satisfy a set of constraints.  Each objective includes an indication of which direction (minimizing or maximizing) is optimal.  Typically the optimization direction is ignored and an assumption is made (e.g. that every objective is minimized), but I feel a need for its generalization.
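
To make that concrete, here is a minimal sketch of a tiny two-decision, two-objective MOP written as plain Python.  This is my own illustration (the names, bounds and functions are made up), not the JMOO code itself:

# Illustrative only: a tiny MOP with n=2 decisions and k=2 objectives.
decisions = [
    {"name": "X1", "low": 0.0, "up": 5.0},
    {"name": "X2", "low": 0.0, "up": 3.0},
]

# Each objective carries an evaluation function and an optimization direction.
objectives = [
    {"name": "Y1", "minimize": True,  "f": lambda X1, X2: (X1 - 1)**2 + X2**2},
    {"name": "Y2", "minimize": False, "f": lambda X1, X2: X1 + 2*X2},
]

def evaluate(X):
    """Map a decision vector X = (X1, ..., Xn) to its k objective values."""
    return [objective["f"](*X) for objective in objectives]

print(evaluate([2.0, 1.0]))   # [2.0, 4.0]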



As a broad overview, MOO involves both: 1) an MOP, and 2) an algorithm to solve the MOP.  We just described the MOP above; now we discuss algorithmic approaches.  The most widely used approach is the MOEA (Multi-Objective Evolutionary Algorithm).  Two state-of-the-art MOEAs are NSGA-II and SPEA2.  The MOEA is the standard type of algorithm that we build upon when presenting JMOEA (Joe's MOEA), outlined below.

1. Initialization
2. Load Initial Population
3. Collect Initial Stats
4. Generational Evolution:
 - - 4a. Selector
 - - 4b. Adjustor
 - - 4c. Recombiner
 - - 4d. Collect New Stats
 - - 4e. Evaluate Stopping Criteria
5. Generational Representative Search

The outline above constitutes our JMOEA.  First, parameters are initialized, including MU, which defines the number of individuals in a population.  Each individual is a listing of decisions and objective fitness (once evaluated).  In the second step, an initial population is loaded - not generated - so that when two algorithms try to optimize the same problem, they both begin with the same initial set of individuals.  Thirdly, stats are collected for the initial population.  The initial stats collection involves setting the reference point at the median of the population; this median is then used to calculate our measures of quality and spread for the individuals in the population.  These measures help us determine how well the algorithm is doing.  Other stats include the medians of each objective and the number of evaluations accumulated thus far.
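
As a rough sketch of that stats step (illustrative only; the exact quality and spread measures in JMOO may differ), the median reference point could be computed like so:

def median(values):
    """Simple median helper, so the sketch has no dependencies."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2.0

def initial_stats(population):
    """Illustrative only: 'population' is a list of individuals whose
    .fitness attribute holds the list of k evaluated objective scores."""
    scores = [individual.fitness for individual in population]
    # Reference point: the median of each objective across the population.
    reference_point = [median(column) for column in zip(*scores)]
    # A crude spread measure: the per-objective range (a stand-in for
    # whatever quality/spread measures JMOO actually uses).
    spread = [max(column) - min(column) for column in zip(*scores)]
    return reference_point, spread, len(population)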

The core of our JMOEA is step four, where evolution starts.  Evolution is carried out for a number of generations, or until the stopping criteria of step 4e say to stop.  The methods of 4a, 4b and 4c are the steps that define the algorithm.  The selector first identifies a number of individuals from the population to use as "selectees", or offspring.  Then the adjustor in step 4b modifies these selectees (usually per some chance rate).  Finally, in 4c, the selectees and population are recombined, either pruning the size back down to MU-many individuals or growing it back up to MU-many individuals.
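
In simplified Python, the generational loop of step 4 might be sketched as below.  This is a hedged sketch, not the literal JMOO implementation; collect_stats and stopping_criteria are placeholder stand-ins for steps 3/4d and 4e:

def collect_stats(problem, population):
    """Placeholder stats (stand-in for steps 3 and 4d)."""
    return {"evaluations": len(population)}

def stopping_criteria(stats):
    """Placeholder stopping rule (stand-in for step 4e): never stop early."""
    return False

def jmoea(problem, population, selector, adjustor, recombiner, MU=100, max_gens=20):
    """Sketch of steps 4a-4e (illustrative; not the literal jmoo code)."""
    stats = [collect_stats(problem, population)]           # stats for generation 0
    for gen in range(max_gens):
        selectees = selector(problem, population)          # 4a: pick offspring
        selectees = adjustor(problem, selectees)           # 4b: adjust/mutate them
        population = recombiner(problem, population,       # 4c: recombine back to
                                selectees, MU)             #     MU-many individuals
        stats.append(collect_stats(problem, population))   # 4d: collect new stats
        if stopping_criteria(stats):                       # 4e: stop early if told to
            break
    return population, stats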

An algorithm is then defined by assigning specific methods to each of 4a, 4b and 4c.  For instance, NSGA-II is defined when the selector is tournament selection, the adjustor is crossover and mutation, and the recombiner is the NSGA-II elitist method (as described in its original paper).  SPEA2 is defined similarly, except its recombiner is the SPEA2 elitist method.
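
Under that view, defining an algorithm is little more than naming a (selector, adjustor, recombiner) triple.  The function names below are placeholders, not the exact identifiers in jmoo_algorithms.py:

# Hypothetical configuration table: algorithm = (selector, adjustor, recombiner).
algorithms = {
    "NSGA-II": (tournament_selection, crossover_and_mutation, nsga2_elitist_recombine),
    "SPEA2":   (tournament_selection, crossover_and_mutation, spea2_elitist_recombine),
}

selector, adjustor, recombiner = algorithms["NSGA-II"]
final_population, stats = jmoea(problem, initial_population,
                                selector, adjustor, recombiner)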

Finally, in step 5, we search through all the generations of the evolutionary process to find a representative generation; ideally one of the better generations, which we can use to say how well the algorithm worked.

The code that we use to develop JMOEA is contained in our JMOO package (Joe's MOO).  JMOO allows us to define decisions, objectives, problems, algorithms, individuals and the JMOEA itself.  It is very versatile in that new problems, as well as new algorithms, can be defined very easily.  For fun: JMOO is our purple cow.


The code repository (python) for jmoo: http://unbox.org/things/tags/jmoo/
The rrsl method in jmoo_algorithms.py will require parts of http://unbox.org/things/tags/rrsl/
And the POM3 problem is at: http://unbox.org/things/tags/pom3/


Thursday, March 14, 2013

#22 - POM3 Code Release

POM3 is the name given to my adaptation of POM2 (2009).  Similarly, POM2 is an adaptation of the original 2008 model, POM.  Named for its originating authors, Port, Olkov and Menzies, POM was a simulation of the Requirements Engineering process in Software Engineering.

The job of the Requirements Engineer is to decide what gets implemented in a software project.  Typically this is done via priorities, but prioritization strategies differ.  Traditionally, the Requirements Engineer deployed a Plan-Based strategy in which all requirements were prioritized once at the beginning of the project and never altered throughout the course of development.  In the alternative Agile development strategy, however, requirements get re-prioritized during every iteration of development.  The question POM tried to answer is: which strategy is better?

To answer the question, POM had to simulate many software projects very quickly, and secondly, it had to evaluate those simulations.  Formally, this problem fits into the realm of Search Based Software Engineering: POM was framed as a Single-Objective Problem.  For the originating authors, the search for which strategy is better was a manual one, involving plots of parameters over 1,000 trials and eyeballing graphs - a very inefficient approach.  The authors of POM2, however, deployed a search technique known as KEYS.  In either case, the results are similar: the Plan-Based strategy is good in some cases, when requirements are stable, but Agile is better when requirements are volatile and prone to change throughout development.

For more information, refer to the references section.  [1] is the POM paper, and [2] is the POM2 paper.

POM3, however, is no longer a Single-Objective Problem, and no longer a simulation intended only to study Requirements Engineering prioritization strategies; rather, it is a general software project simulation used to control additional objectives: Cost, Idle and Completion (along with Score).  Below, we discuss the decisions and objectives of POM3, then give an outline of the code, followed by some samples of the code.

POM3 Decisions

Decisions are those inputs to the POM3 simulation.  They are as follows.

  1. Culture.  The percentage of personnel on the project that thrive on chaos.  Some projects are chaotic; that is, many requirements change and are unstable, i.e. volatile.
  2. Criticality.  The impact to cost that safety-critical systems have upon projects.  Some projects have safety-critical requirements; that is, requirements for which failure absolutely cannot be allowed because human lives are at stake.
  3. Criticality Modifier.  The percentage of teams that are dealing with safety-critical parts of the project.
  4. Dynamism.  How dynamic is the project?  This affects the rate at which new requirements are found.
  5. Interdependency.  A percentage of how many requirements depend on other requirements (assigned to other teams in the project).
  6. Initial Known.  The percentage of the project that is initially known.  In the real world, we just don't know this, but we can guess.  In the simulated world, all requirements exist, but some are marked as hidden.
  7. Team Size.  The size of teams, in terms of how many personnel are on each team.
  8. Size.  The project size.  Very small, small, medium, large, very large.
  9. Plan.  The requirements prioritization strategy.  We provide 5 different schemes.

POM3 Objectives

Objectives are the outputs of the POM3 simulation that score how well the project was finished.

  1. Score.  A score indicating the performance of the project's simulation.
  2. Cost.  The average cost per completed requirement across all teams.
  3. Idle.  The overall percentage of idleness of the project's teams.
  4. Completion.  The overall percentage of how many visible requirements were completed.

POM3 Algorithm

The outline of the POM3 simulation is as follows.  We avoid core implementation details here, since the code itself is provided.

  1. Initialization.
  2. Requirements Generation.
  3. Teams Generation.
  4. Shuffling.
    • For Each Team, Do: 
      1. Assess Budget Information
      2. Collect Available Tasks
      3. Apply Sorting Strategy
      4. Execute Available Tasks
      5. Discover New Tasks
      6. Update Task Priorities
  5. Scoring.

This outline is contained in the pom3.py file.  The initialization step processes the inputs by storing them in a pom3_decisions class structure, and the number of shuffling iterations is determined.  We then move on to step 2, where requirements are generated into a heap.  The code for this is handled in pom3_requirements.py, which further makes use of the pom3_requirements_tree.py file.  In step 3, we generate teams and assign parts of the requirements heap to each team.  The code that manages the assembly of teams is in pom3_teams.py, and the code that manages each team individually is in pom3_team.py.

Finally, shuffling occurs.  Only a few iterations of shuffling occur, as determined back in the initialization step.  In each iteration, requirements are shuffled through each team over a series of steps.  First, the team assesses its budget information and collects the money allocated to it based on its workload.  Secondly, the team collects any tasks it can possibly work on.  Thirdly, it sorts through these available tasks according to some sorting strategy (the requirements prioritization strategy).  Remember when I said Plan-Based strategies don't re-prioritize every iteration?  Well, in POM3 prioritization happens during every iteration, so we're ignoring the traditional strategy for the sake of simplicity.  It would be relatively simple to reintroduce it properly; we ignore it simply because it isn't our focus here.

The fourth step of shuffling executes as many of the available tasks as possible within the budget limitations assessed in step one of shuffling.  Fifthly, we look to discover new tasks, which simply means marking some previously invisible requirements of the heap as visible.  Finally, all tasks have their 'values', i.e. priorities, updated.  The code for each shuffling step is contained in the pom3_team.py file, since all of the work here is done per team; a rough sketch of one shuffling iteration for a single team follows.
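
The sketch below is illustrative only; the method names are placeholders, not the actual identifiers in pom3_team.py:

def shuffle_once(team, budget_pool, strategy):
    """One shuffling pass for one team (illustrative paraphrase of steps 1-6)."""
    budget = team.assess_budget(budget_pool)       # 1. collect the allocated money
    tasks = team.collect_available_tasks()         # 2. gather visible, workable tasks
    tasks = strategy.sort(tasks)                   # 3. apply the prioritization strategy
    team.execute_tasks(tasks, budget)              # 4. do the work the budget allows
    team.discover_new_tasks()                      # 5. mark hidden requirements visible
    team.update_task_priorities()                  # 6. refresh every task's 'value'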

Lastly, now that the simulation is finished, we score our objectives.  Score is the trickiest objective: it considers each completed task's final priority value, updated every iteration, as its optimal_value.  The true value is recorded at the moment the task is completed, before it gets further updated by later shuffling phases.  As a ratio, the score is calculated as frontier/optimal_frontier, where frontier = value/cost and optimal_frontier = optimal_value/optimal_cost.  In the code, we refer to frontier as "our frontier" and the optimal frontier as the "god frontier", because it is information only a god-like entity could possibly know (we don't actually update completed tasks in real life, but they get considered in the simulation for scoring purposes).
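
Put as code, the scoring step amounts to something like the following (a hedged paraphrase; the attribute names are assumptions, not the exact ones used in the POM3 source):

def pom3_score(completed_tasks):
    """Score = 'our frontier' / 'god frontier' (illustrative paraphrase)."""
    value         = sum(task.value for task in completed_tasks)          # at completion time
    cost          = sum(task.cost for task in completed_tasks)
    optimal_value = sum(task.optimal_value for task in completed_tasks)  # after final updates
    optimal_cost  = sum(task.optimal_cost for task in completed_tasks)
    frontier      = value / cost                  # our frontier
    god_frontier  = optimal_value / optimal_cost  # the god frontier
    return frontier / god_frontier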

Extra

By the way, here's some fun Python stuff.

class pom3_decisions:
    def __init__(p3d, X):
        p3d.culture = X[0]
        p3d.criticality = X[1]
        p3d.criticality_modifier = X[2]
        p3d.initial_known = X[3]
        p3d.interdependency = X[4]
        p3d.dynamism = X[5]
        p3d.size = int(X[6])
        p3d.plan = int(X[7])
        p3d.team_size = X[8]

The code above is a sample from POM3.  A simple class structure makes any instance of the pom3_decisions class an effective record of sorts.  When one instantiates the pom3_decisions class, the class constructor (def __init__) is invoked.  Now, the cool part is that unlike languages such as Java, I can give my own name to the "this" reference.  In the sample above, I use the name p3d and assign class data with it.  This is no different from using Python's default "this" name, which is "self".  But sometimes, just sometimes, you can make the code look like syntactic candy with just the right name.

class pom3_requirements:
    def __init__(requirements, decisions):
        requirements.heap = requirements_tree()
        requirements.count = int(2.5*[3,10,30,100,300][decisions.size])
        requirements.decisions = decisions
        
        for i in range(requirements.count):
            
            ...
            
            parent = requirements.heap.tree[i]
            requirements.recursive_adder(parent, 1)

A heap is a bunch of trees crammed together.  requirements.heap.tree[i] accesses the i'th tree of the heap.  Looks just like English.  But it's not, it's Python!

References

[1] D. Port, A. Olkov, and T. Menzies, "Using simulation to investigate requirements prioritization strategies," in Proceedings of the 23rd IEEE/ACM International Conference on Automated Software Engineering (ASE 2008), Sept. 2008, pp. 268–277.
[2] B. Lemon et al., "Applications of simulation and AI search: Assessing the relative merits of Agile vs traditional software development," in Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering (ASE 2009), Nov. 2009, pp. 580–584.

Friday, March 8, 2013

#21 - The Art of Game Design

Chapter 13 - Players Play Games Through an Interface


The interface provides the player a window into the virtual world.  It must be fluid and allow for playability; when the interface is no longer fluid, it becomes an obstacle to the player.  This chapter elaborates on that window.  While the interface itself is important, feedback is equally important.  Actions sent into the game via the interface must be reported back to the player immediately.  Without feedback, the player is unable to learn the dynamics of the game and will never understand how to play it.  The same can be said of the real world - feedback helps us learn.


Lens #53: The Lens of Control
Lens #54: The Lens of Physical Interface
Lens #55: The Lens of Virtual Interface
Lens #56: The Lens of Transparency
Lens #57: The Lens of Feedback
Lens #58: The Lens of Juiciness
Lens #59: The Lens of Channels and Dimensions
Lens #60: The Lens of Modes


Chapter 14 - Experiences can be Judged by their Interest Curves

The game should build upon more and more interesting events, rising and falling, before peaking at the end for a grand finale.  This idea is described as the interest curve, and a number of factors describe aspects that add to or detract from interest in the game.  A player's interest needs to be captured very early, and it needs to be kept.  But once a player's interest has been piqued, it can only be held for so long.  When an event occurs that is a little *more* interesting, it renews the player's interest, and when done smartly, the player can enjoy the game all the way through to its end.

Lens #61: The Lens of the Interest Curve
Lens #62: The Lens of Inherent Interest
Lens #63: The Lens of Beauty
Lens #64: The Lens of Projection

Sunday, March 3, 2013

#20 - JMOO

MOO (Multi-Objective Optimization) is a field of Software Engineering that involves solving problems in which the decisions made affect the objective goals.  Formally, the mathematical expression of the problem is as follows.

A Multi-Objective Problem (MOP) consists of a set of n decisions and k objectives.  The decisions all have upper and lower bounds, as well as constraints that the decisions, taken together, must satisfy.  The objectives of a problem are similar to decisions, but we do not know their bounds.  Instead, we just know our goals: to maximize or minimize each objective as specified by its optimization direction.  The objective functions provide the evaluation model that maps decisions to objective scores.  Sometimes, however, the problem doesn't have a simple set of functions for this; instead it may be a process or a set of steps that cannot be easily modeled.  Either way, the objective functions provide a way to map decisions to objectives.

JMOO (Joe's Multi-Objective Optimization) is our modeling language, implemented in Python.  The framework is extremely simple and provides an easy way of devising your own MOPs.  Below we show the code for implementing the Srinivas MOP.


class srinivas(jmoo_problem):
  def __init__(prob):
    prob.name = "Srinivas"
    prob.decisions = [jmoo_decision("x" + str(i+1), -20, 20) for i in range(2)]
    prob.objectives = [jmoo_objective("f1", True), jmoo_objective("f2", True)]
  def evaluate(prob,input = None):
    if input:
        for i,decision in enumerate(prob.decisions):
            decision.value = input[i]
    x1,x2 = prob.decisions[0].value, prob.decisions[1].value
    prob.objectives[0].value = (x1-2)**2 + (x2 - 1)**2 + 2  
    prob.objectives[1].value = 9*x1 - (x2 - 1)**2
    return [objective.value for objective in prob.objectives]
  def evalConstraints(prob):
    x1,x2 = prob.decisions[0].value, prob.decisions[1].value
    if (x1**2 +   x2**2 > 225): return True
    if (x1    - 3*x2    > -10): return True
    return False


Each MOP is implemented as a subclass of the jmoo_problem base class.  The __init__ method provides information about the MOP's objectives (name, optimization direction) and decisions (name, lower bound, upper bound), using list comprehensions to put them in lists.  The decisions and objectives are built from the jmoo_decision and jmoo_objective classes, which both act as relatively simple record data types.  The base class implements a hidden generateInput method which generates a set of inputs compliant with each decision's bounds and the constraints (using the evalConstraints method).  If the problem has no constraints, the evalConstraints method simply returns False.  The evaluate method, on the other hand, evaluates the decisions and sets the objective values.  If you call the evaluate method with no parameters, it evaluates the decisions' current values, but you must be sure to call generateInput beforehand.  You can, however, supply your own decision values as input if generateInput doesn't suit your needs (since it generates randomly).
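
As a usage sketch (assuming, as described above, that generateInput fills in the decisions' current values before evaluation):

prob = srinivas()

# Option 1: let the base class generate bound- and constraint-compliant inputs.
prob.generateInput()
print(prob.evaluate())             # evaluates the decisions' current values

# Option 2: supply your own decision values as input.
print(prob.evaluate([0.0, 5.0]))   # f1 = (0-2)**2 + (5-1)**2 + 2 = 22
                                   # f2 = 9*0 - (5-1)**2 = -16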

To simplify the process of adding your own MOP, use the following as scaffolding.  To see how simple it is, note that the only lines you need to adjust are the ones defining the problem name, the bounds, the optimization directions, the objective evaluations, and the constraints.


class my_new_mop(jmoo_problem):
  def __init__(prob):
    prob.name = "What to call my new MOP?"
    LOWS = [1, 2, 3, 4, 5]
    UPS = [10, 20, 30, 40, 50]
    OD = optimalDirections = [True, True, False]
    n = numDecisions = 5
    k = numObjectives = 3
    prob.decisions=[jmoo_decision("x"+str(i+1),LOWS[i],UPS[i]) for i in range(n)]
    prob.objectives=[jmoo_objective("f"+str(i+1), OD[i]) for i in range(k)]
  def evaluate(prob,input = None):
    if input:
        for i,decision in enumerate(prob.decisions):
            decision.value = input[i]
    X = [prob.decisions[i].value for i in range(len(prob.decisions))]
    prob.objectives[0].value = 1 # TODO
    prob.objectives[1].value = 2 # TODO
    prob.objectives[2].value = 3 # TODO
    return [objective.value for objective in prob.objectives]
  def evalConstraints(prob):
    X = [prob.decisions[i].value for i in range(len(prob.decisions))]
    #TODO: if (expression involving the X falls out of bounds): return True
    #TODO: if (expression involving the X falls out of bounds): return True
    return False


Friday, March 1, 2013

#19 - The Art of Game Design - Chapters 7,8,9

Chapter 7 - The Game Improves through Iteration


I once read a paper that described how a team made a game in 4 weeks, but it sucked.  After careful reconsideration and further iteration over the course of 12 more weeks, they polished the game into a fine product which became one of the best games on the mobile market.  It doesn't take a long time to make a game, but making a finely polished game is something entirely different.  As the great video game crash of 1983 taught us, players don't just want games, they want great games.

Lens #13: The Lens of the Eight Filters
Lens #14: The Lens of Risk Mitigation
Lens #15: The Lens of the Toy

Chapter 8 - The Game is Made for the Player

Know your audience and know how to entertain them!  This entire chapter describes how to do it and introduces several taxonomies that I had never seen before.

Lens #16: The Lens of the Player
Lens #17: The Lens of Pleasure



Chapter 9 - The Experience is in the Player's Mind


In this chapter we talk a lot about experience from the perspective of the player.  One topic that comes up a lot in my research is Flow: a game needs to be challenging enough without being too difficult.  This corresponds to the Challenge aspect in my characterization of games' "Aspects of Replayability".  The other five aspects are Social, Experience, Mastery, Impact and Completion (SChEMICo).

Equally important, however, is motivating the player to get into your game at all.  This is what Playability addresses.  We need to judge both what entices the player and what keeps them happy in the game.

Lens #18: The Lens of Flow
Lens #19: The Lens of Needs
Lens #20: The Lens of Judgment