
dc.contributor.author: Bradley, Jay
dc.date.accessioned: 2011-02-08T15:36:36Z
dc.date.available: 2011-02-08T15:36:36Z
dc.date.issued: 2010
dc.identifier.uri: http://hdl.handle.net/1842/4784
dc.description.abstract: This thesis investigates how to train the increasingly large cast of characters in modern commercial computer games. Modern computer games can contain hundreds or even thousands of non-player characters, each of which should act coherently in complex dynamic worlds and engage appropriately with other non-player characters and human players. Too often, it is obvious that computer-controlled characters are brainless zombies repeating the same hand-coded behaviour. Commercial computer games would seem a natural domain for reinforcement learning and, as the trend of selling games on the strength of their graphics peaks with the saturation of game shelves with excellent graphics, better artificial intelligence seems the next big thing. The main contribution of this thesis is a novel style of utility function for reinforcement learning, group utility functions, which could provide automated behaviour specification for large numbers of computer game characters. Group utility functions allow arbitrary functions of the characters' performance to represent relationships between characters and groups of characters. These qualitative relationships are learned alongside the characters' main quantitative goal. Group utility functions can be considered both a multi-agent extension of the existing programming-by-reward method and a generalisation of the team utility function, replacing the sum with potentially any other function. Hierarchical group utility functions, which are group utility functions arranged in a tree structure, allow relationships between groups of characters to be learned. For illustration, the empirical work presented uses the negative standard deviation function to create balanced (equal-performance) behaviours. This balanced behaviour can be learned between characters, between groups, and between groups and single characters.
Empirical experiments show that a balancing group utility function can engender equal performance between characters, between groups, and between groups and single characters. It is shown that group utility functions make it possible to trade some quantitatively measured performance for qualitative behaviour. Further experiments show that the results degrade as expected as the number of characters and groups increases, and that approximating the learners' value functions with function approximation is one way to overcome these issues of scale. All experiments are undertaken in a commercially available computer game engine. In summary, this thesis contributes a novel type of utility function potentially suitable for training many computer game characters, together with empirical work on reinforcement learning in a modern computer game engine.
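The abstract describes group utility functions as a generalisation of the team utility function, replacing the sum over characters' performances with an arbitrary function, such as the negative standard deviation used for balancing. A minimal sketch of that idea, with all function names hypothetical and the reward-combination scheme an illustrative assumption rather than the thesis's actual formulation:

```python
import statistics

def team_utility(performances):
    # Team utility: the sum of individual performances (the existing method
    # the thesis generalises).
    return sum(performances)

def group_utility(performances, fn):
    # Group utility: replace the sum with potentially any other function
    # of the characters' performances.
    return fn(performances)

def balancing_utility(performances):
    # Negative (population) standard deviation: maximised (at 0) when all
    # characters perform equally, so it rewards balanced behaviour.
    return -statistics.pstdev(performances)

def combined_reward(own_reward, performances, weight=0.5):
    # Hypothetical combination: the character's main quantitative reward
    # plus a weighted qualitative group term, trading some performance
    # for balance as the abstract describes.
    return own_reward + weight * balancing_utility(performances)
```

For example, `group_utility(perfs, sum)` recovers the team utility function, while `group_utility(perfs, balancing_utility)` penalises unequal performance across the group.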
dc.language.iso: en
dc.publisher: The University of Edinburgh
dc.relation.hasversion: [Bradley and Hayes, 2005a] Bradley, J. and Hayes, G. (2005a). Adapting reinforcement learning for computer games: Using group utility functions. In IEEE Symposium on Computational Intelligence and Games, pages 133–140. Available at http://homepages.inf.ed.ac.uk/s0128829/CIG2005.ps
dc.relation.hasversion: [Bradley and Hayes, 2005b] Bradley, J. and Hayes, G. (2005b). Group utility functions: learning equilibria between groups of agents in computer games by modifying the reinforcement signal. In IEEE Congress on Evolutionary Computation. Available at http://homepages.inf.ed.ac.uk/s0128829/CEC2005.ps
dc.subject: computer games
dc.subject: group utility functions
dc.subject: programming by reward
dc.subject: team utility function
dc.subject: negative standard deviation
dc.title: Reinforcement learning for qualitative group behaviours applied to non-player computer game characters
dc.type: Thesis or Dissertation
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: PhD Doctor of Philosophy


