Distributed opportunistic argumentation guided by autonomous agent interaction
Martin, Paul William
Within a distributed system, autonomous agents may find it necessary to cooperate in order to achieve their objectives. Interaction protocols provide standard frameworks within which to conduct common classes of interaction, but they are only useful when the agents using them share a common interpretation of the constraints those protocols impose. In open systems, where there are no system-wide objectives and components are contributed from a variety of sources, this is difficult to ensure. An agent within a sufficiently complex environment will find it necessary to draw inferences from information sources of varying integrity and completeness. Given flawed or incomplete information, an agent may need to resort to nonmonotonic reasoning in order to make concrete decisions within limited windows of opportunity. This can be expected to create inconsistencies in the joint beliefs of agents, which can only be repaired by dialogue between peers. Verifying and repairing all possible sources of inconsistency is impractical for any sizable body of inference, however; any belief revision must therefore be prioritised. In this thesis, we introduce a mechanism by which agents can perform opportunistic argumentation during dialogue in order to carry out distributed belief revision. An interaction portrayal uses the protocol for a given interaction to identify the logical constraints that must be resolved as the interaction unfolds. It then compares and reconciles the expectations of agents prior to the resolution of those constraints by generating and maintaining a system of arguments. The composition and scope of arguments are restricted so as to minimise information exchange whilst still trying to ensure that all available admissible viewpoints are adequately represented immediately prior to any decision.
This serves both to make interaction more robust (by allowing agents to make decisions based on the distributed wisdom of their peer group without being explicitly directed by a protocol) and to reconcile beliefs in a prioritised fashion (by focusing only on those beliefs which directly influence the outcome of an interaction, as determined by its protocol).
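The "admissible viewpoints" mentioned above are most naturally read in the sense of abstract argumentation (Dung-style frameworks), where a set of arguments is admissible if it is conflict-free and defends each of its members against all attackers. As a rough illustration of that notion only, not of the thesis's own mechanism, the following brute-force sketch enumerates the admissible sets of a small framework; the representation (argument names as strings, attacks as pairs) and the function name are illustrative assumptions:

```python
from itertools import combinations

def admissible_sets(arguments, attacks):
    """Enumerate the admissible sets of an abstract argumentation
    framework, where `attacks` is a set of (attacker, target) pairs."""
    def conflict_free(s):
        # No member of s attacks another member of s.
        return not any((a, b) in attacks for a in s for b in s)

    def defends(s, arg):
        # Every attacker of arg is itself attacked by some member of s.
        attackers = {a for (a, t) in attacks if t == arg}
        return all(any((d, a) in attacks for d in s) for a in attackers)

    result = []
    for r in range(len(arguments) + 1):
        for combo in combinations(sorted(arguments), r):
            s = set(combo)
            if conflict_free(s) and all(defends(s, arg) for arg in s):
                result.append(s)
    return result

# Example: a attacks b, b attacks c.
# The admissible sets are {}, {a}, and {a, c}: c alone cannot
# defend itself against b, but a defends it by attacking b.
print(admissible_sets({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

Enumerating subsets is exponential in the number of arguments; it is shown here purely to pin down the semantics, whereas the thesis is concerned with keeping the exchanged argument system small enough that such questions remain tractable in practice.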