Agents with a Human Touch: Modeling of Human Rationality in Agent Systems (PhD Thesis)
Will it be possible to create a self-aware, reasoning entity with a capacity for decision making similar to that which we ascribe to human beings?
Modern agent systems, although already deployed in many applications that require intelligence, are not yet ready for applications in which human rationality is usually the only acceptable basis for important decisions in critical or sensitive situations.
This thesis contributes to this area: it introduces a decision-making methodology that addresses the characteristics an agent should have in order to be better trusted with such critical decisions.
The work begins with a study of the philosophical literature (Chapter 2), which reveals that trust is based on emotions and on faith in performance. The study concludes that a trustworthy decision has five main elements: it considers the available options and their likely effects; it predicts how the environment and other agents will react to decisions; it accounts for short- and long-term goals through planning; it accounts for uncertainty and incomplete information; and, finally, it considers emotional factors and their effects. The first four elements address decision making as a product of "beliefs"; the last addresses it as a product of "emotions". A complete discussion of these elements is provided in Section 2.1.
This thesis is divided into two main parts: the first treats trust as a product of beliefs and the second treats trust as a product of emotions.
The first part builds the decision-making methodology on argumentation through a five-step approach. First, the problem situation, representing the actions available to the agent and their likely consequences, is formulated. Next, arguments for performing these actions are constructed by instantiating an argumentation scheme designed to justify actions in terms of the values and goals they promote. These arguments are then subjected to a series of critical questions to identify possible counter-arguments, so that all the options and their weaknesses are identified. Preferences are accommodated by organising the resulting arguments into a Value-Based Argumentation Framework (VAF). The arguments acceptable to an agent are then identified through that agent's ranking of its values, which may differ from agent to agent.

In the second part (Chapters 5 and 6), this methodology is extended to account for emotions. Emotions are generated according to whether other agents relevant to the situation support or frustrate the agent's goals and values; the resulting emotional attitude toward the other agents then influences the ranking of the agent's values and, hence, the decision. In Chapters 4 and 6, the methodology is illustrated through an example study. This example has been implemented and tested in a software program; the experimental data and some screenshots are given in the appendix.
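The audience-dependent acceptance step described above, organising arguments into a VAF and evaluating them against a ranking of values, can be sketched in code. The argument names, values, and attack relation below are illustrative assumptions, not taken from the thesis; the sketch computes the grounded extension of the defeat graph induced by a given value ranking, showing how different rankings lead to different accepted arguments.

```python
# Minimal VAF sketch (illustrative data, not the thesis's implementation).
# Each argument promotes exactly one value; attacks are directed pairs.
arguments = {"A": "safety", "B": "profit", "C": "safety"}
attacks = [("A", "B"), ("B", "C")]

def defeats(attacker, target, ranking):
    """In a VAF, an attack succeeds (defeats) for an audience unless the
    attacked argument's value is strictly preferred to the attacker's."""
    return ranking.index(arguments[attacker]) <= ranking.index(arguments[target])

def grounded(ranking):
    """Grounded extension of the defeat graph: starting from the empty set,
    repeatedly accept every argument all of whose defeaters are themselves
    defeated by an already-accepted argument."""
    defeat = {(a, b) for a, b in attacks if defeats(a, b, ranking)}
    defeaters = {arg: {a for a, b in defeat if b == arg} for arg in arguments}
    accepted = set()
    while True:
        new = {arg for arg in arguments
               if all(any((s, d) in defeat for s in accepted)
                      for d in defeaters[arg])}
        if new == accepted:
            return accepted
        accepted = new

# An audience that ranks safety over profit accepts {A, C}; one that
# ranks profit over safety accepts {A, B} instead.
print(grounded(["safety", "profit"]))
print(grounded(["profit", "safety"]))
```

The point of the sketch is that the attack relation is fixed but the *defeat* relation, and hence the set of acceptable arguments, varies with the audience's value ranking, which is exactly what lets different agents reach different decisions from the same arguments.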