Command agents with human-like decision making strategies


dc.contributor.advisor Sastry, V V S S
dc.contributor.author Raza, M
dc.date.accessioned 2010-02-23T17:03:06Z
dc.date.available 2010-02-23T17:03:06Z
dc.date.issued 2010-02-23T17:03:06Z
dc.identifier.uri http://hdl.handle.net/1826/4271
dc.description.abstract Human behaviour representation in military simulations is not sufficiently realistic, especially the decision making by synthetic military commanders. The decision-making process lacks a realistic representation of the variability, flexibility, and adaptability exhibited by a single entity across various episodes. It is hypothesised that a widely accepted naturalistic decision model, suited to military and other domains characterised by high stakes, time stress, and dynamic, uncertain environments, and built on an equally well-tested cognitive architecture, can address some of these deficiencies. We have therefore developed a computer implementation of the Recognition Primed Decision (RPD) model using the Soar cognitive architecture, referred to in this report as the RPD-Soar agent. Because the RPD-Soar agent can mentally simulate applicable courses of action, it can handle new situations effectively using its prior knowledge. The proposed implementation is evaluated using prototypical scenarios arising in command decision making in tactical situations. These experiments test the RPD-Soar agent's ability to recognise a situation in a changing context, to change its decision-making strategy with experience, to exhibit behavioural variability within and across individuals, and to learn. The results clearly demonstrate the model's ability to improve realism in representing human decision-making behaviour: it recognises a situation in a changing context, handles new situations effectively, and exhibits flexibility in the decision-making process, variability within and across individuals, and adaptability. The observed variability in the implemented model arises from the agent's ability to select a course of action from among reasonable but sometimes sub-optimal choices. The RPD-Soar agent adapts through the 'chunking' process, a form of explanation-based learning provided by the Soar architecture; adaptation enhances the agent's experience and thus improves its efficiency in representing expertise. en_UK
dc.language.iso en en_UK
dc.rights © Cranfield University 2009. All rights reserved. No part of this publication may be reproduced without the written permission of the copyright owner. en_UK
dc.title Command agents with human-like decision making strategies en_UK
dc.type Thesis or dissertation en_UK
dc.type.qualificationlevel Doctoral en_UK
dc.type.qualificationname PhD en_UK
dc.publisher.department Department of Engineering Systems and Management en_UK
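The abstract describes recognition-primed decision making: the agent recognises a situation from stored experience, retrieves an associated course of action, and mentally simulates it before committing. The Python sketch below is a minimal, hypothetical illustration of that loop only; none of the names or structures come from the thesis or from the Soar architecture itself, and an actual RPD-Soar agent would be written as Soar production rules rather than Python classes.

    # Illustrative sketch of a recognition-primed decision (RPD) loop.
    # All names (Experience, RPDAgent, recognize, decide, ...) are hypothetical.

    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    @dataclass
    class Experience:
        """One stored episode: the cues that identify it and a proposed action."""
        cues: frozenset
        course_of_action: str
        # Simple predicate standing in for mental simulation of the action.
        likely_to_succeed: Callable[[frozenset], bool]

    @dataclass
    class RPDAgent:
        experiences: List[Experience] = field(default_factory=list)

        def recognize(self, situation: frozenset) -> List[Experience]:
            """Rank stored experiences by cue overlap with the current situation."""
            scored = [(len(e.cues & situation), e) for e in self.experiences]
            scored = [(s, e) for s, e in scored if s > 0]
            return [e for _, e in sorted(scored, key=lambda pair: -pair[0])]

        def decide(self, situation: frozenset) -> Optional[str]:
            """Evaluate candidate courses of action one at a time (not side by
            side), accepting the first that survives mental simulation."""
            for exp in self.recognize(situation):
                if exp.likely_to_succeed(situation):
                    return exp.course_of_action
            return None  # no recognised course of action is workable

    # Usage: an agent with two stored episodes facing a familiar situation.
    agent = RPDAgent(experiences=[
        Experience(frozenset({"enemy_sighted", "open_ground"}), "flank_left",
                   lambda s: "cover_available" in s),
        Experience(frozenset({"enemy_sighted"}), "hold_and_report",
                   lambda s: True),
    ])
    # "flank_left" is recognised first but fails simulation (no cover),
    # so the agent falls back to "hold_and_report".
    print(agent.decide(frozenset({"enemy_sighted", "open_ground"})))

The key design point the sketch tries to convey is that candidate actions are considered serially against experience and accepted as soon as one is judged workable, rather than being compared exhaustively; the thesis attributes the agent's variability and adaptability to this recognition-and-simulation style of choice together with Soar's chunking.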

