Abstract:
|
In this report we describe an argument-based model, ProCLAIM, for monitoring agents' decisions in safety-critical environments. The model is intended, on the one hand, to prevent agents from undertaking decisions that do not comply with the domain's established guidelines and, on the other hand, to allow agents, by arguing over their intended decisions, to exceptionally undertake decisions that violate the guidelines when the arguments given in support of those decisions are accepted. Furthermore, ProCLAIM defines a Case-Based Reasoning component for revising the guideline knowledge that controls the agents' decisions. Namely, the arguments given by the agents to support their decisions are stored in a case base and eventually reused to revise the Guideline Knowledge, so that it accepts new decisions shown to be successful despite violating the guidelines and, at the same time, rejects decisions that, despite complying with the guidelines, have proven unsuccessful. We believe, and aim to show in this report, that ProCLAIM provides a number of interesting theoretical innovations in artificial intelligence, motivated by valuable practical applications. |