Introducing confidence into multi-agent systems gives agents a measure of control over their decisions and improves the decision-making process in such systems. Consequently, modeling agent confidence is important in heterogeneous agent communities, and the inability to detect an agent's confidence can lead to inaccurate decisions. Several weaknesses have been identified in current trust and confidence models for multi-agent systems, which propose that an agent's trust depends on its reputation, past experience, and observations of its behavior. This paper presents an alternative approach to agent-based confidence modeling. It first integrates two confidence requirements, namely trust and certainty. To strengthen the model further, we include evidence as an additional requirement by which an agent's trust and certainty can be verified. The paper establishes a bijection between the trust, certainty, and evidence spaces. The modeling mechanism eliminates untrusted opinions, since a high certainty level may not be valuable in every state. The proposed technique also separates the global confidence scheme from the local confidence scheme, providing greater reliability in confidence detection.
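The idea of combining trust, certainty, and evidence into a confidence score, filtering out untrusted opinions, and separating local from global confidence could be sketched as follows. This is a minimal illustration, not the paper's actual formalism: the `Opinion` structure, the product combination of trust and certainty, and the threshold values are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    trust: float      # trust in the reporting agent, in [0, 1]
    certainty: float  # the agent's certainty in its own report, in [0, 1]
    evidence: int     # number of supporting observations backing the report

TRUST_THRESHOLD = 0.5   # hypothetical cutoff: untrusted opinions are eliminated
MIN_EVIDENCE = 1        # hypothetical minimum evidence needed to verify an opinion

def local_confidence(op: Opinion) -> float:
    """Confidence in a single opinion: trust weighted by certainty (assumed combination)."""
    return op.trust * op.certainty

def global_confidence(opinions: list[Opinion]) -> float:
    """Aggregate confidence across agents, discarding untrusted or unverified opinions,
    so that high certainty alone cannot dominate the global score."""
    kept = [op for op in opinions
            if op.trust >= TRUST_THRESHOLD and op.evidence >= MIN_EVIDENCE]
    if not kept:
        return 0.0
    return sum(local_confidence(op) for op in kept) / len(kept)

# A highly certain but untrusted, evidence-free opinion is eliminated:
ops = [Opinion(0.9, 0.8, 3), Opinion(0.2, 0.95, 0), Opinion(0.7, 0.6, 2)]
print(round(global_confidence(ops), 2))  # → 0.57
```

Keeping `local_confidence` and `global_confidence` as separate functions mirrors the paper's separation of the local and global confidence schemes: an agent can score an individual opinion without committing to the community-wide aggregate.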