Norms regulate the coordination and behavior of software agents in multiagent communities. Adopted norms affect an agent’s goals, plans, and actions. Although norms generate new goals that do not originate from an agent’s own objectives, they orient how the agent should act in a society. Consequently, an agent entering a new society needs a mechanism to correctly detect and adopt its norms. Researchers have proposed mechanisms that enable agents to detect norms; however, their approaches allow an agent to detect only one norm per event. We argue that such approaches do not support an agent’s decision making when more than one set of norms is detected in an event. To solve this problem, we introduce the concept of norm trust to help agents decide which detected norms are credible in a new environment. We propose a conceptual norm trust framework that infers trust from the filtering factors of adoption ratio, authority, reputation, norm salience, and adoption risk to establish a trust value for each norm detected in an event. The agent then uses this value to decide whether to emulate, adopt, or ignore the detected norms.
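To make the decision process concrete, the following is a minimal sketch, not the authors' specification: it assumes the five filtering factors are normalized to [0, 1] and combined with hypothetical weights into a single trust value, which is then mapped to one of the three actions via assumed thresholds.

```python
# Illustrative sketch only: weights and thresholds are hypothetical
# assumptions, not values from the proposed framework.

def norm_trust(adoption_ratio, authority, reputation, salience, adoption_risk,
               weights=(0.25, 0.2, 0.2, 0.2, 0.15)):
    """Combine the five filtering factors (each in [0, 1]) into one
    trust value; a higher adoption risk lowers trust."""
    w_ar, w_au, w_re, w_sa, w_ri = weights
    return (w_ar * adoption_ratio + w_au * authority + w_re * reputation
            + w_sa * salience + w_ri * (1.0 - adoption_risk))

def decide(trust, adopt_at=0.7, emulate_at=0.4):
    """Map a trust value to one of the three actions from the abstract."""
    if trust >= adopt_at:
        return "adopt"
    if trust >= emulate_at:
        return "emulate"
    return "ignore"

# Example: a widely adopted, low-risk norm backed by a reputable authority.
t = norm_trust(0.9, 0.8, 0.85, 0.7, 0.1)
print(decide(t))  # prints "adopt"
```

A weighted sum is only one possible aggregation; the framework could equally rank detected norms by trust value and act on the most credible one when several norms compete in the same event.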