Explainable AI with complete transparency…

We offer a system that makes unbiased and explainable decisions based on previously verified cases. The design is centered around human-interpretable knowledge, resulting in a system that is transparent to its core.

Explainable

We believe that useful AI must be explainable. Our system gives exact explanations for its decisions: each explanation is grounded in the previously verified cases that were used to reach the new decision. Explanations are detailed, easily understood by non-technical users, and can serve as documentation for the decision.
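As an illustration only, a decision grounded in verified cases might look like the following sketch. All names here (Case, Decision, decide) are hypothetical and do not reflect the product's actual API; the point is that the explanation cites the exact verified cases the decision rests on.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A previously verified case: attributes plus the confirmed outcome."""
    case_id: str
    attributes: dict
    outcome: str

@dataclass
class Decision:
    outcome: str
    supporting_cases: list  # the verified cases the decision is based on

    def explanation(self) -> str:
        refs = ", ".join(c.case_id for c in self.supporting_cases)
        return f"Decision '{self.outcome}' is based on verified cases: {refs}"

def decide(new_case: dict, verified: list) -> Decision:
    # Match the new case against verified cases on its attributes.
    matches = [c for c in verified
               if all(c.attributes.get(k) == v for k, v in new_case.items())]
    outcome = matches[0].outcome if matches else "refer to specialist"
    return Decision(outcome, matches)
```

For example, deciding on `{"test": "positive"}` against a verified case `C-17` with the same attribute yields that case's outcome, and the explanation names `C-17` explicitly.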

Knowledge as a tangible asset

The system learns as it processes new cases, building a knowledge base. This knowledge base is human-readable and can be used to analyse the knowledge assets of an organisation.

Unbiased

Unlike machine learning, our system gives users control over the decision process. Users decide exactly how the knowledge base is built and which knowledge is included in decision making. Biased decisions can therefore be easily detected and avoided.
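One minimal, hypothetical sketch of this kind of control: if users can state which attributes may enter decision making, a potentially biasing attribute can simply be excluded. The attribute names and the `decision_view` helper are illustrative assumptions, not part of the product.

```python
# Illustrative policy: attributes listed here never reach the decision process.
EXCLUDED_ATTRIBUTES = {"gender"}

def decision_view(case: dict) -> dict:
    """Return only the attributes allowed into decision making."""
    return {k: v for k, v in case.items() if k not in EXCLUDED_ATTRIBUTES}

case = {"age": 54, "gender": "f", "test": "positive"}
decision_view(case)  # {'age': 54, 'test': 'positive'}
```

Because the exclusion is an explicit, human-readable rule rather than an opaque learned weight, it can be audited at any time.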

Easy to maintain

Mentor assists the specialist in defining new rules by focusing on the characteristics that are likely to be most relevant. Real-world use has shown that in a medical domain it takes on average 1-2 minutes to add a new rule using the methodology on which Mentor is based.
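To give a flavour of how an assistant might surface the most relevant characteristics, the sketch below compares a new case with the existing case behind the current conclusion and lists the attributes on which they differ, natural candidates for a new rule's conditions. This is an assumed illustration, not a description of Mentor's actual method.

```python
def differing_attributes(new_case: dict, existing: dict) -> dict:
    """Attributes where the two cases disagree, as (existing, new) pairs.

    These are the characteristics a specialist would most likely use to
    distinguish the new case when defining a new rule.
    """
    keys = set(new_case) | set(existing)
    return {k: (existing.get(k), new_case.get(k))
            for k in keys
            if existing.get(k) != new_case.get(k)}

existing = {"fever": True, "rash": False}
new_case = {"fever": True, "rash": True}
differing_attributes(new_case, existing)  # {'rash': (False, True)}
```

By narrowing the specialist's attention to a handful of differing attributes instead of the full case description, this style of assistance is what makes rule entry fast in practice.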