Do You Trust Your PC/TAR?
Full original news can be read here >. The fair-use excerpt snippets below focus on editor/member commentary and do not infringe on the source's copyright.

Four Principles of Explainable Artificial Intelligence

Author: NIST Publication – P. Jonathan Phillips, Carina A. Hahn, Peter C. Fontana, David A. Broniatowski & Mark A. Przybocki

…it can be assumed that the failure to articulate the rationale for an answer can affect the level of trust users will grant that system…
Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.
Meaningful: Systems provide explanations that are understandable to individual users.
Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.
Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output…

Open Source Link >

Editor Comment:

I am not recommending that you read this draft academic paper. It is pretty dry and focused mainly on societal trust of AI systems. Rob Robinson quotes the full introduction section, which details the four principles above before diving into academic jargon. Lack of trust and 'explanation accuracy' seem to be the primary adoption barriers in PC/TAR relevance determination scenarios. Counsel is fine with visualized clusters or even CAL to prioritize the most relevant documents first, but they cling to 'eyes on every document' arguments. Instead of trying to educate counsel in esoteric analytics, we need PC/TAR workflows that incorporate these principles to create transparent results as part of 'Self-Explainable Models'.
