Battling Confirmation Bias When Training TAR
The full original article can be read here >. The Fair Use excerpts below focus on editor and member commentary and do not infringe on the source's copyright.

Researchers Say They've Figured Out Why People Reject Science, And It's Not Ignorance

Author: Fiona Macdonald – Science Alert

…The issue is that when it comes to facts, people think more like lawyers than scientists, which means they 'cherry pick' the facts and studies that back up what they already believe to be true…
…So if someone doesn't think humans are causing climate change, they will ignore the hundreds of studies that support that conclusion, but latch onto the one study they can find that casts doubt on this view. This is also known as confirmation bias, a type of cognitive bias…

Open Source Link >

Editor Comment:

In this age of division and widely divergent views of reality, I do a lot of reading trying to understand how so many of us fall down internet rabbit holes into such extreme perspectives. This short article and the prior one on cognitive bias resonated with me on an issue I see frequently when brought in to analyze or support a large TAR review, or even search term development. Once upon a time I managed a review for a brilliant attorney who insisted we have a few reviewers pretend to be the DOJ. This classic game theory tactic turned up items the primary review team had missed and allowed defense counsel to address them proactively. We talk about initial random sample sets to determine rough relevance richness and QC reviews of the final non-relevant set. But all of this is conducted by reviewers subject to confirmation bias. They can become blind to the ‘unknown unknowns’ because they are not seeking them. Maybe we need to incorporate a parallel or sequential ‘black hat’ team charged with taking on the adversarial retrieval task?
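As a minimal sketch of the richness-sampling step mentioned above (not tied to any review platform, and with all function names and sample figures hypothetical), the point estimate from an initial random sample can be paired with a Wilson score confidence interval so the team reports a range rather than a single number:

```python
import math

def richness_estimate(relevant: int, sample_size: int, z: float = 1.96):
    """Estimate collection richness (prevalence of relevant documents)
    from a simple random sample, with a Wilson score confidence
    interval (95% by default, via z = 1.96)."""
    p = relevant / sample_size  # raw point estimate
    denom = 1 + z**2 / sample_size
    center = (p + z**2 / (2 * sample_size)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / sample_size + z**2 / (4 * sample_size**2)
    )
    return p, max(0.0, center - margin), min(1.0, center + margin)

# Hypothetical example: 37 relevant documents found in a
# 1,000-document random sample drawn from the collection.
point, low, high = richness_estimate(37, 1000)
```

The same interval math applies to the QC sample of the final non-relevant set: a nonzero lower bound there is a signal that relevant material is leaking through, whatever the point estimate says.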
