Author Topic: Reasoning and debating (a summary)  (Read 14216 times)

Offline Guijarro

Reasoning and debating (a summary)
« on: December 20, 2013, 02:02:42 PM »
Summary of Mercier, Hugo & Dan Sperber (2011): “Why do humans reason?” (manuscript accepted for publication in Behavioral and Brain Sciences).

Reasoning contributes to the effectiveness and reliability of communication by allowing communicators to argue for their claims and by allowing addressees to assess these arguments. It thus increases both the quantity and the epistemic quality of the information humans are able to share.
We view the evolution of reasoning as linked to that of human communication. Reasoning allows communicators to produce arguments in order to convince addressees who would not accept what they say on trust; it allows addressees to evaluate the soundness of these arguments and to accept valuable information that they would otherwise be suspicious of. Thus, thanks to reasoning, human communication is made more reliable and more potent. From the hypothesis that the main function of reasoning is argumentative, we derived a number of predictions that, we tried to show, are confirmed by existing evidence. True, most of these predictions can be derived from other theories. We would argue, however, that the argumentative hypothesis provides a more principled account of the empirical evidence (in the case of the confirmation bias, for instance). In our discussion of motivated reasoning and of reason-based choice, not only did we converge in our predictions with existing theories, we also extensively borrowed from them. Even in these cases, however, we would argue that our approach has the distinctive advantage of providing clear answers to the why-questions: Why do humans have a confirmation bias? Why do they engage in motivated reasoning? Why do they base their decisions on the availability of justificatory reasons? Moreover, the argumentative theory of reasoning offers a unique integrative perspective: it explains wide swaths of the psychological literature within a single overarching framework.
Some of the evidence reviewed here shows not only that reasoning falls short of reliably delivering rational beliefs and rational decisions, but also that in a variety of cases it may even be detrimental to rationality. Reasoning can lead to poor outcomes not because humans are bad at it but because they systematically look for arguments to justify their beliefs or their actions. The argumentative theory, however, puts such well-known demonstrations of ‘irrationality’ in a novel perspective. Human reasoning is not a profoundly flawed general mechanism; it is a remarkably efficient specialized device adapted to a certain type of social and cognitive interaction at which it excels.
Even from a strictly epistemic point of view, the argumentative theory of reasoning does not paint a wholly disheartening picture. It maintains that there is an asymmetry between the production of arguments, which involves an intrinsic bias in favour of the opinions or decisions of the arguer whether or not they are sound, and the evaluation of arguments, which aims at differentiating good arguments from bad ones and hence genuine information from misinformation. This asymmetry is often obscured in a debate situation (or in a situation where a debate is anticipated). People who have an opinion to defend don't really evaluate the arguments of their interlocutors in a search for genuine information, but rather consider them from the start as counter-arguments to be rebutted.
Still, as shown by the evidence reviewed in section 2, people are good at assessing arguments, and are quite able to do so in an unbiased way, provided they have no particular axe to grind. In group reasoning experiments where participants share an interest in discovering the right answer, it has been shown that truth wins (Laughlin & Ellis, 1986; Moshman & Geil, 1998). While participants in collective experimental tasks typically produce arguments in favour of a variety of hypotheses, most or even all of which are false, they concur in recognizing sound arguments. Since these tasks have a demonstrably valid solution, truth does indeed win. If we generalize to problems that do not have a provable solution, we should at least expect good arguments to win, even if this is not always sufficient for truth to win (and in section 2 we have reviewed evidence that this is indeed the case).
This may sound trivial, but it is not. It demonstrates that, contrary to common bleak assessments of human reasoning abilities, people are quite capable of reasoning in an unbiased manner, at least when they are evaluating arguments rather than producing them, and when they are after the truth rather than trying to win a debate.
Couldn't the same type of situation that favours sound evaluation favour comparable soundness in the production of arguments? Note, first, that situations where a shared interest in truth leads participants in a group task to evaluate arguments correctly are not enough to make them produce correct arguments. In these group tasks, individual participants come up with and propose to the group the same inappropriate answers that they come up with in individual testing. The group success is due first and foremost to the filtering of a variety of solutions, achieved through evaluation. When different answers are initially proposed and all of them are incorrect, then all of them are likely to be rejected, and wholly or partly new hypotheses are likely to be proposed and filtered in turn, thus explaining how groups may do better than any of their individual members.
Individuals thinking on their own without benefiting from the input of others can only assess their own hypotheses, but in doing so, they are both judge and party, or rather judge and advocate, and this is not an optimal stance for pursuing the truth. Wouldn't it be possible, in principle, for an individual to decide to generate a variety of hypotheses in answer to some question and then evaluate them one by one, on the model of Sherlock Holmes? What makes Holmes such a fascinating character is precisely his preternatural turn of mind operating in a world rigged by Conan Doyle, where what should be inductive problems in fact have deductive solutions. More realistically, individuals may develop some limited ability to distance themselves from their own opinion, to consider alternatives and thereby become more objective. Presumably this is what the 10% or so of people who pass the standard Wason selection task do. But this is an acquired skill, and involves exercising some imperfect control over a natural disposition that spontaneously pulls in a different direction.   
Here, one might be tempted to point out that, after all, reasoning is responsible for some of the greatest achievements of human thought in the epistemic and moral domains. This is undeniably true, but the achievements involved are all collective and result from interactions over many generations (on the importance of social interactions for creativity, including scientific creativity, see Csikszentmihalyi & Sawyer, 1995; K. Dunbar, 1997; John-Steiner, 2000; T. Okada & Simon, 1997). The whole scientific enterprise has always been structured around groups, from the Lincean Academy down to the Large Hadron Collider. In the moral domain, achievements such as the abolition of slavery are the outcome of intense public arguments. We have pointed out that, in group settings, reasoning biases can become a positive force and contribute to a kind of division of cognitive labour. Still, to excel in such groups it may be necessary to anticipate how one’s own arguments might be evaluated by others, and to adjust these arguments accordingly. Showing one’s ability to anticipate objections may be a valuable culturally acquired skill, as in medieval disputationes (see Novaes, 2005). By anticipating objections, one may even be able to recognize flaws in one’s own hypotheses and go on to revise them. We have suggested that this depends on a painstakingly acquired ability to exert some limited control over one's own biases. Even among scientists, this ability may be uncommon, but those who have it may have a great influence on the development of scientific ideas.
It would be a mistake, however, to treat their highly visible, almost freakish, contributions as paradigmatic examples of human reasoning. In most discussions, rather than looking for flaws in our own arguments, it is easier to let the other person find them, and only then adjust our arguments if necessary.
In general, one should be cautious about using the striking accomplishments of reasoning as proof of its overall efficiency, since its failures are often much less visible (see Ormerod, 2005; Taleb, 2007).

Epistemic success may depend to a significant extent on what philosophers have dubbed ‘epistemic luck’ (Pritchard, 2005), that is, chance factors that happen to put one on the right track. When one happens to be on the right track and ‘more right’ than one could initially have guessed, some of the distorting effects of motivated reasoning and polarization may turn into blessings. For instance, motivated reasoning may have pushed Darwin to focus obsessively on the idea of natural selection and explore all possible supporting arguments and consequences. But for one Darwin, how many Paleys?
To conclude, we note that the argumentative theory of reasoning should be congenial to those of us who enjoy spending endless hours debating ideas—but this, of course, is not an argument for (or against) the theory.