Mathematics and its ideologies (an anthropologist’s observations)


Jan Blommaert 

What is science? The question has been debated in tons of papers written over about two centuries and resulting in widely different views. Most people practicing science, consequently, prefer a rather prudent answer to the question, leaving some space for views of science that do not necessarily coincide with their own, but at least appear to share some of its basic features – the assumption, for instance, that knowledge is scientific when it has been constructed by means of methodologies that are shared intersubjectively by a community of scientific peers. The peer-group sharedness of such methodologies enables scientific knowledge to be circulated for critical inspection by these peers; and the use of such ratified methodologies and the principle of peer-group critique together form the “discipline” – the idea of science as disciplined knowledge construction.

There are, however, scientists who have no patience for such delicate musings and take a much narrower and more doctrinaire view of science and its limits. I already knew that – everyone, I suppose, has colleagues who believe that science is what they do, and that’s it. But a small recent reading offensive on the broad social science tradition called Rational Choice (henceforth RC) made me understand that such colleagues are only a minor nuisance compared to hardcore RC believers. For the likes of Arrow, Riker, Buchanan and their disciples, now spanning three generations, “scientific” equals “mathematical”, period. Whatever is not expressed mathematically cannot be scientific; even worse, it is just “intuition”, “metaphysics” or “normativity”. And in that sense it is even dangerous: since “bad” science operates from such intuitive, metaphysical or normative assumptions, it sells ideology under the veil of objectivity and will open the door to totalitarian oppression. What follows is a critique of mathematics as used in RC.

*****

Sonja Amadae (2003), in a book I enjoyed reading, tells the story of how RC emerged out of Cold War concerns in the US. It was the RAND Corporation that sought, at the end of World War II and the beginning of the nuclear era, to create a new scientific paradigm that would satisfy two major ambitions. First, it should provide an objective, scientific grounding for decision-making in the nuclear era, when an ill-considered action by a soldier or a politician could provoke the end of the world as we knew it. Second, it should also provide a scientific basis for refuting the ideological (“scientific”) foundations of communism, and so become the scientific bedrock for liberal capitalist democracy and the “proof” of its superiority. This meant nothing less than a new political science, one that had its basis in pure “rational” objectivity rather than in partisan, “irrational” a prioris. Mathematics rose to the challenge and would provide the answer.

Central to the problem facing those intent on constructing such a new political science was what Durkheim called “the social fact” – the fact that social phenomena cannot be reduced to individual actions, developments or concerns – or, converted into a political science jargon, the idea of the “public” or “masses” performing collective action driven by collective interests. This idea was of course central to Marxism, but also pervaded mainstream social and political science, including the (then largely US-based) Frankfurt School and the work of influential American thinkers such as Dewey. Doing away with it involved a shift in the fundamental imagery of human beings and social life, henceforth revolving around absolute (methodological) individualism and competitiveness modeled on economic transactions in a “free market” by people driven exclusively by self-interest. Amadae describes how this shift was partly driven by a desire for technocratic government performed by “a supposedly ‘objective’ technocratic elite” free from the whims and idiosyncrasies of elected officials (2003: 31). These technocrats should use abstract models – read mathematical models – of “systems analysis”, and RAND did more than its share developing them. “Rational management” quickly became the key term in the newly reorganized US administration, and the term stood for the widespread use of abstract policy and decision-making models.

These models, as I said, involved a radically different image of humans and their social actions. The models, thus, did not just bring a new level of efficiency to policy making, they reformulated its ideological foundations. And Kenneth Arrow provided the key for that with his so-called “impossibility theorem”, published in his Social Choice and Individual Values (1951; I use the 1963 edition in what follows). Arrow’s theorem quickly became the basis for thousands of studies in various disciplines, and a weapon of mass political destruction used against the Cold War enemies of the West.

Arrow opens his book with a question about the two (in his view) fundamental modes of social choice: voting (for political decisions) and market transactions (for economic decisions). Both modes are seemingly collective, and thus opposed to dictatorship and cultural convention, where a single individual determines the choices. Single individuals, Arrow asserts, can be rational in their choices; but “[c]an such consistency be attributed to collective modes of choice, where the wills of many people are involved?” (1963:2). He announces that only the formal aspects of this issue will be discussed. But look what happens.

Using set-theoretical tools and starting from a hypothetical instance where two, then three perfectly rational individuals need to reach agreement, observing a number of criteria, he demonstrates that, logically, such a rational collective agreement is impossible. Even more: in a smart and surely premeditated lexical move, in which one of Arrow’s criteria was “non-dictatorship” (i.e. no collective choice should be based on the preferences of one individual), Arrow demonstrated that the only possible “collective” choices would in fact be dictatorial ones. A political system, in other words, based on the notion of the common will or common good, would of necessity be a dictatorship. In the age of Joseph Stalin, this message was hard to misunderstand.
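The logical core of this demonstration is easiest to see in its classic special case, Condorcet’s paradox, which can be sketched in a few lines of code. (The three voters and their rankings below are hypothetical, chosen only to produce the cycle; Arrow’s theorem generalizes the observation to any aggregation rule satisfying his criteria.)

```python
# Condorcet's paradox: three individually rational (transitive) voters,
# yet pairwise majority voting produces a cyclic "collective" preference.

# Each voter's ranking of options A, B, C, best first (hypothetical data)
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank option x above option y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# A beats B, B beats C, and C beats A -- a cycle, so no consistent
# collective ranking exists even though every voter is consistent.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```

Each voter’s own ranking is perfectly transitive; the inconsistency appears only at the aggregate level, which is exactly the lever of the “impossibility” argument.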

And he elaborates this, then, in about a hundred pages of prose, of which the following two fragments can serve as an illustration. (I shall provide them as visual images, because I am about ready to embark on my own little analysis, drawn from contemporary semiotic anthropology.)


Fig. 1: From p. 90, Arrow 1963.


Fig. 2: From p. 79, Arrow 1963.

The prose on these pages became epochal: in it, one read the undeniable proof that collective rational social action was impossible, unless as a thinly veiled dictatorship – a death blow to Marxism of course, but also the definitive end of Durkheim’s “social fact” – and that basing policy on such a collective rationality (as in welfare policy) was bound to be futile. This was now objectively, scientifically proven fact, established by the unimpeachable rigor of mathematical logic, which Arrow and his disciples believed could be applied to any aspect of reality.

Arrow, we saw, mentioned the limitations of his inquiry; evidently, he also used several assumptions. Amadae (2003: 84) lists four of them:

“that science is objective; that it yields universal laws; that reason is not culturally relative; and that the individuals’ preferences are both inviolable and incomparable”.

The first three assumptions touch on his conception of science; in other words, they describe his belief in what mathematical methods do. I will return to them below. The fourth assumption is probably one of the most radical formulations of Methodological Individualism (henceforth MI). MI is the label attached to the theory complex in which every human activity is in fine reduced to individual interests, motives, concerns and decisions. In the case of Arrow and his followers, MI leads to the assumption that “society” is merely an aggregate of individuals. It is clear that this MI assumption – an ideological one, in fact a highly specific ideology of the nature of human beings and their social actions – underlies the “proof”, makes it circular, and from an anthropological viewpoint frankly ridiculous, certainly when each of such individuals is a perfectly rational actor who

“will always pursue his advantage, however he defines it, no matter what the circumstances; concepts of duty and responsibility are alien to the instrumental agent pursuing his goals” (Amadae 2003: 272)

Note that Arrow does not allow comparison between individuals (he will do so, grudgingly and conditionally, in 1977 in response to Rawls’ discussion of justice: Amadae 2003: 270). This is important in three ways. One: it is a key motif in his “objective” approach, in which any normative judgment (e.g. a value judgment about preferences of individuals) needs to be excluded from the analysis, because any such judgment would bring in “irrational” elements and open the door to totalitarian policy options. Two: it thus underscores and constructs the case for mathematics as a method, about which more below. And three: it also provides a second-order ideological argument in favor of Man-the-individualist, for if individuals cannot be scientifically compared, they surely cannot be scientifically grouped into collectives.

And so, on the basis of a mathematical “proof” grounded in a set of highly questionable assumptions and operating on an entirely imaginary case, Arrow decided that society – the real one – is made up of a large number of individuals bearing remarkable similarities to Mr Spock. And this, then, was seen as the definitive scientific argument against Marxism, against the Durkheimian social fact, against the welfare state, socialism and communism, and in favor of liberal democracy and free market economics. It is, carefully considered, a simple ideological propaganda treatise covered up by the visual emblems of mathematics-as-objective-science. The assumptions it takes on board as axiomatic givens constitute its ideological core, the mathematical “proof” its discourse, and both are dialectically interacting. His assumptions contain everything he claims to reject: they are profoundly normative, idealistic, and metaphysical. Every form of subjectivity becomes objective as long as it can be formulated mathematically.

The fact that his “impossibility theorem” is, till today, highly influential among people claiming to do social science, is mysterious, certainly given the limitations specified by Arrow himself and the questionable nature of the assumptions he used – the most questionable of which is that of universality, that mathematics could be used to say something sensible on the nature of humans and their societies. The fact that these people often also appear to firmly believe that Arrow’s formal modeling of social reality, with its multitude of Mr Spocks, is a pretty accurate description of social reality, is perplexing, certainly knowing that this mathematical exercise was (and is) taken, along with its overtly ideological assumptions, to be simple social and political fact (observable or not). Notably the MI postulate of individuals behaving (or being) like entirely sovereign and unaffected consumers in a free market of political choices, “proven” by Arrow and turned into a factual (and normative) definition, leads Amadae (2003: 107) to conclude “that Arrow’s set-theoretical definition of citizens’ sovereignty is one of the least philosophically examined concepts in the history of political theory”. (To Arrow’s credit, he was ready to revise this assumption in later years; Richard Thaler (2015: 162) quotes him saying “We have the curious situation that scientific analysis imputes scientific behavior to its subjects”). Nonetheless, this definition promptly and effectively eliminated a number of items from the purview of new political science: the public sphere, the common good, and even society as such – Arrovians would use the simple argument that since society was not human (read: not individual and rational), it could not be seen as an actor in its own right. Margaret Thatcher, decades later, agreed.

Arrow and his followers set new standards of political debate, arguing that political issues (think of social welfare) were not “real” if they didn’t stand the test of logical analysis. Unless facts agreed with mathematical coherence (as shown in Fig. 2 above), they were not proven facts; mathematics became the standard for defining reality, and the phrase “theoretically impossible” became synonymous with impossible in reality, separating fact from fiction. I find this unbelievable. But the point becomes slightly more understandable when we broaden the discussion a bit and examine more closely the particular role of mathematics in all of this. And here, I turn to semiotic anthropology.

******

My modest reading offensive also brought Itzhak Gilboa’s Rational Choice (2010) to my table. Gilboa – a third-generation RC scholar with basic training in mathematics – offers us a view of what I prefer to see as the ideology of mathematics in all its splendor and naiveté. Before I review his opinions, I hasten to add that Gilboa is quite critical of radical interpretations of Arrovian choice, including Game Theory, admitting that the complexity of real cases often defies the elegance of established theory, and that we should “keep in mind that even theoretically, we don’t have magic solutions” (2010: 85). Yet he declares himself a full-blown adherent of RC as a “paradigm, a system of thought, a way of organizing the world in our minds” (2010: 9). And this paradigm is encoded in mathematical language.

Gilboa expresses an unquestioned faith in mathematics, and he gives several reasons for this.

  1. Accuracy: Mathematics is believed to afford the most accurate way of formulating arguments. “The more inaccurate our theories are, and the more we rely on intuition and qualitative arguments, the more important is mathematical analysis, which allows us to view theories in more than one way” (20). Theories not stated in mathematical terms, thus, are suggested not to allow more than one way of viewing. Too bad for Darwin.
  2. Rigor: Mathematics brings order to chaos. Such chaos is an effect of “intuitive reasoning” (29). Thus, mathematical formulations are rigorous, ordering modes of expressing elaborate conglomerates of facts, not prone to misunderstanding. They form the theoretical tools of research, bringing clear and unambiguous structure to fields of knowledge in ways not offered by “intuitive reasoning”. The latter is a curious category term, frequently used by Gilboa to describe, essentially, any form of knowledge construction that cannot yet be expressed in mathematical language.
  3. Superiority. This follows from (1) and (2). There is mathematics and there is the rest. The rest is muddled and merely serves to test the mathematical theory. Thus (and note the evolutionary discourse here, marked in italics), when a mathematical theoretical model is thrown into “more elaborate research”, such research may prove to be “too complicated to carry out, and we will only make do with intuitive reasoning. In this case we try to focus on the insights that we feel we understand well enough to be able to explain verbally, and independently of the specific mathematical model we started out with” (29). Non-mathematically expressed knowledge is obviously inferior to mathematically expressed knowledge: it is “intuitive”. Yet, it has its importance in theory testing: “mathematical analysis has to be followed by intuitive reasoning, which may sort out the robust insights from those that only hold under very specific assumptions” (ibid).
  4. Simplification: throughout the entire book, but actually throughout most of what we see in RC at large, there is a marked preference for mathematically expressed simplicity. Complex real-world problems are reduced to extremely simple hypothetical cases involving pairs or triplets, as when complex market transactions are reduced to two people bargaining in a game-theoretical example, or the three Spocks in Arrow’s Impossibility Theorem who are supposed to instantiate millions of voters or consumers in large-scale political and economic processes. Such mathematical simplifications often bear names – the Prisoners’ Paradox, Condorcet’s Paradox, the Pareto Optimality or the Von Neumann-Morgenstern Axioms – and are presented (be it with qualifications) as “laws” with universal validity. The simple cases of mathematical theory are proposed as accurate, rigorous and superior modes of describing (and predicting) complex realities.
  5. Psychological realism. Not only are the mathematical formulations accurate descriptive and predictive models of actual social realities, they are also an accurate reflection of human cognitive folk methods, even if people are not aware of it: “Statistics is used to estimate probabilities explicitly in scientific and nonscientific studies as well as implicitly by most of us in everyday life” (56-57). Gilboa, like many other authors doing this kind of work, has the amusing habit of describing people as applying the Von Neumann-Morgenstern Axioms when deciding where to take their holidays, and as experiencing severe logical problems when their behavior violates the Prisoners’ Paradox or exhausts the limits of objective reasoning.
  6. Convincing-conclusive. Finally, Gilboa makes a somewhat curious point about “positive” versus “negative rhetoric”. Negative rhetoric consists of “tricks that make one lose the debate but for which one has good replies the morning after the debate”, while “positive rhetoric consists of devices that you can take from the debate and later use to convince others of what you were convinced of yourself. Mathematics is such a device” (19).
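The kind of simplification discussed in point 4 can be made concrete. The sketch below encodes the textbook Prisoners’ Paradox payoffs (the numbers are the standard illustrative ones, not taken from Gilboa’s book) and shows the reduction at work: a complex social situation becomes a 2x2 table in which defection is always the individually “rational” move, even though mutual cooperation would leave both parties better off.

```python
# The Prisoners' Paradox reduced to a 2x2 payoff table.
# Payoffs are years in prison (lower is better); the values are the
# standard textbook ones, chosen here purely for illustration.

payoffs = {  # (my move, other's move) -> my sentence in years
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"): 10,
    ("defect", "cooperate"): 0,
    ("defect", "defect"): 5,
}

def best_response(others_move):
    """The individually 'rational' move, whatever the other player does."""
    return min(["cooperate", "defect"],
               key=lambda mine: payoffs[(mine, others_move)])

# Defection dominates: it is the best reply to either move...
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
# ...yet mutual defection (5, 5) leaves both worse off than mutual
# cooperation (1, 1) -- the "paradox" of individually rational choice.
```

It is precisely this drastic stripping-down – two agents, two moves, four numbers – that the RC literature then offers as a model of large-scale social and economic processes.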

******

The six features of Gilboa’s approach to mathematics are, I would argue, an ideology of mathematics. They articulate a socioculturally entrenched set of beliefs about mathematics as a scientific project. And while I am the first to express admiration for mathematics as a scientific tool which, indeed, allows a tremendous and unique parsimony, transparency and stability in notation, I think the broader ideology of mathematics needs to be put up for critical examination. For mathematics, here, is not presented as a scientific tool – call it “method” or even “methodology” – but as an ontology, a statement on how reality “really” is. We already encountered this earlier when I discussed the mystery of Arrow’s Theorem: no facts are “real” unless they can be expressed in mathematical, formal language. And to this, I intend to attach some critical reflections.

Let me first describe the ontology I detect in views such as the ones expressed by Gilboa, occasionally returning to Arrow’s first three assumptions mentioned earlier. I see two dimensions to it.

  1. Mathematical expressions are the Truth, since mathematics represents the perfect overlap of facts and knowledge of facts. And this Truth is rationality: mathematical expressions are expressions of fundamental rationality, devoid of all forms of subjectivity and context-dependence. This enables mathematical expressions to be called “laws”, and such laws to be qualified as eternal, universal, and expressions of extreme certainty and accuracy. Recall now Arrow’s second and third assumptions: that science (i.e. mathematics) yields universal laws, and that reason is not culturally relative – since it can be described in a universal mathematical code.
  2. Mathematics as an ontology has both esoteric and practical dimensions, and these dimensions make it science. Concretely, mathematics is not something everyone can simply access because it is esoteric – see Fig 2 above for a graphic illustration – and it is practical because it can be applied, as a set of “laws” flawlessly describing and predicting states of reality, to a very broad range of concrete issues, always and everywhere.

Combined with the first point – mathematics as the (rational) Truth – this allows us to understand not just Arrow’s first assumption – that science is objective – but his wider (political) project as well. The scientific underpinning of a new social and political science had to be mathematical, because that was the way to avoid ideological, cultural or other forms of “subjectivity” which would make such a science “irrational” and might lead it towards totalitarian realities. Mathematically stated laws (on any topic) are – so it is suggested – independent of the person formulating them or the place in the world from where they are formulated; their truth value is unconditional and unchallengeable; accepting them is not a matter of personal conviction or political preference, it is a matter of universal reason. This is why Gilboa found mathematics convincing and conclusive: confronted with mathematical proof, no reasonable person could deny the truth, for, as expressed by Gilboa, mathematical formulations reflected – esoterically – the folk reason present in every normal human being. And so we see the comprehensive nature of the ontology: mathematics describes human beings and by extension social life in a truthful, unchallengeable way.

It is at this last point – the “postulate of rationality” as it is known in RC – that this modern ideology of mathematics appears to have its foundations in Enlightenment beliefs about reason as fundamentally and universally human, and so deviates from older ideologies of mathematics. These are well documented, and there is no need here to review an extensive literature, for a key point running through this history is that mathematics was frequently presented as the search for the true and fundamental structure of nature, the universe and (if one was a believer) God’s creation. This fundamental structure could be expressed in rigorous symbolic operations: specific shapes, proportions, figures and relations between figures – they were expressed by means of abstract symbols that created the esoteric dimension of mathematics. Doing mathematics was (and continues to be) often qualified as the equivalent of being “a scientist” or “a wise man”, and if we remember Newton, the distinction between scientific mathematics and other esoteric occupations such as alchemy was often rather blurred.

It is in the age of Enlightenment that all human beings are defined as endowed with reason, and that mathematics can assume the posture of the science describing the fundamental features and structures of this uniquely human feature, as well as of the science that will push this unique human faculty forward. It is also from this period that the modern individual, as a concept, emerges, and the American Declaration of Independence is often seen as the birth certificate of this rational, sovereign individual. Emphasis on rationality very often walks hand in hand with methodological individualism, and this is not a coincidence.

Observe that this ideology of mathematics is pervasive, and even appears to be on the rise globally. Mathematics is, everywhere, an element of formal education, and universally seen as “difficult”. Training in mathematics is presented in policy papers as well as in folk discourse as the necessary step-up towards demanding professions involving rigorous scientific reasoning, and success or failure in mathematics at school is often understood as an effect of the success/failure to enter, through mathematics, a “different mode of thinking” than that characterizing other subjects of learning. Mathematics, in short, often serves as a yardstick for appraising “intelligence”.

******

From the viewpoint of contemporary semiotic anthropology, this ideology of mathematics is just another, specific, language ideology: a set of socioculturally embedded and entrenched beliefs attached to specific forms of language use. The specific form of language use, in the case of mathematics, is a form of literacy, of writing and reading. So let us first look at that, keeping an eye on Figures 1 and 2 above.

Mathematics as we know it gradually developed over centuries as a separate notation system in which arbitrary symbols became systematic encoders of abstract concepts – quantities, volumes, relations. Hardcore believers will no doubt object to this, claiming that the notational aspect is just an “instrumental”, ancillary aspect and that the core of mathematics is a special form of reasoning, a special kind of cognitive process. They are wrong, since the notational system is the very essence of the cognitive process claimed to be involved, which is why mathematicians must use the notational systems, and why school children can “understand” quite precisely what they are being told in mathematics classes but fail their tests when they are unable to convert this understanding into the correct notation. Seeing knowledge as in se detached from its infrastructures and methods of production and transmission is tantamount to declaring the latter irrelevant – which raises the question of why mathematics uses (and insists on the use of) a separate notation system. More on this below.

The system, furthermore, is a socioculturally marked one, and the evidence for that should be entirely obvious. Recall Figure 2. The mathematical notation system follows the left-to-right writing vector of alphabetical scripts (not that, for instance, of Arabic or Chinese); unless I am very much mistaken, “written” mathematical symbols (as opposed to e.g. geometrical figures) are alphabetical and not, e.g., hieroglyphic, cuneiform or ideographic (like Chinese characters); and they are drawn from a limited number of alphabets, notably the Greek and Latin alphabets. Just click the “special symbols – mathematical symbols” icon in your word processor now for double-checking. In spite of historical influences from Ancient Egypt and Babylonia, the Arab world, India and China, the 19th-century codification and institutionalization of mathematics (like other sciences) involved the Europeanization of its conventions.

The system is separate in the sense that, in spite of its obvious origins, it cannot be reduced to the “ordinary” writing system of existing languages: the fact that the symbol “0” for “zero” is of Indian origins doesn’t make that symbol Sanskrit, just as the Greek origins of the symbol for “pi” do not load this symbol with vernacular Greek meanings; they are mathematical symbols. But it can be incorporated (in principle) in any such writing system – Figures 1 and 2 show incorporation in English, for instance – and translated, if you wish, into the spoken varieties of any language (something it shares with Morse code). The symbol “<” for instance, can be translated into English as “less/smaller than”. Figure 1 above shows how Arrow translates ordinary English terms into mathematical terms, and the language-ideological assumption involved here is that this translation involves perfect denotational equivalence (the symbols mean exactly what the words express), as well as a superior level of accuracy and generalizability (the concrete of ordinary language becomes the abstract-theoretical of mathematical notation – the words become concepts). Here, we see what language ideologies are all about: they are a synergy of concrete language forms with beliefs about what they perform in the way of meaning. Thus, the difference between ordinary writing and mathematical writing is the belief we have that the latter signals abstraction, theory, and superior accuracy (something for which logical positivism provided ample motivational rhetoric).

This notation system is, in contemporary anthropological vocabulary, best seen as a specialized graphic register. That means that it can be used for a limited set of specific written expressions, as opposed to an “ordinary” writing system in which, in principle, anything can be expressed. We see it in action in the way I just described – reformulating ordinary expressions into “concepts” – in Figure 1, while Figure 2 shows that the register can be used for entire “textual” constructions in the genre of “proof”. The register is parsimonious and, in that sense, efficient. Writing “125364” requires just six symbols; writing “one hundred and twenty-five thousand three hundred and sixty-four” demands almost ten times that number of symbols.
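The parsimony claim is easy to check. Counting letters only (spaces and hyphens excluded), the spelled-out version needs roughly nine times as many symbols as the digit string:

```python
# Comparing symbol counts in the two notations: digits versus the
# spelled-out English number word (letters only; spaces and hyphens
# are excluded from the count).
digits = "125364"
words = "one hundred and twenty-five thousand three hundred and sixty-four"

letter_count = sum(1 for ch in words if ch.isalpha())
print(len(digits), letter_count)  # 6 55
```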

It is, as a graphic register, extremely normative; it is an “ortho-graphy”. Mathematics deploys a closed and finite set of standardized symbols that have to be used in rigorously uniform ways – the symbol “<” can never mean “more than”; both their individual meaning and the ways in which they can be syntactically combined are subject to uniform and rigid rules. Consequently, while in “ordinary” writing some errors do not necessarily distort the meaning of an expression (most people would understand that “I cam home” means “I came home”), a writing error in mathematical notation renders the expression meaningless. So many of us painfully experienced this in the mathematics classes we took: our understanding of the fundamentals of mathematics did not include any degree of freedom in choosing the ways to write it up, since mathematics is normative, orthographic notation. This, too, is part of its specialized nature as well as of its esoteric nature: mathematics must be acquired through disciplined – nonrandom and highly regimented – learning procedures, and knowledge of specific bits of the register is identity-attributive. Some mathematicians are specialists of calculus, others of logic, for instance, while the identity label of “genius” would be stuck on outstanding mathematicians of any branch.

That is the specific form of language we see in mathematics; the language-ideological values attributed to it are, like any other language ideology, sociocultural constructs that emerged, are consolidated and develop by observing socioculturally ratified rules and procedures; and these are (like any other sociocultural convention) highly sensitive to developments over time and place. Very few contemporary mathematicians would be ready to defend the claim that mathematics reveals the fundamental structure of God’s creation, for instance, but it is good to remember that this language-ideological value was once attached to it, and that the people who attached it to mathematics were profoundly convinced that this was what mathematics was all about. Similarly, not too many contemporary mathematicians would perceive alchemy as an occupation compatible with the scientific discipline of mathematics, while Isaac Newton appeared not to have too many doubts about that.

There is nothing eternal, absolute or indisputable about the language-ideological assumptions accompanying mathematics. The suggestion, a widespread one as I noted, that mathematics involves a “different way of thinking” is quite questionable. It is a different way of writing, to which a specific set of language-ideological values is attached. Children who are “not good at mathematics” at school probably have a literacy problem rather than a cognitive one – let alone one of inferior intelligence.

And if we return to Gilboa’s six features above, we might perhaps agree that his first two features – accuracy and rigor – are intrinsic affordances of the specific register of mathematics (things mathematics indeed can do quite well). The third feature (superiority) is a belief probably shared by members of the community of mathematicians, but not per se demonstrable – quite the contrary, because the fourth feature – simplification – points to a limitation of the register: the fact that not everything can be appropriately written in the code. Ordinary language writing offers an infinitely vaster set of affordances. It is, at this point, good to remind ourselves of the fact that abstraction involves “stripping down”, i.e. the deletion of features from a chunk of reality; that this deletion may touch essential features; and that this deletion is often done on the basis of unproven assumptions.

The fifth feature – psychological realism – cries out for evidence, and those familiar with (to name just one) Alexander Luria‘s 1920s research on modes of thought will be inclined to take a more sober and prudent view of this topic. There is no reason why the fundamental structures of rationality would not be expressed in, for example, narrative-poetic patterns rather than mathematical-logical ones. And as for the sixth feature – the conclusive nature of mathematical proof – this, I suppose, depends on whom one submits it to. If the addressee of a mathematical argument shares the ideological assumption that such an argument is conclusive, s/he will accept it; if not, submitting mathematical proof may be no more conclusive than singing a Dean Martin song.

******

Language-ideological attributions are always sociocultural constructs, and therefore they are never unchallengeable and they can always be deconstructed. What we believe certain forms of language do, does not necessarily correspond to what they effectively do. There is, for example, a quite widely shared language-ideological assumption that grammatical, orthographic or other forms of “correctness” are strict conditions for understandability (“you can only make yourself understood if you speak standard language!”), while realities of human interaction show that tremendous largesse often prevails, without impeding relatively smooth mutual understanding. There is also a widespread language-ideological belief that societies are monolingual (think of the official languages specified in national legislations and, e.g., adopted by the EU), while in actual fact dozens of languages are being used. It is the job of my kind of anthropologists and sociolinguists to identify the gaps between facts and beliefs in this field.

Seen from that perspective, there is nothing in se that makes a mathematical proof more “objective” than, say, a poem (it is good to remember that in the Indian Vedic tradition, mathematical statements were written as sutra poetry, and that even today “elegance”, an aesthetic quality, appears to be a criterion for assessing mathematical proof). The status of “objectivity”, indeed the very meaning of that term, emerges by sociocultural agreement within specific communities, and none of the features of the register are in themselves direct elements of “objectivity”. The notion of objectivity, as well as the symbols proposed as “indexes” of objectivity, are all sociocultural constructs.

Paradoxically, thus, if we recall Kenneth Arrow’s extraordinarily far-reaching claims, the status of objectivity attributed to mathematics is a vintage Durkheimian “social fact”: something produced by societies and accepted by individuals for reasons they themselves are often unaware of – a sociocultural convention wrapped, over time, in institutional infrastructures perpetuating and enforcing the convention (in the case of mathematics, the education system plays first violin here). Its power – hegemony, we would say – does not turn it into an absolute fact. It remains perpetually challengeable, dynamic, an object of controversy and contention, as well as a proposition that can be verified or falsified. Saying this is nothing more than stating the critical principles of science as an Enlightenment product, of re-search as literally meaning “search again”, even if you believe you have discovered the laws of nature. These critical principles, we will recall, were the weapons used against religious and dictatorial (“irrational”) postures towards the Truth. They are the very spirit of science and the engine behind the development of the sciences.

The intimate union between RC, mathematics, MI and the specific views of human nature and social action that were articulated in this movement, cannot escape this critique. Practitioners of this kind of science would do well to keep in mind that a very great number of their assumptions, claims and findings are, from the viewpoint of other disciplines involved in the study of humans and their societies, simply absurd or ridiculous. The axiomatic nature of rationality, the impossibility of collective choice and action, the preference for extraordinarily pessimistic views of human beings as potential traitors, thieves and opportunists – to name just these – are contradicted by mountains of evidence, and no amount of deductive theorizing can escape the falsifications entailed by this (inductive and not at all, pace Gilboa, “intuitive”) evidence.
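Arrow’s “impossibility of collective choice” generalizes the much older Condorcet paradox, and it may help to see the minimal case spelled out. The sketch below is my own illustration, not drawn from Arrow’s or Gilboa’s texts; the three voter profiles are the standard textbook example.

```python
# Three voters' preference rankings, from most to least preferred.
# These profiles are the classic illustrative example, not data from the text.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Pairwise majority votes produce a cycle: A beats B, B beats C, yet C beats A.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
```

Each pairwise contest is won 2-to-1, so majority voting yields a cycle and no coherent collective ranking emerges. Arrow’s theorem extends this kernel to show that every aggregation rule satisfying his axioms fails somewhere, unless one voter is made a dictator.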

MI, leading, as in Arrow’s work, to the refusal to compare individuals’ preferences and to the isolation of human beings from the complex patterns of interaction that make up their lives, is simply ludicrous when we consider, for instance, language – a system of shared, normatively organized sociocultural codes (a “social fact”, once more) which is rather hard to delete from any consideration of what it is to be human, or to dismiss as a detail in human existence. Here we see how the “stripping down” involved in mathematical abstraction touches essential features of the object of inquiry, making it entirely unrealistic. We have also seen that the language in which such “truths” are expressed is, in itself, a pretty obvious falsification of MI and other RC assumptions. And more generally, facts gathered through modes of science that Gilboa tartly qualifies as “intuitive reasoning” are also always evidence of something, and usually not of what is claimed to be true in RC.

Such critiques have, of course, been put to RC scholars (an important example is Green & Shapiro 1994). They were often answered with definitional acrobatics in which, for instance, the concept of “rationality” was stretched to the point where it included almost everything, so as to save the theory (but of course, when a term is supposed to mean everything, it effectively means nothing). Other responses included unbearably complex operations attempting to keep the core theory intact while, like someone extending his/her house on a small budget, adding makeshift annexes, windows, rooms and floors to it, so as to cope with the flurry of exceptions and unsolvable complexities raised against it. I found, for instance, Lindenberg’s “method of decreasing abstraction” (1992) particularly entertaining. Recognizing the complexity of real-world issues, and aiming (as anyone should) at realism in his science, Lindenberg constructs a terrifically Byzantine theoretical compound in which the scientist gradually moves away from simple and rigid mathematical formulations towards less formal and more variable ones – hence “decreasing abstraction” or “increasing closeness to reality” (Lindenberg 1992: 3). He thus achieves, through admirably laborious theoretical devotion, what any competent anthropologist achieves in his/her fieldnotes at the end of a good day of ethnographic fieldwork.

*****

This brings me to a final point. Mathematics is a formal system, and a peripheral language-ideological attribution it carries is that of “theory”. Theory, many people believe, must be abstract, and whatever is abstract must be theoretical. People working in the RC paradigm like to believe that (theoretical) “generalization” in science can only be done through, and is synonymous with, abstraction – mathematical expression in formulas, theorems, or statistical recipes.

In dialogues with such people, it took me a while before I detected the cause of the perpetual misunderstandings whenever we tried to talk about issues of generalization and theorization across disciplines, for we were using the same words but attaching them to entirely different cultures of interpretation. Their culture was usually a deductive one, in which theory came first and facts followed, while mine operated precisely the other way around. I had to remind them of the existence of such different cultures, and that their view of theoretical generalization – necessarily through abstraction – was an idiosyncrasy not shared by the majority of scientific disciplines.

Theoretical statements are, in their essence, general statements, i.e. statements that take insights from concrete data (cases, in our culture of interpretation) to a level of plausible extrapolation. Since every case one studies is only a case because it is a case of something – an actual and unique instantiation of more generally occurring phenomena – even a single case can be “representative” of millions of other cases. This generalization is always conjectural (something even hardliners from the other camp admit) and demands further testing, in ever more cases. I usually add to this that this method – a scientific one, to be sure – is actually used by your doctor whenever s/he examines you. Your symptoms are always an actual (and unique) instantiation of, say, flu or bronchitis, and your doctor usually concludes the diagnosis on the basis of plausible extrapolation: although s/he can never be 100% sure, the combination of symptoms a, b and c strongly suggests flu. If the prescribed medicine works, this hypothesis is proven correct; if not, the conjectural nature of this exercise is demonstrated. Unless you want to see your doctor as a quack or an alchemist who can’t possibly speak the truth (which would make it highly irrational to go and see him/her when you’re ill), it may be safe to see him/her as an inductive scientist working from facts to theory and usually doing a pretty accurate job at that.

People who believe that mathematics, and only mathematics, equals science are in actual fact a small but vocal and assertive minority in the scientific community. If they wish to dismiss, say, 70% of what is produced as science as “unscientific”, they do so at their peril (and sound pretty unscientific, even stupid, when they do so). That includes Mr Popper too. The question “what is science?” is answered in very many forms, as a sovereign rational choice of most of its practitioners. Enforcing the preferences of one member of that community, we heard from Kenneth Arrow, is dictatorial. And since we believe that science is an elementary ingredient of a free and democratic society, and that pluralism in reasoned dialogue, including in science, is such an elementary ingredient as well – we really don’t want that, do we?

References

AMADAE, Sonja N. (2003) Rationalizing Capitalist Democracy: The Cold War Origins of Rational Choice Liberalism. Chicago: University of Chicago Press.

ARROW, Kenneth (1951) Social Choice and Individual Values. New York: Wiley (2nd ed. 1963).

GILBOA, Itzhak (2010) Rational Choice. Cambridge MA: MIT Press.

GREEN, Donald & Ian SHAPIRO (1994) Pathologies of Rational Choice Theory: A Critique of Applications in Political Science. New Haven: Yale University Press.

LINDENBERG, Siegwart (1992) The Method of Decreasing Abstraction. In James S. Coleman & Thomas J. Fararo (eds.) Rational Choice Theory: Advocacy and Critique: 3-20. Newbury Park: Sage.

THALER, Richard H. (2015) Misbehaving: The Making of Behavioral Economics. New York: W.W. Norton & Company.


Research training and the production of ideas


Jan Blommaert 

Can we agree that Albert Einstein was a scientist? That he was a good one (in fact, a great one)? And that his scientific work has been immeasurably influential?

I’m asking these silly questions for a couple of reasons. One: Einstein would, in the present competitive academic environment, have a really hard time getting recognized as a scientist of some stature. He worked in a marginal branch of science – more on this in a moment – and the small oeuvre he published (another critical limitation now) was not written in English but in German. His classic articles bore titles such as “Die vom Relativitätsprinzip geforderte Trägheit der Energie” and appeared in journals called “Annalen der Physik” or “Beiblätter zu den Annalen der Physik”. Nobody would read such papers nowadays.

Two: his work was purely theoretical. That means that it revolved around the production of new ideas, or to put it more bluntly, around imagination. These forms of imagination were not wild or unchecked – it wasn’t “anything goes”. They were based on a comprehensive knowledge of the field in which he placed these ideas (the “known facts of science”, one could say, or “the state of the art” in contemporary jargon), and the ideas themselves presented a synthesis, sweeping up what was articulated in fragmentary form in various sources and patching up the gaps between the different fragments. His ideas, thus, were imagined modes of representation of known facts and of new (unknown but hypothetical and thus plausible or realistic) relations between them.

There was nothing “empirical” about his work. In fact, it took decades before aspects of his theoretical contributions were supported by empirical evidence, and other aspects still await conclusive empirical proof. He did not construct these ideas in the context of a collaborative research project funded by some authoritative research body – he developed them in a collegial dialogue with other scientists, through correspondence, reading and conversation. In the sense of today’s academic regime, there was, thus, nothing “formal”, countable, measurable, structured, justifiable, or open to inspection about the way he worked. The practices that led to his theoretical breakthroughs would be all but invisible on today’s worksheets and performance assessment forms.

As for “method”, the story is even more interesting. Einstein would rather systematically emphasize the disorderly, even chaotic nature of his work procedures, and mention the fact (often also confirmed by witnesses) that, when he got stuck, he would leave his desk, papers and notebooks, pick up his violin and play music until the crucial brainwave occurred. He was a supremely gifted specialized scholar, of course, but also someone deeply interested (and skilled) in music, visual art, philosophy, literature and several other more mundane (and “unscientific”) fields. His breakthroughs, thus, were not solely produced by advances in the methodical disciplinary technique he had developed; they were importantly triggered by processes that were explicitly non-methodical and relied on “stepping out” of the closed universe of symbolic practices that made up his science.

*****

Imagine, now, that we would like to train junior scholars to become new Einsteins. How would we proceed? Where would we start?

Those familiar with contemporary research training surely know what I am talking about: students are trained to become “scientists” by doing the opposite of what turned Einstein into the commanding scientist he became. The focus these days is entirely – and I am not overstating this – on the acquisition, development and refining of methods to be deployed on problems which in turn are grounded in assumptions by means of hypotheses. Research training now consists of training in that model. The problems are defined by the assumptions and discursively formulated through the hypotheses – so they tolerate little reflection or unthinking; they are simply to be adopted. And what turns the student’s practices into “science” is the disciplined application of acquired methods to such problems resting on such assumptions. This, then, yields scientific facts either confirming or challenging the “hypotheses” that guided the research, and the production of such facts-versus-hypotheses is called scientific research. Even more: increasingly we see that only this procedure is granted the epithet of “scientific” research.

The stage in which ideas are produced is entirely skipped. Or better: the tactics, practices and procedures for constructing ideas are eliminated from research training. The word “idea” itself is often pronounced almost with a sense of shame, as an illegitimate and vulgar term better substituted by formal, jargonesque (but equally vague) terms such as “hypothesis” – while, in fact, the closest thing to “idea” in my formulation is the term “assumption” I used in my description of the now dominant research model. And the thing is that while we train students to work from facts through method to hypotheses in solving a “problem”, we do not train them to question the underlying assumptions that formed both the “problem” they intend to address and the epistemological and methodological routes designed to solve such problems. To put it more sharply, we train them in accepting a priori the fundamental issues surrounding and defining the very stuff they should inquire into and critically question: the object of research, its relations with other objects, the “evidence” we shall accept as elements adequately constructing this object, and the ways in which we can know, understand and communicate all this. We train them, thus, in reproducing – and suggestively confirming – the validity of the assumptions underlying their research.

“Assumptions” typically should be statements about reality, about the fundamental nature of phenomena as we observe and investigate them among large collectives of scientists. Thus, an example of an assumption could be “humans produce meaning through the orderly grammatical alignment of linguistic forms”. Or: “social groups are cohesive when they share fundamental values that exist sociocognitively in members’ minds”. Or: “ethnicity defines and determines social behavior”. One would expect such assumptions to be the prime targets of continuous critical reassessment in view (precisely) of the “facts” accumulated on the aspects that should constitute them. After all, Einstein’s breakthroughs happened at the level of such assumptions, if you wish. Going through recent issues of several leading journals, however, leads to a perplexing conclusion: assumptions are nearly always left intact. Even more: they are nearly always confirmed and credentialed by accumulated “facts” from research – if so much research can be based on them, they must be true, so it seems. “Proof” here is indirect and by proxy, of course – like miracles “proving” the sacred powers of an invoked Saint.

Such assumptions effectively function not as statements about the fundamental nature of objects of research, open for empirical inspection and critique, but as axiomatic theses to be “believed” as a point of departure for research. Whenever such assumptions are questioned, even slightly, the work that does so is instantly qualified as “controversial” (and, in informal conversations, as “crackpot science” or “vacuous speculation”). And “re-search”, meaning “searching again”, no longer means searching again da capo, from step 1, but searching for more of the same. The excellent execution of a method and its logic of demonstration is presented as conclusive evidence for a particular reality. Yes, humans do indeed produce meaning through the orderly grammatical alignment of linguistic forms, because my well-crafted application of a method to data does not contradict that assumption. The method worked, and the world is chiseled accordingly.

*****

Thus we see that the baseline intellectual attitude of young researchers, encouraged or enforced and positively sanctioned – sufficient, for instance, to obtain a doctoral degree and get your work published in leading journals, followed by a ratified academic career – is one in which accepting and believing are key competences, increasingly even the secret of success as a researcher. Not unthinking the fundamental issues in one’s field, and abstaining from the critical, inquisitive reflex in which one looks, unprompted, for different ways of imagining objects and the relations between them, eventually arriving at new, tentative assumptions (call them ideas now) – this is what is seen as being “good” as a researcher.

The reproductive nature of such forms of research is institutionally supported by all sorts of bonuses. Funding agencies have a manifest and often explicit preference for research that follows the clear reproductive patterns sketched above. In fact, funding bodies (think of the EU) often provide the fundamental assumptions themselves and leave it to researchers to come up with proof of their validity. Thus, for instance, the EU would provide in its funding calls assumptions such as “security risks are correlated with population structure, i.e. with ethnocultural and religious diversity” and invite scientific teams to propose research within the lines of the sociopolitical reality thus defined. Playing the game within these lines opens opportunities to acquire that much-coveted (and institutionally highly rewarded) external research funding – an important career item in the present mode of academic politics.

There are more bonuses. The reproductive nature of such forms of research also ensures rapid and high-volume streams of publications. The work is intellectually extraordinarily simple, really, even if those practicing it will assure us that it is exceedingly hard: no fundamental (and often necessarily slow) reflection, unthinking and imaginative rethinking are required; the mere application of a standardized method to new “problems” suffices to achieve something that can qualify as (new or original) scientific fact and can be written down as such. Since literature reviews are restricted to reading nothing that fundamentally questions the assumptions, but reading all that operates within the same method-problem sphere, published work quickly gains high citation metrics, and the journals carrying such work are guaranteed high impact factors – all, again, hugely valuable symbolic credit in today’s academic politics. Yet reading such journal issues in search of sparkling and creative ideas usually turns into a depressing confrontation with intellectual aridity. I, fortunately, can read such texts as a discourse analyst, which makes them at least formally interesting to me. But that is me.

*****

Naturally, but unhappily, nothing of what I say here is new. It is worth returning to that (now rarely consulted) classic by C. Wright Mills, “The Sociological Imagination” (1959), to get the historical perspective right. Mills, as we know, was deeply unhappy with several tendencies in the US sociology of his day. One tendency was the reduction of science to what he called “abstracted empiricism” – comparable to the research model I criticized here. Another was the fact that this abstracted empiricism took the “grand theory” of Talcott Parsons for granted as the assumptions behind abstracted empirical research. A poor (actually silly) theory vulnerable to crippling empirical criticism, Mills complained, was implicitly confirmed by the mass production of specific forms of research that used the Parsonian worldview as an unquestioned point of departure. The title of his book is clear: in response to that development, Mills strongly advocated imagination in the sense outlined earlier – the fact that the truly creative and innovative work in science happens when scientists review large amounts of existing “known facts” and reconfigure them into things called ideas. Such re-imaginative work – I now return to a contemporary vocabulary – is necessarily “slow science” (or at least slower science), and is effectively discouraged in the institutional systems of academic valuation presently in place. But those who neglect, dismiss or skip it do so at their own peril, C. Wright Mills insisted.

It is telling that the most widely quoted scholars tend to be people who produced exactly such ideas and are labeled as “theorists” – think of Darwin, Marx, Foucault, Freud, Lévi-Strauss, Bourdieu, Popper, Merleau-Ponty, Heidegger, Hayek, Hegel and Kant. Many of their most inspiring works were nontechnical, sweeping, bold and provocative – “controversial” in other words, and open to endless barrages of “method”-focused criticism. But they influenced, and changed, so much of the worldviews widely shared by enormous communities of people worldwide and across generations.

It is worth remembering that such people did really produce science, and that very often, they changed and innovated colossal chunks of it by means of ideas, not methods. Their ideas have become landmarks and monuments of science (which is why everyone knows Einstein but only very few people know the scientists who provided empirical evidence for his ideas). It remains worthwhile examining their works with students, looking closely at the ways in which they arrived at the ideas that changed the world as we know it. And it remains imperative, consequently, to remind people that dismissing such practices as “unscientific” – certainly when this has effects on research training – denies such monumental scientific efforts, which inspired and formed generations of scientists, the categorical status of “science”, reserving it for a small fraction of scientific activities which could, perhaps far better, be called “development” (as in “product development”). Whoever eliminates ideas from the semantic scope of science demonstrates a perplexing lack of them. And whoever thinks that scientific ideas are the same as ideas about where to spend next year’s holiday displays a tremendous lack of familiarity with science.

*****

Much of what currently dominates the politics and economies of science (including how we train young scientists) derives its dominant status not from its real impact on the world of knowledge but from heteronomic forces operating on the institutional environments for science. The funding structure, the rankings, the metrics-based appraisals of performance and quality, the publishing industry cleverly manipulating all that – those are the engines of “science” as we now know it. These engines have created a system in which Albert Einstein would be reduced to a marginal researcher – if a researcher at all. If science is supposed to maintain, and further develop, the liberating and innovative potential it promised the world since the era of Enlightenment, it is high time to start questioning all that, for an enormous amount of what now passes as science is astonishingly banal in its purpose, function and contents, confirming assumptions that are sometimes simply absurd and surreal.

We can start by talking to the young scholars we train about the absolutely central role of ideas in scientific work, encourage them to abandon the sense of embarrassment they experience whenever they express such ideas, and press upon them that doing scholarly work without the ambition to continue producing such ideas is, at best, a reasonably interesting pastime but not science.

Related texts

https://alternative-democracy-research.org/2015/06/27/when-scientific-became-a-synonym-for-unrealistic/

https://alternative-democracy-research.org/2015/04/13/investing-in-higher-education/

https://alternative-democracy-research.org/2014/10/15/the-power-of-free-in-search-of-democratic-academic-publishing-strategies/

https://alternative-democracy-research.org/2015/06/10/rationalizing-the-unreasonable-there-are-no-good-academics-in-the-eu/


When “scientific” became a synonym for “unrealistic”.


Jan Blommaert 

“From Adam Smith in 1776 to Irving Fisher in 1930, economists were thinking about intertemporal choice with Humans in plain sight. Econs began to creep in around the time of Fisher, as he started on the theory of how Econs should behave. But it fell to a twenty-two-year-old Paul Samuelson, then in graduate school, to finish the job”. (Richard H. Thaler, Misbehaving: The Making of Behavioral Economics, p.89; New York: Norton, 2015).

Richard Thaler, in this wonderful book, uses the terms “Humans” and “Econs” to distinguish between, respectively, real people observed in real life, having real interests, attitudes and modes of thought and behavior that are often, let us say, suboptimal; “Econs”, by contrast, are fictional characters, ideal people who don’t have passions or biases, are always rational, possess a maximum of information and are able to convert this linearly into economic behavior. Thaler’s book is a powerful argument in favor of an Economics science that keeps track of, and explains, Human behavior as, at least, a qualification to the kinds of fictional predictions of Econs’ behavior that are the Economics mainstream’s occupation.

In so doing, Thaler also directs our attention towards the small historical window in which this current mainstream’s doctrine occurred and flourished. For almost two centuries, Economics was preoccupied with real markets, customers, prices and policies – Adam Smith’s Theory of Moral Sentiments setting the scene for an Economics that dealt with the whims of human social behavior. The discipline abandoned this focus merely half a century ago, when Samuelson, Arrow and some others replaced muddled descriptions of reality with elegant mathematical “models”, supposed to be of absolute and eternal precision and capable of bypassing the uncertainties and historical situatedness of real human minds. When critics pointed towards such minds (and their tendency to violate the rules of such elegant models), the response was that, willingly or not, people in economic activities would behave “as if” they had done the intricate calculations captured in the models. Thaler’s book is a lengthy and pretty detailed refutation of that “as if” argument: if nobody really actually operates in the ways laid down in mathematical models, why not take such deviations – “misbehaving” Humans – seriously? For someone such as myself, involved in ethnographic studies of Humans and their social behavior, this question is compelling and the arguments it provokes inescapable.

Thaler is not a nobody in his field – he’s the 2015 President of the American Economic Association; he will be able to ask this question urbi et orbi and with a stentorian voice. There might be some obstacles, though. Interestingly, the kinds of Economics designed by Samuelson and his comrades were (and are) seen as truly “scientific”. The conversion of a science grounded in observations of actually occurring behavior into a science concerned with abstract mathematical modeling was seen as the moment at which Economics became a real science, a complex of knowledge practices not tainted by the fuzziness of actual social facts but aiming at absolute Truth – something invariably expressed not in prose but in graphics, tables and figures, in which a new abstract model could be seen as a major scientific breakthrough (just look at the list of Economics Nobel Prize winners since the 1960s, and read the citations for their selection). As for the teaching and training of aspiring economists, it was thought that they would now be truly “scientific”, since students would learn abstract and ideal frameworks suggested to be absolutely generative in the sense that any form of real behavior could be measured against them and explained in their terms. No more nonsense, no more description – a normative theory such as that of Samuelson (sketching how ideal people should act) would henceforth be presented as a descriptive one (effectively documenting and explaining how they actually act) as well – an absolute theory, in other words. The shift away from “realism” – the aim of descriptive theories – towards ideal-typical modeling – the aim of normative theories – was seen as irrelevant. Economics became “scientific” as soon as it abandoned realism as an ambition.

It is interesting to see that in post-World War II academics, similar moves were made in different disciplines. Chomsky’s revolution in Linguistics (caused by his Syntactic Structures, 1957) is an example. Whereas Linguistics until Chomsky was largely driven by descriptive aims and methods (go out and describe a real language), in which careful empirical description and comparison would ultimately lead to adequate generalization (Saussure’s Langue), Chomsky saw real Human language as propelled by an abstract, formal and generative competence, describable as a finite set of abstract rules capable of generating every possible sentence in a language. This, too, was seen at the time as a major leap towards scientific maturity, and senior philosophers of science (already accustomed to seeing formalisms such as mathematical logic as the purest forms of meaning) argued that, with Chomsky, Linguistics had finally become a “science”. Linguists, from now on, would no longer do fieldwork – the interest in listening to what real people actually said was disqualified – but rely on “introspection”: one’s own linguistic intuitions were good enough as a base for doing “scientific” Linguistics. It took half a century of sociolinguistics to replace this withdrawal from realism with a renewed attention to actual variation and diversity in real language. Contemporary sociolinguistics, consequently, operates towards Linguistics very much like Thaler’s Behavioral Economics towards mainstream Economics: as a sustained attempt at making this “science” realistic again.

Similar stories can be told with respect to disciplines such as psychology and sociology, and later cognitive science, where the desire to become “scientific”, in the same era, led to a canonical “science” in which white-room experiments and quantifiable surveys replaced actual observation of situated social behavior and attention to what people really did and said about themselves and society.

There, too, the assumptions were the same: the actual social behavior of people is driven by a “deeper” abstract level of psychological, social and cognitive processes which can be captured and tested by detaching individuals from their real-life environments and submitting them to testing procedures that bear no connection whatsoever with any actual form of social and meaningful behavior. Thus, cognitive, psychic and emotional behavior can be accurately and “scientifically” studied by putting individuals into an MRI scanner, where they stay entirely immobile and cut off from any outside stimulus for 45 minutes. The outcomes of such procedures (quite paradoxically called “empirical” by practitioners) are presented, remarkably (or better, incredibly), as accurate accounts of real, situated and contextually sensitive social and mental activity. Abstract modeling of what we could call “Psychons”, here as well, is presented not merely as a normative enterprise but as a descriptive one too, predicting (with various degrees of accuracy) Human behavior. This study of Psychons, then, is the real “science”, often rhetorically opposed to and contrasted with the “storytelling” or “journalism” of research grounded in actual observation and description of Humans (turning one of the 20th century’s most influential intellectuals, Sigmund Freud, into a fiction writer). Senior sociologists and psychologists such as Herbert Blumer and Aaron Cicourel brought powerful (and never effectively refuted) methodological arguments against this shift away from realism and towards “science” – their arguments were dismissed as unhelpful.

So here we are: knowledge disciplines concerned with Man and society appear to be “scientific” only when they deliberately reject the challenge of realism – “reality talking back”, as Herbert Blumer famously called it – and engage in abstract formalization and modeling, regardless of whether or not such formal schemes and models stand the test of empirical reality checks. Such “science”, because it dismisses this kind of systematic reality check, also becomes incapable of describing change. Experiments need to be “repeatable” in order to be “scientific”, and consequently we continuously check and test things that have to remain stable in order to be scientifically testable. The fact that actual social processes and realities are not “experiments”, and display a strong tendency to change perpetually, precludes repeatability and consequently can never be “scientifically” addressed. This feature – the bias towards stability and the incapability of addressing change – is a constant in all these “sciences”. And those who practice such “sciences” are actually proud of it. Strange, isn’t it?

We live by our mythologies, Roland Barthes famously said. One of the mythologies we live by is that of “science” being necessarily, because of its own criteria for validity, unrealistic, and therefore often outlandish and outrageous in its findings and conclusions. It would be good, therefore, to return to the old debates historically accompanying the shifts in the disciplines I mentioned here, and carefully examine the validity of critical arguments brought against these kinds of “science”. To the extent that people still believe that “re-search” means “looking again”, i.e. to be continuously critical of one’s own knowledge doctrines, this would be an eminently scientific practice.

PS (2017): Richard Thaler was awarded the Economics Nobel Prize in October 2017.


An interview with Jan Blommaert on research and activism


Jan Blommaert 

Responses to a survey on this topic, March 2015 (courtesy Tina Palivos & Heath Cabot).

How would you define or describe research and social action? Tell us a little bit about your background and your experience in both of these areas.

JB: Research is social action; the fact that the question separates both presupposes “social action” as an “abnormal” aspect of research, while research is always and inevitably social action: an action performed in a real social environment, and infused with elements from a preceding state as well as leading to effects in a posterior state.

The question, rather, is which specific type of social action research would be, and I understand your question as pertaining to what one could call “activist research”, i.e. research that is critical of existing social relations and attempts, at least within the boundaries of research, to amend or alter them, usually in favor of a more equitable or balanced idea of social relations.

Such activist research, I would argue, takes sides in the sense that, based on a preceding analysis of social relations, researchers decide to side with the weakest party in the system and deploy their research in an attempt to provide that weaker party with new intellectual tools for addressing their situation. These tools can be self-analytic – to provide an accurate analysis of the situation of systemic inferiority in which the group is placed – or general-analytic – a critical analysis of the entire system with its various positions and challenges; and such tools are invariably discursive: the forms of analysis provide new discursive, argumentative and representational tools.

Briefly describe academic knowledge or know‐how? Activist knowledge or know‐how?

JB: Knowledge is one; the discourses in which knowledge is articulated are the point here. “Activist”, as in the description above, represents a discursive scale level in which “esoteric” academic knowledge is converted into discourses of wider currency (“simpler” discourses, if you wish), without sacrificing the analytical accuracy and power of the academic discourses.

Do you see them as distinct? If yes, how? How do they overlap, if at all?

JB: Note that the function of both discourses is different; while academic discourse is there to circulate in and convince small circles of peers, activist knowledge must circulate in and convince far broader audiences and systems of mediation (e.g. mass media).

In your experience, how do these areas complement each other?

JB: Personally, I could never find sufficient satisfaction in “pure” academic work if it lacked the dimension of advocacy and appeal to broader and more complex audiences. Science does have the potential to change the world, so one should not be satisfied with just changing the academic world alone. As scientists, we all have a duty towards the power of science: to use it carefully, justly and for the benefit of humanity, not just a small subset of it. Being a scientist, for me, commits us to these fundamental humanistic duties.

In my case, I complemented my “purely” academic oeuvre always with the writing of low-threshold, Dutch-language books (12 or 13 by now), converting research achievements into texts that could be used in grassroots mobilization, professional training or general-interest reading and instruction. This activity comes with a great deal of lecturing and debating for the audiences addressed by the low-threshold books, which is both a lot harder than academic lecturing (academics are usually very civil and polite towards one another), and a lot more rewarding (convincing and changing the minds of an audience of 300 schoolteachers, train drivers or longshoremen gives one a sense of relevance rarely matched by convincing a handful of academics).

For you, what are the tensions or conflicts between activism and academic work that you have come across? What would you do (or have you done) to resolve these conflicts or tensions? 

JB: The conflicts are diverse:

– No real career bonuses can be obtained for “advocacy” work if it doesn’t come with “purely” academic aspects; a real problem, specifically for junior researchers. In my research group, we also “count” advocacy outputs.

– A permanent battle against stereotypes of the researcher as an ivory-tower fellow out of touch with “reality” (we produce “theory” as opposed to “reality”). Easy to remedy: just talk about reality and show relevance in their terms.

– Debate is far harder, more violent and sometimes highly unpleasant in the wider public arena; one must be able to withstand brutal public allegations, insults and accusations. It’s not a good place to be in for sensitive souls.

But let me also address the advantages and benefits. In my experience, a connection between research and activism improves research. If you wish to solve one single real-world problem of one single individual, you quickly discover the inadequacies of our toolkits and the demand to come up with better and more precise science. If I have ever made “breakthroughs”, it was because I had a sharp awareness of the fact that someone’s life literally depended on it. Believe me, that is a powerful engine.

What do you think are the most important and necessary ways in which research and social action could be linked, bridged, or integrated?

JB: All science should benefit humanity, general interests rather than specific ones. In methodology, we attempt to achieve this by means of generalization from isolated facts (i.e. theory). And too little is done, in actual fact, to make this mechanism into a general educational principle for all.

Are there any stumbling blocks or concerns you would have around projects that seek to bridge or bring together research and social action, and academic and activist worlds, to create modes of knowledge and collaboration? How might these be ameliorated?

JB: My very first answer addressed the presupposition underlying your question: the fact that “social action” is seen as separate from scientific action. I see this as a major problem, an “ideology” if you wish, in which research is seen as in itself value-free (“objective”), to which “value” can be added after research, either as hard cash (licences, patents, industrial contracts etc.) or as soft capital (impact on the nonacademic field, as it’s called nowadays). It is a crazy assumption which denies the fundamental sociological given of research: that it is, like any social action, a historically, socioculturally and politically situated activity. I always ask the question “why now?” when addressing research questions – how come we find this a researchable question here-and-now and not, for instance, in the 1970s or 1990s? The real answer to this question leads us into an analysis of scientists as people addressing problems from within a subjective position, defined only partly by “objective” facts of science and far more by the concrete social positions from which they attack questions and problems.

This is clearest (while often least understood) when we talk about research funding. There is a strong suggestion that external money is “neutral” in the sense that it does not pre-script research. In actual fact, it does script it substantively. If the EU opens a funding line on a particular topic, think of “security”, this funding line incorporates the current interests and needs of the EU (combating terror and transnational crime, for instance), excluding others (e.g. not combating these things). The “priorities” defined in such funding calls are always someone’s priorities, and rarely those of the scientists themselves. Scientists have to adjust to them, and this means that they have to adjust to subjective positions defined by funding bodies, within which they can then proceed to do “objective” research.

It is this myth about research – that it is in itself only “good” or “excellent” if and only if it is “value free” – that poisons the debate and the climate on science and society these days. It enables scientists to escape their accountability for what they are doing, and denies them the dialogue on effective social effects of which they should be very much part.
