Jan Blommaert

Looking back: What was important?


Two of my maîtres à penser died relatively young. Michel Foucault was 57, Erving Goffman was 60. It is highly likely that I shall die relatively young as well. I’m 58 now, and I was diagnosed with stage 4 cancer in mid-March 2020. Since there is suddenly very little future left to plan, speculate or dream about, one tends to use such landmark moments as a prompt to reflect on the past. The guiding question in this – quite an obvious one – is: what was important?

I will restrict my reflections to the professional parts of my life. This is, of course, an artificial segmentation, and readers must keep in mind that the professional part of life was always intertwined with the nonprofessional parts, often in uneasy or poorly balanced ways. Perhaps that story should be told elsewhere. For now, I will focus on the part of me that was called “academic”.

*****

Let me briefly preface what follows by reviewing what was not important.

What was not important was competition and its attributes of behavioral and relational competitiveness, the desire or urge to be the best, to win contests, to be seen as the champ, to proceed tactically, to forge strategic alliances and whatnot. I did not have a sense that I had to be part of a specific clique or network, and I don’t think I ever made great efforts to get close to people considered to be important. If I was a member of such networks, it was by accident rather than by design – it happened to me.

I never self-imagined as a genius, individually measured against others, and individually responsible for the production of superb stuff that everyone should read, know, quote and assign to students. Quite the contrary: I saw myself as unexceptional, and as someone who would always need a good team around me in order to achieve anything. Given that academic life, in my case, was not a thing I had actively desired and sought, but a gift I received from others, I felt a duty to be good, as good as I could be, and better tomorrow than today. So I worked hard, essentially taking my cues from others – the literature of course (a community of others often overlooked when we talk about academic achievement), but also contacts and friends with whom teams could be formed. Discussion and brainstorming were my favorite activities; they were in the most literal sense the ludic, fun, pleasure dimensions of academic life. What I did alone, usually, was the slow and careful analysis of data. But that’s the only thing that’s really individual in a range of activities that were collective and involved intense sharing, exchange and generosity. And even that thing – the data analysis – was usually submitted to the judgment of others before it could be publicly shown. So much for being the lone, unique and autonomous genius researcher.

In such contexts of collective sharing, conditioned by maximum generosity, changing one’s mind is self-evident. The very point of having a discussion or brainstorm – an “exchange of ideas” – is that ideas can be exchanged and changed, and that one leaves the session with better things in one’s head than before the session. Learning is the key there, and if I were to pin one label onto myself, it would be the label of an eternal, insatiable learner.

Which is why I read massively all through my life. And while part of that reading was “just” reading, another part was studying. Most of my career, I was involved in some kind of study, collecting and selecting writings from which I wanted to draw advanced insights, useful for the research projects I was engaged in. I studied, for instance (and the list is not complete), structuralism, existentialism, phenomenology, arcane things such as the works of Rudy Botha on Chomsky and the Functional Grammar attempts of Simon Dik, Talmy Givon and M.A.K. Halliday; but also the entire oeuvre (or, at least, most of what I could get) of Michel Foucault, Carlo Ginzburg, Bakhtin, Freud, Durkheim, Simmel, Parsons, Eric Hobsbawm, E.P. Thompson, Pierre Bourdieu, Charles Goodwin, Dell Hymes, Michael Silverstein, Erving Goffman, Aaron Cicourel, Harold Garfinkel, Anne Rawls, Fernand Braudel, J.K. Galbraith, Immanuel Wallerstein, Arjun Appadurai and several others. I studied Marx and Marxism in its very diverse varieties, Rational Choice, Machiavelli, Darwin, G.H. Mead’s work and influence, Dewey, Paulo Freire, Ngugi wa Thiong’o, Okot p’Bitek, Walter Rodney, Issa Shivji and quite a bit of African political theory from the 1950s, 1960s and 1970s. In order to understand a lot of that, I had to study the works of Mao Zedong and the history of the Cultural Revolution in China. And so on, and so forth.

If I have regrets now, it is about the fact that some of those studies will remain unfinished. I took great pleasure from them.

I disliked and dislike – intensely – the development of the academic industrial culture that I witnessed throughout my career, with almost-totalized individualization of academic work and performance measurement, with constant inter-individual competition driving young and vulnerable colleagues to extreme and dangerous levels of stress and investment in work rather than life, and with managers emphasizing – without any burden of evidence – that the “single-authored journal paper” (published, evidently, behind a huge paywall) is the pinnacle of academic performance and the gold standard for measuring the “quality” of an individual researcher. Added to this – and I witnessed this, too – is the growth of a veritable celebrity culture in academia, in which mega-conferences take the shape of pop festivals with rockstar headliners bringing their greatest hits in front of an audience of poorly paid struggling academics who spent their personal holiday budgets purchasing a ticket for such events. Little truly valuable intellectual work is going on there. And identical to pop festivals, the carbon footprint of such academic rock concerts is scandalous.

Frankly, all of this is in its simplest and most elementary form anti-academic and anti-intellectual. It’s the recipe for bad science, not for innovation and improvement. I participated in all of it, for all of it became “new” while I was active – it was the culture that defined my career. That culture defined me as one of these rockstars for a while, and thus placed me quite consistently in the company of a small coterie of similar rockstars. It is not a thing I shall miss, for it was invariably awkward and alienating, and very often incredibly boring. And this new culture took away and delegitimized a previous culture, one of collegial dialogue, collaboration, slowness, time to think, to reflect and to doubt, periods of invisibility and absence from public stages – because one was doing some serious bit of research, for instance. And a culture in which one would write something whenever, and because, something new had to be reported, not because one needed to achieve one’s annual output quota or another “top” paper in order to be eligible for promotion, tenure or appointment.

A footnote: another part of that defining culture was university reorganizations, managerialization and budget cuts, with an increasing rat race for jobs (for which the intellectual world pays a terrible price), “customer-oriented” academic programs that had to be checked by the marketing guys as to their merits in a market of academic products, the decline of vital academic “support staff” and the almost-complete commodification of academic output – see the point about “single-authored journal papers” above, and one can add the metrics and impact mania to it. Academic publishing, as an industry, has become a disgrace and is an obstacle to science, not a facilitator (let alone an indispensable actor). Publishing has become a form of terror for young scholars, while it should be an instrument for liberation, for finding their voice and feet in the business. Burnout has now become an endemic professional hazard in academia, much like depression, unhappy human relationships and unhealthy lifestyles. Academia has become a highly unattractive environment for human creativity, while it should be a specialized environment ideally tailored to precisely that.

*****

So that was unimportant. The important things can be summarized in a few keywords: to give, to educate, to inspire. I will add a fourth keyword later.

As I said earlier, my academic life was a gift I received from others. It was unexpected as a gift, and I was unprepared for it. When I received my first academic job in 1988, I mainly looked at people I considered bad examples, and I decided not to do things the way they did. I essentially decided to be the kind of academic I myself would like to encounter if I were a student. If I had to teach, I should teach the kind of class I myself would love to attend as a student. And if I had to write, I should write texts I myself would enjoy reading. It’s a simple discipline I maintained throughout my career: it’s never about me, it’s always about the student, and my role is to give the student tools and resources useful and valuable for that student, not for me.

I realized early on that my role in the lives of the young people who were my students was that of an educator, not just a lecturer or a teacher. And once I realized that, I took it very seriously. I meticulously prepared every course I ever taught (and there were many), and I always rehearsed every lecture. I never walked into a lecture hall without a fully developed story and a script in mind for how to deliver it. If you have to teach, teach, and do that in a no-nonsense way. Make every minute of the class a moment worth attending for students, and make sure that they learn something in each of your classes. That sounds simple and straightforward, but it isn’t. It’s actually quite a tall order.

It starts from a refusal to underestimate your students. Many of my former students will remember that I would start a course by announcing that I would aim just one inch above their heads, so that they would have to stretch a bit in order to keep up with the pace and content of the course. I always did that: I gave students readings, contents and assignments often judged by colleagues to be too demanding or “above their level” – first-year students would have to read a book by Foucault, for instance. Well, the fact is that they did, and they learned massively from it. So what precisely “their level” is, usually and preferably remains to be determined after the process of learning, not prior to it. Prior to it, no one is “ready” for specific chunks of knowledge; they become ready through the work of learning. Not understanding this elementary fact, and assuming that students “have” a particular level that we, teachers, need to adjust to, is a dramatic error. In my career I have seen very often how this error leads to the infantilization of exceptionally talented young people, and to learning achievements that were a fraction of what could have been achieved. Please never underestimate your students.

Instead, give them the best you have to give. That means: don’t give your students old and pedestrian information, but give them your most recent and most advanced insights and thoughts. Draw them into the world of your current research, expose them to the most advanced issues and discussions in the field, show them complex and demanding data, and allow them into your kitchen, not just into your shop. For large parts of my career, I had a huge teaching load. I could only keep classes interesting for students and for myself by establishing direct and immediate links between my ongoing research and my teaching. I would take half-finished analyses of new data into the classroom, and finish the analysis there, with my students, allowing them to see how I made mistakes, had to return to earlier points, skip some particularly tough bits, and so forth. The good thing was: my frequent classes did not entirely eat away my research time; they were research time, and students were exposed to a researcher talking about a concrete and new problem that demanded a solution.

*****

It is at this point, I believe, that “teaching” turns into “education”. As teachers, we do not “transfer knowledge” and we’re not, in that sense, a sophisticated or awkward kind of bulldozer or forklift by means of which a particular amount of resources is taken from one place (ours) to another (the students’ minds). This is how contemporary academic managerialism prefers to see us. I have already rejected it above.

No. Whether we like it or not, we are much, much more than that for our students, and we have to be. All of us still remember many of our teachers, from kindergarten all the way to university. Some of our memories of them may gradually fade, and some of the teachers may only survive in our memories as vague and superficial sketches attached to particular moments in life. But some of these teachers are actually quite important in the stories we build of ourselves; and of such teachers, we sometimes have extraordinarily extensive and detailed memories. Even more: some of these teachers served (and serve) as role-models or as people who defined our trajectories and identities at critical moments in life. And when people talk about such teachers, we notice how closely they observed and critically monitored even the smallest aspects of behavior of their teachers; their actual words and how, when and why they were spoken; particular gestures made or faces pulled; pranks or surprises they created, and so forth.

I became very aware of the fact that, as a teacher, I will be remembered by my students. I knew, at every moment of interaction with students, that this moment would leave a trace in their development and would often be given a degree of importance it never could have for me. In sum, I realized that, as a teacher, every moment in which I interacted with students would be a moment of education, of the formation of a person, using materials I would be offering to them during that specific moment of interaction. My entire behavior towards them would potentially be educational material in that sense. And my entire behavior towards them, consequently, needed to be organized in that sense. I should allow students to get to know me – at least, get to know a version of me that could be remembered as someone who positively contributed to their development as adult human beings. Respect, courtesy, integrity, professional correctness, empathy, reliability, trustworthiness, commitment: all of these words stand for behavioral scripts that demand constant enactment in order to be real.

Several times in my career, students told me what could best be called “secrets”, highly delicate personal things usually communicated only to members of a small circle of intimates. Twice, young female students came into my office in deep distress, announcing that they had been raped – and I was the first person they called upon for help. While such moments were of course disorienting and caught me cold, they taught me that as a teacher I was very much part of students’ lives, in ways and to degrees I never properly realized. And they taught me about the huge responsibilities that came with it: we are so much more than “academics” for these young people; we are fully-formed human beings whose behavior can be helpful, important, even decisive for them. We should act accordingly, and not run away from this broader educational role we have.

*****

The third keyword is “to inspire”, and I need to take a step back now. I mentioned the delight I always took in studying. The real pleasure I took from it was inspiration – other scholars and their works inspired me to think in particular directions, to think things I hadn’t been able to think before, to do things in particular ways, to explore techniques, methods, lines of argument, and so forth. Let me be emphatic about this. I can’t remember ever studying things in order to follow them the way a disciple follows the dictates of a master or an apprentice follows the rules of a trade – or at least, I remember that each attempt in that direction was a dismal failure. I was never able to absorb an orthodoxy, and to become, for instance, someone happy to carry the label of – say – critical discourse analyst or conversation analyst.

Whenever I studied, I wanted to be inspired by what I was studying, and I described inspiration above: it’s the force that suddenly opens areas and directions of thought, shows the embryo of an idea, offers a particular formulation capable of replacing most others, and so forth. Inspiration is about thinking, it is the force that kickstarts thinking and that takes us towards the key element of intellectual life: ideas. And science without ideas is not science, but a rule-governed game in which “success” is defined by the degree of non-creativity one can display in one’s work. The exact opposite, in other words, of what science ought to be. Science can never be submissive, never be a matter of “following a procedure” or “framework”. It is about constructing procedures and frameworks.

There were many moments in my career when graduate students would introduce their work to me, and preface it by saying things such as “I am using Halliday as my framework”. Usually, my response to that was a question: “how did Halliday become a framework?” And the answer is, of course, by constructing his own framework and refusing to follow those designed by others. People who “became a framework”, so to speak, took the essential freedom that research must include and rejected the constraints often mistaken for “scientific practice”. The essential freedom of research is the freedom to unthink what is taken to be true, self-evident and well-known and to re-search it, literally, as in “search again”. It is the freedom of dissidence – a thing we often hide, in our institutionalized discourses, behind the phrase “critical thinking”. I see dissidence as a duty in research, and as one of its most attractive aspects. I believe it is exactly this aspect that still persuades people to choose a career in research.

Inspiration draws its importance from the duty to unthink, re-search, and question, which I see as the core of research. We can make the work of unthinking and re-searching easier (and more productive, I am convinced) when we allow ourselves to draw inspiration from that enormous volume of existing work and the zillions of useful ideas it contains, as well as from interactions with friends, colleagues, students, peers – allow them to affect our own views, to shape new ones, to help us change our minds about things. And in our own practices, we should perhaps also try, consciously and intentionally, to inspire others. I mean by that: we should not offer others our own doctrines and orthodoxies. We should offer them our ideas – even if they are rough around the edges, unfinished and half-substantiated – and explain how such ideas might fertilize – not replace – what is already there.

I have quite consistently tried to inspire others, and to transmit to them the importance I attached to inspiration as a habitus in work and in life. In my writings, I very often sought to take my readers to the limits of my own knowledge and give them a glimpse of what lies beyond, of the open terrain for which my writings offered no road map, but which my writings could help them to detect as open for exploration. This has made parts of my work “controversial” and/or “provocative” – qualifications that are usually intended to be negative but inevitably also articulate a degree of relevance and suggest a degree of innovation. I was usually quite happy to receive these attributions, and they never irritated me. It also never irritated me when I found out that someone I engaged with in conversation did not know me well, had not read my work and did not pretend to have read it. Usually, those were among the more pleasant encounters.

*****

These three things were definitely important to me in the professional part of my life: making a habit of giving, sharing and being generous in engaging with others; being aware of my duty to educate others and of the responsibilities that come with that, and to take that duty very seriously; and taking inspiration as a central instrument and goal of academic and intellectual practice. I can say that I have tried to apply and implement these three aspects throughout my career; I cannot claim to have done so faultlessly and perfectly – there is no doubt that I made every mistake known to humanity, and I am not speaking as a saint here. But the three elements I discussed here were – now that I can look back with greater detachment – always important, always guiding principles, and always benchmarks for evaluating my own actions and conduct.

*****

I now need to add a fourth keyword: to be democratic. It’s of a slightly different order.

I grew up and studied in the welfare-state educational system of Belgium, and given the modest socio-economic status of my family, I would probably never have received higher education in other, fee-paying systems. I’m very much a product of a big and structural collective effort performed by people who did not know me – taxpayers – and regardless of who I was. I am a product of a democratic society.

I remained extremely conscious of that fact throughout my adult life, and my political stance as a professional academic has consistently been that I, along with the science I produce, am a resource for society, and should give back to society what society has invested in me. “Society”, in this view, includes everyone and not just a segment of it. It is necessarily an inclusive concept. And science in this view has to be a commons, a valuable resource available to everyone, an asset for humanity. Practicing this principle became increasingly difficult because of the developments I already mentioned above: the rapid and pervasive commodification of the academic industry during my career. Academic institutions, and academic work, have become extraordinarily exclusive and elitist commodities, and academic work that refuses the limitations commensurate with this commodification is, generally speaking and to understate it, not encouraged. I’ll return to this below, but I need to continue an auto-historical narrative first.

Working a lot in Africa and with Africans throughout my career, I needed no one to tell me that knowledge, certainly in its academic form, was not available to everyone, and that a large part of humanity was offered access only to hand-me-downs from the more privileged parts. One can take this literally: many of the school books used in the early and mid-1980s in Tanzania were books taken off the syllabus in the UK and shipped – as waste products, in effect, but under the flattering epithet of educational development assistance – to Tanzania. And almost any student or academic I met at the University of Dar es Salaam (which became my second home for quite a while in the early stages of my career) would answer “books, journals” to the question “what is it you lack most here at the university?” Bookshelves in departments were indeed near-empty (even in so-called “reading rooms”), and the small collections of books privately held by academics (usually collected while doing graduate work abroad) were cherished, protected and rarely made available to others. In the university bookshop on campus, shelves were also empty, supplies were dismal and most of the collection on offer was dated. (Its most abandoned and dusty corner, however, became a treasure trove for me, for that was where cheap editions of the works of Marx, Lenin and Mao Zedong could be found, donated long ago by the governments of the USSR, the GDR and China.) My own working library at home – the working library of a PhD student – was several times larger than some of the departmental collections I had seen in Dar es Salaam. To the extent that “white privilege” has any meaning, I had a pretty sharp awareness of it from very early in my career.

Inequality became the central theme in my work and academic practice from the first moment I embarked on it. And I never abandoned it. I wanted to understand why understanding itself is an object of inequality. Concretely, I wanted to understand why the story of an African asylum applicant was systematically misunderstood and disqualified by asylum officials in Belgium and elsewhere; why the stories of particular witnesses in the South African Truth and Reconciliation Commission were seen as “memorable” while others were forgotten or never taken seriously; why so many stories from the margins are considered not even worth the effort of listening to, let alone to record and examine; why some groups of people are not recognized as interlocutors, as legitimate voices that demand respect and attention, and so forth. This general concern took me, during my entire career, to the margins of societies I inhabited and worked in, and made confrontations with racism, sexism and other structural forms of inequality inevitable.

It also led to various practical decisions about how I organized my work. I will highlight three such decisions.

One. My experiences in African universities made me very much aware of the existence of several academic worlds, not the idealized single “academic community” sometimes invoked as a trope. And I decided to spend a lot of my efforts working with, and for the benefit of, what is now called the Global South. I am proud of official work I did with the University of the Western Cape in 2003-2008, where I coordinated a very big academic collaboration project on behalf of the Flemish Inter-University Council. UWC is a historically non-white university, and it still bore the scars of apartheid in 2003: the university was severely under-resourced and lacked the infrastructure as well as the experience for building a contemporary research culture. Working in very close concert with the local university leadership – the most inspiring and energizing team of academic leaders I had ever met, and lifelong friends since – I believe we were able to turn the ship around. In the process I got to know a large community of amazing people who taught me a lot about what real commitment is – from Chancellor Desmond Tutu down to Allister, the man who acted as my fixer and driver whenever I was in Cape Town.

Informally, I did my best to work with and for scholars and institutions in the Global South, slowly building networks of contacts in several countries and trying to be of assistance in a variety of ways. The people I encountered through these networks usually didn’t have the money to travel to conferences where I appeared, nor the money needed to purchase my books. And this takes me to a second decision.

Two. I wanted to make my work available in open access and to create genuinely democratic mechanisms of circulation and distribution. Remember what I said earlier about science as a commons: I take that seriously. So, from very early on, I started working paper series that enabled high-quality material to circulate outside the paywalls of commercial publishers. And as soon as the web became a factor of importance in our trade, I used it as a forum for circulation and distribution. Everything I write is first posted on a blog (this blog), and then usually moves to a working paper format in the Tilburg Papers in Culture Studies, before it finds its way into expensive journals or books. I also became an early mover on academic sharing platforms such as Academia.edu and ResearchGate. And I am proud to see that a large segment of those who read and download my materials are scholars from the Global South – those who can’t afford the commercial versions of my work.

But my obsession with open access is not restricted to the issue of Global South readerships. My own students, working with me at a well-resourced university in an affluent country, cannot afford to buy my books. As I said earlier, the academic publishing business has become a disgrace, and it excludes growing numbers of people who absolutely need access to its products. I saw it as part of my duty to subvert that system, to share and distribute things usually not free to be shared and distributed, and to do so early on with recent material. For making old texts widely available is good and useful, but the real need for scholars in very large parts of the world is to gain access to the most recent material, to become part of ongoing debates, to align their own research with that which is cutting-edge elsewhere. And the academic publishing industry makes brilliant, truly majestic efforts to prevent exactly that.

We should not be part of that industry, we should not be its advocates and we should not feel obliged to serve that industry’s interests. We are its labor force, and we provide free, unpaid labor to it. We sign contracts with them – non-negotiable ones, usually – in which all rights to our own work are handed over, appropriated and privatized – in return for a DOI number and a PDF. We are exploited by that industry to an extent that most other sane people find ridiculous. Yet, if we do a little bit of creative work, we don’t need that industry any longer. As academics, we have an idea of the audiences for our work out there that is far more precise than that of any marketing officer in an academic publishing firm. We also have a very good idea of who might be knowledgeable and reliable reviewers of our work. And we just need a website to post our work when it’s ready for publication – offering it free of charge and without constraints on sharing to anyone interested in it, not just to those who have paid a certain amount of cash for it.

Three. Throughout my career, I never stopped addressing non-academic audiences. I gave literally hundreds of lectures, workshops, training sessions and public debates for professionals and activists in a range of fields – education, social work, care, law, policing, antiracism, feminism, support to refugees, youth organizations, trade unions and political parties. As a rule I did so without charging a fee (see what I said earlier about giving things back to society), and the default answer to invitations was “yes”. I always found such activities rewarding, and the audiences I met through such activities were often extraordinarily energizing ones. I also continued to write materials in Dutch. Over a dozen books, if I am not mistaken, and piles of articles – all written for lay audiences, often based on my ongoing research, and often used in professional training programs. It was my way of trying to bring recent science to a broader public forum quickly. For social workers or teachers in multilingual classrooms should not be given information that was valid a decade ago; they should get the most advanced insights and understandings available and take these into their practices.

I used a label for the things I mentioned in this section. I called it “knowledge activism”. In a world in which knowledge is at once more widely available than ever before, and more exclusive and elitist than ever before, knowledge is a battlefield and those professionally involved in it must be aware of that. Speaking for myself: a neutral stance towards knowledge is impossible, for it would make knowledge anodyne, powerless, of little significance in the eyes of those exposed to it. Which is why we need an activist attitude, one in which the battle for power-through-knowledge is engaged, in which knowledge is activated as a key instrument for the liberation of people, and as a central tool underpinning any effort to arrive at a more just and equitable society. I have been a knowledge professional, indeed. But understanding what I have done as a professional is easier when one realizes the activism which, at least for me, made it worthwhile being a professional.

*****

I will stop here. I have reviewed four things that I found important, looking back at a career as an academic that started in 1988 and is about to end. As I said, one should not read this review of important principles as the autobiography of a saint. I was evidently not perfect, made loads of errors, have been unjust to people, have made errors of judgment, have indulged in a culture of academic stardom and overachievement which I should have identified, right from the start, as superficial and irrelevant; I have been impossible to work with at times, grumpy and unpleasant at even more times, and so on. I am an ordinary person. But I do believe that I can say that I tried really hard to organize the professional part of my life according to the four points discussed here, and that the attempt, modest as it was, made that part of my life valuable to me. The satisfaction I draw from that is sufficient to end that part of my life without remorse, and without a sense of having missed out or of having been short-changed by others. I am happy to stop here.


When your field goes online:

Ethnographic fieldwork in the online-offline nexus

Jan Blommaert & Dong Jie

(Draft postscript to Ethnographic Fieldwork: A Beginner’s Guide. Second and enlarged edition. Bristol: Multilingual Matters, in press)

When we wrote the first edition of Ethnographic Fieldwork in 2008-2010, social life was still very much seen as an offline affair. People used to refer to the digital world as the virtual one, implying that it was in some way not part of the real world. As for new media, Facebook was an infant and the iPhone was a toddler when we wrote the book, and social media activities were widely seen as a relatively irrelevant add-on to ‘real’ (read: offline) social life.

The online-offline nexus

A decade later, this can obviously no longer be maintained. The online world is now fully integrated with the offline one, in the sense that very few of our ordinary, everyday activities proceed without being in some way affected by online infrastructures; and very many of such activities can only proceed due to the existence of such online dimensions of life. From taking photographs with our smartphones to checking the weather app, the traffic app, or our daily fitness routine app, and from online shopping, travel booking, banking and reading to quick searches (aptly called, in many places, “Googling”), to TV-on-demand binging, vloggers and influencers, livestreamed events and commercial as well as political campaigns waged on social media – our social, cultural, economic and political lives have changed dramatically. The widespread use of social media has transformed the media and popular culture landscapes globally and has shifted the boundaries between the private and the public spheres. And each action we perform online, however minute, generates data that are aggregated into new systems of surveillance and control and affect our lives in mostly invisible ways. Note that while such developments are spread unevenly across the globe, there are few places in the world where they are not experienced to some degree.

These phenomena are by now well documented, so we don’t think that a full survey of them is warranted here. The fundamental fact we have to take on board is: we live our lives largely in an online-offline nexus, in which both dimensions are equally vital and indispensable. Yet, when it comes to social theory and method, we still very much continue to approach these lives from within frameworks developed to describe and analyze an offline world – and ethnography is no exception to this (Kaur-Gill & Dutta 2017; Blommaert 2018; Varis & Hou 2019). This is not unusual: theory is always slow to catch up with changing realities, and theories that incorporate change as a fundamental given are few and far between. The same goes for method: scholars are usually reluctant to surrender tools of investigation of which they believe that they worked adequately in the past.

When it comes to ethnographic fieldwork, however, we cannot avoid issues of theoretical and methodological adequacy, for a very simple reason: in the online-offline nexus, the field where we do our fieldwork has gone online, and we need to follow that route if we wish to adequately address what it is we observe and analyze.

In what follows, we will offer three reflections on this new field and show how they complicate matters for ethnographers (and others). To be sure, things were complicated enough in an offline field; when we incorporate the online field, however, several new things require focused attention. We need to add some question marks to three seemingly unproblematic things: what do we see? Who is there? And where are we in an online-offline fieldwork site?

What do we see? The compelling bubble

The first complication is caused by what is known as the ‘bubble effect’: whenever we go online, we find ourselves in a space the structure and composition of which have been configured algorithmically, on the basis of data profiles for specific users, machines and software tools. And this is an absolute given: there is no actual PC or smartphone in the world that offers its user an unrestricted view of the online world. Not to put too fine a point on it: whenever you go online on any device, anywhere and anytime, you will encounter bias, and there is simply no neutral and unbiased position of observation possible in the online world. This is worth remembering: using the PC of, say, your local public library to do online research doesn’t remove the bubble effect. It merely (largely) removes your own particular bubble effect, the one affecting actions on your usual devices due to your particular history of use of these devices; but it replaces it with the bubble of the other specific computer, network and community of users who worked on it before you logged on.
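
To make the mechanism concrete, here is a minimal sketch in Python of how such a bubble can arise. Everything in it is hypothetical – the articles, topics and ranking rule are invented for illustration and do not describe any real platform’s algorithm – but it shows the principle described above: the same pool of content is ordered differently for each user, on the basis of that user’s recorded history, so that no user ever sees a ‘neutral’ ordering.

```python
# Hypothetical sketch of a 'bubble effect': one shared content pool,
# ranked differently per user on the basis of each user's history.
from collections import Counter

ARTICLES = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "football"},
    {"id": 3, "topic": "religion"},
    {"id": 4, "topic": "cooking"},
]

def rank_for(history):
    """Order the shared pool by how often each topic occurs in a user's history."""
    prefs = Counter(history)
    return sorted(ARTICLES, key=lambda a: prefs[a["topic"]], reverse=True)

# Two users issuing the 'same' request see differently ordered worlds:
print(rank_for(["politics", "politics", "football"]))  # politics ranked first
print(rank_for(["religion", "religion", "cooking"]))   # religion ranked first
```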

Now, we did spend quite a good amount of time in the previous chapters explaining that bias is a normal and altogether not too problematic feature of any ethnographic fieldwork, and that the response to it must be awareness of bias. Remember: ethnographic knowledge is inter-subjective knowledge, co-constructed by all participants in the event. Seen from that perspective, the bubble effects are mere extensions of the inevitable bias inscribed in our fieldwork practices. But let’s remove the word ‘mere’ from the previous sentence, for the extension we see is an extension in another direction – a shift, in other words. And the shift has to do with the meaning of inter-subjective in what preceded. When you interact with an online device, you’re not interacting with a particular person whose subjectivity (and, of course, bias) can be to some extent explained and understood in terms of one’s social, cultural, personal backgrounds – the ‘context’ as we know it from the literature. You’re interacting with a machine that incorporates and creates contexts that require very different modes of interpretation.

In a moment, we shall be more specific – and constructive – about this problem of online contexts. But for the moment, let’s take this on board: going online takes your field in a direction which is not in any way a direct reflection of the offline contexts you, as an ethnographic fieldworker, got accustomed to through intense interactions with the people you work with; it sends you into a different sociocultural realm and confronts you with modes of bias that are sometimes impossible to understand, let alone anticipate or predict in research. The social facts we can observe online are mediated and curated by technologies in complex synergies with their users. Overlooking this point (and it is compelling) can cause you some trouble in making sense of what goes on in the lives of the people you work with in fieldwork.

But there is more.

Who is there?

In the tradition of social research, one thing used to be quite straightforward: the identity of the people one did research with or about – the ‘population’ in one’s research. People’s identities were known, and researchers could believe that they knew them well. So well, in fact, that we could anonymize them in our research outcomes, and that we felt compelled to do so because our research had actually revealed so much about them that they could be construed as identifiable individuals. Anthropological ‘informants’ were only useful, so to speak, when a measure of intimacy had been established between the anthropologist and the ‘informant’ allowing more than mere superficial knowledge to be exchanged.

This knowledge of the population was grounded, as Michel Foucault (2008) described, in some of the great structures of modernity: nation-state bureaucracy and its elaborate inventories of people residing on the state’s territory. From birth certificates through school reports, hospital records, police files and intelligence reports, passports, tax returns, occupational, demographic and income data and the cyclical census: one of the purposes (indeed, needs) of the modern state was comprehensive knowledge of its population. An elaborate bureaucratic infrastructure served that purpose, and statistics emerged as the science that could answer the questions arising from all that. As the name itself reveals, statistics was the science of the state. And statistics came up with methodologically refined tools such as the sample to turn knowledge of the population into measurable, user-friendly units with almost infinite opportunities for application.

All of this was achieved in an offline world; the present online-offline nexus poses some serious problems. The first one is infrastructural. Whereas states used to be unchallenged when it came to gathering and elaborating knowledge at a very high scale-level – that of the entire population – this monopoly has vanished. The state now competes with (and often relies upon) private corporate actors when it comes to such high-scale level knowledge. It is the likes of Google, Microsoft, Huawei, Facebook and Weibo who are the great data collectors and analysts presently: companies who collaborate with the state but who are formally independent from it, and who have the capacity to independently develop (as well as own and sell for profit) big data handling and machine learning tools and products. Knowledge of populations nowadays is distributed over more actors, many of which fall outside the raison d’état which Foucault saw as the engine behind modern population studies.

Such private actors can and do impose rules of their own – the scale level we used to define as ‘public’ is now governed by a range of different and sometimes conflicting modes of governance. And such new modes of governance deeply affect this self-evident part of social studies: knowledge about who is involved in social action.

As all of us know, the online world is populated by people operating through an alias. Trolls and members of obscure debating groups in the darker corners of the Web instantly come to mind; we also know that some online platforms are very vulnerable to interventions by automated bots and hired clickfarm operators sending out updates and responding to them; but in many cases there are also strong social and political incentives to remain anonymous when engaging in online activities. One’s employer may not be amused when an employee regularly posts social media updates criticizing the company or articulating views that can be perceived as damaging to the company’s interests; security forces may be alerted by strong political criticism voiced by people online; or one’s spouse would not appreciate one’s active presence on dating sites. In online environments where people are aware of surveillance and censorship, one’s mere presence on a forum can be experienced as risky, and participants will adjust their behavior accordingly – primarily by hiding identity features that might lead to easy identification (cf. Du 2016). The effect is: billions of online ‘profiles’ about whom interlocutors cannot assume any identity feature with any degree of certainty: the exciting 24-year-old woman with whom one flirts on a dating site might actually be a 55-year-old, married and quite boring man. And the revolutionary activist who eagerly invites and endorses your politically inflammatory updates might actually be a state security agent.

At the frontstage of the online world, identity uncertainty rules. The real identities of online actors are, as a rule, only known backstage by institutional actors: by internet and platform providers, the authorities and the security services. But hackers prove on a daily basis that even that level of certainty about who is online is not entirely bulletproof.

As said before, the online world provides entirely new contexts for all of us. The effects for fieldwork are momentous. While, in offline fieldwork, you can ask friends and neighbors, or colleagues and bystanders for information about particular individuals, your opportunities for doing so in online research are extremely limited – you can never be sure that the ‘neighbor’ you invite to offer background information about someone is, in effect, a neighbor at all. So as a rule, you can only observe what you see people do in online fieldwork sites. Getting feedback about who did what, however, is terribly difficult and – to add to the mess – not very reliable. For the online sources you’d approach for such feedback are almost by definition as elusive as the target of your inquiry with them. The fieldworker, consequently, is often reduced to the role of witness rather than that of investigator, and left with very few tools for upgrading one’s role from witness to investigator. So take this as a given: in online fieldwork it is immensely difficult to establish the intimate knowledge one can construct about offline respondents.

But there is more, and we need to return to the bubble effects we discussed earlier. Recall what we said there: the bubble shapes a context for social action on the basis of ‘profiles’ created by data aggregations. So here is yet another level of backstage identity construction: one not directly performed by ourselves but imposed on us by machines and influencing what we can do and effectively do online. Obviously, this also affects what a fieldworker can observe online.

Let us make this a bit clearer. The bubble brings people into your orbit whose profiles have been constructed by algorithms. These people are, also in official parlance, ‘data subjects’ constructed out of hypothetically common features based on aggregations of users’ data. As we said before, the criteria by means of which people are connected to aggregations of data are very difficult to get access to – it is safe to assume that we cannot know the grounds on which algorithms judge that certain people are similar to us, share interests, behavioral or character traits sensed to be compatible with ours, and could be brought into some kind of community alongside us. We can provide educated guesses, no more. But since bubble effects are inevitable, the upshot of all of this is that we observe very peculiar, curated social facts, full of uncertainties about who is involved in their performance. And note that the uncertainty about who is there in your online fieldwork site is individual as well as collective; it applies to the actual interlocutors whose online actions you observe, as well as to the communities that fill the bubble in which you roam.
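
A small, hypothetical sketch may clarify how such ‘data subjects’ can be constructed. The users, traces, similarity measure (a simple Jaccard overlap) and threshold below are all invented for illustration; real profiling systems are opaque and far more elaborate, as noted above. The point is only that people end up grouped by overlapping behavioral traces, not by anything they declare about themselves.

```python
# Hypothetical sketch: constructing 'data subjects' by overlap in
# behavioral traces, regardless of self-declared identity.

def jaccard(a, b):
    """Overlap between two sets of traces, from 0 (none) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

traces = {
    "user_a": {"curry_recipes", "football", "travel_deals"},
    "user_b": {"curry_recipes", "football", "gardening"},
    "user_c": {"theology", "gospel_music"},
}

THRESHOLD = 0.4  # arbitrary cut-off for 'similar enough'
pairs = [(u, v) for u in traces for v in traces
         if u < v and jaccard(traces[u], traces[v]) >= THRESHOLD]
print(pairs)  # [('user_a', 'user_b')] -- a 'community' neither user chose to join
```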

Imagine now that you’d wish to run a survey online, using a platform such as Twitter. How will you construct a reliable sample in which sociological diacritics such as gender, age, location, education background and religion are adequately spread – when none of this can be established with certainty? How can you reach ‘everyone’ whenever you attempt to speak about a population – when you are mindful of the bubble effect? How can you even identify individual actors when the same person can have eight different Twitter accounts? And how can you be sure that ‘@EddieJones1991’ is not the 28-year-old Welsh accountant he claims to be, living in Liverpool with his wife and two young kids and enthusiastically endorsing the Tories, but in fact an automated bot or a clickfarm account operated from Bangalore, India?

All of these issues about who is who online dislodge the certainties used as baseline assumptions in more than a century of social research, and they render forms of research still hanging on to such assumptions very doubtful indeed. In a moment, we shall offer some hope for ethnographically inclined researchers. But first we need to address a third major complication of the online-offline nexus.

Where are we? Invisible lines

Let us briefly recapitulate. We have seen that the online-offline nexus seriously complicates two things we used to consider rather unproblematic in offline fieldwork: what we (can) observe, and who is involved in what we observe. The bubble effect and the uncertainty about participants in social action online render both highly problematic now, and they must serve as a critical check on the kinds of claims we believe we can make in our research. There is a third obvious dimension of social action which is profoundly distorted by the online-offline nexus: the site where we perform our research.

For evident reasons, the site of fieldwork used to be perhaps its least problematic aspect. As outlined in the previous chapters, we used to choose a place for our research based on prior knowledge and a round of thorough preparatory study. Next, we would pack our gear and head off to that place. Yes, we emphasized, the actual meaning of that place would change during fieldwork as a result of accumulated knowledge – the school we chose as our site would gradually transform into a more complex habitat for those involved in the activities in that school, including the fieldworker. But in many ways, our choice of fieldwork site would define and constrict our assumptions about participants and the actions they engage in. We knew that, to stick to the example of a school, some transcontextual analysis was required, for many of the actions performed locally (and offline) by teachers, pupils and other local stakeholders would be inflected by things such as education policy, management principles and other forms of external pressure and influence. In the online-offline nexus, however, the meaning of ‘transcontextual’ has changed quite profoundly.

Two dimensions of this change need to be identified. In both instances, the guiding question is: how can we understand what goes on in our chosen fieldwork site?

The first dimension has to do with the nature of the activities we observe locally. Let us start with an anecdote. A little while ago, one of us was required to check attendance at the start of a class. The usual sign-up sheet started moving slowly through the lecture theater, and after a few minutes, suddenly two students came hurrying into the hall – alerted by their colleagues’ hastily written smartphone messages telling them that their presence was mandatory. A local action – taking attendance – was ‘exported’, so to speak, to different places elsewhere by means of online connections, and resulted in a reconfiguration of the local activity – two students joining the class.

This anecdote shows us that in the online-offline nexus, there are invisible lines connecting offline spaces with translocal ones; and that local activities are almost invariably influenced and shaped by translocal ones. Converted into the vocabulary we used above, we see how offline activities are almost invariably influenced and shaped by online ones. Such influences can be material, as in our anecdote in which a material space as well as its population get reconfigured due to online signals given by students. But even more frequent are immaterial effects of online activities on offline ones: knowledge effects, as when we cook a curry after having read several online recipes and watched some YouTube tutorials, or as when our car’s GPS system directs us to take another route due to dense traffic on the normal one. The internet is primarily a learning environment from which we extract (and on which we upload) tons of bits of information, instructions and normative judgments about how certain things should best be done (Blommaert & Varis 2015). In a formal sense also, the online world is a learning environment. Try to imagine studying without access to online resources these days, from downloadable research publications and Wikipedia to simple Google searches – the contemporary world of learning is an online-offline one.

These learning environments have immediate effects on locally performed actions, as we have seen in the anecdote above. And these effects are inflected by the features we discussed earlier: bubble effects and algorithmically configured profiles creating peculiar forms of ‘truth’ and norms within often elusive online communities, with immediate feedback effects. To illustrate the latter: if you want to cook a Thai dish and choose, out of dozens of options, an online recipe using dried red chili rather than fresh ones, this preference will be recorded in your profile and have an effect on your bubble. Later searches might show you more recipes using dried chili and let you interact with people who show the same preferences (unless the algorithm decides you’ve made the wrong choice and will try to rectify you in the future). In that sense, online knowledge effects may be qualitatively different from the more traditional ones. Yes, reading a book or having a conversation in a pub may have similar effects on what we think and do, but such effects were usually slower and perhaps less pervasive than the ones we currently notice in the online-offline nexus.
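
The feedback loop in the chili example can be rendered in a few hypothetical lines of Python. The options, weights and boost value are invented for illustration; actual recommender systems are vastly more complex, but the reinforcement principle is the same: a recorded choice tilts what is shown next time toward itself.

```python
# Hypothetical sketch of the recommendation feedback loop: one recorded
# choice shifts the weights that decide what is shown next time.

weights = {"dried_chili_recipes": 1.0, "fresh_chili_recipes": 1.0}

def record_choice(choice, boost=0.5):
    """Reinforce whatever the user clicked on."""
    weights[choice] += boost

def recommend():
    """Return the currently highest-weighted option."""
    return max(weights, key=weights.get)

record_choice("dried_chili_recipes")  # the user picks one dried-chili recipe...
print(recommend())                    # ...and dried-chili content now leads
```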

This is the first dimension we had to address: online resources infusing local actions and changing them due to immediate translocal involvement. The second dimension extends this somewhat and raises the question: who is involved in local actions – who belongs to the ‘personnel’ of the things we observe in online-offline life? And here, too, an anecdote can be useful as a point of departure.

Oud-Berchem is an inner-city working class and immigrant district in Antwerp, Belgium. One of the remarkable features of the neighborhood is the density of new evangelical churches, usually of the charismatic branch of protestant Christianity and run by pastors from Africa, Asia and Latin America (see Blommaert 2013). The churches are what is known as ‘storefront churches’, renting relatively cheap vacant commercial premises in an old shopping street and usually displaying a health and safety permit for 49 people. Local congregations can be slightly larger, though, while some of the churches also cater to smaller congregations. Churches often change premises, denominations and constituencies – a reflection of the rapidly shifting demographics of the neighborhood.

One of the most recent arrivals in this religious industry in Oud-Berchem is a church run by a Nigerian pastor. Let us nickname the church the ‘True Religion Church of Christ’. The church rented what is probably the grottiest location in the neighborhood: a former interior decoration shop that closed down a handful of years back, quite badly affected by years of vacancy and exposure to the elements. The church has a permit for 49 attendees, and this is about the size of the congregation attending Saturday and Sunday services there. It’s a small, hardly remarkable and even less prestigious enterprise.

Our initial research on the neighborhood and its churches was based on traditional – read: offline – ethnographic linguistic landscape analysis. From that perspective, indeed, the True Religion Church of Christ is a small local phenomenon, eclipsed by other churches with more attractive premises and a larger congregation. At a given moment, however, we started paying more attention to an often overlooked feature of the linguistic landscape: website addresses and social media signs of the ‘Follow us on Facebook’ type (Blommaert & Maly 2019). When we followed such pointers for the True Religion Church of Christ, we bumped into a few surprises. Its pastor turns out to be a modest global celebrity in the domain of charismatic protestant religion. He runs a YouTube channel with over 125,000 subscribers; the main feature video there is one in which the pastor brings a dead boy back to life during a service in Nigeria, attended by many hundreds of faithful. This video was watched over 85,000 times. The pastor also runs a website on which he announces services all over the world – North America, Europe and Africa – and through which items and services can be booked and ordered using standard e-payment methods such as PayPal.

Suddenly, the grotty premises in which the local congregation gathers on Saturdays and Sundays appear in a different light: as a mere node in a global network of religious activities connected by advanced online infrastructures. This global network is big and prestigious, and stands in sharp contrast to the smallness and shabbiness of what goes on in Oud-Berchem. Many more people, places and resources are involved in what goes on in Oud-Berchem than those that can be locally observed. And we can reasonably assume that what goes on locally in the True Religion Church of Christ derives a lot of its meaning and impact from the translocal, prestigious and well-resourced network in which it is one local node and to which it is permanently connected by online infrastructures. In fact, what happens locally is probably possible only because of the existence of this larger network and its online resources. And so, when we observe the local activities of the church’s congregation, we need to be aware of the fact that we see just a very small part of the total social fact we need to understand – a fact we can engage with by following the pointers that take us online.

So here are the two dimensions we needed to bring up: the fact that offline practices are almost invariably influenced, formatted and enabled by online ones; and the fact that locally performed social actions can involve far more people than those actually present locally – the effective personnel of lots of current social actions can only be gauged by connecting the offline local phenomena with the online translocal ones. Our field has effectively become an online-offline field; doing fieldwork requires presence in and attention to both, and the blissful simplicity of ‘the local’ has been traded for a far more complex reality of connected fieldwork sites. The notion of ‘participant observation’ needs to be taken literally in the online-offline nexus: ethnographers are participating in exactly the same contextualized processes they are studying, and there is no privileged vantage point that gives us an edge over other, ‘ordinary’ participants.

More complexity? More ethnography please!

All of this is bad news, of course. In the online-offline nexus, we are forced to surrender some of the things we long thought were relatively simple: the things our field had to offer in the way of observable facts and information, the people with whom we engaged in fieldwork, and the actual sites of fieldwork. In other words: we need to reconsider the what, who and where of fieldwork. The online-offline nexus, we can see, is quite a bit more complex than the good old traditional offline fieldwork arena.

The bad news, however, is mainly for those branches of science that rely heavily on the assumptions we questioned above. And there are several reasons why ethnography, while needing to be cautious and more than just aware of these changes, is best equipped to deal with them. In chapter 2, we explained that ethnography is a scholarly approach which, in contrast to many other approaches, does not attempt to simplify and reduce complexity; it takes complexity as a point of departure and tries to provide a full and detailed account of it. Ethnography is not about removing the chaotic nature of social practices performed in real, concrete contexts – it is about making sense of that chaos. The fact that the chaos appears to become denser in the online-offline nexus should not deter us: it’s still just chaos, and we must make sense of it.

And we have inroads into it. Even if the what, who and where of fieldwork are getting more complicated, there are things we can reliably observe. We can still observe what people do, the social actions they perform. In fact – and we emphasized that as well in the opening chapters of the book – ethnography is focused on making sense of social action, of concrete social action performed in concrete contexts, and it belongs to that broad tradition in social research captured under the umbrella of the ‘action perspective’ (cf. Blumer 1969; Goodwin & Goodwin 1992; Strauss 1993; Rawls 2002). So we can observe people watching online stuff, doing online searches, asking and responding to questions, telling stories, making an argument, insulting or responding to insults, expressing joy, appreciation and gratefulness, grief, anger, uneasiness, concern, irony and humor, thanking others; we can observe them liking, sharing and reposting, commenting and endorsing or distancing themselves; we can observe them incorporating online material produced by others in their own online interventions; we can see them logging on and logging off; subscribing to channels and profiles and blocking or ignoring others. And we can observe the (largely visual, literate) resources they deploy in doing all that: different forms of language, jargon and slang, different forms of writing, emojis, memes, GIFs, selfies, profile and banner images, video chats and livestreams on a variety of apps – you name it. All of this, we know, is done in interaction with others, frontstage as well as backstage – one is never alone on the Web – and mediated by the specifics of the online contexts we laid out above.

That’s a lot. In fact, it’s exactly the stuff needed for ethnographic work, as we explained in chapter 2 of the book. And it is by looking at the intricate interplay between actions and resources that we are able – in ethnographic analysis – to see how people navigate the contextual opacity and the identity uncertainties that characterize online interactions and make sense of that chaotic reality (cf. Szabla & Blommaert 2017), how they engage in the learning processes for which the online world offers such infinite opportunities, and how they construct identities and communities within their bubbles, and often beyond them (Varis & Blommaert 2015; Prochazka & Blommaert 2019).

So it is not because we cannot observe everything in online contexts that we can observe nothing. We cannot observe the algorithms and surveillance systems that create bubbles and profiles, true. But we can observe the ways in which people engage with them and operate within their confines – how they adjust their social conduct to the complex and largely invisible contexts within which they interact with others. This is an eminently adequate ethnographic object of inquiry.

But we need to address it carefully. Whenever the phrase ‘participant observation’ was used in discussions of fieldwork, the focus used to be on ‘observation’, and it carried the suggestion that, while participating in social processes, ethnographers did something special and did that from a privileged position – they ‘observed’. We believe that fieldwork in the online-offline nexus shifts that focus towards ‘participant’, and that we must forget the possibility of a privileged position of observation. Whatever we observe is observed as a participant in a new field in which breaking out of the contexts of ordinary participation is near-impossible, for important aspects of such contexts are impossible to inspect – the backstage aspects we discussed above. Perhaps this was never possible, even in traditional offline fieldwork, and perhaps it was just (in Johannes Fabian’s (1983) famous view) the conventional arrogance of academia that created the claim to privileged knowledge positions. In that case, the online-offline nexus confronts us with an unpleasant truth – one which renders our work more complex but also more interesting.

 

References

Blommaert, Jan (2013) Ethnography, Superdiversity and Linguistic Landscapes: Chronicles of Complexity. Bristol: Multilingual Matters.

Blommaert, Jan (2018) Durkheim and the Internet: On Sociolinguistics and the Sociological Imagination. London: Bloomsbury.

Blommaert, Jan & Ico Maly (2019) Invisible lines in the online-offline linguistic landscape. Tilburg Papers in Culture Studies, paper 223. https://www.tilburguniversity.edu/research/institutes-and-research-groups/babylon/tpcs

Blommaert, Jan & Piia Varis (2015) Enoughness, accent and light communities: Essays on contemporary identities. Tilburg Papers in Culture Studies, paper 139. https://www.tilburguniversity.edu/research/institutes-and-research-groups/babylon/tpcs

Blumer, Herbert (1969) Symbolic Interactionism: Perspective and Method. Berkeley: University of California Press.

Du Caixia (2016) The Birth of Social Class Online: The Chinese Precariat on the Internet. PhD diss., Tilburg University.

Fabian, Johannes (1983) Time and the Other: How Anthropology Makes its Object. New York: Columbia University Press.

Foucault, Michel (2008) Security, Territory, Population: Lectures at the Collège de France 1977-1978. London: Palgrave Macmillan.

Goodwin, Charles & Marjorie Harness Goodwin (1992) Context, activity and participation. In Peter Auer & Aldo DiLuzio (eds.) The Contextualization of Language: 77-99. Amsterdam: John Benjamins.

Kaur-Gill, Satveer & Mohan Dutta (2017) Digital ethnography. In Christine Davis & Robert Potter (eds.) The International Encyclopedia of Communication Research Methods: 1-11. New York: Wiley.

Prochazka, Ondrej & Jan Blommaert (2019) Ergoic framing in New Right online groups: Q, the MAGA kid, and the Deep State Theory. Tilburg Papers in Culture Studies, paper 224. https://www.tilburguniversity.edu/research/institutes-and-research-groups/babylon/tpcs

Rawls, Anne Warfield (1987) The Interaction order sui generis: Goffman’s contribution to social theory. Sociological Theory 5/2: 136-149.

Rawls, Anne Warfield (2002) Editor’s introduction. In Harold Garfinkel, Ethnomethodology’s Program: Working Out Durkheim’s Aphorism (ed. Anne Warfield Rawls): 1-64. Lanham: Rowman & Littlefield.

Strauss, Anselm (1993) Continual Permutations of Action. New Brunswick: Aldine Transaction.

Szabla, Malgorzata & Jan Blommaert (2017) Does context really collapse in social media interaction? Tilburg Papers in Culture Studies, paper 201. https://www.tilburguniversity.edu/research/institutes-and-research-groups/babylon/tpcs

Varis, Piia & Jan Blommaert (2015) Conviviality and collectives on social media: Virality, memes, and new social structures. Multilingual Margins 2: 31-45.

Varis, Piia & Mingyi Hou (2019) Digital approaches in linguistic ethnography. In Karin Tusting (ed.) The Routledge Handbook of Linguistic Ethnography. Abingdon: Routledge (in press).

 

 

Mathematics and its ideologies (an anthropologist’s observations)


Jan Blommaert 

What is science? The question has been debated in tons of papers written over about two centuries and resulting in widely different views. Most people practicing science, consequently, prefer a rather prudent answer to the question, leaving some space for views of science that do not necessarily coincide with their own, but at least appear to share some of its basic features – the assumption, for instance, that knowledge is scientific when it has been constructed by means of methodologies that are shared intersubjectively by a community of scientific peers. The peer-group sharedness of such methodologies enables scientific knowledge to be circulated for critical inspection by these peers; and the use of such ratified methodologies and the principle of peer-group critique together form the “discipline” – the idea of science as disciplined knowledge construction.

There are, however, scientists who have no patience for such delicate musings and take a much narrower and more doctrinaire view of science and its limits. I already knew that – everyone, I suppose, has colleagues who believe that science is what they do, and that’s it. But a small recent reading offensive into the broad social science tradition called Rational Choice (henceforth RC) made me understand that such colleagues are only a minor nuisance compared to hardcore RC believers. For the likes of Arrow, Riker, Buchanan and their disciples, now spanning three generations, “scientific” equals “mathematical”, period. Whatever is not expressed mathematically cannot be scientific; even worse, it is just “intuition”, “metaphysics” or “normativity”. And in that sense it is even dangerous: since “bad” science operates from such intuitive, metaphysical or normative assumptions, it sells ideology under the veil of objectivity and will open the door to totalitarian oppression. What follows is a critique of mathematics as used in RC.

*****

Sonja Amadae (2003), in a book I enjoyed reading, tells the story of how RC emerged out of Cold War concerns in the US. It was the RAND Corporation that sought, at the end of World War II and the beginning of the nuclear era, to create a new scientific paradigm that would satisfy two major ambitions. First, it should provide an objective, scientific grounding for decision-making in the nuclear era, when an ill-considered action by a soldier or a politician could provoke the end of the world as we knew it. Second, it should also provide a scientific basis for refuting the ideological (“scientific”) foundations of communism, and so become the scientific bedrock for liberal capitalist democracy and the “proof” of its superiority. This meant nothing less than a new political science, one that had its basis in pure “rational” objectivity rather than in partisan, “irrational” a prioris. Mathematics rose to the challenge and would provide the answer.

Central to the problem facing those intent on constructing such a new political science was what Durkheim called “the social fact” – the fact that social phenomena cannot be reduced to individual actions, developments or concerns – or, converted into political science jargon, the idea of the “public” or “masses” performing collective action driven by collective interests. This idea was of course central to Marxism, but also pervaded mainstream social and political science, including the (then largely US-based) Frankfurt School and the work of influential American thinkers such as Dewey. Doing away with it involved a shift in the fundamental imagery of human beings and social life, henceforth revolving around absolute (methodological) individualism and competitiveness modeled on economic transactions in a “free market” by people driven exclusively by self-interest. Amadae describes how this shift was partly driven by a desire for technocratic government performed by “a supposedly ‘objective’ technocratic elite” free from the whims and idiosyncrasies of elected officials (2003: 31). These technocrats should use abstract models – read mathematical models – of “systems analysis”, and RAND did more than its share developing them. “Rational management” quickly became the key term in the newly reorganized US administration, and the term stood for the widespread use of abstract policy and decision-making models.

These models, as I said, involved a radically different image of humans and their social actions. The models, thus, did not just bring a new level of efficiency to policy making, they reformulated its ideological foundations. And Kenneth Arrow provided the key for that with his so-called “impossibility theorem”, published in his Social Choice and Individual Values (1951; I use the 1963 edition in what follows). Arrow’s theorem quickly became the basis for thousands of studies in various disciplines, and a weapon of mass political destruction used against the Cold War enemies of the West.

Arrow opens his book with a question about the two (in his view) fundamental modes of social choice: voting (for political decisions) and market transactions (for economic decisions). Both modes are seemingly collective, and thus opposed to dictatorship and cultural convention, where a single individual determines the choices. Single individuals, Arrow asserts, can be rational in their choices; but “[c]an such consistency be attributed to collective modes of choice, where the wills of many people are involved?” (1963:2). He announces that only the formal aspects of this issue will be discussed. But look what happens.

Using set-theoretical tools and starting from a hypothetical instance where two, then three perfectly rational individuals need to reach agreement while observing a number of criteria, he demonstrates that, logically, such a rational collective agreement is impossible. Even more: in a smart and surely premeditated lexical move – one of Arrow’s criteria was ‘non-dictatorship’, i.e. no collective choice should be based on the preferences of one individual – Arrow demonstrated that the only possible “collective” choices would in fact be dictatorial ones. A political system, in other words, based on the notion of the common will or common good, would of necessity be a dictatorship. In the age of Joseph Stalin, this message was hard to misunderstand.
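To get a concrete feel for the kind of problem Arrow formalized – the following illustration is mine, not Arrow’s own proof – consider the classic Condorcet cycle that his theorem generalizes: three individuals with perfectly rational (transitive) preferences produce, under pairwise majority voting, a collective preference that is cyclical and hence ‘irrational’. A minimal sketch in Python:

```python
from itertools import combinations

# Three voters, each with a perfectly 'rational' (transitive) ranking
# of options A, B and C -- the canonical Condorcet profile.
voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a majority of voters ranks x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")

# Prints: A over B, C over A, B over C -- a cycle. Every individual
# ranking is transitive, yet the collective preference is not.
```

Arrow’s contribution was to show that this is not a quirk of majority voting: any aggregation rule satisfying his criteria runs into the same impossibility.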

And he elaborates this, then, in about a hundred pages of prose, of which the following two fragments can serve as an illustration. (I shall provide them as visual images, because I am about ready to embark on my own little analysis, drawn from contemporary semiotic anthropology.)


Fig. 1: From p. 90, Arrow 1963.


Fig. 2: From p. 79, Arrow 1963.

The prose on these pages became epochal: in it, one read the undeniable proof that collective rational social action was impossible, unless as a thinly veiled dictatorship – a death blow to Marxism of course, but also the definitive end of Durkheim’s “social fact” – and that basing policy on such a collective rationality (as in welfare policy) was bound to be futile. This was now objectively, scientifically proven fact, established by the unimpeachable rigor of mathematical logic, which Arrow and his disciples believed could be applied to any aspect of reality.

Arrow, we saw, mentioned the limitations of his inquiry; evidently, he also used several assumptions. Amadae (2003: 84) lists four of them:

“that science is objective; that it yields universal laws; that reason is not culturally relative; and that the individuals’ preferences are both inviolable and incomparable”.

The first three assumptions touch on his conception of science; in other words, they describe his belief in what mathematical methods do. I will return to them below. The fourth assumption is probably one of the most radical formulations of Methodological Individualism (henceforth MI). MI is the label attached to the theory complex in which every human activity is in fine reduced to individual interests, motives, concerns and decisions. In the case of Arrow and his followers, MI leads to the assumption that “society” is merely an aggregate of individuals. It is clear that this MI assumption – an ideological one, in fact a highly specific ideology of the nature of human beings and their social actions – underlies the “proof” and makes it circular, and from an anthropological viewpoint frankly ridiculous, certainly when each of these individuals is a perfectly rational actor who

“will always pursue his advantage, however he defines it, no matter what the circumstances; concepts of duty and responsibility are alien to the instrumental agent pursuing his goals” (Amadae 2003: 272)

Note that Arrow does not allow comparison between individuals (he will do so, grudgingly and conditionally, in 1977 in response to Rawls’ discussion of justice: Amadae 2003: 270). This is important in three ways. One: it is a key motif in his “objective” approach, in which any normative judgment (e.g. a value judgment about preferences of individuals) needs to be excluded from the analysis, because any such judgment would bring in “irrational” elements and open the door to totalitarian policy options. Two: it thus underscores and constructs the case for mathematics as a method, about which more below. And three: it also provides a second-order ideological argument in favor of Man-the-individualist, for if individuals cannot be scientifically compared, they surely cannot be scientifically grouped into collectives.

And so, on the basis of a mathematical “proof” grounded in a set of highly questionable assumptions and operating on an entirely imaginary case, Arrow decided that society – the real one – is made up of a large number of individuals bearing remarkable similarities to Mr Spock. And this, then, was seen as the definitive scientific argument against Marxism, against the Durkheimian social fact, against the welfare state, socialism and communism, and in favor of liberal democracy and free market economics. It is, carefully considered, a simple ideological propaganda treatise covered up by the visual emblems of mathematics-as-objective-science. The assumptions it takes on board as axiomatic givens constitute its ideological core, the mathematical “proof” its discourse, and both are dialectically interacting. His assumptions contain everything he claims to reject: they are profoundly normative, idealistic, and metaphysical. Every form of subjectivity becomes objective as long as it can be formulated mathematically.

The fact that his “impossibility theorem” is, till today, highly influential among people claiming to do social science is mysterious, certainly given the limitations specified by Arrow himself and the questionable nature of the assumptions he used – the most questionable of which is that of universality: that mathematics can be used to say something sensible about the nature of humans and their societies. The fact that these people often also appear to firmly believe that Arrow’s formal modeling of social reality, with its multitude of Mr Spocks, is a pretty accurate description of social reality is perplexing, certainly knowing that this mathematical exercise was (and is) taken, along with its overtly ideological assumptions, to be simple social and political fact (observable or not). Notably, the MI postulate of individuals behaving (or being) like entirely sovereign and unaffected consumers in a free market of political choices, “proven” by Arrow and turned into a factual (and normative) definition, leads Amadae (2003: 107) to conclude “that Arrow’s set-theoretical definition of citizens’ sovereignty is one of the least philosophically examined concepts in the history of political theory”. (To Arrow’s credit, he was ready to revise this assumption in later years; Richard Thaler (2015: 162) quotes him saying “We have the curious situation that scientific analysis imputes scientific behavior to its subjects”). Nonetheless, this definition promptly and effectively eliminated a number of items from the purview of the new political science: the public sphere, the common good, and even society as such – Arrovians would use the simple argument that since society was not human (read: not individual and rational), it could not be seen as an actor in its own right. Margaret Thatcher, decades later, agreed.

Arrow and his followers set new standards of political debate, arguing that political issues (think of social welfare) were not “real” if they didn’t stand the test of logical analysis. Unless facts agreed with mathematical coherence (as shown in Fig. 2 above), they were not proven facts; mathematics became the standard for defining reality, and the phrase “theoretically impossible” became synonymous with “impossible in reality”, separating fact from fiction. I find this unbelievable. But the point becomes slightly more understandable when we broaden the discussion a bit and examine more closely the particular role of mathematics in all of this. And here, I turn to semiotic anthropology.

******

My modest reading offensive also brought Itzhak Gilboa’s Rational Choice (2010) to my table. Gilboa – a third-generation RC scholar with basic training in mathematics – offers us a view of what I prefer to see as the ideology of mathematics in all its splendor and naiveté. Before I review his opinions, I hasten to add that Gilboa is quite critical of radical interpretations of Arrovian choice, including Game Theory, admitting that the complexity of real cases often defies the elegance of established theory, and that we should “keep in mind that even theoretically, we don’t have magic solutions” (2010: 85). Yet he declares himself a full-blown adept of RC as a “paradigm, a system of thought, a way of organizing the world in our minds” (2010: 9). And this paradigm is encoded in mathematical language.

Gilboa expresses an unquestioned faith in mathematics, and he gives several reasons for this.

  1. Accuracy: Mathematics is believed to afford the most accurate way of formulating arguments. “The more inaccurate our theories are, and the more we rely on intuition and qualitative arguments, the more important is mathematical analysis, which allows us to view theories in more than one way” (20). Theories not stated in mathematical terms, thus, are suggested not to allow more than one way of viewing. Too bad for Darwin.
  2. Rigor: Mathematics brings order to the chaos. Such chaos is an effect of “intuitive reasoning” (29). Mathematical formulations, thus, are rigorous, ordering modes of expressing elaborate conglomerates of facts, not prone to misunderstanding. They form the theoretical tools of research, bringing clear and unambiguous structure to fields of knowledge in ways not offered by “intuitive reasoning”. The latter is a curious category term, frequently used by Gilboa to describe, essentially, any form of knowledge construction that cannot yet be expressed in mathematical language.
  3. Superiority. This follows from (1) and (2). There is mathematics and there is the rest. The rest is muddled and merely serves to test the mathematical theory. Thus (and note the evolutionary discourse here), when a mathematical theoretical model is thrown into “more elaborate research”, such research may prove to be “too complicated to carry out, and we will only make do with intuitive reasoning. In this case we try to focus on the insights that we feel we understand well enough to be able to explain verbally, and independently of the specific mathematical model we started out with” (29). Non-mathematically expressed knowledge is obviously inferior to mathematically expressed knowledge: it is “intuitive”. Yet, it has its importance in theory testing: “mathematical analysis has to be followed by intuitive reasoning, which may sort out the robust insights from those that only hold under very specific assumptions” (ibid).
  4. Simplification: throughout the entire book, but actually throughout most of what we see in RC at large, there is a marked preference for mathematically expressed simplicity. Complex real-world problems are reduced to extremely simple hypothetical cases involving pairs or triplets, as when complex market transactions are reduced to two people bargaining in a game-theoretical example, or the three Spocks in Arrow’s Impossibility Theorem who are supposed to instantiate millions of voters or consumers in large-scale political and economic processes. Such mathematical simplifications often bear names – the Prisoners’ Paradox, Condorcet’s Paradox, the Pareto Optimality or the Von Neumann-Morgenstern Axioms – and are presented (be it with qualifications) as “laws” with universal validity. The simple cases of mathematical theory are proposed as accurate, rigorous and superior modes of describing (and predicting) complex realities; a sketch of one such simplified case follows right after this list.
  5. Psychological realism. Not only are the mathematical formulations accurate descriptive and predictive models of actual social realities, they are also an accurate reflection of human cognitive folk methods, even if people are not aware of it: “Statistics is used to estimate probabilities explicitly in scientific and nonscientific studies as well as implicitly by most of us in everyday life” (56-57). Gilboa, as well as many other authors doing this kind of work, has the amusing habit of describing people as if they apply the Von Neumann-Morgenstern Axioms in deciding where to take their holidays, and as if they experience very severe logical problems when their behavior violates the Prisoners’ Paradox or exhausts the limits of objective reasoning.
  6. Convincing-conclusive. Finally, Gilboa makes a somewhat curious point about “positive” versus “negative rhetoric”. Negative rhetoric consists of “tricks that make one lose the debate but for which one has good replies the morning after the debate”, while “positive rhetoric consists of devices that you can take from the debate and later use to convince others of what you were convinced of yourself. Mathematics is such a device” (19).
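To make the simplification point in (4) tangible – the illustration is mine, not Gilboa’s – here is the Prisoners’ Paradox (better known as the Prisoner’s Dilemma) reduced to a payoff table and a few lines of Python, with the conventional textbook numbers: two perfectly ‘rational’, self-interested agents each defect, and both end up worse off than if they had cooperated.

```python
# Payoff table for the Prisoners' Paradox (years in prison; lower is
# better). Keys: (my_move, other_move); values: my sentence.
years = {
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    3,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    2,
}

def rational_reply(other_move):
    """The self-interested best reply to a given move by the other."""
    return min(["cooperate", "defect"], key=lambda me: years[(me, other_move)])

for other in ["cooperate", "defect"]:
    print(f"if the other player {other}s, the rational reply is {rational_reply(other)}")

# Defection 'wins' in both cases, so two rational agents end up at
# (defect, defect), serving 2 years each -- although mutual cooperation
# would have cost each of them only 1 year.
```

The elegance is undeniable; whether two prisoners, let alone millions of voters or consumers, actually reason like this is precisely the question the model never asks.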

******

The six features of Gilboa’s approach to mathematics are, I would argue, an ideology of mathematics. They articulate a socioculturally entrenched set of beliefs about mathematics as a scientific project. And while I am the first to express admiration for mathematics as a scientific tool which, indeed, allows a tremendous and unique parsimony, transparency and stability in notation, I think the broader ideology of mathematics needs to be put up for critical examination. For mathematics, here, is not presented as a scientific tool – call it “method” or even “methodology” – but as an ontology, a statement on how reality “really” is. We already encountered this earlier when I discussed the mystery of Arrow’s Theorem: no facts are “real” unless they can be expressed in mathematical, formal language. And to this, I intend to attach some critical reflections.

Let me first describe the ontology I detect in views such as the ones expressed by Gilboa, occasionally returning to Arrow’s first three assumptions mentioned earlier. I see two dimensions to it.

  1. Mathematical expressions are the Truth, since mathematics represents the perfect overlap of facts and knowledge of facts. And this Truth is rationality: mathematical expressions are expressions of fundamental rationality, devoid of all forms of subjectivity and context-dependence. This enables mathematical expressions to be called “laws”, and such laws to be qualified as eternal, universal, and expressions of extreme certainty and accuracy. Recall now Arrow’s second and third assumptions: that science (i.e. mathematics) yields universal laws, and that reason is not culturally relative – since it can be described in a universal mathematical code.
  2. Mathematics as an ontology has both esoteric and practical dimensions, and these dimensions make it science. Concretely, mathematics is not something everyone can simply access, because it is esoteric – see Fig. 2 above for a graphic illustration – and it is practical because it can be applied, as a set of “laws” flawlessly describing and predicting states of reality, to a very broad range of concrete issues, always and everywhere.

Combined with the first point, mathematics as the (rational) Truth, we understand not just Arrow’s first assumption – that science is objective – but his wider (political) project as well. The scientific underpinning of a new social and political science had to be mathematical, because that was the way to avoid ideological, cultural or other forms of “subjectivity” which would make such a science “irrational”, and may lead it towards totalitarian realities. Mathematically stated laws (on any topic) are – so it is suggested – independent of the person formulating them or the place in the world from where they are formulated; their truth value is unconditional and unchallengeable; accepting them is not a matter of personal conviction or political preference, it is a matter of universal reason. This is why Gilboa found mathematics convincing and conclusive: confronted with mathematical proof, no reasonable person could deny the truth, for, as expressed by Gilboa, mathematical formulations reflected – esoterically – the folk reason present in every normal human being. And so we see the comprehensive nature of the ontology: mathematics describes human beings and by extension social life in a truthful, unchallengeable way.

It is at this last point – the “postulate of rationality” as it is known in RC – that this modern ideology of mathematics appears to have its foundations in Enlightenment beliefs about reason as fundamentally and universally human, and so deviates from older ideologies of mathematics. These are well documented, and there is no need here to review an extensive literature, for a key point running through this history is that mathematics was frequently presented as the search for the true and fundamental structure of nature, the universe and (if one was a believer) God’s creation. This fundamental structure could be expressed in rigorous symbolic operations: specific shapes, proportions, figures and relations between figures – they were expressed by means of abstract symbols that created the esoteric dimension of mathematics. Doing mathematics was (and continues to be) often qualified as the equivalent of being “a scientist” or “a wise man”, and if we remember Newton, the distinction between scientific mathematics and other esoteric occupations such as alchemy was often rather blurred.

It is in the age of Enlightenment that all human beings are defined as endowed with reason, and that mathematics can assume the posture of the science describing the fundamental features and structures of this uniquely human feature, as well as of the science that will push this unique human faculty forward. It is also from this period that the modern individual, as a concept, emerges, and the American Declaration of Independence is often seen as the birth certificate of this rational, sovereign individual. Emphasis on rationality very often walks hand in hand with methodological individualism, and this is not a coincidence.

Observe that this ideology of mathematics is pervasive, and even appears to be on the rise globally. Mathematics is, everywhere, an element of formal education, and universally seen as “difficult”. Training in mathematics is presented in policy papers as well as in folk discourse as the necessary step-up towards demanding professions involving rigorous scientific reasoning, and success or failure in mathematics at school is often understood as an effect of the success/failure to enter, through mathematics, a “different mode of thinking” than that characterizing other subjects of learning. Mathematics, in short, often serves as a yardstick for appraising “intelligence”.

******

From the viewpoint of contemporary semiotic anthropology, this ideology of mathematics is just another, specific, language ideology: a set of socioculturally embedded and entrenched beliefs attached to specific forms of language use. The specific form of language use, in the case of mathematics, is a form of literacy, of writing and reading. So let us first look at that, keeping an eye on Figures 1 and 2 above.

Mathematics as we know it gradually developed over centuries as a separate notation system in which arbitrary symbols became systematic encoders of abstract concepts – quantities, volumes, relations. Hardcore believers will no doubt object to this, claiming that the notational aspect is just an “instrumental”, ancillary aspect and that the core of mathematics is a special form of reasoning, a special kind of cognitive process. They are wrong, since the notational system is the very essence of the cognitive process claimed to be involved, which is why mathematicians must use the notational systems, and why schoolchildren can “understand” quite precisely what they are being told in mathematics classes but fail their tests when they are unable to convert this understanding into the correct notation. Seeing knowledge as in se detached from its infrastructures and methods of production and transmission is tantamount to declaring the latter irrelevant – which raises the question as to why mathematics uses (and insists on the use of) a separate notation system. More on this below.

The system, furthermore, is a socioculturally marked one, and the evidence for that should be entirely obvious. Recall Figure 2. The mathematical notation system follows the left-to-right writing vector of alphabetical scripts (not that, for instance, of Arabic or Chinese); unless I am very much mistaken, “written” mathematical symbols (as opposed to e.g. geometrical figures) are alphabetical and not, e.g., hieroglyphic, cuneiform or ideographic (like Chinese characters); and they are drawn from a limited number of alphabets, notably the Greek and Latin alphabets. Just click the “special symbols – mathematical symbols” icon in your word processor now for double-checking. In spite of historical influences from Ancient Egypt and Babylonia, the Arab world, India and China, 19th-century codification and institutionalization of mathematics (like other sciences) involved the Europeanization of its conventions.

The system is separate in the sense that, in spite of its obvious origins, it cannot be reduced to the “ordinary” writing system of existing languages: the fact that the symbol “0” for “zero” is of Indian origin doesn’t make that symbol Sanskrit, just as the Greek origins of the symbol for “pi” do not load this symbol with vernacular Greek meanings; they are mathematical symbols. But it can be incorporated (in principle) in any such writing system – Figures 1 and 2 show incorporation in English, for instance – and translated, if you wish, into the spoken varieties of any language (something it shares with Morse code). The symbol “<”, for instance, can be translated into English as “less/smaller than”. Figure 1 above shows how Arrow translates ordinary English terms into mathematical terms, and the language-ideological assumption involved here is that this translation involves perfect denotational equivalence (the symbols mean exactly what the words express), as well as a superior level of accuracy and generalizability (the concrete of ordinary language becomes the abstract-theoretical of mathematical notation – the words become concepts). Here, we see what language ideologies are all about: they are a synergy of concrete language forms with beliefs about what they perform in the way of meaning. Thus, the difference between ordinary writing and mathematical writing is the belief we have that the latter signals abstraction, theory, and superior accuracy (something for which logical positivism provided ample motivational rhetoric).

This notation system is, in contemporary anthropological vocabulary, best seen as a specialized graphic register. That means that it can be used for a limited set of specific written expressions, as opposed to an “ordinary” writing system in which, in principle, anything can be expressed. We see it in action in the way I just described – reformulating ordinary expressions into “concepts” – in Figure 1, while Figure 2 shows that the register can be used for entire “textual” constructions in the genre of “proof”. The register is parsimonious and, in that sense, efficient. Writing “125364” requires just six symbols; writing “one hundred and twenty-five thousand three hundred and sixty-four” demands almost ten times that number of symbols.
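The parsimony claim is easy to verify mechanically; a throwaway check, nothing more:

```python
numeral = "125364"
words = "one hundred and twenty-five thousand three hundred and sixty-four"
print(len(numeral))                       # 6 symbols
print(sum(1 for c in words if c != " "))  # 57 symbols, spaces not counted
```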

It is, as a graphic register, extremely normative; it is an “ortho-graphy”. Mathematics deploys a closed and finite set of standardized symbols that have to be used in rigorously uniform ways – the symbol “<” can never mean “more than”; both their individual meaning and the ways in which they can be syntactically combined are subject to uniform and rigid rules. Consequently, while in “ordinary” writing some errors do not necessarily distort the meaning of an expression (most people would understand that “I cam home” means “I came home”), a writing error in mathematical notation renders the expression meaningless. So many of us painfully experienced this in the mathematics classes we took: our understanding of the fundamentals of mathematics did not include any degree of freedom in choosing the ways to write it up, since mathematics is normative, orthographic notation. This, too, is part of its specialized nature as well as of its esoteric nature: mathematics must be acquired through disciplined – nonrandom and highly regimented – learning procedures, and knowledge of specific bits of the register is identity-attributive. Some mathematicians are specialists of calculus, others of logic, for instance, while the identity label of “genius” would be stuck on outstanding mathematicians of any branch.

That is the specific form of language we see in mathematics; the language-ideological values attributed to it are, like any other language ideology, sociocultural constructs that emerge, are consolidated and develop through socioculturally ratified rules and procedures; and these are (like any other sociocultural convention) highly sensitive to developments over time and place. Very few contemporary mathematicians would be ready to defend the claim that mathematics reveals the fundamental structure of God’s creation, for instance, but it is good to remember that this language-ideological value was once attached to it, and that the people who attached it to mathematics were profoundly convinced that this was what mathematics was all about. Similarly, not too many contemporary mathematicians would perceive alchemy as an occupation compatible with the scientific discipline of mathematics, while Isaac Newton appeared not to have too many doubts about that.

There is nothing eternal, absolute or indisputable about the language-ideological assumptions accompanying mathematics. The suggestion – a widespread one, as I noted – that mathematics involves a “different way of thinking” is quite questionable. It is a different way of writing, to which a specific set of language-ideological values is attached. Children who are “not good at mathematics” at school probably have more of a literacy problem than a cognitive one – let alone one of inferior intelligence.

And if we return to Gilboa’s six features above, we might perhaps agree that his first two features – accuracy and rigor – are intrinsic affordances of the specific register of mathematics (things mathematics indeed can do quite well). The third feature (superiority) is a belief probably shared by members of the community of mathematicians, but not per se demonstrable – quite the contrary, because the fourth feature – simplification – points to a limitation of the register: the fact that not everything can be appropriately written in the code. Ordinary language writing offers an infinitely vaster set of affordances. It is, at this point, good to remind ourselves that abstraction involves “stripping down”, i.e. the deletion of features from a chunk of reality; that this deletion may touch essential features; and that it is often done on the basis of unproven assumptions.

The fifth feature – psychological realism – cries out for evidence, and those familiar with (to name just one) Alexander Luria‘s 1920s research on modes of thought will be inclined to take a more sober and prudent view of this topic. There is no reason why the fundamental structures of rationality would not be expressed, for example, in narrative-poetic patterns rather than in mathematical-logical ones. And as for the sixth feature – the conclusive nature of mathematical proof: this, I suppose, depends on whom one submits it to. If the addressee of a mathematical argument shares the ideological assumption that such an argument is conclusive, s/he will accept it; if not, submitting mathematical proof might be no more conclusive than singing a Dean Martin song.

******

Language-ideological attributions are always sociocultural constructs, and therefore they are never unchallengeable and they can always be deconstructed. What we believe certain forms of language do, does not necessarily correspond to what they effectively do. There is, for example, a quite widely shared language-ideological assumption that grammatical, orthographic or other forms of “correctness” are strict conditions for understandability (“you can only make yourself understood if you speak standard language!”), while realities of human interaction show that tremendous largesse often prevails, without impeding relatively smooth mutual understanding. There is also a widespread language-ideological belief that societies are monolingual (think of the official languages specified in national legislations and, e.g., adopted by the EU), while in actual fact dozens of languages are being used. It is the job of my kind of anthropologists and sociolinguists to identify the gaps between facts and beliefs in this field.

Seen from that perspective, there is nothing in se that makes a mathematical proof more “objective” than, say, a poem (it is good to remember that in the Indian Vedic tradition, mathematical statements were written as sutra poetry, and that even today “elegance”, an aesthetic quality, appears to be a criterion for assessing mathematical proof). The status of “objectivity”, indeed the very meaning of that term, emerges by sociocultural agreement within specific communities, and none of the features of the register are, in themselves, direct elements of “objectivity”. The notion of objectivity, as well as the symbols that are proposed as “indexes” of objectivity, are all sociocultural constructs.

Paradoxically, thus, if we recall Kenneth Arrow’s extraordinarily far-reaching claims, the status of objectivity attributed to mathematics is a vintage Durkheimian “social fact”: something produced by societies and accepted by individuals for reasons of which they themselves are often unaware – it’s a sociocultural convention wrapped, over time, in institutional infrastructures perpetuating and enforcing the convention (in the case of mathematics, the education system plays first fiddle here). Its power – hegemony, we would say – does not turn it into an absolute fact. It remains perpetually challengeable, dynamic, an object of controversy and contention, as well as a proposition that can be verified and falsified. Saying this is nothing more than stating the critical principles of science as an Enlightenment product, of re-search as literally meaning “search again”, even if you believe you have discovered the laws of nature. These critical principles, we will recall, were the weapons used against religious and dictatorial (“irrational”) postures towards the Truth. They are the very spirit of science and the engine behind the development of the sciences.

The intimate union between RC, mathematics, MI and the specific views of human nature and social action articulated in this movement cannot escape this critique. Practitioners of this kind of science would do well to keep in mind that a very great number of their assumptions, claims and findings are, from the viewpoint of other disciplines involved in the study of humans and their societies, simply absurd or ridiculous. The axiomatic nature of rationality, the impossibility of collective choice and action, the preference for extraordinarily pessimistic views of human beings as potential traitors, thieves and opportunists – to name just these – are contradicted by mountains of evidence, and no amount of deductive theorizing can escape the falsifications entailed by this (inductive and not at all, pace Gilboa, “intuitive”) evidence.

MI – leading, as in Arrow’s work, to the refusal to compare individuals’ preferences and to the isolation of human beings from the complex patterns of interaction that make up their lives – is simply ludicrous when we consider, for instance, language: a system of shared, normatively organized sociocultural codes (a “social fact”, once more) which is rather hard to delete from any consideration of what it is to be human, or to dismiss as a detail in human existence. Here we see how the “stripping down” involved in mathematical abstraction touches essential features of the object of inquiry, making it entirely unrealistic. We have also seen that the language in which such “truths” are expressed is, in itself, a pretty obvious falsification of MI and other RC assumptions. And more generally, facts gathered through modes of science that Gilboa tartly qualifies as “intuitive reasoning” are also always evidence of something, and usually not of what is claimed to be true in RC.

Such critiques have, of course, been put to RC scholars (an important example is Green & Shapiro 1994). They were often answered with definitional acrobatics in which, for instance, the concept of “rationality” was stretched to the point where it included almost everything, so as to save the theory (but of course, when a term is supposed to mean everything, it effectively means nothing). Other responses included unbearably complex operations attempting to keep the core theory intact while, like someone extending his/her house on a small budget, adding makeshift annexes, windows, rooms and floors to it, so as to cope with the flurry of exceptions and unsolvable complexities raised against it. I found, for instance, Lindenberg’s “method of decreasing abstraction” (1992) particularly entertaining. Recognizing the complexity of real-world issues, and aiming (as anyone should) at realism in his science, Lindenberg constructs a terrifically Byzantine theoretical compound in which the scientist gradually moves away from simple and rigid mathematical formulations towards less formal and more variable formulations – hence “decreasing abstraction” or “increasing closeness to reality” (Lindenberg 1992: 3). He thus achieves, through admirably laborious theoretical devotion, what any competent anthropologist achieves in his/her fieldnotes at the end of a good day of ethnographic fieldwork.

*****

This brings me to a final point. Mathematics is a formal system, and a peripheral language-ideological attribution it carries is that of “theory”. Theory, many people believe, must be abstract, and whatever is abstract must be theoretical. People working in the RC paradigm like to believe that (theoretical) “generalization” in science can only be done through, and is synonymous with, abstraction – mathematical expression in formulas, theorems, or statistical recipes.

In dialogues with such people, it took me a while before I detected the cause of the perpetual misunderstandings whenever we tried to talk about issues of generalization and theorization across disciplines, for we were using the same words but attaching them to entirely different cultures of interpretation. Their culture was usually a deductive one, in which theory came first and facts followed, while mine operated precisely the other way around. I had to remind them of the existence of such different cultures, and of the fact that their view of theoretical generalization – necessarily through abstraction – was an idiosyncrasy not shared by the majority of scientific disciplines.

Theoretical statements are, in their essence, general statements, i.e. statements that take insights from concrete data (cases, in our culture of interpretation) to a level of plausible extrapolation. Since every case one studies is only a case because it is a case of something – an actual and unique instantiation of more generally occurring phenomena – even a single case can be “representative” of millions of other cases. This generalization is always conjectural (something even hardliners from the other camp admit) and demands further testing, in ever more cases. I usually add to this that this method – a scientific one, to be sure – is actually used by your doctor whenever s/he examines you. Your symptoms are always an actual (and unique) instantiation of, say, flu or bronchitis, and your doctor usually arrives at the diagnosis on the basis of plausible extrapolation: although s/he can never be 100% sure, the combination of symptoms a, b and c strongly suggests flu. If the prescribed medicine works, this hypothesis is proven correct; if not, the conjectural nature of the exercise is demonstrated. Unless you want to see your doctor as a quack or an alchemist who can’t possibly speak the truth (which would make it highly irrational to go and see him/her when you’re ill), it may be safe to see him/her as an inductive scientist working from facts to theory, and usually doing a pretty accurate job at that.

People who believe that mathematics, and only mathematics, equals science are in actual fact a small but vocal and assertive minority in the scientific community. If they wish to dismiss, say, 70% of what is produced as science as “unscientific”, they do so at their peril (and sound pretty unscientific, even stupid, when they do so). That includes Mr Popper too. The question “what is science?” is answered in very many forms, as a sovereign rational choice of most of its practitioners. Enforcing the preferences of one member of that community, we heard from Kenneth Arrow, is dictatorial. And since we believe that science is an elementary ingredient of a free and democratic society, and that pluralism in reasoned dialogue, including in science, is such an elementary element as well – we really don’t want that, do we?

References

AMADAE, Sonja N. (2003) Rationalizing Capitalist Democracy: The Cold War Origins of Rational Choice Liberalism. Chicago: University of Chicago Press.

ARROW, Kenneth (1951) Social Choice and Individual Values. New York: Wiley (2nd ed. 1963).

GILBOA, Itzhak (2010) Rational Choice. Cambridge, MA: MIT Press.

GREEN, Donald & Ian SHAPIRO (1994) Pathologies of Rational Choice: A Critique of Applications in Political Science. New Haven: Yale University Press.

LINDENBERG, Siegwart (1992) The Method of Decreasing Abstraction. In James S. Coleman & Thomas J. Fararo (eds.) Rational Choice Theory: Advocacy and Critique: 3-20. Newbury Park: Sage.

THALER, Richard H. (2015) Misbehaving: The Making of Behavioral Economics. New York: W.W. Norton & Company.


Research training and the production of ideas


Jan Blommaert 

Can we agree that Albert Einstein was a scientist? That he was a good one (in fact, a great one)? And that his scientific work has been immeasurably influential?

I’m asking these silly questions for a couple of reasons. One: Einstein would, in the present competitive academic environment, have a really hard time getting recognized as a scientist of some stature. He worked in a marginal branch of science – more on this in a moment – and the small oeuvre he published (another critical limitation now) was not written in English but in German. His classic articles bore titles such as “Die vom Relativitätsprinzip geforderte Trägheit der Energie” and appeared in journals called “Annalen der Physik” or “Beiblätter zu den Annalen der Physik”. Nobody would read such papers nowadays.

Two, his work was purely theoretical. That means that it revolved around the production of new ideas, or to put it more bluntly, around imagination. These forms of imagination were not wild or unchecked – it wasn’t “anything goes”. They were based on a comprehensive knowledge of the field in which he placed these ideas (the “known facts of science”, one could say, or “the state of the art” in contemporary jargon), and the ideas themselves presented a synthesis, sweeping up what was articulated in fragmentary form in various sources and patching up the gaps between the different fragments. His ideas, thus, were imagined modes of representation of known facts and new (unknown but hypothetical and thus plausible or realistic) relations between them.

There was nothing “empirical” about his work. In fact, it took decades before aspects of his theoretical contributions were supported by empirical evidence, and other aspects still await conclusive empirical proof. He did not construct these ideas in the context of a collaborative research project funded by some authoritative research body – he developed them in a collegial dialogue with other scientists, through correspondence, reading and conversation. In the sense of today’s academic regime, there was, thus, nothing “formal”, countable, measurable, structured, justifiable, or open to inspection in the way he worked. The practices that led to his theoretical breakthroughs would be all but invisible on today’s worksheets and performance assessment forms.

As for “method”, the story is even more interesting. Einstein would rather systematically emphasize the disorderly, even chaotic nature of his work procedures, and mention the fact (often also confirmed by witnesses) that, when he got stuck, he would leave his desk, papers and notebooks, pick up his violin and play music until the crucial brainwave occurred. He was a supremely gifted specialized scholar, of course, but also someone deeply interested (and skilled) in music, visual art, philosophy, literature and several other more mundane (and “unscientific”) fields. His breakthroughs, thus, were not solely produced by advances in the methodical disciplinary technique he had developed; they were importantly triggered by processes that were explicitly non-methodical and relied on “stepping out” of the closed universe of symbolic practices that made up his science.

*****

Imagine, now, that we would like to train junior scholars to become new Einsteins. How would we proceed? Where would we start?

Those familiar with contemporary research training surely know what I am talking about: students are trained to become “scientists” by doing the opposite of what turned Einstein into the commanding scientist he became. The focus these days is entirely – and I am not overstating this – on the acquisition, development and refinement of methods to be deployed on problems which, in turn, are grounded in assumptions and formulated by means of hypotheses. Research training now is training in the practice of that model. The problems are defined by the assumptions and discursively formulated through the hypotheses – so they tolerate little reflection or unthinking; they are to be adopted. And what turns the student’s practices into “science” is the disciplined application of acquired methods to such problems resting on such assumptions. This, then, yields scientific facts either confirming or challenging the “hypotheses” that guided the research, and the production of such facts-versus-hypotheses is called scientific research. Even more: increasingly we see that only this procedure is granted the epithet of “scientific” research.

The stage in which ideas are produced is entirely skipped. Or better: the tactics, practices and procedures for constructing ideas are eliminated from research training. The word “idea” itself is often pronounced almost with a sense of shame, as an illegitimate and vulgar term better substituted by formal, jargonesque (but equally vague) terms such as “hypothesis” – while, in fact, the closest thing to “idea” here is the term “assumption” I used in my description of the now dominant research model. And the thing is that while we train students to work from facts through method to hypotheses in solving a “problem”, we do not train them to question the underlying assumptions that shaped both the “problem” they intend to address and the epistemological and methodological routes designed to solve it. To put it more sharply, we train them in accepting a priori the fundamental issues surrounding and defining the very stuff they should inquire into and critically question: the object of research, its relations with other objects, the “evidence” we shall accept as elements adequately constructing this object, and the ways in which we can know, understand and communicate all this. We train them, thus, in reproducing – and suggestively confirming – the validity of the assumptions underlying their research.
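To make the model tangible, here is a deliberately schematic caricature in code – mine, not drawn from any actual study; the “assumption”, the hypothesis and the numbers are all invented for illustration. The point it tries to show is that the procedure can only ever confirm or challenge the hypothesis; the assumption never enters the calculation at all.

```python
import random

random.seed(1)

# Step 1: the assumption, adopted rather than examined (invented example).
ASSUMPTION = "group membership determines behavior"

# Step 2: a hypothesis derived from it ("group B scores higher"), plus
# invented toy "measurements" for two groups of 30.
group_a = [random.gauss(5.0, 1.0) for _ in range(30)]
group_b = [random.gauss(5.6, 1.0) for _ in range(30)]

def mean(xs):
    return sum(xs) / len(xs)

# Step 3: the method -- a crude permutation test on the difference of means.
observed = abs(mean(group_a) - mean(group_b))
pooled = group_a + group_b
extreme = 0
for _ in range(10000):
    random.shuffle(pooled)
    if abs(mean(pooled[:30]) - mean(pooled[30:])) >= observed:
        extreme += 1
p_value = extreme / 10000

# Step 4: the "fact". Whatever the p-value turns out to be, the ASSUMPTION
# itself is never touched by the procedure -- which is precisely the point.
print(f"p = {p_value:.4f}; assumption left unexamined: {ASSUMPTION!r}")
```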

“Assumptions” typically should be statements about reality, about the fundamental nature of phenomena as we observe and investigate them among large collectives of scientists. Thus, an example of an assumption could be “humans produce meaning through the orderly grammatical alignment of linguistic forms”. Or: “social groups are cohesive when they share fundamental values that exist sociocognitively in members’ minds”. Or: “ethnicity defines and determines social behavior”. One would expect such assumptions to be the prime targets of continuous critical reassessment in view (precisely) of the “facts” accumulated on the aspects that should constitute them. After all, Einstein’s breakthroughs happened at the level of such assumptions, if you wish. Going through recent issues of several leading journals, however, leads to a perplexing conclusion: assumptions are nearly always left intact. Even more: they are nearly always confirmed and credentialed by accumulated “facts” from research – if so much research can be based on them, they must be true, so it seems. “Proof” here is indirect and by proxy, of course – like miracles “proving” the sacred powers of an invoked saint.

Such assumptions effectively function not as statements about the fundamental nature of objects of research, open for empirical inspection and critique, but as axiomatic theses to be “believed” as a point of departure for research. Whenever such assumptions are questioned, even slightly, the work that does so is instantly qualified as “controversial” (and, in informal conversations, as “crackpot science” or “vacuous speculation”). And “re-search”, meaning “searching again”, no longer means searching again da capo, from step 1, but searching for more of the same. The excellent execution of a method and its logic of demonstration is presented as conclusive evidence for a particular reality. Yes, humans do indeed produce meaning through the orderly grammatical alignment of linguistic forms, because my well-crafted application of a method to data does not contradict that assumption. The method worked, and the world is chiseled accordingly.

*****

Thus we see that the baseline intellectual attitude of young researchers – encouraged or enforced, and positively sanctioned: sufficient, for instance, to obtain a doctoral degree, get one’s work published in leading journals and embark on a ratified academic career – is one in which accepting and believing are key competences, increasingly even the secret of success as a researcher. Being “good” as a researcher now means not unthinking the fundamental issues in one’s field, and abstaining from the critical, inquisitive reflex of looking, unprompted, for different ways of imagining objects and the relations between them, eventually arriving at new, tentative assumptions (call them ideas now).

The reproductive nature of such forms of research is institutionally supported by all sorts of bonuses. Funding agencies have a manifest and often explicit preference for research that follows the clear reproductive patterns sketched above. In fact, funding bodies (think of the EU) often provide the fundamental assumptions themselves and leave it to researchers to come up with proof of their validity. Thus, for instance, the EU would provide in its funding calls assumptions such as “security risks are correlated with population structure, i.e. with ethnocultural and religious diversity” and invite scientific teams to propose research within the lines of the sociopolitical reality thus drawn. Playing the game within these lines opens opportunities to acquire that much-coveted (and institutionally highly rewarded) external research funding – an important career item in the present mode of academic politics.

There are more bonuses. The reproductive nature of such forms of research also ensures rapid and high-volume streams of publications. The work is intellectually extraordinarily simple, really, even if those practicing it will assure us that it is exceedingly hard: no fundamental (and often necessarily slow) reflection, unthinking and imaginative rethinking are required; the application of a standardized method to new “problems” suffices to achieve something that can qualify as a (new or original) scientific fact and can be written down as such. Since literature reviews are restricted to work operating within the same method-problem sphere, and exclude anything that fundamentally questions the assumptions, published work quickly gains high citation metrics, and the journals carrying such work are guaranteed high impact factors – all, again, hugely valuable symbolic credit in today’s academic politics. Yet reading such journal issues in search of sparkling and creative ideas usually turns into a depressing confrontation with intellectual aridity. Fortunately, I can read such texts as a discourse analyst, which makes them at least formally interesting to me. But that is me.

*****

Naturally, but unhappily, nothing of what I say here is new. It is worth returning to that (now rarely consulted) classic by C. Wright Mills, “The Sociological Imagination” (1959), to get the historical perspective right. Mills, as we know, was deeply unhappy with several tendencies in the US sociology of his day. One tendency was the reduction of science to what he called “abstracted empiricism” – comparable to the research model I criticized here. Another was the fact that this abstracted empiricism took the “grand theory” of Talcott Parsons for granted as the set of assumptions underlying abstracted empirical research. A poor (actually silly) theory vulnerable to crippling empirical criticism, Mills complained, was implicitly confirmed by the mass production of specific forms of research that used the Parsonian worldview as an unquestioned point of departure. The title of his book is clear: in response to that development, Mills strongly advocated imagination in the sense outlined earlier – the recognition that the truly creative and innovative work in science happens when scientists review large amounts of existing “known facts” and reconfigure them into things called ideas. Such re-imaginative work – I now return to a contemporary vocabulary – is necessarily “slow science” (or at least slower science), and it is effectively discouraged in the institutional systems of academic valuation presently in place. But those who neglect, dismiss or skip it do so at their own peril, C. Wright Mills insisted.

It is telling that the most widely quoted scholars tend to be people who produced exactly such ideas and are labeled as “theorists” – think of Darwin, Marx, Foucault, Freud, Lévi-Strauss, Bourdieu, Popper, Merleau-Ponty, Heidegger, Hayek, Hegel and Kant. Many of their most inspiring works were nontechnical, sweeping, bold and provocative – “controversial” in other words, and open to endless barrages of “method”-focused criticism. But they influenced, and changed, so much of the worldviews widely shared by enormous communities of people worldwide and across generations.

It is worth remembering that such people really did produce science, and that very often they changed and innovated colossal chunks of it by means of ideas, not methods. Their ideas have become landmarks and monuments of science (which is why everyone knows Einstein but only very few people know the scientists who provided empirical evidence for his ideas). It remains worthwhile to examine their works with students, looking closely at the ways in which they arrived at the ideas that changed the world as we know it. And it remains imperative, consequently, to remind people that dismissing such practices as “unscientific” – certainly when this has effects on research training – denies those towering scientific efforts, which inspired and formed generations of scientists, the categorical status of “science”, reserving that status for a small fraction of scientific activities which could, perhaps far better, be called “development” (as in “product development”). Whoever eliminates ideas from the semantic scope of science demonstrates a perplexing lack of them. And whoever thinks that scientific ideas are the same as ideas about where to spend next year’s holiday displays a tremendous lack of familiarity with science.

*****

Much of what currently dominates the politics and economies of science (including how we train young scientists) derives its dominant status not from its real impact on the world of knowledge but from heteronomic forces operating on the institutional environments for science. The funding structure, the rankings, the metrics-based appraisals of performance and quality, the publishing industry cleverly manipulating all of that – those are the engines of “science” as we now know it. These engines have created a system in which Albert Einstein would be reduced to a marginal researcher – if a researcher at all. If science is to maintain, and further develop, the liberating and innovative potential it has promised the world since the Enlightenment, it is high time to start questioning all this, for an enormous amount of what now passes as science is astonishingly banal in its purpose, function and contents, confirming assumptions that are sometimes simply absurd and surreal.

We can start by talking to the young scholars we train about the absolutely central role of ideas in scientific work, encourage them to abandon the sense of embarrassment they experience whenever they express such ideas, and press upon them that doing scholarly work without the ambition to continue producing such ideas is, at best, a reasonably interesting pastime but not science.

Related texts

https://alternative-democracy-research.org/2015/06/27/when-scientific-became-a-synonym-for-unrealistic/

https://alternative-democracy-research.org/2015/04/13/investing-in-higher-education/

https://alternative-democracy-research.org/2014/10/15/the-power-of-free-in-search-of-democratic-academic-publishing-strategies/

https://alternative-democracy-research.org/2015/06/10/rationalizing-the-unreasonable-there-are-no-good-academics-in-the-eu/

by-nc

When “scientific” became a synonym for “unrealistic”.

Paul Samuelson

Jan Blommaert 

“From Adam Smith in 1776 to Irving Fisher in 1930, economists were thinking about intertemporal choice with Humans in plain sight. Econs began to creep in around the time of Fisher, as he started on the theory of how Econs should behave. But it fell to a twenty-two-year-old Paul Samuelson, then in graduate school, to finish the job”. (Richard H. Thaler, Misbehaving: The Making of Behavioral Economics, p.89; New York: Norton, 2015).

Richard Thaler, in this wonderful book, uses the terms “Humans” and “Econs” to distinguish between two kinds of actors. “Humans” are real people observed in real life, with real interests, attitudes and modes of thought and behavior that are often, let us say, suboptimal. “Econs”, by contrast, are fictional characters: ideal people who have no passions or biases, are always rational, possess a maximum of information and are able to convert it linearly into economic behavior. Thaler’s book is a powerful argument in favor of an Economics that keeps track of, and explains, Human behavior, at least as a qualification to the kinds of fictional predictions of Econs’ behavior that are the mainstream’s occupation.

In so doing, Thaler also directs our attention towards the small historical window in which the current mainstream’s doctrine emerged and flourished. For almost two centuries, Economics was preoccupied with real markets, customers, prices and policies – Adam Smith’s Theory of Moral Sentiments setting the scene for an Economics that dealt with the whims of human social behavior. The discipline abandoned this focus only about half a century ago, when Samuelson, Arrow and some others replaced muddled descriptions of reality with elegant mathematical “models”, supposed to be of absolute and eternal precision and capable of bypassing the uncertainties and historical situatedness of real human minds. When critics pointed towards such minds (and their tendency to violate the rules of such elegant models), the response was that, willingly or not, people in economic activities would behave “as if” they had done the intricate calculations captured in the models. Thaler’s book is a lengthy and pretty detailed refutation of that “as if” argument: if nobody actually operates in the ways laid down in mathematical models, why not take such deviations – “misbehaving” Humans – seriously? For someone such as myself, involved in ethnographic studies of Humans and their social behavior, this question is compelling and the arguments it provokes inescapable.

Thaler is not a nobody in his field – he is the 2015 President of the American Economic Association; he will be able to ask this question urbi et orbi and with a stentorian voice. There might be some obstacles, though. Interestingly, the kinds of Economics designed by Samuelson and his comrades were (and are) seen as truly “scientific”. The conversion of a science grounded in observations of actually occurring behavior into a science concerned with abstract mathematical modeling was seen as the moment at which Economics became a real science: a complex of knowledge practices not tainted by the fuzziness of actual social facts but aiming at absolute Truth – something invariably expressed not in prose but in graphics, tables and figures, in which a new abstract model could be seen as a major scientific breakthrough (just look at the list of Economics Nobel Prize winners since the 1960s, and read the citations for their selection). As for the teaching and training of aspiring economists, it was thought that they would now be truly “scientific”, since students would learn abstract and ideal frameworks suggested to be absolutely generative, in the sense that any form of real behavior could be measured against them and explained in their terms. No more nonsense, no more description – a normative theory such as Samuelson’s (sketching how ideal people should act) would henceforth be presented as a descriptive one as well (effectively documenting and explaining how they actually act) – an absolute theory, in other words. The shift away from “realism” – the aim of descriptive theories – towards ideal-typical modeling – the aim of normative theories – was seen as irrelevant. Economics became “scientific” as soon as it abandoned realism as an ambition.
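The contrast can be shown in miniature. The sketch below is mine, not Thaler’s, and its parameter values are purely illustrative: the “Econ” discounts future rewards exponentially, as in Samuelson’s normative discounted-utility model, and therefore never reverses preferences over time; the “Human” discounts roughly hyperbolically, as behavioral economists have documented, and does reverse them.

```python
# A minimal sketch, assuming illustrative parameter values (5% annual rate,
# k = 0.5); nothing here is taken from Thaler's book itself.

def econ_value(reward, delay_years, annual_rate=0.05):
    """Samuelson-style exponential discounting: time-consistent preferences."""
    return reward / ((1 + annual_rate) ** delay_years)

def human_value(reward, delay_years, k=0.5):
    """Simple hyperbolic discounting: overweights the immediate present."""
    return reward / (1 + k * delay_years)

# The Human prefers 100 now to 110 in a year (100 > 73.3), yet prefers
# 110 in ten years to 100 in nine years (18.3 > 18.2) -- a preference
# reversal the exponential Econ never exhibits.
for delay in (0, 1, 9, 10):
    print(delay,
          round(econ_value(110, delay), 2),
          round(human_value(110, delay), 2))
```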

It is interesting to see that in post-World War II academia, similar moves were made in different disciplines. Chomsky’s revolution in Linguistics (launched by his Syntactic Structures, 1957) is an example. Whereas Linguistics until Chomsky was largely driven by descriptive aims and methods (go out and describe a real language), in which careful empirical description and comparison would ultimately lead to adequate generalization (Saussure’s Langue), Chomsky saw real Human language as propelled by an abstract formal and generative competence, describable as a finite set of abstract rules capable of generating every possible sentence in a language. This, too, was seen at the time as a major leap towards scientific maturity, and senior philosophers of science (already accustomed to seeing formalisms such as mathematical logic as the purest forms of meaning) argued that, with Chomsky, Linguistics had finally become a “science”. Linguists, from now on, would no longer do fieldwork – the interest in listening to what real people actually said was disqualified – but rely on “introspection”: one’s own linguistic intuitions were good enough as a basis for doing “scientific” Linguistics. It took half a century of sociolinguistics to replace this withdrawal from realism with renewed attention to actual variation and diversity in real language. Contemporary sociolinguistics, consequently, stands to Linguistics much as Thaler’s Behavioral Economics stands to mainstream Economics: as a sustained attempt at making this “science” realistic again.
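For readers unfamiliar with the formalism: the idea of “a finite set of abstract rules capable of generating every possible sentence” can be illustrated with a toy grammar. The mini-grammar and lexicon below are my own invention for illustration, not Chomsky’s actual rule system; note how the recursive NP → Det N PP rule lets a finite rule set generate, in principle, unboundedly many sentences.

```python
# A toy generative grammar: finitely many rewrite rules, unboundedly many
# sentences (via the recursion NP -> Det N PP -> ... NP ...).
import random

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "Adj", "N"], ["Det", "N", "PP"]],
    "VP":  [["V", "NP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "Adj": [["small"], ["formal"]],
    "N":   [["linguist"], ["sentence"], ["theory"]],
    "V":   [["describes"], ["generates"]],
    "P":   [["about"], ["with"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol with a randomly chosen rule;
    anything not in GRAMMAR is a terminal word."""
    if symbol not in GRAMMAR:
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the linguist describes a theory"
```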

Similar stories can be told about disciplines such as psychology and sociology, and later cognitive science, where the desire to become “scientific” in the same era led to a canonical “science” in which white-room experiments and quantifiable surveys replaced the actual observation of situated social behavior and attention to what people really did and said about themselves and society.

There, too, the assumptions were the same: the actual social behavior of people is driven by a “deeper” abstract level of psychological, social and cognitive processes which can be captured and tested by detaching individuals from their real-life environments and submitting them to testing procedures that bear no connection whatsoever with any other actual form of social and meaningful behavior. Thus, cognitive, psychic and emotional behavior can be accurately and “scientifically” studied by putting individuals into an MRI scanner, where they stay entirely immobile and cut off from any outside stimulus for 45 minutes. The outcomes of such procedures (quite paradoxically called “empirical” by practitioners) are presented, remarkably (or better, incredibly), as accurate accounts of real, situated and contextually sensitive social and mental activity. Abstract modeling of what we could call “Psychons” is, here too, seen not merely as a normative enterprise but as a descriptive one, predicting (with varying degrees of accuracy) Human behavior. This study of Psychons, then, is the real “science”, often rhetorically opposed to and contrasted with the “storytelling” or “journalism” of research grounded in actual observation and description of Humans (turning one of the 20th century’s most influential intellectuals, Sigmund Freud, into a fiction writer). Senior sociologists and psychologists such as Herbert Blumer and Aaron Cicourel brought powerful (and never effectively refuted) methodological arguments against this shift away from realism and towards “science” – their arguments were dismissed as unhelpful.

So here we are: knowledge disciplines concerned with Man and society appear to be “scientific” only when they deliberately reject the challenge of realism – “reality talking back”, as Herbert Blumer famously called it – and engage in abstract formalization and modeling, regardless of whether such formal schemes and models stand the test of empirical reality checks. Such “science”, because it dismisses this kind of systematic reality check, also becomes incapable of describing change. Experiments need to be “repeatable” in order to be “scientific”, and consequently we continuously check and test things that have to remain stable in order to be scientifically testable. Actual social processes and realities, however, are not “experiments”; they display a strong tendency to change perpetually, which precludes repeatability, and consequently they can never be “scientifically” addressed. This feature – the bias towards stability and the incapacity to address change – is a constant in all these “sciences”. And those who practice such “sciences” are actually proud of it. Strange, isn’t it?

We live by our mythologies, Roland Barthes famously said. One of the mythologies we live by is that of “science” being necessarily, because of its own criteria for validity, unrealistic, and therefore often outlandish and outrageous in its findings and conclusions. It would be good, therefore, to return to the old debates that historically accompanied the shifts in the disciplines I mentioned here, and carefully examine the validity of the critical arguments brought against these kinds of “science”. To the extent that people still believe that “re-search” means “looking again”, i.e. being continuously critical of one’s own knowledge doctrines, this would be an eminently scientific practice.

PS (2017): Richard Thaler was awarded the Economics Nobel Prize in October 2017.

by-nc

An interview with Jan Blommaert on research and activism


Jan Blommaert 

Responses to a survey on this topic, March 2015 (courtesy Tina Palivos & Heath Cabot).

How would you define or describe research and social action? Tell us a little bit about your background and your experience in both of these areas.

JB: Research is social action; the fact that the question separates both presupposes “social action” as an “abnormal” aspect of research, while research is always and inevitably social action: an action performed in a real social environment, and infused with elements from a preceding state as well as leading to effects in a posterior state.

The question, rather, would thus be which specific type of social action research would be, and I understand your question as pertaining to what one could call “activist research”, i.e. research that is critical of existing social relations and attempts, at least within the boundaries of research, to amend or alter them, usually in favor of a more equitable or balanced idea of social relations.

Such activist research, I would argue, takes sides in the sense that, based on a preceding analysis of social relations, researchers decide to side with the weakest party in the system and deploy their research in an attempt to provide that weaker party with new intellectual tools for addressing their situation. These tools can be self-analytic – to provide an accurate analysis of the situation of systemic inferiority in which the group is placed – or general-analytic – a critical analysis of the entire system with its various positions and challenges; and such tools are invariably discursive: the forms of analysis provide new discursive, argumentative and representational tools.

Briefly describe academic knowledge or know‐how? Activist knowledge or know‐how?

JB: Knowledge is one, the discourses in which knowledge is articulated are the point here. “Activist”, as in the description above, represents a discursive scale level in which “esoteric” academic knowledge is converted into discourses of wider currency (“simpler” discourses, if you wish), without sacrificing the analytical accuracy and power of the academic discourses.

Do you see them as distinct? If yes, how? How do they overlap, if at all?

JB: Note that the function of both discourses is different; while academic discourse is there to circulate in and convince small circles of peers, activist knowledge must circulate in and convince far broader audiences and systems of mediation (e.g. mass media).

In your experience, how do these areas complement each other?

JB: Personally, I could never find sufficient satisfaction in “pure” academic work if it lacked the dimension of advocacy and appeal to broader and more complex audiences. Science does have the potential to change the world, so one should not be satisfied with changing the academic world alone. As scientists, we all have a duty towards the power of science: to use it carefully, justly and for the benefit of humanity, not just a small subset of it. Being a scientist, for me, commits us to these fundamental humanistic duties.

In my case, I always complemented my “purely” academic oeuvre with the writing of low-threshold, Dutch-language books (12 or 13 by now), converting research achievements into texts that could be used in grassroots mobilization, professional training or general-interest reading and instruction. This activity comes with a great deal of lecturing and debating for the audiences addressed by the low-threshold books, which is both a lot harder than academic lecturing (academics are usually very civil and polite towards one another) and a lot more rewarding (convincing and changing the minds of an audience of 300 schoolteachers, train drivers or longshoremen gives one a sense of relevance rarely matched by convincing a handful of academics).

For you, what are the tensions or conflicts between activism and academic work that you have come across? What would you do (or have you done) to resolve these conflicts or tensions? 

JB: The conflicts are diverse:

-No real career bonuses can be obtained for “advocacy” work if it doesn’t come with “purely” academic aspects; a real problem, especially for junior researchers. In my research group, we also “count” advocacy outputs.

-A permanent battle against stereotypes of the researcher as an ivory-tower fellow out of touch with “reality” (we produce “theory” as opposed to “reality”). Easy to remedy: just talk about reality, and show relevance in their terms.

-Debate is far harder, more violent and sometimes highly unpleasant in the wider public arena; one must be able to withstand brutal public allegations, insults and accusations. It’s not a good place to be in for sensitive souls.

But let me also address the advantages and benefits. In my experience, a connection between research and activism improves research. If you wish to solve one single real-world problem of one single individual, you quickly discover the inadequacies of our toolkits and the need to come up with better and more precise science. If I have ever made “breakthroughs”, it was because I had a sharp awareness of the fact that someone’s life literally depended on it. Believe me, that is a powerful engine.

What do you think are the most important and necessary ways in which research and social action could be linked, bridged, or integrated?

JB: All science should benefit humanity, serving general interests rather than specific ones. In methodology, we attempt to achieve this by means of generalization from isolated facts (i.e. theory). And too little is done, in actual fact, to make this mechanism into a general educational principle for all.

Are there any stumbling blocks or concerns you would have around projects that seek to bridge or bring together research and social action, and academic and activist worlds, to create modes of knowledge and collaboration? How might these be ameliorated?

JB: My very first answer addressed the presupposition underlying your question: the fact that “social action” is seen as separate from scientific action. I see this as a major problem, an “ideology” if you wish, in which research is seen as in itself value-free (“objective”), to which “value” can be added after research, either as hard cash (licences, patents, industrial contracts etc.) or as soft capital (impact on the nonacademic field, as it’s called nowadays). It is a crazy assumption which denies the fundamental sociological given of research: that it is, like any social action, a historically, socioculturally and politically situated activity. I always ask the question “why now?” when addressing research questions – how come we find this a researchable question here-and-now and not, for instance, in the 1970s or 1990s? The real answer to this question leads us into an analysis of scientists as people addressing problems from within a subjective position, defined only partly by “objective” facts of science and far more by the concrete social positions from which they attack questions and problems.

This is clearest (while often least understood) when we talk about research funding. There is a strong suggestion that external money is “neutral” in the sense that it does not pre-script research. In actual fact, it scripts it substantively. If the EU opens a funding line on a particular topic – think of “security” – this funding line incorporates the current interests and needs of the EU (combating terror and transnational crime, for instance), excluding others (e.g. not combating these things). The “priorities” defined in such funding calls are always someone’s priorities, and rarely those of the scientists themselves. Scientists have to adjust to them, and this means that they have to adjust to subjective positions defined by funding bodies, within which they can then proceed to do “objective” research.

It is this myth about research – that it is in itself only “good” or “excellent” if and only if it is “value free” – that poisons the debate and the climate on science and society these days. It enables scientists to escape their accountability for what they are doing, and denies them the dialogue on effective social effects of which they should be very much part.

by-nd