It is unlikely that I will, in the future, attend conferences or similar meetings; my health won’t allow it. Over the past few years, I recorded several full-length lectures on a range of topics. I will post them here. All of them are open access, so feel free to use them whenever you wish to have me on the program of your seminars, symposia or what not.
On writing an academic paper
On sociolinguistic scales
New modes of interaction, new modes of integration
The action perspective in linguistic anthropology
Invisible lines in the linguistic landscape
Online with Garfinkel
Political discourse in postdigital societies
Old new media: how big issues recur whenever new media appear
Understanding the culture wars: weaponizing the truth
Does context really collapse in social media interaction?
(incomplete, but the point is made in this part of the lecture)
Online identities and sociolinguistics
Two of my maîtres à penser died relatively young. Michel Foucault was 57, Erving Goffman was 60. It is highly likely that I shall die relatively young as well. I’m 58 now, and I was diagnosed with stage 4 cancer in mid-March 2020. Since there is suddenly very little future left to plan, speculate or dream about, one tends to use such landmark moments as a prompt to reflect on the past. The guiding question in this – quite an obvious one – is: what was important?
I will restrict my reflections to the professional parts of my life. This is, of course, an artificial segmentation, and readers must keep in mind that the professional part of life was always intertwined with the nonprofessional parts, often in uneasy or poorly balanced ways. Perhaps that story should be told elsewhere. For now, I will focus on the part of me that was called “academic”.
Let me briefly preface what follows by reviewing what was not important.
What was not important was competition and its attributes of behavioral and relational competitiveness, the desire or urge to be the best, to win contests, to be seen as the champ, to proceed tactically, to forge strategic alliances and what not. I did not have a sense that I had to be part of a specific clique or network, and I don’t think I ever made great efforts to get close to people considered to be important. If I was a member of such networks, it was rather by accident than by design – it happened to me.
I never self-imagined as a genius, individually measured against others, and individually responsible for the production of superb stuff that everyone should read, know, quote and assign to students. Quite the contrary: I saw myself as unexceptional, and as someone who would always need a good team around me in order to achieve anything. Given that academic life, in my case, was not a thing I had actively desired and sought, but a gift I received from others, I felt a duty to be good, as good as I could be, and better tomorrow than today. So I worked hard, essentially taking my cues from others – the literature of course (a community of others often overlooked when we talk about academic achievement), but also contacts and friends with whom teams could be formed. Discussion and brainstorm were my favorite activities; they were in the most literal sense the ludic, fun, pleasure dimensions of academic life. What I did alone, usually, was the slow and careful analysis of data. But that’s the only thing that’s really individual in a range of activities that were collective and involved intense sharing, exchange and generosity. And even that thing – the data analysis – was usually submitted to the judgment of others before it could be publicly shown. So much for being the lone, unique and autonomous genius researcher.
In such contexts of collective sharing, conditioned by maximum generosity, changing one’s mind is self-evident. The very point of having a discussion or brainstorm – an “exchange of ideas” – is that ideas can be exchanged and changed, and that one leaves the session with better things in one’s head than before the session. Learning is the key there, and if I would be ready to pin one label onto myself, it’s the label of an eternal, insatiable learner.
Which is why I read massively all through my life. And while part of that reading was “just” reading, another part was studying. Most of my career, I was involved in some kind of study, collecting and selecting writings from which I wanted to draw advanced insights, useful for the research projects I was engaged in. I studied, for instance (and the list is not complete), structuralism, existentialism, phenomenology, arcane things such as the works of Rudy Botha on Chomsky and the Functional Grammar attempts of Simon Dik, Talmy Givon and M.A.K. Halliday; but also the entire oeuvre (or, at least, most of what I could get) of Michel Foucault, Carlo Ginzburg, Bakhtin, Freud, Durkheim, Simmel, Parsons, Eric Hobsbawm, E.P. Thompson, Pierre Bourdieu, Charles Goodwin, Dell Hymes, Michael Silverstein, Erving Goffman, Aaron Cicourel, Harold Garfinkel, Anne Rawls, Fernand Braudel, J.K. Galbraith, Immanuel Wallerstein, Arjun Appadurai and several others. I studied Marx and Marxism in its very diverse varieties, Rational Choice, Machiavelli, Darwin, G.H. Mead’s work and influence, Dewey, Paulo Freire, Ngugi wa Thiong’o, Okot p’Bitek, Walter Rodney, Issa Shivji and quite a bit of African political theory from the 1950s, 1960s and 1970s. In order to understand a lot of that, I had to study the works of Mao Zedong and the history of the Cultural Revolution in China. And so on, and so forth.
If I have regrets now, it is about the fact that some of those studies will remain unfinished. I took great pleasure from them.
I disliked and dislike – intensely – the development of academic industrial culture that I was witness to throughout my career, with almost-totalized individualization of academic work and performance measurement, with constant inter-individual competition driving young and vulnerable colleagues to extreme and dangerous levels of stress and investment in work rather than life, and with managers emphasizing – without any burden of evidence – that the “single-authored journal paper” (published, evidently, behind a huge paywall) is the pinnacle of academic performance and the gold standard for measuring the “quality” of an individual researcher. Added to this – and this, too, I was a witness of – is the growth of a veritable celebrity culture in academia, in which mega-conferences take the shape of pop festivals with rockstar headliners bringing their greatest hits in front of an audience of poorly paid struggling academics who spent their personal holiday budgets purchasing a ticket for such events. Little truly valuable intellectual work is going on there. And identical to pop festivals, the carbon footprint of such academic rock concerts is scandalous.
Frankly, all of this is in its simplest and most elementary form anti-academic and anti-intellectual. It’s the recipe for bad science, not for innovation and improvement. I participated in all of it, for all of it became “new” while I was active – it was the culture that defined my career. That culture defined me as one of these rockstars for a while, and thus placed me quite consistently in the company of a small coterie of similar rockstars. It is not a thing I shall miss, for it was invariably awkward and alienating, and very often incredibly boring. And this new culture took away and delegitimized a previous culture, one of collegial dialogue, collaboration, slowness, time to think, to reflect and to doubt, periods of invisibility and absence from public stages – because one was doing some serious bit of research, for instance. And a culture in which one would write something whenever, and because something new had to be reported, not because one needed to achieve one’s annual output quota or another “top” paper in order to be eligible for promotion, tenure or appointment.
A footnote: another part of that defining culture was university reorganizations, managerialization and budget cuts, with an increasing rat race for jobs (for which the intellectual world pays a terrible price), “customer-oriented” academic programs that had to be checked by the marketing guys as to their merits in a market of academic products, the decline of vital academic “support staff” and the almost-complete commodification of academic output – see the point about “single-authored journal papers” above, and one can add the metrics and impact mania to it. Academic publishing, as an industry, has become a disgrace and is an obstacle to science, not a facilitator (let alone an indispensable actor). Publishing has become a form of terror for young scholars, while it should be an instrument for liberation, for finding their voice and feet in the business. Burnout has now become an endemic professional hazard in academia, much like depression, unhappy human relationships and unhealthy lifestyles. It’s become a highly unattractive environment for human creativity, while it should be an environment, a specialized one, ideally tailored to precisely that.
So that was unimportant. The important things can be summarized in a few keywords: to give, to educate, to inspire. I will add a fourth keyword later.
As I said earlier, my academic life was a gift I received from others. It was unexpected as a gift, and I was unprepared for it. When I received my first academic job in 1988, I mainly looked at people I considered bad examples, and I decided not to do things the way they did. I essentially decided to be the kind of academic I myself would like to encounter if I were a student. If I had to teach, I should teach the kind of class I myself would love to attend as a student. And if I had to write, I should write texts I myself would enjoy reading. It’s a simple discipline I maintained throughout my career: it’s never about me, it’s always about the student, and my role is to give the student tools and resources useful and valuable for that student, not for me.
I realized early on that my role in the lives of the young people who were my students was that of an educator, not just a lecturer or a teacher. And once I realized that, I took it very seriously. I meticulously prepared every course I ever taught (and there were many), and I always rehearsed every lecture. I never walked into a lecture hall without a fully developed story and a script in mind for how to deliver it. If you have to teach, teach, and do that in a no-nonsense way. Make every minute of the class a moment worth attending for students, and make sure that they learn something in each of your classes. That sounds simple and straightforward, but it isn’t. It’s actually quite a tall order.
It starts from a refusal to underestimate your students. Many of my former students will remember that I would start a course by announcing that I would aim just one inch above their heads, so that they would have to stretch a bit in order to keep up with the pace and content of the course. I always did that: I gave students readings, contents and assignments often judged by colleagues to be too demanding or “above their level” – first-year students would have to read a book by Foucault, for instance. Well, the fact is that they did, and they learned massively from it. So what precisely “their level” is, usually and preferably remains to be determined after the process of learning, not prior to it. Prior to it, no one is “ready” for specific chunks of knowledge; they become ready through the work of learning. Not understanding this elementary fact, and assuming that students “have” a particular level that we, teachers, need to adjust to, is a dramatic error. In my career I have seen very often how this error leads to the infantilization of exceptionally talented young people, and to learning achievements that were a fraction of what could have been achieved. Please never underestimate your students.
Instead, give them the best you have to give. That means: don’t give your students old and pedestrian information, but give them your most recent and most advanced insights and thoughts. Draw them into the world of your current research, expose them to the most advanced issues and discussions in the field, show them complex and demanding data, and allow them into your kitchen, not just into your shop. For large parts of my career, I had a huge teaching load. I could only keep classes interesting for students and for myself by establishing direct and immediate links between my ongoing research and my teaching. I would take half-finished analyses of new data into the classroom, and finish the analysis there, with my students, allowing them to see how I made mistakes, had to return to earlier points, skip some particularly tough bits, and so forth. The good thing was: my frequent classes did not entirely eat away my research time, they were research time, and students were exposed to a researcher talking about a concrete and new problem that demanded a solution.
It is at this point, I believe, that “teaching” turns into “education”. As teachers, we do not “transfer knowledge” and we’re not, in that sense, a sophisticated or awkward kind of bulldozer or forklift by means of which a particular amount of resources is taken from one place (ours) to another (the students’ minds). This is how contemporary academic managerialism prefers to see us. I have already rejected it above.
No. Whether we like it or not, we are much, much more than that for our students, and we have to be. All of us still remember many of our teachers, from kindergarten all the way to university. Some of our memories of them may gradually fade, and some of the teachers may only survive in our memories as vague and superficial sketches attached to particular moments in life. But some of these teachers are actually quite important in the stories we build of ourselves; and of such teachers, we sometimes have extraordinarily extensive and detailed memories. Even more: some of these teachers served (and serve) as role-models or as people who defined our trajectories and identities at critical moments in life. And when people talk about such teachers, we notice how closely they observed and critically monitored even the smallest aspects of behavior of their teachers; their actual words and how, when and why they were spoken; particular gestures made or faces pulled; pranks or surprises they created, and so forth.
I became very aware of the fact that, as a teacher, I will be remembered by my students. I knew, at every moment of interaction with students, that this moment would leave a trace in their development and would often be given a degree of importance it never could have for me. In sum, I realized that, as a teacher, every moment in which I interacted with students would be a moment of education, of the formation of a person, using materials I would be offering to them during that specific moment of interaction. My entire behavior towards them would potentially be educational material in that sense. And my entire behavior towards them, consequently, needed to be organized in that sense. I should allow students to get to know me – at least, get to know a version of me that could be remembered as someone who positively contributed to their development as adult human beings. Respect, courtesy, integrity, professional correctness, empathy, reliability, trustworthiness, commitment: all of these words stand for behavioral scripts that demand constant enactment in order to be real.
Several times in my career, students told me what could best be called “secrets”, highly delicate personal things usually communicated only to members of a small circle of intimates. Twice, young female students came into my office in deep distress, announcing that they had been raped – and I was the first person they called upon for help. While such moments were of course disorienting and caught me cold, they taught me that as a teacher I was very much part of students’ lives, in ways and to degrees I never properly realized. And they taught me the huge responsibilities that came with it: we are so much more than “academics” for these young people; we are fully-formed human beings whose behavior can be helpful, important, even decisive for them. We should act accordingly, and not run away from this broader educational role we have.
The third keyword is “to inspire”, and I need to take a step back now. I mentioned the delight I always took in studying. The real pleasure I took from it was inspiration – other scholars and their works inspired me to think in particular directions, to think things I hadn’t been able to think before, to do things in particular ways, to explore techniques, methods, lines of argument, and so forth. Let me be emphatic about this. I can’t remember ever studying things in order to follow them the way a disciple follows the dictates of a master or an apprentice follows the rules of a trade – or at least, I remember that each attempt in that direction was a dismal failure. I was never able to absorb an orthodoxy, and to become, for instance, someone happy to carry the label of – say – critical discourse analyst or conversation analyst.
Whenever I studied, I wanted to be inspired by what I was studying, and I described inspiration above: it’s the force that suddenly opens areas and directions of thought, shows the embryo of an idea, offers a particular formulation capable of replacing most others, and so forth. Inspiration is about thinking, it is the force that kickstarts thinking and that takes us towards the key element of intellectual life: ideas. And science without ideas is not science, but a rule-governed game in which “success” is defined by the degree of non-creativity one can display in one’s work. The exact opposite, in other words, of what science ought to be. Science can never be submissive, never be a matter of “following a procedure” or “framework”. It is about constructing procedures and frameworks.
There were many moments in my career when graduate students would introduce their work to me, and preface it by saying things such as “I am using Halliday as my framework”. Usually, my response to that was a question: “how did Halliday become a framework?” And the answer is, of course, by constructing his own framework and refusing to follow those designed by others. People who “became a framework”, so to speak, took the essential freedom that research must include and rejected the constraints often mistaken for “scientific practice”. The essential freedom of research is the freedom to unthink what is taken to be true, self-evident and well-known and to re-search it, literally, as in “search again”. It is the freedom of dissidence – a thing we often hide, in our institutionalized discourses, behind the phrase “critical thinking”. I see dissidence as a duty in research, and as one of its most attractive aspects. I believe it is exactly this aspect that still persuades people to choose a career in research.
Inspiration draws its importance from the duty to unthink, re-search, and question, which I see as the core of research. We can make the work of unthinking and re-searching easier (and more productive, I am convinced) when we allow ourselves to draw inspiration from that enormous volume of existing work and the zillions of useful ideas it contains, as well as from interactions with friends, colleagues, students, peers – allow them to affect our own views, to shape new ones, to help us change our minds about things. And in our own practices, we should perhaps also try, consciously and intentionally, to inspire others. I mean by that: we should not offer others our own doctrines and orthodoxies. We should offer them our ideas – even if they are rough around the edges, unfinished and half-substantiated – and explain how such ideas might fertilize – not replace – what is already there.
I have quite consistently tried to inspire others, and to transmit to them the importance I attached to inspiration as a habitus in work and in life. In my writings, I very often sought to take my readers to the limits of my own knowledge and give them a glimpse of what lies beyond, of the open terrain for which my writings offered no road map, but which my writings could help them to detect as open for exploration. This has made parts of my work “controversial” and/or “provocative” – qualifications that are usually intended to be negative but inevitably also articulate a degree of relevance and suggest a degree of innovation. I was usually quite happy to receive these attributions, and they never irritated me. It also never irritated me when I found out that someone I engaged with in conversation did not know me well, had not read my work and did not pretend to have read it. Usually, those were among the more pleasant encounters.
These three things were definitely important to me in the professional part of my life: making a habit of giving, sharing and being generous in engaging with others; being aware of my duty to educate others and of the responsibilities that come with that, and to take that duty very seriously; and taking inspiration as a central instrument and goal of academic and intellectual practice. I can say that I have tried to apply and implement these three aspects throughout my career; I cannot claim to have done so faultlessly and perfectly – there is no doubt that I made every mistake known to humanity, and I am not speaking as a saint here. But the three elements I discussed here were – now that I can look back with greater detachment – always important, always guiding principles, and always benchmarks for evaluating my own actions and conduct.
I now need to add a fourth keyword: to be democratic. It’s of a slightly different order.
I grew up and studied in the welfare-state educational system of Belgium, and given the modest socio-economic status of my family, I would probably never have received higher education in other, fee-paying systems. I’m very much a product of a big and structural collective effort performed by people who did not know me – taxpayers – and regardless of who I was. I am a product of a democratic society.
I remained extremely conscious of that fact throughout my adult life, and my political stance as a professional academic has consistently been that I, along with the science I produce, am a resource for society, and should give back to society what society has invested in me. “Society”, in this view, includes everyone and not just a segment of it. It is necessarily an inclusive concept. And science in this view has to be a commons, a valuable resource available to everyone, an asset for humanity. Practicing this principle became increasingly difficult because of the developments I already mentioned above: the rapid and pervasive commodification of the academic industry during my career. Academic institutions, and academic work, became and have become extraordinarily exclusive and elitist commodities, and academic work that refuses the limitations commensurate with this commodification is, generally speaking and to understate it, not encouraged. I’ll return to this below, but I need to continue an auto-historical narrative first.
Working a lot in Africa and with Africans throughout my career, no one needed to tell me that knowledge, certainly in its academic form, was not available to everyone, and that a large part of humanity was offered access only to hand-me-downs from the more privileged parts. One can take this literally: many of the school books used in the early and mid-1980s in Tanzania were books taken off the syllabus in the UK and shipped – as waste products, in effect, but under the flattering epithet of educational development assistance – to Tanzania. And almost any student or academic I met at the University of Dar es Salaam (which became my second home for quite a while in the early stages of my career) would answer “books, journals” to the question “what is it you lack most here at the university?” Bookshelves in departments were indeed near-empty (even in so-called “reading rooms”), and the small collections of books privately held by academics (usually collected while doing graduate work abroad) were cherished, protected and rarely made available to others. In the University bookshop on campus, shelves were also empty, supplies were dismal and most of the collection on offer was dated. (Its most abandoned and dusty corner, however, became a treasure trove for me, for that was where cheap editions of the works of Marx, Lenin and Mao Zedong could be found, donated long ago by the governments of the USSR, the GDR and China.) My own working library at home – the working library of a PhD student – was several times larger than some of the departmental collections I had seen in Dar es Salaam. To the extent that “white privilege” has any meaning, I had a pretty sharp awareness of it from very early in my career.
Inequality became the central theme in my work and academic practice from the first moment I embarked on it. And I never abandoned it. I wanted to understand why understanding itself is an object of inequality. Concretely, I wanted to understand why the story of an African asylum applicant was systematically misunderstood and disqualified by asylum officials in Belgium and elsewhere; why the stories of particular witnesses in the South African Truth and Reconciliation Commission were seen as “memorable” while others were forgotten or never taken seriously; why so many stories from the margins are considered not even worth the effort of listening to, let alone recording and examining; why some groups of people are not recognized as interlocutors, as legitimate voices that demand respect and attention, and so forth. This general concern took me, during my entire career, to the margins of societies I inhabited and worked in, and made confrontations with racism, sexism and other structural forms of inequality inevitable.
It also led to various practical decisions about how I organized my work. I will highlight three such decisions.
One. My experiences in African universities made me very much aware of the existence of several academic worlds, not the single, idealized “academic community” sometimes invoked as a trope. And I decided to spend a lot of my efforts working with, and for the benefit of, what is now called the Global South. I am proud of official work I did with the University of the Western Cape in 2003-2008, where I coordinated a very big academic collaboration project on behalf of the Flemish Inter-University Council. UWC is a historically non-white university, and it still bore the scars of apartheid in 2003: the university was severely under-resourced and lacked the infrastructure as well as experience for building a contemporary research culture. Working in very close concert with the local university leadership – the most inspiring and energizing team of academic leaders I had ever met, and lifelong friends since – I believe we were able to turn the ship around. In the process I got to know a large community of amazing people who taught me a lot about what real commitment is – from Chancellor Desmond Tutu down to Allister, the man who acted as my fixer and driver whenever I was in Cape Town.
Informally, I did my best to work with and for scholars and institutions in the Global South, slowly building networks of contacts in several countries and trying to be of assistance in a variety of ways. The people I encountered through these networks usually didn’t have the money to travel to conferences where I appeared, nor the money needed to purchase my books. And this takes me to a second decision.
Two. I wanted to make my work available in open access and to create genuinely democratic mechanisms of circulation and distribution. Remember what I said earlier about science as a commons: I take that seriously. So, from very early on, I started working-paper series that enabled high-quality material to circumvent the paywalls of commercial publishers. And as soon as the web became a factor of importance in our trade, I used it as a forum for circulation and distribution. Everything I write is first posted on a blog (this blog), and then usually moves to a working paper format in the Tilburg Papers in Culture Studies, before it finds its way into expensive journals or books. I also became an early mover on academic sharing platforms such as Academia.edu and ResearchGate. And I am proud to see that a large segment of those who read and download my materials are scholars from the Global South – those who can’t afford the commercial versions of my work.
But my obsession with open access is not restricted to the issue of Global South readerships. My own students, working with me at a well-resourced university in an affluent country, cannot afford to buy my books. As I said earlier, the academic publishing business has become a disgrace, and it excludes growing numbers of people who absolutely need access to its products. I saw it as part of my duty to subvert that system, to share and distribute things usually not free to be shared and distributed, and to do so early on with recent material. For making old texts widely available is good and useful, but the real need for scholars in very large parts of the world is to gain access to the most recent material, to become part of ongoing debates, to align their own research with that which is cutting-edge elsewhere. And the academic publishing industry makes brilliant, truly majestic efforts to prevent exactly that.
We should not be part of that industry, we should not be its advocates and we should not feel obliged to serve that industry’s interests. We are its labor force, and we provide free, unpaid labor to it. We sign contracts with them – non-negotiable ones, usually – in which all rights to our own work are handed over, appropriated and privatized – in return for a DOI and a PDF. We are exploited by that industry to an extent that most other sane people find ridiculous. Yet, with a little bit of creative work, we no longer need that industry. As academics, we have an idea of the audiences for our work out there that is far more precise than that of any marketing officer in an academic publishing firm. We also have a very good idea of who might be knowledgeable and reliable reviewers of our work. And we just need a website to post our work when it’s ready for publication – offering it free of charge and without constraints on sharing to anyone interested in it, not to all those who have paid a certain amount of cash for it.
Three. Throughout my career, I never stopped addressing non-academic audiences. I gave literally hundreds of lectures, workshops, training sessions and public debates for professionals and activists in a range of fields – education, social work, care, law, policing, antiracism, feminism, support to refugees, youth organizations, trade unions and political parties. As a rule I did so without charging a fee (see what I said earlier about giving things back to society), and the default answer to invitations was “yes”. I always found such activities rewarding, and the audiences I met through such activities were often extraordinarily energizing ones. I also continued to write materials in Dutch. Over a dozen books, if I am not mistaken, and piles of articles – all written for lay audiences, often based on my ongoing research, and often used in professional training programs. It was my way of trying to bring recent science to a broader public forum quickly. For social workers or teachers in multilingual classrooms should not be given information that was valid a decade ago; they should get the most advanced insights and understandings available and take these into their practices.
I used a label for the things I mentioned in this section. I called it “knowledge activism”. In a world in which knowledge is at once more widely available than ever before, and more exclusive and elitist than ever before, knowledge is a battlefield and those professionally involved in it must be aware of that. Speaking for myself: a neutral stance towards knowledge is impossible, for it would make knowledge anodyne, powerless, of little significance in the eyes of those exposed to it. Which is why we need an activist attitude, one in which the battle for power-through-knowledge is engaged, in which knowledge is activated as a key instrument for the liberation of people, and as a central tool underpinning any effort to arrive at a more just and equitable society. I have been a knowledge professional, indeed. But understanding what I have done as a professional is easier when one realizes the activism which, at least for me, made it worthwhile being a professional.
I will stop here. I have reviewed four things that I found important, looking back at a career as an academic that started in 1988 and is about to end. As I said, one should not read this review of important principles as the autobiography of a saint. I was evidently not perfect: I made loads of mistakes, have been unjust to people, have made errors of judgment, have indulged in a culture of academic stardom and overachievement which I should have identified, right from the start, as superficial and irrelevant; I have been impossible to work with at times, grumpy and unpleasant at even more times, and so on. I am an ordinary person. But I do believe that I can say that I tried really hard to organize the professional part of my life according to the four points discussed here, and that the attempt, modest as it was, made that part of my life valuable to me. The satisfaction I draw from that is sufficient to end that part of my life without remorse, and without a sense of having missed out or of having been short-changed by others. I am happy to stop here.
I am often amazed at the naivety with which the thing called ‘research ethics’ is being addressed. This naivety, expressed in one-size-fits-all ‘ethical guidelines’ for research, overlooks the actual conditions under which so much of research is (necessarily) conducted: in sites often qualified as ‘margins’, with vulnerable, misrecognized and oppressed people whose position in society is precarious. Inequalities in the world are part and parcel of how actual research projects are undertaken, develop and evolve; this, of course, includes issues of method and methodology, but also issues of research ethics.
In 2008, I published a book called Grassroots Literacy. In the book, I analyzed handwritten texts by two authors from D.R. Congo – authors I had never met, about whom some suggested at the time that they had perished in the war raging in their area since the mid-1990s, and whose texts I had obtained, almost accidentally, through third persons. Working on these texts created acute ethical issues, which I raised and discussed in the preface. What follows is the relevant fragment from the preface to that book. The point I hope to make is that research ethics is a contextualized and situated matter, concrete features of which can and do escape the imagined simplicity (and equality) of the worldview often presupposed in ‘ethical guidelines’.
Globalization is a process that forces us to take the world as a context. This world is complex and highly diverse, and developments in the ‘centre’ of this world – the development of new telecommunication systems and media, for instance – have effects on the ‘margins’ of the world. Literacy is a case in point, and what the documents I examine here show us is that there is a growing gap between different literacy regimes in the world. Texts such as the ones I will discuss here do not quickly or easily communicate the messages they contain. Their meanings increasingly disappear in the widening gap between literacy regimes in diverse parts of the world. The problem is obviously not academic but very real, of immediate life-or-death importance to many people. Voice is a pressing concern in a globalizing context in which less and less can be taken for granted with respect to the communicative repertoires of people interacting with one another. I addressed these concerns in an earlier book called Discourse: A Critical Introduction (Cambridge University Press 2005), and in many ways the present study is a sequel to Discourse. It picks up, and develops, points embryonically made there, focusing on literacy for the reasons specified above, and bringing literacy analysis into the same theoretical field of force as the one described in Discourse.
This purpose offers me the opportunity to write about a corpus of texts that has puzzled, intrigued and mesmerized me for more than a decade. I came across Julien’s life histories in the mid-1990s, by what I would call ‘structured accident’. The documents are rare instances of grassroots life-writing, and they offered me more theoretical and descriptive challenges than I could imagine at the time. My encounter with these documents coincided with a period in which I was deeply engaged with Johannes Fabian’s work. I had read and reviewed his History from Below, and few books ever had such a profound impact on me. Fabian has definitely been one of my maîtres à penser and the present book is, consequently, very much the upshot of a protracted dialogue with Fabian’s work.
This dialogue intensified when, again by accident, I started working on a handwritten history of the Congo written by the Congolese painter Tshibumba, about whose historical paintings Fabian had published the magnificent Remembering the Present. I received a copy of this massively intriguing document from Bogumil Jewsiewicki, and quickly spotted the similarities between this history and Julien’s life-writing. Both displayed the constraints of sub-elite writing, and both produced a grassroots voice on history. In both, the very act of writing appeared to produce all sorts of things: texts, but also particular positions, subjectivities. The question guiding my work then became: what does this kind of grassroots literacy make possible for people such as Julien and Tshibumba?
I had, in the meantime, started realizing that the notion of constraint is central in considering this issue. Since the mid-1990s, I had frequently been requested by my national authorities to translate written statements by African refugees and Africans arrested by the police. Gradually, a corpus of texts had emerged in which I clearly saw that literacy achievements that had some value in sub-elite African contexts rather systematically failed to be seen as valuable in Belgium. The question about the possibilities of grassroots writing thus acquired a dimension of globalization: ‘grassroots’ equals local, and the local effectiveness and adequacy of communicative resources raises questions of mobility. Texts travel, and they do not necessarily travel well. In the transfer from one place to another, they cross from one regime into another, and the changed orders of indexicality mean that they are understood differently. Having clearly understood that both Julien’s and Tshibumba’s texts were mobile texts – both were written for addressees in the West – I started realizing that these documents might offer exceptional possibilities for exploring and identifying the main issues of literacy in the age of globalization: issues that have to do with the locality of literacy regimes, with mobility and inequality.
This is the story of this book. There is irony in the story, because, naturally, it was hard not to reflect on my own writing practices while I was investigating those of Julien, Tshibumba and others. I saw my own literacy regime in action – writing in a globalized language that is not my own, in a particular register and genre, on a sophisticated laptop, in a solitary comfortable space surrounded by an archive and a working library, and with Google on the toolbar. All these material conditions: I don’t take them for granted anymore. There is so much inequality inscribed in the production of this book. The main inequality is in the result: voice. I can produce a globally mobile voice, they can’t; I can produce a prestige genre, they can’t; I can speak from within a recognizable position and identity, they can’t.
There are ethical issues here. I can write about Julien and Tshibumba in ways they themselves could not, for reasons that will become all too clear in the chapters of this book. And I could not consult them while writing. I never had contact with Julien, only with his patron, Mrs Arens. She informed Julien about my academic work on his texts, and she gave me, also on his behalf, permission to pursue it. As for Tshibumba, he disappeared from the radar screen several years ago and no one has been able to inform me about his whereabouts. Julien and Tshibumba, we should recall, live in the southern part of the Congo, in an area marked by deep poverty and marginalization, and torn by unrest and war since the second half of the 1990s. As for the refugees and police suspects whose documents I have analyzed, I hardly ever had any contact with them either, often because I did not even know their names and because my role as state-appointed translator proscribed contacts with these subjects.
I am aware of these issues, have reflected on them over and over again, and came across the bitter irony of contemporary realities. Customary ethical codes for research presuppose a particular socio-political environment in which everyone has a name, an administrative existence, a recognizable and recognised subjectivity that demands respect and distance. We can only use a pseudonym when people’s real names are known and when knowledge and possession of that name is connected to inalienable rights, to subjectivity and, consequently, to norms that separate the public from the private sphere. Underlying this is the image of a fully integrated Modern society in which such elementary features are attached to everyone and recorded – officially – somewhere.
Real societies, alas, are different. There are people in our own Modern societies who do not possess such elementary features and rights. Illegal immigrants have no name and no identifiable ‘official’ existence. Their ‘lives’ and stories are, for all practical purposes, nonexistent. Their anonymity is not the result of a desire for ‘privacy’; it is the effect of erasure and silencing – not of choice but of oppression. And there are even more people elsewhere in the world to whom these conditions apply. African works of art kept in museums are only rarely attributed to an individual artist; they are attributed to an ethnic group or to a region somewhere in Africa. Millions of people there live ‘unofficial’ lives, and no one cares about their names, birth dates, addresses, or, in a wider sense, subjectivity. I write about their subjectivity, about their existence and lives – or seen from a different perspective: I invade their privacy – because I have voice and they don’t. I can invade their privacy because I have shaped a private sphere for them, and this act is an effect of global inequalities. I am not comfortable with that situation. But I believe there is great virtue in caring about their lives and in getting to know them, and if that exposes me to ethical criticisms, I will live with that. It is a lesson I have already learned about research in contemporary societies.
I have also learned that it is good to stop and reflect on such questions, and to realise (in Gunnar Myrdal’s footsteps) that existing ethical codes do not solve the moral dilemmas of social research. They merely highlight them.
I sometimes get asked why I insist on using new and arcane terms such as “superdiversity” and “chronotope” in fields for which we (appear to) have an established and consensual vocabulary. My answer is usually: sometimes we need new words for no other reason than to examine the validity of the old ones. A form of quality control of analytical vocabulary, if you wish.
The history of science is replete with reformulations of the same, or very similar, realities, and authors such as Michel Foucault were extraordinarily productive in the creation of an entirely new terminology to describe processes already described in, e.g., Weber and Marx. The quest was, almost invariably, a quest for enhanced precision and accuracy – rendering visible and analytically identifiable (often small but relevant) distinctions that had been left aside as relatively insignificant details, side-effects or mere aspects of another phenomenon; or to identify a phenomenon previously treated only in part or in a much too generalizing way. Think of Foucault’s use of “biopower” or “governmentality” as instances, Scott’s “hidden transcripts” or Bourdieu’s “habitus”. Such terms do not replace an earlier vocabulary; they complement it with tools that allow and enable a different approach to the same field or object, focusing on different aspects and characteristics of it.
In that sense, they are no one’s enemy. All the more so since, as C. Wright Mills reminded us, the debate should not be about the words, but about the ideas they capture and for which the words are merely facilitators.
I recently received an email from the academic sharing platform ResearchGate, where I maintain a profile.
So here we are. I am the originator of knowledge, but I can only share it with others under conditions specified by a commercial enterprise, which holds the license to it. In the philosophical literature, this situation is known as heteronomy, the opposite of autonomy.
There is a long tradition in which knowledge was seen as an exceptional kind of commodity: as opposed to e.g. an apple or a bottle of Coke, consumption of knowledge doesn’t deprive its producer of it. On the contrary, it is supposed to make everyone better off, to contribute to the common good. We are now in a situation where (a) this principle of knowledge as a commons is entirely rejected, and (b) the producer is deprived of the autonomy to communicate it.
The contribution of a publisher to an academic article is nothing more than a reference. To the title and author, it adds something such as “Journal of Applied Linguistics 34/3: 111-131”. That’s it. This reference, of course, is the stuff of careers. The article can be insignificant or outright useless, but the reference turns it into an academic achievement, a really-existent result and product of labor that can be turned into a line in someone’s CV and thence into an argument for appointment, tenure or promotion. It’s alchemy: a stone has been turned into gold. Or at least, that’s what we believe has happened.
The price to be paid for this bit of alchemy is colossal. I can make ideas, convert them into knowledge and write them into texts; but I have to ask permission for communicating them to others, for I don’t have the right to do so myself. I can, thus, violate the copyrights to my own thoughts, words and phrases. I can be punished for doing so. So what is the next thing? Perhaps a close monitoring by publishers of conference contributions? – imagine that I would read a paper published by them to an audience who haven’t paid for it? Surely that would be a crime.
Forgive me if I find all of that quite weird.
Copyright Jan Blommaert, 2016
In what follows, I intend to place some footnotes to an earlier text, in which I addressed at length various highly contentious issues characterizing the field of academic publishing nowadays. That earlier text, roughly summarized, (a) described the present economic model of academic publishing as outspokenly exploitative; (b) argued that the current models of Open Access are equally absurd when viewed from the perspective of ownership; and (c) suggested that publishers become increasingly redundant as actors in the field of knowledge circulation. We can independently do almost everything currently done by publishers, and do it better.
The text became the topic of one of the first discussion sessions on Academia.edu and was widely picked up and redistributed (illustrating, thus, the exact point it was making). To the extent that the arguments in the text still require clarification and further elaboration, I wish to offer one point in what follows – about money.
As an element of background, it is good to recall that academic publishing is an extraordinarily lucrative business – in fact, one of the most lucrative businesses around. In 2013, Elsevier-Reed (one of the giants in the field) reported a net profit rate of 39% – a margin which for most other domains of industry belongs to the realm of dreams. Part of this is due to the escalation of subscription costs for academic journals, which have risen since 1986 at three times the rate of average commodity costs. Academic publishing is, if one wishes, a robber economy. Open Access negotiations, especially those in which so-called “Gold Open Access” is the target, involve the payment of several thousand euros for a single article to be made Open Access. I refer the reader to the earlier text for details.
Grasping the nature of the transactions involved in all of this can be helped by the following illustration. Here is part of a copyright agreement I recently concluded with a prominent academic publisher.
There is nothing special about the text of the agreement; in fact, it is quite common in our field. We notice that I transfer all copyrights to the publisher, and that I do not get a reward for it. In fact, what I get in return is a reference to an electronically published version of my text, and some heavily limited rights in using this published version myself. People who wish to read my article and have no access to an institutional subscription to the journal have to pay the price of a book – between 30€ and 50€ per article. And if I (or my university) want to turn the article into something that can be read at no cost by anyone, a couple of thousand euros must be paid to the publisher.
It’s all about money, surely, but only parts of the money involved in this are shown so far. An aspect never mentioned in these transactions is the production cost of the article. Articles don’t grow on trees, they are manufactured by someone, and this process involves material and immaterial resources, and labor costs.
Now let us do a little simulation here, and a merciful one. Imagine that the production cost of an average article involves 100 hours of academic labor (from getting the idea, over the research, to reading, writing, editing and so forth, and including the material costs). And imagine that such labor costs about 20€ per hour (as I said, I am being merciful here). The production cost of the article is 2000€, and by signing the copyright agreement this is donated to the publisher, who, in turn, charges everyone (including the author) for reading the article. It’s a form of “enclosure” – you spent a season working hard growing apples, but if you wish to eat one you need to buy it from a grocer who happens to have licensed the apples.
Imagine now that I write a book. The book has seven chapters, and to keep things simple I use the calculation above – each chapter being the equivalent of an article. We then get 7 times 2000€, or 14.000 euros’ worth of labor donated to the publisher. It is because these production costs are eliminated in the transactions we (have to) enter into with publishers that academic publishing is so extraordinarily lucrative a business. Publishers, simply put, do not bear any cost in the production of their primary material – the papers and books we submit to them for publication. When they speak about “costs”, consequently, they only address the end-of-the-line production costs – some editing and layout, and the marketing, sales and distribution of things that consumed tremendous amounts of labor to produce and represent, consequently, tremendous value – all of which is made invisible now. Note, in passing, that these end-of-the-line costs are usually presented as prohibitive and are also rolled off onto the author, as in the following illustration, a fragment from another copyright agreement:
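The merciful simulation above is simple enough to write down explicitly. A minimal sketch, using only the assumed figures from the text (100 hours per article, 20€ per hour, seven chapters per book – all deliberately conservative estimates, not real accounting):

```python
# Back-of-the-envelope simulation of the labor value donated to a publisher.
# All figures are the text's own merciful assumptions, not measured costs.

HOURS_PER_ARTICLE = 100   # hours of academic labor per article (idea to final edit)
RATE_PER_HOUR = 20        # euros per hour of academic labor
CHAPTERS_PER_BOOK = 7     # each chapter counted as one article's worth of work

article_value = HOURS_PER_ARTICLE * RATE_PER_HOUR       # 2000 euros
book_value = CHAPTERS_PER_BOOK * article_value          # 14000 euros

print(f"Labor value of one article: {article_value} euros")
print(f"Labor value of one book:    {book_value} euros")
```

Any realistic hourly rate or hour count only inflates these numbers; the point stands that this value is transferred to the publisher at a price of zero.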
Now, whose money is involved here? In my case, the money appropriated by publishers is that of my employer, a public university; through the system of subsidies in education, the money is ultimately put up by the taxpayer. Who, if they now want to read the product they subsidized, need to pay 30€ for a single pdf download.
From the publishers’ viewpoint, this is an excellent business model (credentialed, I assume, by their profit margins). From the viewpoint of the producer, it’s a net, huge loss, and an economic model that is profoundly unsustainable. I can therefore simply repeat the conclusion of the earlier text on this topic: let’s do the publishing ourselves. We can do it better, cheaper, more efficiently, and more democratically.
POSTSCRIPT April 2019
In yet another transition in the system described here, Elsevier now considers the copyright transfer agreement between author and publisher an “order”: by transferring all the rights to his/her text, the author places an order with Elsevier for publishing etc. services. Needless to say, there is no more room for negotiation (let alone disagreement) about the conditions of this order: if you do not click the right buttons, your text will not be submitted to the publisher – period. One of the buttons one can click is the Gold Open Access one – which involves a payment (by the author) of 1,500 euros.
There is no limit to the exploitation model in academic publishing…
(Some of the arguments here are inspired by the essays in Charlotte Hess & Elinor Ostrom, Understanding Knowledge as a Commons: From Theory to Practice, MIT Press 2011)
Can we agree that Albert Einstein was a scientist? That he was a good one (in fact, a great one)? And that his scientific work has been immeasurably influential?
I’m asking these silly questions for a couple of reasons. One: Einstein would, in the present competitive academic environment, have a really hard time getting recognized as a scientist of some stature. He worked in a marginal branch of science – more on this in a moment – and the small oeuvre he published (another critical limitation now) was not written in English but in German. His classic articles bore titles such as “Die vom Relativitätsprinzip geforderte Trägheit der Energie” and appeared in journals called “Annalen der Physik” or “Beiblätter zu den Annalen der Physik”. Nobody would read such papers nowadays.
Two, his work was purely theoretical. That means that it revolved around the production of new ideas, or to put it more bluntly, around imagination. These forms of imagination were not wild or unchecked – it wasn’t “anything goes”. They were based on a comprehensive knowledge of the field in which he placed these ideas (the “known facts of science”, one could say, or “the state of the art” in contemporary jargon), and the ideas themselves presented a synthesis, sweeping up what was articulated in fragmentary form in various sources and patching up the gaps between the different fragments. His ideas, thus, were imagined modes of representation of known facts and new (unknown but hypothetical and thus plausible or realistic) relations between them.
There was nothing “empirical” about his work. In fact, it took decades before aspects of his theoretical contributions were supported by empirical evidence, and other aspects still await conclusive empirical proof. He did not construct these ideas in the context of a collaborative research project funded by some authoritative research body – he developed them in a collegial dialogue with other scientists, through correspondence, reading and conversation. In the sense of today’s academic regime, there was, thus, nothing “formal”, countable, measurable, structured, justifiable, or open to inspection in the way he worked. The practices that led to his theoretical breakthroughs would be all but invisible on today’s worksheets and performance assessment forms.
As for “method”, the story is even more interesting. Einstein would rather systematically emphasize the disorderly, even chaotic nature of his work procedures, and mention the fact (often also confirmed by witnesses) that, when he got stuck, he would leave his desk, papers and notebooks, pick up his violin and play music until the crucial brainwave occurred. He was a supremely gifted specialized scholar, of course, but also someone deeply interested (and skilled) in music, visual art, philosophy, literature and several other more mundane (and “unscientific”) fields. His breakthroughs, thus, were not solely produced by advances in the methodical disciplinary technique he had developed; they were importantly triggered by processes that were explicitly non-methodical and relied on “stepping out” of the closed universe of symbolic practices that made up his science.
Imagine, now, that we would like to train junior scholars to become new Einsteins. How would we proceed? Where would we start?
Those familiar with contemporary research training surely know what I am talking about: students are trained to become “scientists” by doing the opposite of what turned Einstein into the commanding scientist he became. The focus these days is entirely – and I am not overstating this – on the acquisition, development and refining of methods to be deployed on problems which in turn are grounded in assumptions by means of hypotheses. Research training now is the training of practicing that model. The problems are defined by the assumptions and discursively formulated through the hypotheses – so they tolerate little reflection or unthinking; they are simply to be adopted. And what turns the student’s practices into “science” is the disciplined application of acquired methods to such problems resting on such assumptions. This, then, yields scientific facts either confirming or challenging the “hypotheses” that guided the research, and the production of such facts-versus-hypotheses is called scientific research. Even more: increasingly we see that only this procedure is granted the epithet of “scientific” research.
The stage in which ideas are produced is entirely skipped. Or better: the tactics, practices and procedures for constructing ideas are eliminated from research training. The word “idea” itself is often pronounced almost with a sense of shame, as an illegitimate and vulgar term better substituted by formal jargonesque (but equally vague) terms such as “hypothesis”. While, in fact, the closest thing to “idea” in my formulation is the term “assumption” I used in my description of the now dominant research model. And the thing is that while we train students to work from facts through method to hypotheses in solving a “problem”, we do not train them to question the underlying assumptions that formed both the “problem” they intend to address and the epistemological and methodological routes designed to solve such problems. To put it more sharply, we train them in accepting a priori the fundamental issues surrounding and defining the very stuff they should inquire into and critically question: the object of research, its relations with other objects, the “evidence” we shall accept as elements adequately constructing this object, and the ways in which we can know, understand and communicate all this. We train them, thus, in reproducing – and suggestively confirming – the validity of the assumptions underlying their research.
“Assumptions” typically should be statements about reality, about the fundamental nature of phenomena as we observe and investigate them among large collectives of scientists. Thus, an example of an assumption could be “humans produce meaning through the orderly grammatical alignment of linguistic forms”. Or: “social groups are cohesive when they share fundamental values that exist sociocognitively in members’ minds”. Or: “ethnicity defines and determines social behavior”. One would expect such assumptions to be the prime targets of continuous critical reassessment in view (precisely) of the “facts” accumulated on aspects that should constitute them. After all, Einstein’s breakthroughs happened at the level of such assumptions, if you wish. Going through recent issues of several leading journals, however, leads to a perplexing conclusion: assumptions are nearly always left intact. Even more: they are nearly always confirmed and credentialed by accumulated “facts” from research – if so much research can be based on them, they must be true, so it seems. “Proof” here is indirect and by proxy, of course – like miracles “proving” the sacred powers of an invoked Saint.
Such assumptions effectively function not as statements about the fundamental nature of objects of research, open for empirical inspection and critique, but as axiomatic theses to be “believed” as a point of departure for research. Whenever such assumptions are questioned, even slightly, the work that does so is instantly qualified as “controversial” (and, in informal conversations, as “crackpot science” or “vacuous speculation”). And “re-search”, meaning “searching again”, no longer means searching again da capo, from step 1, but searching for more of the same. The excellent execution of a method and its logic of demonstration is presented as conclusive evidence for a particular reality. Yes, humans do indeed produce meaning through the orderly grammatical alignment of linguistic forms, because my well-crafted application of a method to data does not contradict that assumption. The method worked, and the world is chiseled accordingly.
Thus we see that the baseline intellectual attitude of young researchers, encouraged or enforced and positively sanctioned – sufficient, for instance, to obtain a doctoral degree and get your work published in leading journals, followed by a ratified academic career – is one in which accepting and believing are key competences, increasingly even the secret of success as a researcher. Not unthinking the fundamental issues in one’s field, and abstaining from the critical, inquisitive reflex of looking, unprompted, for different ways of imagining objects and the relations between them, eventually arriving at new, tentative assumptions (call them ideas now) – that is what counts as being “good” as a researcher.
The reproductive nature of such forms of research is institutionally supported by all sorts of bonuses. Funding agencies have a manifest and often explicit preference for research that follows the clear reproductive patterns sketched above. In fact, funding bodies (think of the EU) often provide the fundamental assumptions themselves and leave it to researchers to come up with proof of their validity. Thus, for instance, the EU would provide in its funding calls assumptions such as “security risks are correlated with population structure, i.e. with ethnocultural and religious diversity” and invite scientific teams to propose research within the lines of the sociopolitical reality thus drawn. Playing the game within these lines opens opportunities to acquire that much-coveted (and institutionally highly rewarded) external research funding – an important career item in the present mode of academic politics.
There are more bonuses. The reproductive nature of such forms of research also ensures rapid and high-volume streams of publications. The work is intellectually extraordinarily simple, really, even if those practicing it will assure us that it is exceedingly hard: no fundamental (and often necessarily slow) reflection, unthinking and imaginative rethinking are required; just the application of a standardized method to new “problems” suffices to achieve something that can qualify as (new or original) scientific fact and can be written down as such. Since literature reviews are restricted to reading nothing that fundamentally questions the assumptions, but reading all that operates within the same method-problem sphere, published work quickly gains high citation metrics, and the journals carrying such work are guaranteed high impact factors – all, again, hugely valuable symbolic credit in today’s academic politics. Yet, reading such journal issues in search of sparkling and creative ideas usually turns into a depressing confrontation with intellectual aridity. I fortunately can read such texts as a discourse analyst, which makes them at least formally interesting to me. But that is me.
Naturally, but unhappily, nothing of what I say here is new. It is worth returning to that (now rarely consulted) classic by C. Wright Mills, “The Sociological Imagination” (1959), to get the historical perspective right. Mills, as we know, was long ago deeply unhappy with several tendencies in US sociology. One tendency was the reduction of science to what he called “abstracted empiricism” – comparable to the research model I criticized here. Another was the fact that this abstracted empiricism took the “grand theory” of Talcott Parsons for granted as assumptions in abstracted empirical research. A poor (actually silly) theory vulnerable to crippling empirical criticism, Mills complained, was implicitly confirmed by the mass production of specific forms of research that used the Parsonian worldview as an unquestioned point of departure. The title of his book is clear: in response to that development, Mills strongly advocated imagination in the sense outlined earlier – the fact that the truly creative and innovative work in science happens when scientists review large amounts of existing “known facts” and reconfigure them into things called ideas. Such re-imaginative work – I now return to a contemporary vocabulary – is necessarily “slow science” (or at least slower science), and is effectively discouraged in the institutional systems of academic valuation presently in place. But those who neglect, dismiss or skip it do so at their own peril, C. Wright Mills insisted.
It is telling that the most widely quoted scholars tend to be people who produced exactly such ideas and are labeled “theorists” – think of Darwin, Marx, Foucault, Freud, Lévi-Strauss, Bourdieu, Popper, Merleau-Ponty, Heidegger, Hayek, Hegel and Kant. Many of their most inspiring works were nontechnical, sweeping, bold and provocative – “controversial”, in other words, and open to endless barrages of “method”-focused criticism. But they influenced, and changed, much of the worldview shared by enormous communities of people worldwide and across generations.
It is worth remembering that such people really did produce science, and that very often, they changed and innovated colossal chunks of it by means of ideas, not methods. Their ideas have become landmarks and monuments of science (which is why everyone knows Einstein but only very few people know the scientists who provided empirical evidence for his ideas). It remains worthwhile examining their works with students, looking closely at the ways in which they arrived at the ideas that changed the world as we know it. And it remains imperative, consequently, to remind people that dismissing such practices as “unscientific” – certainly when this has effects on research training – denies towering scientific efforts, ones that inspired and formed generations of scientists, the categorical status of “science”, reserving it for a small fraction of scientific activities which could, perhaps far better, be called “development” (as in “product development”). Whoever eliminates ideas from the semantic scope of science demonstrates a perplexing lack of them. And whoever thinks that scientific ideas are the same as ideas about where to spend next year’s holiday displays a tremendous lack of familiarity with science.
Much of what currently dominates the politics and economies of science (including how we train young scientists) derives its dominant status not from its real impact on the world of knowledge but from heteronomic forces operating on the institutional environments for science. The funding structure, the rankings, the metrics-based appraisals of performance and quality, the publishing industry cleverly manipulating all that – those are the engines of “science” as we now know it. These engines have created a system in which Albert Einstein would be reduced to a marginal researcher – if a researcher at all. If science is supposed to maintain, and further develop, the liberating and innovative potential it promised the world since the era of Enlightenment, it is high time to start questioning all that, for an enormous amount of what now passes as science is astonishingly banal in its purpose, function and contents, confirming assumptions that are sometimes simply absurd and surreal.
We can start by talking to the young scholars we train about the absolutely central role of ideas in scientific work, encourage them to abandon the sense of embarrassment they experience whenever they express such ideas, and press upon them that doing scholarly work without the ambition to continue producing such ideas is, at best, a reasonably interesting pastime but not science.
Attracting external funding has become, everywhere, one of the main priorities of academics, and writing funding applications has consequently also become one of their main tasks. The idea is “competitiveness”: quality will be evident when academics, individually or in teams, acquire funding after a strict and rigorously exclusive peer-review process. In addition, specific sources of funding are specified as benchmarks, suggesting that they are the “most competitive” ones, and therefore also the best and most objective indicators of quality: think of the ESRC in the UK or (the focus of this text) the European framework program Horizon 2020. In every form of performance management – for individual academics seeking promotion or tenure, for research teams, departments and entire universities – success in such benchmark external funding acquisition is given immense positive attention. Universities, consequently, impose quotas on their academic units – “you shall apply for at least five EU grants and obtain at least one this year!” – and turn grant-seeking into a compulsory, even key activity of their staff. Professional grant writers and administrators are hired in academic departments or labs, and universities now employ EU-targeting lobbyists to “assist” and “facilitate” their bids for funding.
Well, my team submitted a Horizon 2020 application just last week, following a thematic call several months ago. In preparation for the application, we had set up an international consortium earlier on and done thorough content preparation, and one of our team members spent hundreds of hours, plus several international trips worth several thousand euros, preparing the application.
After submitting, we heard that a total of 147 applications had been received by the EU. And that the EU will eventually grant 2 – two – projects. In a rough calculation, this means that the chance of success in this funding line is 1.3%; it also means that 98.7% of the applications – 145 of them, to be accurate – will be rejected.
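The rough percentages follow directly from those two numbers; a few lines of Python make the arithmetic explicit (only the application and award counts are taken from the text, everything else is plain division):

```python
applications = 147  # total applications received by the EU
awards = 2          # projects that will eventually be funded

success_rate = awards / applications * 100
rejection_rate = (applications - awards) / applications * 100

print(f"success:   {success_rate:.2f}%")    # 1.36%, roughly the 1.3% cited
print(f"rejection: {rejection_rate:.2f}%")  # 98.64%, roughly 98.7%
```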
And here is the problem.
It would be interesting to see the grand total of labor and resources invested in the 145 applications, calculated in euros. My guess is that many millions’ worth of (usually) taxpayers’ money will have been used – wasted – in this massive and mass grant-writing effort. Several hundred researchers will have been involved, each spending dozens if not hundreds of their salaried working hours on preparing the application, and hundreds of university administrators will have been involved as well, also spending salaried working hours on the applications. These millions of euros have not been used in creative and innovative research – they were not spent on doing fieldwork, experiments or tests, nor on writing papers and holding presentations in workshops and symposia. They were spent on … nothing. For when a grant application is rejected, the time and energy invested in it evaporate, as if those hours of labor were never spent, and as if the academics who spent them had nothing else to do.
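A back-of-envelope estimate makes the scale of that sunk cost concrete. The figures below for hours and hourly cost are purely hypothetical assumptions chosen for illustration; only the count of 145 rejected applications comes from the call discussed above:

```python
rejected_applications = 145  # from the Horizon 2020 call discussed above

# Hypothetical assumptions, for illustration only:
researcher_hours = 300   # combined salaried researcher hours per application
admin_hours = 60         # combined salaried administrator hours per application
hourly_cost_eur = 50     # assumed fully loaded cost of one working hour, in euros

cost_per_application = (researcher_hours + admin_hours) * hourly_cost_eur
total_sunk_cost = rejected_applications * cost_per_application

print(f"per rejected application: EUR {cost_per_application:,}")  # EUR 18,000
print(f"total sunk cost:          EUR {total_sunk_cost:,}")       # EUR 2,610,000
```

Even under these deliberately modest assumptions, the rejected applications alone represent millions of euros of salaried labor – the order of magnitude suggested above.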
Thus, while this Horizon 2020 funding line will disburse half a dozen million euros to the two “winning” teams, it will have cost the EU academic community, represented by the 145 rejected applicants, several million more. Money, thus, has been sucked out of an already fragile funding base for universities across the EU, in a vain attempt to “win” and “be competitive” – and therefore “good”.
The attempt is futile, because if the rejection rate is 98.7%, the message given by the EU is, in effect, that almost all of the academic units participating in the funding call across the EU are not good enough. It is nonsense to try to argue on grounds of pure academic quality that just 1.3% will qualify, for the number of grants to be awarded is known before the peer-review procedure takes place. In that sense, the peer review done by the EU panels is simply useless, for it has no impact on the number of awards granted by the EU – dozens of applicants will soon receive a letter stating that their project was evaluated as “excellent but not selected for funding”. The criteria determining the “selection for funding” are, needless to say, carefully guarded secrets, and not grounded in assessments of academic quality. The system of selection is, when all is said and done, simply irrational and unreasonable.
Still, and notwithstanding the previous remark, success or rejection is seen as an objective indicator of academic quality across the EU university system. By awarding just 1.3% of the applications, then, a rather thoroughly absurd reality is shaped: almost 99% of the competing academics in the EU do not make the mark, and just 1.3% satisfy the EU benchmark. Now, we know that the 98.7% of “losers” still have to compete in order to show that they are good enough; but when a selection bottleneck is that narrow, the effort, and the resources invested in it, are in effect simply wasted.
The paradox is clear: by going along with the stampede of competitive external funding acquisition, almost all universities across the EU will lose not just money, but extremely valuable research time for their staff. Little academic improvement will be achieved, and little progress in science, if doing actual research is replaced by writing grant proposals with an almost-zero chance of success. And as long as academics and academic units are told that success or failure in getting EU funding (with success rates such as the one mentioned here known in advance) is a criterion for determining their academic quality, gross injustice will be committed. People will be judged inadequate, mediocre or simply poor academics because they failed to get the benchmark funding – awarded, as we saw, on grounds that have little to do with academic quality assessments of applications. Heteronomy is the word that comes to mind here: academic practices and achievements are judged by non-academic standards, given a thin but hopelessly unconvincing veneer of “competitiveness”. And the more universities engage in this stampede for “competitiveness”, the faster their pursuit of external funding will deplete their internal funding.
I find this logic beyond comprehension. Those who rationalize the importance of acquiring benchmark external funding are rationalizing an unreasonable and heteronomic system that produces tremendous numbers of “losers” and a tiny number of “winners”. The losers can then be put under increasing pressure to show that they are competitive – increasingly risking their careers and spending funds that would be better used on research and other intellectual activities.
To sum up: if the number of grants to be awarded is established before the peer-review process, this kind of “competitive” benchmark funding is not competitive at all, and a benchmark for nothing at all – least of all for academic quality. If, however, results in this weird game are maintained as serious and consequential criteria for assessing academic quality, then the conclusion is that there are no good academics in Europe – 99% of them will fail to get ratified as good enough. And these 99% will have to spend significant amounts of taxpayers’ money to eventually prove – what?
The entire thing really, seriously, begins to look and feel like buying lottery tickets or betting on horses: one spends money hoping to win some – and at moments of lucidity, one is aware of the fact that the net outcome will be loss, not gain. In the meantime, beautiful arias are sung about the extreme importance of research and innovation by the EU, by its member states, and by its universities. The question, of course, is how such a great cause is served by the present system of benchmark external funding acquisition. The money spent on it, I would say, would be better spent on … research and innovation proper.
In response to criticism from protesters aimed at the neoliberalization of universities, governments often reply that they are “investing more” in universities than ever before. This argument often has a silencing effect: it is frequently seen and experienced as an effective rebuttal of the protesters’ claims.
It also fits nicely with the general call, repeated endlessly, that “the best possible investment is investment in education”. From Warren Buffett and the World Economic Forum down to local city councils and poverty-combating NGOs, the same message is broadcast over and over again: invest in education, and you will increase employment rates, build superior skills for the workforce of the future, produce surplus value and become innovative that way, and fight poverty most effectively. “Investing in education” is generally perceived as uniformly and unambiguously good, something we all want and something from which all of us would massively benefit. Governments pulling that rabbit out of their hats can therefore align their discourses with massively consensual ones at the societal level – which increases the silencing effect of the argument on protesters demanding a more democratic and low-threshold university.
The argument is very easy to answer, though. The issue is: what exactly are governments investing in? The usefulness and impact of an investment are entirely conditioned by the actual aspects of university life that receive it. And looking at these, we see that such investments usually go to constructing or strengthening a high-end elite environment of competitive science: new state-of-the-art labs specialized in research that can be immediately converted into market value, or the “top” recruitment of celebrity professors capable of attracting loads of new students. “Low-end” activities, by contrast, are usually the object of disinvestment (think of student facilities, adjunct academics, basic infrastructure).