Research ethics in context


Jan Blommaert 

I am often amazed at the naivety with which the thing called ‘research ethics’ is being addressed. This naivety, expressed in one-size-fits-all ‘ethical guidelines’ for research, overlooks the actual conditions under which so much of research is (necessarily) conducted: in sites often qualified as ‘margins’, with vulnerable, misrecognized and oppressed people whose position in society is precarious. Inequalities in the world are part and parcel of how actual research projects are undertaken, develop and evolve; this, of course, includes issues of method and methodology, but also issues of research ethics.

In 2008, I published a book called Grassroots Literacy. In the book, I analyzed handwritten texts by two authors from D.R. Congo – authors I had never met, about whom some suggested at the time that they had perished in the war raging in their area since the mid-1990s, and whose texts I had obtained, almost accidentally, through third persons. Working on these texts created acute ethical issues, which I raised and discussed in the preface. What follows is the relevant fragment from the preface to that book. The point I hope to make is that research ethics is a contextualized and situated matter, concrete features of which can and do escape the imagined simplicity (and equality) of the worldview often presupposed in ‘ethical guidelines’.

Globalization is a process that forces us to take the world as a context. This world is complex and highly diverse, and developments in the ‘centre’ of this world – the development of new telecommunication systems and media, for instance – have effects on the ‘margins’ of the world. Literacy is a case in point, and what the documents I examine here show us is that there is a growing gap between different literacy regimes in the world. Texts such as the ones I will discuss here do not quickly or easily communicate the messages they contain. Their meanings increasingly disappear in the widening gap between literacy regimes in diverse parts of the world. The problem is obviously not academic but very real, of immediate life-or-death importance to many people. Voice is a pressing concern in a globalizing context in which less and less can be taken for granted with respect to the communicative repertoires of people interacting with one another. I addressed these concerns in an earlier book called Discourse: A Critical Introduction (Cambridge University Press 2005), and in many ways the present study is a sequel to Discourse. It picks up, and develops, points embryonically made there, focusing on literacy because of the reasons specified above, and bringing literacy analysis in the same theoretical field of force as the one described in Discourse.

This purpose offers me the opportunity to write about a corpus of texts that has puzzled, intrigued and mesmerized me for more than a decade. I came across Julien’s life histories in the mid-1990s, by what I would call ‘structured accident’. The documents are rare instances of grassroots life-writing, and they offered me more theoretical and descriptive challenges than I could imagine at the time. My encounter with these documents coincided with a period in which I was deeply engaged with Johannes Fabian’s work. I had read and reviewed his History from Below, and few books ever had such a profound impact on me. Fabian has definitely been one of my maîtres à penser and the present book is, consequently, very much the upshot of a protracted dialogue with Fabian’s work.

This dialogue intensified when, again by accident, I started working on a handwritten history of the Congo written by the Congolese painter Tshibumba, about whose historical paintings Fabian had published the magnificent Remembering the Present. I received a copy of this massively intriguing document from Bogumil Jewsiewicki, and quickly spotted the similarities between this history and Julien’s life-writing. Both displayed the constraints of sub-elite writing, and both produced a grassroots voice on history. In both, the very act of writing appeared to produce all sorts of things: texts, but also particular positions, subjectivities. The question guiding my work then became: what does this kind of grassroots literacy make possible for people such as Julien and Tshibumba?

I had, in the meantime, started realizing that the notion of constraint is central in considering this issue. Since the mid-1990s, I had frequently been requested by my national authorities to translate written statements by African refugees and Africans arrested by the police. Gradually, a corpus of texts had emerged in which I clearly saw that literacy achievements that had some value in sub-elite African contexts rather systematically failed to be seen as valuable in Belgium. The question about the possibilities of grassroots writing thus acquired a dimension of globalization: ‘grassroots’ equals local, and the local effectiveness and adequacy of communicative resources raises questions of mobility. Texts travel, and they do not necessarily travel well. In the transfer from one place to another, they cross from one regime into another, and the changed orders of indexicality mean that they are understood differently. Having clearly understood that both Julien’s and Tshibumba’s texts were mobile texts – both were written for addressees in the West – I started realizing that these documents might offer exceptional possibilities for exploring and identifying the main issues of literacy in the age of globalization: issues that have to do with the locality of literacy regimes, with mobility and inequality.

This is the story of this book. There is irony in the story, because, naturally, it was hard not to reflect on my own writing practices while I was investigating those of Julien, Tshibumba and others. I saw my own literacy regime in action – writing in a globalized language that is not my own, in a particular register and genre, on a sophisticated laptop, in a solitary comfortable space surrounded by an archive and a working library, and with Google on the toolbar. All these material conditions: I don’t take them for granted anymore. There is so much inequality inscribed in the production of this book. The main inequality is in the result: voice. I can produce a globally mobile voice, they can’t; I can produce a prestige genre, they can’t; I can speak from within a recognizable position and identity, they can’t.

There are ethical issues here. I can write about Julien and Tshibumba in ways they themselves could not, for reasons that will become all too clear in the chapters of this book. And I could not consult them while writing. I never had contact with Julien, only with his patron, Mrs Arens. She informed Julien about my academic work on his texts, and she gave me, also on his behalf, permission to pursue it. As for Tshibumba, he disappeared from the radar screen several years ago, and no one has been able to inform me about his whereabouts. Julien and Tshibumba, we should recall, live in the southern part of the Congo, in an area marked by deep poverty and marginalization, and torn by unrest and war since the second half of the 1990s. As for the refugees and police suspects whose documents I have analyzed, I hardly ever had any contact with them either, often because I did not even know their names and because my role as state-appointed translator proscribed contacts with these subjects.

I am aware of these issues, have reflected on them over and over again, and came across the bitter irony of contemporary realities. Customary ethical codes for research presuppose a particular socio-political environment in which everyone has a name, an administrative existence, a recognizable and recognised subjectivity that demands respect and distance. We can only use a pseudonym when people’s real names are known and when knowledge and possession of that name is connected to inalienable rights, to subjectivity and, consequently, to norms that separate the public from the private sphere. Underlying this is the image of a fully integrated Modern society in which such elementary features are attached to everyone and recorded – officially – somewhere.

Real societies, alas, are different. There are people in our own Modern societies who do not possess such elementary features and rights. Illegal immigrants have no name and no identifiable ‘official’ existence. Their ‘lives’ and stories are, for all practical purposes, nonexistent. Their anonymity is not the result of a desire for ‘privacy’; it is the effect of erasure and silencing; not of choice but of oppression. And there are even more people elsewhere in the world to whom these conditions apply. African works of art kept in museums are only rarely attributed to an individual artist; they are attributed to an ethnic group or to a region somewhere in Africa. Millions of people there live ‘unofficial’ lives, and no one cares about their names, birth dates, addresses, or, in a wider sense, subjectivity. I write about their subjectivity, about their existence and lives – or seen from a different perspective: I invade their privacy – because I have voice and they don’t. I can invade their privacy because I have shaped a private sphere for them, and this act is an effect of global inequalities. I am not comfortable with that situation. But I believe there is great virtue in caring about their lives and in getting to know them, and if that exposes me to ethical criticisms, I will live with that. It is a lesson I have already learned about research in contemporary societies.

I have also learned that it is good to stop and reflect on such questions, and to realise (in Gunnar Myrdal’s footsteps) that existing ethical codes do not solve the moral dilemmas of social research. They merely highlight them.


Why use new words?


Jan Blommaert 

I sometimes get asked why I insist on using new and arcane terms such as “superdiversity” and “chronotope” in fields for which we (appear to) have an established and consensual vocabulary. My answer is usually: sometimes we need new words for no other reason than to examine the validity of the old ones. A form of quality control of analytical vocabulary, if you wish.

The history of science is replete with reformulations of the same, or very similar, realities, and authors such as Michel Foucault were extraordinarily productive in the creation of an entirely new terminology to describe processes already described in, e.g., Weber and Marx. The quest was, almost invariably, a quest for enhanced precision and accuracy – rendering visible and analytically identifiable (often small but relevant) distinctions that had been left aside as relatively insignificant details, side-effects or mere aspects of another phenomenon; or to identify a phenomenon previously treated only in part or in a much too generalizing way. Think of Foucault’s use of “biopower” or “governmentality” as instances, Scott’s “hidden transcripts” or Bourdieu’s “habitus”. Such terms do not replace an earlier vocabulary; they complement it with tools that allow and enable a different approach to the same field or object, focusing on different aspects and characteristics of it.

In that sense, they are no one’s enemy. All the more so since, as C. Wright Mills reminded us, the debate should not be about the words, but about the ideas they capture and for which the words are merely facilitators.


Who owns my ideas?


Jan Blommaert

I recently received an email from the academic sharing platform ResearchGate, where I maintain a profile.


So here we are. I am the originator of knowledge, but I can only share it with others under conditions specified by a commercial enterprise, which holds the license to it. In the philosophical literature, this situation is known as heteronomy, the opposite of autonomy.

There is a long tradition in which knowledge was seen as an exceptional kind of commodity: as opposed to, e.g., an apple or a bottle of Coke, consumption of knowledge doesn’t deprive its producer of it. On the contrary, it is supposed to make everyone better off, to contribute to the common good. We are now in a situation where (a) this principle of knowledge as a commons is entirely rejected, and (b) the producer is deprived of the autonomy to communicate it.

The contribution of a publisher to an academic article is nothing more than a reference. To the title and author, it adds something such as “Journal of Applied Linguistics 34/3: 111-131”. That’s it. This reference, of course, is the stuff of careers. The article can be insignificant or outright useless, but the reference turns it into an academic achievement, a really-existent result and product of labor that can be turned into a line in someone’s CV and thence into an argument for appointment, tenure or promotion. It’s alchemy: a stone has been turned into gold. Or at least, that’s what we believe has happened.

The price to be paid for this bit of alchemy is colossal. I can make ideas, convert them into knowledge and write them into texts; but I have to ask permission for communicating them to others, for I don’t have the right to do so myself. I can, thus, violate the copyrights to my own thoughts, words and phrases. I can be punished for doing so. So what is the next thing? Perhaps a close monitoring by publishers of conference contributions? – imagine that I would read a paper published by them to an audience who haven’t paid for it? Surely that would be a crime.

Forgive me if I find all of that quite weird.

Copyright Jan Blommaert, 2016


Academic publishing and money


Jan Blommaert 

In what follows, I intend to place some footnotes to an earlier text, in which I addressed at length various highly contentious issues characterizing the field of academic publishing nowadays. That earlier text, roughly summarized, (a) described the present economic model of academic publishing as outspokenly exploitative; (b) judged the current models of Open Access as equally absurd when viewed from the perspective of ownership; and (c) suggested that publishers are becoming increasingly redundant as actors in the field of knowledge circulation. We can independently do almost everything currently done by publishers, and do it better.

The text became the topic of one of the first discussion sessions and was widely picked up and redistributed (illustrating, thus, the exact point it was making). To the extent that the arguments in the text still require clarification and further elaboration, I wish to offer one point in what follows – about money.

As an element of background, it is good to recall that academic publishing is an extraordinarily lucrative business – in fact, one of the most lucrative businesses around. In 2013, Reed Elsevier (one of the giants in the field) reported a net profit rate of 39% – a margin which in most other domains of industry belongs to the realm of dreams. Part of this is due to the escalation of subscription costs for academic journals, which have risen three times faster than average commodity prices since 1986. Academic publishing is, if one wishes, a robber economy. Open Access negotiations, especially those in which so-called “Gold Open Access” is the target, involve the payment of several thousand euros for a single article to be made Open Access. I refer the reader to the earlier text for details.

Grasping the nature of the transactions involved in all of this can be helped by the following illustration. Here is part of a copyright agreement I recently concluded with a prominent academic publisher.


There is nothing special about the text of the agreement; in fact, it is quite common in our field. We notice that I transfer all copyrights to the publisher, and that I do not get a reward for it. In fact, what I get in return is a reference to an electronically published version of my text, and some heavily limited rights in using this published version myself. People who wish to read my article and have no access to an institutional subscription to the journal have to pay the price of a book – between 30€ and 50€ per article. And if I (or my university) want to turn the article into something that can be read at no cost by anyone, a couple of thousand euros must be paid to the publisher.

It’s all about money, surely, but only part of the money involved has been shown so far. An aspect never mentioned in these transactions is the production cost of the article. Articles don’t grow on trees; they are manufactured by someone, and this process involves material and immaterial resources, and labor costs.

Now let us do a little simulation here, and a merciful one. Imagine that the production cost of an average article involves 100 hours of academic labor (from getting the idea, over the research, to reading, writing, editing and so forth, and including the material costs). And imagine that such labor costs about 20€ per hour (as I said, I am being merciful here). The production cost of the article is 2000€, and by signing the copyright agreement I donate this amount to the publisher, who, in turn, charges everyone (including the author) for reading the article. It’s a form of “enclosure” – you spent a season working hard growing apples, but if you wish to eat one you need to buy it from a grocer who happens to have licensed the apples.
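For readers who like to see the arithmetic written out, the merciful simulation above can be sketched in a few lines (the figures – 100 hours per article at 20€ per hour – are, as said, assumptions rather than measurements):

```python
# Back-of-the-envelope production cost of one article, using the
# deliberately merciful figures assumed above.

hours_per_article = 100   # academic labor: idea, research, reading, writing, editing
rate_per_hour = 20        # labor cost in EUR per hour (a low estimate)

article_cost = hours_per_article * rate_per_hour
print(f"Production cost donated to the publisher: {article_cost} EUR")  # 2000 EUR
```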

Imagine now that I write a book. The book has seven chapters, and to keep things simple I use the calculation above – each chapter being the equivalent of an article. We then get 7 times 2000€, or 14,000 euros’ worth of labor donated to the publisher. It is because these production costs are eliminated in the transactions we (have to) enter into with publishers that academic publishing is so extraordinarily lucrative a business. Publishers, simply put, do not bear any cost in the production of their primary material – the papers and books we submit to them for publication. When they speak about “costs”, consequently, they only address the end-of-the-line production costs – some editing and layout, and the marketing, sales and distribution of things that consumed tremendous amounts of labor to produce and represent, consequently, tremendous value – all of which is made invisible now. Note, in passing, that these end-of-the-line costs are usually presented as prohibitive and are also rolled off onto the author, as in the following illustration, a fragment from another copyright agreement:


Now, whose money is involved here? In my case, the money appropriated by publishers is that of my employer, a public university; through the system of subsidies in education, the money is ultimately put up by the taxpayer – who, if they now want to read the product they subsidized, must pay 30€ for a single pdf download.

From the publishers’ viewpoint, this is an excellent business model (credentialed, I assume, by their profit margins). From the viewpoint of the producer, it’s a huge net loss, and an economic model that is profoundly unsustainable. I can therefore simply repeat the conclusion of the earlier text on this topic: let’s do the publishing ourselves. We can do it better, cheaper, more efficiently, and more democratically.



In yet another transition in the system described here, Elsevier now considers the copyright transfer agreement between author and publisher an “order”: by transferring all the rights to his/her text, the author places an order with Elsevier for publishing and related services. Needless to say, there is no more room for negotiation (let alone disagreement) about the conditions of this order: if you do not click the right buttons, your text will not be submitted to the publisher – period. One of the buttons one can click is the Gold Open Access one – which involves a payment (by the author) of 1,500 euros.

There is no limit to the exploitation model in academic publishing…


(Some of the arguments here are inspired by the essays in Charlotte Hess & Elinor Ostrom (eds.), Understanding Knowledge as a Commons: From Theory to Practice, MIT Press, 2007)


Research training and the production of ideas


Jan Blommaert 

Can we agree that Albert Einstein was a scientist? That he was a good one (in fact, a great one)? And that his scientific work has been immeasurably influential?

I’m asking these silly questions for a couple of reasons. One: Einstein would, in the present competitive academic environment, have a really hard time getting recognized as a scientist of some stature. He worked in a marginal branch of science – more on this in a moment – and the small oeuvre he published (another critical limitation now) was not written in English but in German. His classic articles bore titles such as “Die vom Relativitätsprinzip geforderte Trägheit der Energie” and appeared in journals called “Annalen der Physik” or “Beiblätter zu den Annalen der Physik”. Nobody would read such papers nowadays.

Two, his work was purely theoretical. That means that it revolved around the production of new ideas, or to put it more bluntly, around imagination. These forms of imagination were not wild or unchecked – it wasn’t “anything goes”. They were based on a comprehensive knowledge of the field in which he placed these ideas (the “known facts of science”, one could say, or “the state of the art” in contemporary jargon), and the ideas themselves presented a synthesis, sweeping up what was articulated in fragmentary form in various sources and patching up the gaps between the different fragments. His ideas, thus, were imagined modes of representation of known facts and new (unknown but hypothetical and thus plausible or realistic) relations between them.

There was nothing “empirical” about his work. In fact, it took decades before aspects of his theoretical contributions were supported by empirical evidence, and other aspects still await conclusive empirical proof. He did not construct these ideas in the context of a collaborative research project funded by some authoritative research body – he developed them in a collegial dialogue with other scientists, through correspondence, reading and conversation. In the sense of today’s academic regime, there was, thus, nothing “formal”, countable, measurable, structured, justifiable, or open to inspection in the way he worked. The practices that led to his theoretical breakthroughs would be all but invisible on today’s worksheets and performance assessment forms.

As for “method”, the story is even more interesting. Einstein would rather systematically emphasize the disorderly, even chaotic nature of his work procedures, and mention the fact (often also confirmed by witnesses) that, when he got stuck, he would leave his desk, papers and notebooks, pick up his violin and play music until the crucial brainwave occurred. He was a supremely gifted specialized scholar, of course, but also someone deeply interested (and skilled) in music, visual art, philosophy, literature and several other more mundane (and “unscientific”) fields. His breakthroughs, thus, were not solely produced by advances in the methodical disciplinary technique he had developed; they were importantly triggered by processes that were explicitly non-methodical and relied on “stepping out” of the closed universe of symbolic practices that made up his science.


Imagine, now, that we would like to train junior scholars to become new Einsteins. How would we proceed? Where would we start?

Those familiar with contemporary research training surely know what I am talking about: students are trained to become “scientists” by doing the opposite of what turned Einstein into the commanding scientist he became. The focus these days is entirely – and I am not overstating this – on the acquisition, development and refining of methods to be deployed on problems which in turn are grounded in assumptions by means of hypotheses. Research training now is the training of practicing that model. The problems are defined by the assumptions and discursively formulated through the hypotheses – so they tolerate little reflection or unthinking, they are to be adopted. And what turns the student’s practices into “science” is the disciplined application of acquired methods to such problems resting on such assumptions. This, then, yields scientific facts either confirming or challenging the “hypotheses” that guided the research, and the production of such facts-versus-hypotheses is called scientific research. Even more: increasingly we see that only this procedure is granted the epithet of “scientific” research.

The stage in which ideas are produced is entirely skipped. Or better, the tactics, practices and procedures for constructing ideas are eliminated from research training. The word “idea” itself is often pronounced almost with a sense of shame, as an illegitimate and vulgar term better substituted by formal jargonesque (but equally vague) terms such as “hypothesis”. While, in fact, the closest thing to “idea” in my formulation is the term “assumption” I used in my description of the now dominant research model. And the thing is that while we train students to work from facts through method to hypotheses in solving a “problem”, we do not train them to question the underlying assumptions that formed both the “problem” they intend to address and the epistemological and methodological routes designed to solve such problems. To put it more sharply, we train them in accepting a priori the fundamental issues surrounding and defining the very stuff they should inquire into and critically question: the object of research, its relations with other objects, the “evidence” we shall accept as elements adequately constructing this object, and the ways in which we can know, understand and communicate all this. We train them, thus, in reproducing – and suggestively confirming – the validity of the assumptions underlying their research.

“Assumptions” typically should be statements about reality, about the fundamental nature of phenomena as we observe and investigate them among large collectives of scientists. Thus, an example of an assumption could be “humans produce meaning through the orderly grammatical alignment of linguistic forms”. Or: “social groups are cohesive when they share fundamental values that exist sociocognitively in members’ minds”. Or “ethnicity defines and determines social behavior”. One would expect such assumptions to be the prime targets of continuous critical reassessment in view (precisely) of the “facts” accumulated on aspects that should constitute them. After all, Einstein’s breakthroughs happened at the level of such assumptions, if you wish. Going through recent issues of several leading journals, however, leads to a perplexing conclusion: assumptions are nearly always left intact. Even more: they are nearly always confirmed and credentialed by accumulated “facts” from research – if so much research can be based on them, they must be true, so it seems. “Proof” here is indirect and by proxy, of course – like miracles “proving” the sacred powers of an invoked Saint.

Such assumptions effectively function not as statements about the fundamental nature of objects of research, open for empirical inspection and critique, but as axiomatic theses to be “believed” as a point of departure for research. Whenever such assumptions are questioned, even slightly, the work that does so is instantly qualified as “controversial” (and, in informal conversations, as “crackpot science” or “vacuous speculation”). And “re-search”, meaning “searching again”, no longer means searching again da capo, from step 1, but searching for more of the same. The excellent execution of a method and its logic of demonstration is presented as conclusive evidence for a particular reality. Yes, humans do indeed produce meaning through the orderly grammatical alignment of linguistic forms, because my well-crafted application of a method to data does not contradict that assumption. The method worked, and the world is chiseled accordingly.


Thus we see that the baseline intellectual attitude of young researchers, encouraged or enforced and positively sanctioned – sufficient, for instance, to obtain a doctoral degree and get your work published in leading journals, followed by a ratified academic career – is one in which accepting and believing are key competences, increasingly even the secret of success as a researcher. Not unthinking the fundamental issues in one’s field, and abstaining from a critical inquisitive reflex in which one looks, unprompted, for different ways of imagining objects and relations between them, eventually arriving at new, tentative assumptions (call them ideas now) – is seen as being “good” as a researcher.

The reproductive nature of such forms of research is institutionally supported by all sorts of bonuses. Funding agencies have a manifest and often explicit preference for research that follows the clear reproductive patterns sketched above. In fact, funding bodies (think of the EU) often provide the fundamental assumptions themselves and leave it to researchers to come up with proof of their validity. Thus, for instance, the EU would provide in its funding calls assumptions such as “security risks are correlated with population structure, i.e. with ethnocultural and religious diversity” and invite scientific teams to propose research within the lines of defined sociopolitical reality thus drawn. Playing the game within these lines opens opportunities to acquire that much-coveted (and institutionally highly rewarded) external research funding – an important career item in the present mode of academic politics.

There are more bonuses. The reproductive nature of such forms of research also ensures rapid and high-volume streams of publications. The work is intellectually extraordinarily simple, really, even if those practicing it will assure us that it is exceedingly hard: no fundamental (and often necessarily slow) reflection, unthinking and imaginative rethinking are required; just the application of a standardized method to new “problems” suffices to achieve something that can qualify as (new or original) scientific fact and can be written down as such. Since literature reviews are restricted to reading nothing that fundamentally questions the assumptions, but reading all that operates within the same method-problem sphere, published work quickly gains high citation metrics, and the journals carrying such work are guaranteed high impact factors – all, again, hugely valuable symbolic credit in today’s academic politics. Yet, reading such journal issues in search of sparkling and creative ideas usually turns into a depressing confrontation with intellectual aridity. I fortunately can read such texts as a discourse analyst, which makes them at least formally interesting to me. But that is me.


Naturally, but unhappily, nothing of what I say here is new. It is worth returning to that (now rarely consulted) classic by C. Wright Mills, “The Sociological Imagination” (1959) to get the historical perspective right. Mills, as we know, was long ago deeply unhappy with several tendencies in US sociology. One tendency was the reduction of science to what he called “abstracted empiricism” – comparable to the research model I criticized here. Another was the fact that this abstracted empiricism took the “grand theory” of Talcott Parsons for granted as assumptions in abstracted empirical research. A poor (actually silly) theory vulnerable to crippling empirical criticism, Mills complained, was implicitly confirmed by the mass production of specific forms of research that used the Parsonian worldview as an unquestioned point of departure. The title of his book is clear: in response to that development, Mills strongly advocated imagination in the sense outlined earlier – the fact that the truly creative and innovative work in science happens when scientists review large amounts of existing “known facts” and reconfigure them into things called ideas. Such re-imaginative work – I now return to a contemporary vocabulary – is necessarily “slow science” (or at least slower science), and is effectively discouraged in the institutional systems of academic valuation presently in place. But those who neglect, dismiss or skip it do so at their own peril, C. Wright Mills insisted.

It is telling that the most widely quoted scholars tend to be people who produced exactly such ideas and are labeled as “theorists” – think of Darwin, Marx, Foucault, Freud, Lévi-Strauss, Bourdieu, Popper, Merleau-Ponty, Heidegger, Hayek, Hegel and Kant. Many of their most inspiring works were nontechnical, sweeping, bold and provocative – “controversial”, in other words, and open to endless barrages of “method”-focused criticism. But they influenced, and changed, the worldviews widely shared by enormous communities of people, worldwide and across generations.

It is worth remembering that such people did really produce science, and that, very often, they changed and innovated colossal chunks of it by means of ideas, not methods. Their ideas have become landmarks and monuments of science (which is why everyone knows Einstein but only very few people know the scientists who provided empirical evidence for his ideas). It remains worthwhile to examine their works with students, looking closely at the ways in which they arrived at the ideas that changed the world as we know it. And it remains imperative, consequently, to remind people that dismissing such practices as “unscientific” – certainly when this has effects on research training – denies such momentous scientific efforts, which inspired and formed generations of scientists, the categorical status of “science”, reserving it for a small fraction of scientific activities which could, perhaps far better, be called “development” (as in “product development”). Whoever eliminates ideas from the semantic scope of science demonstrates a perplexing lack of them. And whoever thinks that scientific ideas are the same as ideas about where to spend next year’s holiday displays a tremendous lack of familiarity with science.


Much of what currently dominates the politics and economies of science (including how we train young scientists) derives its dominant status not from its real impact on the world of knowledge but from heteronomic forces operating on the institutional environments for science. The funding structure, the rankings, the metrics-based appraisals of performance and quality, the publishing industry cleverly manipulating all of that – those are the engines of “science” as we now know it. These engines have created a system in which Albert Einstein would be reduced to a marginal researcher – if a researcher at all. If science is to maintain, and further develop, the liberating and innovative potential it has promised the world since the Enlightenment, it is high time to start questioning all of this, for an enormous amount of what now passes as science is astonishingly banal in its purpose, function and contents, confirming assumptions that are sometimes simply absurd and surreal.

We can start by talking to the young scholars we train about the absolutely central role of ideas in scientific work, encourage them to abandon the sense of embarrassment they experience whenever they express such ideas, and press upon them that doing scholarly work without the ambition to continue producing such ideas is, at best, a reasonably interesting pastime but not science.

Related texts


Rationalizing the unreasonable: there are no good academics in the EU


Jan Blommaert 

Attracting external funding has become, everywhere, one of the main priorities of academics, and writing funding applications has consequently also become one of their main tasks. The idea is “competitiveness”: quality will be evident when academics, individually or in teams, acquire funding after a strict and rigorously exclusive peer-review process. In addition, specific sources of funding are singled out as benchmarks, suggesting that they are the “most competitive” ones, and therefore also the best and most objective indicators of quality: think of the ESRC in the UK or (the focus of this text) the European framework program Horizon 2020. In every form of performance management – for individual academics seeking promotion or tenure, for research teams, departments and entire universities – success in acquiring such benchmark external funding is given immense positive attention. Universities, consequently, impose quotas on their academic units – “you shall apply for at least five EU grants and obtain at least one this year!” – and turn grant writing into a compulsory, even key activity of their staff. Professional grant writers and administrators are hired in academic departments and labs, and universities now employ EU-targeting lobbyists to “assist” and “facilitate” their bids for funding.

Well, my team submitted a Horizon 2020 application last week, following a thematic call issued several months earlier. In preparation for the application, we had set up an international consortium early on and done thorough content preparation, and one of our team members spent hundreds of hours – and several international trips worth several thousands of euros – preparing the application.

After submitting, we heard that a total of 147 applications had been received by the EU, and that the EU will eventually fund 2 – two – projects. In a rough calculation, this means that the chance of success in this funding line is about 1,4%; it also means that 98,6% of the applications – 145 of them, to be accurate – will be rejected.
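The rough calculation can be checked in a few lines, using only the figures given in the text (147 applications, 2 grants):

```python
# Sanity check of the success and rejection rates cited in the text.
applications = 147
awarded = 2
rejected = applications - awarded

success_rate = awarded / applications * 100
rejection_rate = rejected / applications * 100

print(f"rejected applications: {rejected}")      # 145
print(f"success rate: {success_rate:.1f}%")      # 1.4%
print(f"rejection rate: {rejection_rate:.1f}%")  # 98.6%
```

The two percentages, of course, are complementary by construction: every application that is not among the two “winners” counts as rejected.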

And here is the problem.

It would be interesting to see the grand total of labor and resources invested in the 145 rejected applications, calculated in euros. My guess is that many millions’ worth of (usually) taxpayers’ money will have been used – wasted – in this massive and mass grant-writing effort. Several hundred researchers will have been involved, each spending dozens if not hundreds of their salaried working hours on preparing an application, and hundreds of university administrators will have been involved as well, likewise spending salaried working hours on the applications. These millions of euros have not been used for creative and innovative research – they weren’t spent on doing fieldwork, experiments or tests, nor on writing papers and holding presentations in workshops and symposiums. They were spent on … nothing. For when a grant application is rejected, the time and energy invested in it evaporate, as if those hours of labor were never spent, and as if the academics who spent them had nothing else to do.

Thus, while this Horizon 2020 funding line will disburse half a dozen million euros to the two “winning” teams, it will have cost the EU academic community – represented by the 145 others, who were rejected – many more millions. Money, thus, has been sucked out of an already fragile funding base for universities across the EU, in a vain attempt to “win” and “be competitive” – and therefore to be “good”.

The attempt is futile, because if the rejection rate is 98,6%, the message given by the EU is, in effect, that almost all of the academic units across the EU participating in the funding call are not good enough. It is nonsense to argue that on grounds of pure academic quality just 1,4% will qualify, for the number of grants to be awarded is known before the peer-review procedure takes place. In that sense, the peer review done by the EU panels is simply useless, for it has no impact on the number of awards granted by the EU – dozens of applicants will soon receive a letter stating that their project was evaluated as “excellent but not selected for funding”. The criteria determining the “selection for funding” are, needless to say, carefully guarded secrets, and not grounded in assessments of academic quality. The system of selection is, when all is said and done, simply irrational and unreasonable.

Still, and notwithstanding the previous remark, success or rejection is seen as an objective indicator of academic quality across the EU university system. By awarding just 1,4% of the applications, then, a rather thoroughly absurd reality is shaped: almost 99% of the competing academics in the EU do not make the mark, and just over 1% satisfy the EU benchmark. Now, we know that the 98,6% “losers” still have to compete in order to show that they are good enough; but when a selection bottleneck is that narrow, the effort, and the resources invested in it, are in effect simply wasted.

The paradox is clear: by going along with the stampede of competitive external funding acquisition, almost all universities across the EU will lose not just money but also extremely valuable research time for their staff. Little academic improvement will be achieved, and little progress in science, if doing actual research is replaced by writing grant proposals with an almost-zero chance of success. And as long as academics and academic units are told that success or failure in getting EU funding (with success rates such as the one mentioned here known in advance) is a criterion for determining their academic quality, gross injustice will be committed. People will be judged inadequate, mediocre or simply poor academics because they failed to get the benchmark funding – awarded, as we saw, on grounds that have little to do with academic quality assessments of applications. Heteronomy is the word that comes to mind here: academic practices and achievements are judged by non-academic standards, given a thin but hopelessly unconvincing veneer of “competitiveness”. And universities seeking to acquire external funding will deplete their internal funding at extreme speed, the more they engage in this stampede for “competitiveness”.

I find this logic beyond comprehension. Those who rationalize the importance of acquiring benchmark external funding are rationalizing an unreasonable and heteronomic system that produces tremendous numbers of “losers” and a tiny number of “winners”. The losers can be put under increasing pressure to show that they are competitive – increasingly risking their careers and spending funds better used on research and other intellectual activities.

To sum up: if the number of grants to be awarded is established before the peer-review process, this kind of “competitive” benchmark funding is not competitive at all, and a benchmark for nothing at all – least of all for academic quality. If, however, results in this weird game are maintained as serious and consequential criteria for assessing academic quality, then the conclusion is that there are no good academics in Europe – 99% of them will fail to get ratified as good enough. And these 99% will have to spend significant amounts of taxpayers’ money to eventually prove – what?

The entire thing really, seriously, begins to look and feel like buying lottery tickets or betting on horses: one spends money hoping to win some – and at moments of lucidity, one is aware of the fact that the net outcome will be loss, not gain. In the meantime, beautiful arias are sung about the extreme importance of research and innovation by the EU, by its member states, and by its universities. The question, of course, is how such a great cause is served by the present system of benchmark external funding acquisition. The money spent on it, I would say, would be better spent on … research and innovation proper.


“Investing” in higher education


Jan Blommaert 

In response to criticism by protesters aimed at the neoliberalization of universities, governments often reply that they are “investing more” in universities than ever before. This argument often has a silencing effect: it is seen and experienced as an effective rebuttal of the protesters’ claims.

It also fits nicely with the general call, repeated endlessly, that “the best possible investment is investment in education”. From Warren Buffett and the World Economic Forum down to local city councils and poverty-combating NGOs, the same message is broadcast over and over again: invest in education, and you will increase employment rates, build superior skills for the workforce of the future, produce surplus value and become innovative that way, and fight poverty most effectively. “Investing in education” is generally perceived as uniformly and unambiguously good, something we all want and something from which all of us would massively benefit. Governments pulling that rabbit out of their hats can therefore align their discourses with massively consensual ones at the societal level – which increases the silencing effect of the argument on protesters demanding a more democratic and low-threshold university.

The argument is very easy to answer, though. The issue is what EXACTLY governments are investing in. The usefulness and impact of an investment are entirely conditioned by the actual aspects of university life that receive it. And looking at these, we see that such investments usually go to constructing or strengthening a high-end elite environment of competitive science: new state-of-the-art labs specialized in research that can be immediately converted into market value, or “top” recruitment of celebrity professors capable of attracting loads of new students. “Low-end” activities, on the contrary, are usually the object of disinvestment (think of student facilities, adjunct academics, basic infrastructure).

So the issue is: where do the investments actually go? After all, a company can say that it “invests” in its factories by repainting their façades.