The scope of the Greek defeat


Jan Blommaert

(translation: Adelei van der Velden)

The chronicle of bad decisions in Brussels and Athens has produced a small interim decision. After 17 hours of discussions by a group of weary, impatient, irrational and biased people, there is an “agreement” signed with Greece (see link below). Juncker calls this “agreement” a “win-win” situation “without winners or losers”, which according to Michel even has “hope” and “optimism” to offer. For whom, is the question.

Tsipras takes back to Athens a series of demands whose only match is to be found in the requirements imposed on Germany by the Treaty of Versailles. The most urgent requirements are well known (tax reform, pension reform, etc.), and note that all of these things are expressed in the vaguest of terminology (“broadening the tax base”, for example, or “modernizing governance”, or constant references to “best practices” in the OECD or even “internationally” – vaguer is hardly possible), so there is plenty of room for manipulation in the interpretation of actions and results.

Note also that the Greek vision included a number of important points: (a) it revolved around a tax shift from poor to rich – on this the document says virtually nothing, except for the aforementioned super-vague “broadening the tax base”, and those measures, as we shall see later, are fully subject to the approval of the Troika; (b) a strengthening of the government, rather than a reduction of it; and (c) full Greek democratic control over the economic recovery plans, while international partners kept a say in the financial recovery plan. In addition, certain items were rejected a priori, while other proposals (on the primary surplus, for instance) were stressed.

Note that these points were also preserved in the “final proposals” Tsipras submitted to the Euro zone, as I explained in an earlier piece. That explains why those proposals were not a “capitulation”: they followed up on the approach already set out in the proposals brought forth by Varoufakis in February this year. That they cannot be seen as a submission to Europe was also apparent, needless to say, from the fact that it took 17 hours before the EU was able to define a position, and that this position is completely different from the Greek proposals issued from February until last week. Only the current agreement is a defeat, and a defeat of enormous proportions.

Let us see what remains of the former Greek points, and what the Greeks now have to swallow.

  1. In the urgent demands we find the two points that Varoufakis expressly rejected in February as measures that would only deepen the recession: using VAT as an instrument, and reducing or delaying pensions. These expressly rejected items must now be implemented. The privatizations, too, of which Varoufakis said they should be considered on a case-by-case basis, have been torn from the hands of the Greeks – not officially (Tsipras will explain that it is the Greek government that will carry them out), but effectively they have been, because the EU supervises the privatizations and assesses their ‘implementation’ in a binding way.
  2. There is no longer a reduction of the primary surplus – another core demand of Varoufakis in February. The setting of this surplus at an absurd 4.5% of GDP determines the rhythm of the budgets, and thus of the austerity. Varoufakis proposed to reduce it to 1.5%. The document gives no figures, only “ambitious targets” which, if not met, should lead to “quasi-automatic spending cuts”. Moreover, this must be cast into law by Wednesday. Needless to say, the Greeks are not the only ones who have a say in this: everything is under strict supervision.

“• full implementation of the relevant provisions of the Treaty on Stability, Coordination and Governance in the Economic and Monetary Union, in particular by making the Fiscal Council operational before finalizing the MoU and introducing quasi-automatic spending cuts in case of deviations from ambitious primary surplus targets after seeking advice from the Fiscal Council and subject to prior approval of the Institutions;”

  3. Furthermore, the February section on the labour market has disappeared, in which the Greeks wanted to follow the recommendations of the International Labour Organization. What we have now is this:

“• on labour markets, undertake rigorous reviews and modernisation of collective bargaining, industrial action and, in line with the relevant EU directive and best practice, collective dismissals, along the timetable and the approach agreed with the Institutions. On the basis of these reviews, labour market policies should be aligned with international and European best practices, and should not involve a return to past policy settings which are not compatible with the goals of promoting sustainable and inclusive growth;”

In plain language: collective labour agreements and the right to strike are put in chains. In a country where wages have fallen enormously, this is disastrous. And organized civil society, too, ceases to exist as a force in labour relations.

  4. Here we see immediately how widely the EU conceives of “economic policy”: EVERYTHING falls under this label. One of the fundamental elements of the February text by Varoufakis, by contrast, was the distinction he made between a “financial” problem (in which others may have their say) and “economic” problems (in which the Greek government is the only responsible actor). This distinction is replaced here by the classical elastic concept whereby “economic” is defined so broadly that it covers all possible policies.
  5. That is the next point. While SYRIZA explicitly rejected the influence of the Troika, a rejection to which it owed its victory, the Troika is back and stronger than ever. The Greeks have to swallow an unprecedented measure, as you can read below.

“• to fully normalize working methods with the Institutions [= the Troika] (…) The government needs to consult and agree with the Institutions on all draft legislation in relevant areas with adequate time before submitting it for public consultation or to Parliament.”

First, who decides what is ‘relevant’? Probably not Tsipras. Secondly, it is un-be-lie-va-ble that a political EU meeting imposes on a Member State that it switch off its parliament and reduce it to a formality. Legislative work is defined here as something that begins with the government, then goes to the Troika, and only then to parliament (where amendments may again have to pass through the Troika). And here we thought parliament was the “legislator” in a democracy. The Troika thus checks both the government and the parliament. It is indescribable that the EU imposes such a way of operating. And this is called NORMALIZATION of the methods of cooperation with the Troika. So this is “normal”.

  6. Moreover, and even more mind-boggling: the Greeks have to roll back laws that were voted democratically:

“With the exception of the humanitarian crisis bill, the Greek government will reexamine with a view to amending legislations that were introduced counter to the February 20 agreement by backtracking on previous programme commitments or identify clear compensatory equivalents for the vested rights that were subsequently created.”

So the Greek rule of law ceases to exist. A law that has been democratically approved in a sovereign country can now be cancelled by unelected external forces. Never seen before in the EU.

  7. Tsipras says all this is compensated by two things: an aid package and a debt restructuring. As for the aid program, watch how conditionally it has been formulated: the summit “takes note” of a “possible” aid program, it asks the Troika to examine how the aid package can be reduced (!!) through measures which further erode the role of the Greek Government, and it points to the fact that more neoliberalism allows for less support.

“The Euro Summit takes note of the possible programme financing needs of between EUR 82 and 86bn, as assessed by the Institutions. It invites the Institutions to explore possibilities to reduce the financing envelope, through an alternative fiscal path or higher privatisation proceeds. Restoring market access, which is an objective of any financial assistance programme, lowers the need to draw on the total financing envelope.”

  8. With regard to the debt restructuring, this too is just a “rain check”. The Eurogroup “stands ready to consider”, “if necessary”, whether “possible additional measures” should be taken regarding the debt. BUT: (a) all that is possible is a delay or spreading of the debt payments; (b) this depends completely on the “full implementation” of the rest of the agreement; and (c) real debt cancellation is excluded.

“Against this background, in the context of a possible future ESM programme, and in line with the spirit of the Eurogroup statement of November 2012, the Eurogroup stands ready to consider, if necessary, possible additional measures (possible longer grace and payment periods) aiming at ensuring that gross financing needs remain at a sustainable level. These measures will be conditional upon full implementation of the measures to be agreed in a possible new programme and will be considered after the first positive completion of a review. The Euro Summit stresses that nominal haircuts on the debt cannot be undertaken.”

Conclusion: there is simply nothing left of the views SYRIZA held before and during the elections, of its Government Declaration, and even of the February plan by Varoufakis, which in itself was already a major break with what went before.

The role of the Troika has been extended even further, and Greece as a legal construction is now fully in the position of a protectorate. Its executive power is controlled by the Troika, which also takes all legislative work out of the hands of the parliament. The end of the democratic institutions as we know them.

The content of the agreement no longer contains a single core element of the plan Varoufakis submitted to the Euro zone in February. Every fundamental issue has been removed and replaced by perfectly orthodox neoliberal positions. It asks SYRIZA to destroy trade unions, to make the labour market entirely flexible, to sell off public assets, and to reduce or delay pensions. Of a more efficient Greek government nothing remains: the government now sits in Brussels and Washington, not in Athens.

And along the way this informal club of heads of state and government has, on top of all this, abolished the sovereignty of an EU Member State, cancelled its rule of law, and rejected the cornerstone of Western democracy – the separation of powers. Surely something that creates “hope and optimism”. Until the day the same is applied to their own countries.




Bye bye Montesquieu? How the EU redefines the “trias politica”


Jan Blommaert 

In the “agreement” concluded between the informal EU-summit and Greece on 13-07-2015, a remarkable statement was made, one that formally puts an end to two centuries of democracy-as-we-know-it. Here it is.

“The government needs to consult and agree with the Institutions on all draft legislation in relevant areas with adequate time before submitting it for public consultation or to Parliament.”

“The Institutions” is shorthand here for what is more widely known as “the Troika”, the technocratic body composed of members of the IMF, the European Commission and the ECB and deployed in debt-ridden countries such as Greece. This body has no formal status and, needless to say, no democratic mandate. That means, concretely, that it is in no way publicly accountable for its decisions and actions, and that it cannot in any way be sanctioned by those affected by those decisions and actions. But this is unremarkable: the so-called “EU summit” that acts as the author of this clause is itself an informal construction: it is not the EU Council, nor the Commission, let alone the Parliament that is acting here, but an ad-hoc “upgrade” of that other informal EU construction, the so-called “Eurogroup”. Meetings of such bodies are held in camera and are not minuted. Formal control and democratic response are thus excluded.

Observe that the presence of the Troika as the all-powerful agent in determining the course of austerity policies in Greece – and the lack of democratic sovereignty following from that – was one of the key themes that propelled Alexis Tsipras’s Syriza party to a landslide electoral victory in January 2015. The acceptance of the Troika as an even more powerful actor is probably Tsipras’s greatest defeat in the negotiations with the Eurogroup.

So let us return to the main point here. In the clause quoted above, the EU summit defines legislative work: it starts from the government, passes on to the Troika, and then ends in Parliament for “public consultation”.

Since Montesquieu defined the “trias politica” as the cornerstone of the modern democratic institutional architecture, the government has been the “executive” branch and the elected parliament the “legislative” one (the third, “judicial” branch is less relevant here). Put simply: laws are made in and by Parliament and then handed to the government for implementation. What the exhausted, frustrated and impatient political leaders present at the EU summit (who in their own countries would undoubtedly regard the separation of powers through the “trias” as sacrosanct) have now written down, in black and white, is the exact reversal: laws are drafted by the government, which thus becomes the “legislative” power as well as the “executive” one, while the elected Parliament is now openly reduced to “public consultation”, not decision. For in between the two now stands a third party: the unelected (and, in effect, foreign) technocrats of the Troika, accountable to exactly no one, who act as usurpers of both the executive and legislative powers in what we are still encouraged to call a “democratic EU member state”. Observe that the possibility of legislative initiatives emerging from the parliamentary floor is not even entertained in the text of the agreement.

In a crisp but perplexing phrase, then, all of this is presented as the way “to fully normalize working methods with the Institutions”. This, in the eyes of its eminent authors, is normal “democratic” (or whatever) procedure. Yet it would be a violation of the Constitution in every EU member state, and it remains the criterion that defines the difference between a “parliamentary democracy” and, say, a dictatorship or a totalitarian system.

It is not some obscure bunch of antipolitical technocrats that has come up with this termination of the “trias”, but democratically elected political leaders meeting in an informal, but consequential, setting. These elected leaders will now have to defend this in their respective national parliaments. It would be good if democratic parliamentarians raised this redefinition of the fundamental democratic institutional architecture, and asked them explicitly whether they really mean what they wrote down. I am sure that some strange responses will be forthcoming.



When “scientific” became a synonym for “unrealistic”.


Jan Blommaert 

“From Adam Smith in 1776 to Irving Fisher in 1930, economists were thinking about intertemporal choice with Humans in plain sight. Econs began to creep in around the time of Fisher, as he started on the theory of how Econs should behave. But it fell to a twenty-two-year-old Paul Samuelson, then in graduate school, to finish the job”. (Richard H. Thaler, Misbehaving: The Making of Behavioral Economics, p.89; New York: Norton, 2015).

Richard Thaler, in this wonderful book, uses the terms “Humans” and “Econs” to distinguish between, respectively, real people observed in real life, having real interests, attitudes and modes of thought and behavior that are often, let us say, suboptimal; “Econs”, by contrast, are fictional characters, ideal people who don’t have passions or biases, are always rational, possess a maximum of information and are able to convert this linearly into economic behavior. Thaler’s book is a powerful argument in favor of an Economics science that keeps track of, and explains, Human behavior as, at least, a qualification to the kinds of fictional predictions of Econs’ behavior that are the Economics mainstream’s occupation.

In so doing, Thaler also directs our attention towards the small historical window in which the current mainstream’s doctrine emerged and flourished. For almost two centuries, Economics was preoccupied with real markets, customers, prices and policies – Adam Smith’s Theory of Moral Sentiments setting the scene for an Economics that dealt with the whims of human social behavior. The discipline abandoned this focus merely half a century ago, when Samuelson, Arrow and some others replaced muddled descriptions of reality with elegant mathematical “models”, supposed to be of absolute and eternal precision and capable of bypassing the uncertainties and historical situatedness of real human minds. When critics pointed towards such minds (and their tendency to violate the rules of such elegant models), the response was that, willingly or not, people in economic activities would behave “as if” they had done the intricate calculations captured in the models. Thaler’s book is a lengthy and pretty detailed refutation of that “as if” argument: if nobody actually operates in the ways laid down in mathematical models, why not take such deviations – “misbehaving” Humans – seriously? For someone such as myself, involved in ethnographic studies of Humans and their social behavior, this question is compelling and the arguments it provokes inescapable.

Thaler is not a nobody in his field – he is the 2015 President of the American Economic Association; he will be able to ask this question urbi et orbi and with a stentorian voice. There might be some obstacles, though. Interestingly, the kind of Economics designed by Samuelson and his comrades was (and is) seen as truly “scientific”. The conversion of a science grounded in observations of actually occurring behavior into a science concerned with abstract mathematical modeling was seen as the moment at which Economics became a real science, a complex of knowledge practices not tainted by the fuzziness of actual social facts but aiming at absolute Truth – something invariably expressed not in prose but in graphics, tables and figures, in which a new abstract model could be seen as a major scientific breakthrough (just look at the list of Economics Nobel Prize winners since the 1960s, and read the citations for their selection). As for the teaching and training of aspiring economists, it was thought that these would now be truly “scientific”, since students would learn abstract and ideal frameworks suggested to be absolutely generative, in the sense that any form of real behavior could be measured against them and explained in their terms. No more nonsense, no more description – a normative theory such as Samuelson’s (sketching how ideal people should act) would henceforth be presented as a descriptive one as well (effectively documenting and explaining how they actually act) – an absolute theory, in other words. The shift away from “realism” – the aim of descriptive theories – towards ideal-typical modeling – the aim of normative theories – was seen as irrelevant. Economics became “scientific” as soon as it abandoned realism as an ambition.

It is interesting to see that in post-World War II academia, similar moves were made in different disciplines. Chomsky’s revolution in Linguistics (launched by his Syntactic Structures, 1957) is an example. Whereas Linguistics until Chomsky was largely driven by descriptive aims and methods (go out and describe a real language), in which careful empirical description and comparison would ultimately lead to adequate generalization (Saussure’s Langue), Chomsky saw real Human language as propelled by an abstract formal and generative competence, describable as a finite set of abstract rules capable of generating every possible sentence in a language. This, too, was seen at the time as a major leap towards scientific maturity, and senior philosophers of science (already accustomed to seeing formalisms such as mathematical logic as the purest forms of meaning) argued that, with Chomsky, Linguistics had finally become a “science”. Linguists, from now on, would no longer do fieldwork – the interest in listening to what real people actually said was disqualified – but rely on “introspection”: one’s own linguistic intuitions were good enough as a basis for doing “scientific” Linguistics. It took half a century of sociolinguistics to replace this withdrawal from realism with renewed attention to actual variation and diversity in real language. Contemporary sociolinguistics, consequently, stands to Linguistics very much as Thaler’s Behavioral Economics stands to mainstream Economics: as a sustained attempt at making this “science” realistic again.

Similar stories can be told with respect to disciplines such as psychology and sociology, and later cognitive science, where the desire to become “scientific”, in the same era, led to a canonical “science” in which white-room experiments and quantifiable surveys replaced actual observation of situated social behavior and attention to what people really did and said about themselves and society.

There, too, the assumptions were the same: the actual social behavior of people is driven by a “deeper” abstract level of psychological, social and cognitive processes which can be captured and tested by detaching individuals from their real-life environments and submitting them to testing procedures that bear no connection whatsoever with any actual form of social and meaningful behavior. Thus, cognitive, psychic and emotional behavior can be accurately and “scientifically” studied by putting individuals into an MRI scanner, where they stay entirely immobile and cut off from any outside stimulus for 45 minutes. The outcomes of such procedures (quite paradoxically called “empirical” by practitioners) are presented, remarkably (or better, incredibly), as accurate accounts of real, situated and contextually sensitive social and mental activity. Abstract modeling of what we could call “Psychons”, here as well, is seen not merely as a normative enterprise but as a descriptive one, predicting (with various degrees of accuracy) Human behavior. This study of Psychons, then, is the real “science”, often rhetorically opposed to and contrasted with the “storytelling” or “journalism” of research grounded in actual observation and description of Humans (turning one of the 20th century’s most influential intellectuals, Sigmund Freud, into a fiction writer). Senior sociologists and psychologists such as Herbert Blumer and Aaron Cicourel brought powerful (and never effectively refuted) methodological arguments against this shift away from realism and towards “science” – their arguments were dismissed as unhelpful.

So here we are: knowledge disciplines concerned with Man and society appear to be “scientific” only when they deliberately reject the challenge of realism – “reality talking back”, as Herbert Blumer famously called it – and engage in abstract formalization and modeling, regardless of whether or not such formal schemes and models stand the test of empirical reality checks. Such “science”, because it dismisses this kind of systematic reality check, also becomes incapable of describing change. Experiments need to be “repeatable” in order to be “scientific”, and consequently we continuously check and test things that have to remain stable in order to be scientifically testable. The fact that actual social processes and realities are not “experiments”, and display a strong tendency to change perpetually, precludes repeatability and consequently can never be “scientifically” addressed. This feature – the bias towards stability and the incapability of addressing change – is a constant in all these “sciences”. And those who practice such “sciences” are actually proud of it. Strange, isn’t it?

We live by our mythologies, Roland Barthes famously said. One of the mythologies we live by is that of “science” being necessarily, because of its own criteria for validity, unrealistic, and therefore often outlandish and outrageous in its findings and conclusions. It would be good, therefore, to return to the old debates historically accompanying the shifts in the disciplines I mentioned here, and carefully examine the validity of critical arguments brought against these kinds of “science”. To the extent that people still believe that “re-search” means “looking again”, i.e. to be continuously critical of one’s own knowledge doctrines, this would be an eminently scientific practice.

PS (2017): Richard Thaler was awarded the Economics Nobel Prize in October 2017.


Rationalizing the unreasonable: there are no good academics in the EU


Jan Blommaert 

Attracting external funding has become, everywhere, one of the main priorities of academics, and writing funding applications has consequently also become one of their main tasks. The idea is “competitiveness”: quality will be evident when academics, individually or in teams, acquire funding after a strict and rigorously exclusive peer-review process. In addition, specific sources of funding are specified as benchmarks, suggesting that they are the “most competitive” ones, and therefore also the best and most objective indicators of quality: think of the ESRC in the UK or (the focus of this text) the European framework program Horizon 2020. In every form of performance management – for individual academics seeking promotion or tenure, for research teams, departments and entire universities – success in acquiring such benchmark external funding is given immense positive attention. Universities, consequently, impose quotas on their academic units – “you shall apply for at least five EU grants and obtain at least one this year!” – and turn grant writing into a compulsory, even key, activity of their staff. Professional grant writers and administrators are hired in academic departments and labs, and universities now employ EU-targeting lobbyists to “assist” and “facilitate” their bids for funding.

Well, my team submitted a Horizon 2020 application last week, following a thematic call several months ago. In view of the application, we had set up an international consortium early on and done thorough preparatory work on content, and one of our team members spent hundreds of hours, and made several international trips worth several thousands of euros, preparing the application.

After submitting, we heard that a total of 147 applications had been received by the EU, and that the EU will eventually fund 2 – two – projects. A rough calculation shows that the chance of success in this funding line is about 1.4% (2 out of 147); it also means that about 98.6% of the applications – 145 of them, to be exact – will be rejected. And here is the problem.
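For the record, the rough calculation above can be checked in a few lines of Python (the figures used, 147 applications and 2 awards, are those reported for this call):

```python
# Success and rejection rates for the Horizon 2020 call discussed above:
# 147 applications received, of which only 2 will be funded.
applications = 147
awards = 2
rejected = applications - awards

success_rate = awards / applications * 100      # 2 / 147, as a percentage
rejection_rate = rejected / applications * 100  # 145 / 147, as a percentage

print(f"{success_rate:.1f}% success")     # prints "1.4% success"
print(f"{rejection_rate:.1f}% rejected")  # prints "98.6% rejected"
print(f"{rejected} applications rejected")
```

Nothing hangs on the exact decimals, of course; the point is the order of magnitude of the odds.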

It would be interesting to calculate, in euros, the grand total of labor and resources invested in the 145 rejected applications. My guess is that many millions’ worth of (usually) taxpayers’ money will have been used – wasted – in this massive and mass grant-writing effort. Several hundred researchers will have been involved, each spending dozens if not hundreds of salaried working hours on preparing an application, and hundreds of university administrators will have spent salaried working hours on the applications as well. These millions of euros have not been used for creative and innovative research – they were not spent on doing fieldwork, experiments or tests, nor on writing papers and giving presentations at workshops and symposia. They were spent on nothing. For when a grant application is rejected, the time and energy invested in it evaporate, as if those hours of labor were never spent, and as if the academics who spent them had nothing else to do. Thus, while this Horizon 2020 funding line will disburse several million euros to the two “winning” teams, it will have cost many millions more to the EU academic community represented by the 145 others who were rejected. Money has thus been sucked out of an already fragile funding base for universities across the EU, in a vain attempt to “win” and “be competitive” – and therefore “good”.

The attempt is futile, because if the rejection rate is 98.6%, the message given by the EU is, in effect, that almost all of the academic units across the EU participating in the funding call are not good enough. It is nonsense to argue that on grounds of pure academic quality just 1.4% qualify, for the number of grants to be awarded is known before the peer-review procedure takes place. In that sense, the peer review done by the EU panels is simply useless, for it has no impact on the number of awards granted by the EU – dozens of applicants will soon receive a letter stating that their project was evaluated as “excellent but not selected for funding”. The criteria determining the “selection for funding” are, needless to say, carefully guarded secrets, and not grounded in assessments of academic quality. The system of selection is, when all is said and done, simply irrational and unreasonable.

Still, and notwithstanding the previous remark, success or rejection is seen as an objective indicator of academic quality across the EU university system. By awarding just 1.4% of the applications, then, a rather thoroughly absurd reality is shaped: almost 99% of the competing academics in the EU do not make the mark, and only a tiny fraction satisfies the EU benchmark. Now, we know that the 98.6% of “losers” still have to compete in order to show that they are good enough; but when a selection bottleneck is that narrow, the effort, and the resources invested in it, are in effect simply wasted.

The paradox is clear: by going along with the stampede of competitive external funding acquisition, almost all universities across the EU will lose not just money, but also extremely valuable research time for their staff. Little academic improvement will be achieved, and little progress in science, if doing actual research is replaced by writing grant proposals with an almost-zero chance of success. And as long as academics and academic units are told that success or failure in getting EU funding (with success rates such as the one mentioned here known in advance) is a criterion for determining their academic quality, gross injustice will be committed. People will be judged inadequate, mediocre or simply poor academics because they failed to get the benchmark funding – awarded, as we saw, on grounds that have little to do with academic quality assessments of applications. Heteronomy is the word that comes to mind here: academic practices and achievements are judged by non-academic standards, given a thin and hopelessly unconvincing veneer of “competitiveness”. And universities seeking to acquire external funding will deplete their internal funding at extreme speed, the more they engage in this stampede for “competitiveness”.

I find this logic beyond comprehension. Those who rationalize the importance of acquiring benchmark external funding, are rationalizing an unreasonable and heteronomic system that produces tremendous numbers of “losers” and a tiny number of “winners”. The losers can be put under increasing pressure to show that they are competitive – increasingly risking their careers and spending funds better used on research and other intellectual activities.

To sum up: if the number of grants to be awarded is established before the peer-review process, this kind of “competitive” benchmark funding is not competitive at all, and a benchmark for nothing at all – least of all for academic quality. If, however, results in this weird game are maintained as serious and consequential criteria for assessing academic quality, then the conclusion is that there are no good academics in Europe – 99% of them will fail to get ratified as good enough. And these 99% will have to spend significant amounts of taxpayers’ money to eventually prove – what?

The entire thing really, seriously, begins to look and feel like buying lottery tickets or betting on horses: one spends money hoping to win some – and at moments of lucidity, one is aware of the fact that the net outcome will be loss, not gain. In the meantime, beautiful arias are sung about the extreme importance of research and innovation by the EU, by its member states, and by its universities. The question, of course, is how such a great cause is served by the present system of benchmark external funding acquisition. The money spent on it, I would say, would be better spent on … research and innovation proper.


Indonesia, its youth and “light communities”


Jan Blommaert

Comments on the panel “Margins, hubs and peripheries in decentralizing Indonesia” (part 1), Conference on “The Sociolinguistics of Globalization”, Hong Kong University, 5 June 2015. Panel convenor: Zane Goebel. Line-up: Michael Ewing, Dwi Noverini Djenar, Lauren Zentz, Meinami Susilowati.

All of the papers in this part of the panel focused on youth language and the sometimes problematic ways such “new” forms of speech clash with strong nation-state institutional cultures of standardization. Over and beyond this general focus, three points merit deeper engagement; let me review them briefly.

1. Youth language, universally, is an example of how societies (in spite of often very strong homogeneistic self-imaginations) in effect contain numerous “niches” developing at different speeds, occupying specific spatiotemporal arenas, and operating along specific normative frameworks projected onto behavioral scripts in which specific forms of language are part of what counts as accepted/acceptable behavior. It was Cicourel who stated that what people effectively do when they do the work of interpretation is to try and make sense of situations by reading social structure into them. I shall have more to say on social structure in a moment, but the point can already be made that social structure is manifestly plural: different structures interact and intersect, triggering often unbalanced confrontations of normative frames – what is “meaningful” and therefore socially and politically expectable – with often unexpected outcomes.

2. Furthermore, the papers all showed how such confrontations of different normative frames represent the experience of change. Indonesia, like any other place on earth, changes fast as an effect of globalization (and, in this case, also because of momentous national political shifts), and the on-the-ground experience of such change often takes the shape of conflictual discourses of normativity (again projected, concretely, into behavioral scripts encompassing specific forms of language usage). These normative frames provide a sense of “order” (recall Cicourel’s idea about understanding as reading social structure into situations), and it is the clash of different “orders” that creates the sense of insecurity, anguish and destabilization we often see and encounter in data on people’s actual social experiences. In our own jargon, it is the immersion in a polycentric social environment that constitutes the baseline experience of macro-changes triggered by globalization. It is the encounter with not one single, transparent and hegemonic social structure, but with multiple structures in competition over spaces, membership and socially ratified meaningfulness, with the potentially threatening effect of restratification, that constitutes the lived experience of “change” for many.

3. But even more importantly, what are these contrasting and conflicting “orders” like? In order to answer this question, we need a distinction between nation-state and globalized forms and representations of “community”. Remember that, in the tradition of Durkheim, Weber and Parsons, the nation-state was typically a local, “thick” community – a community in which people shared vast amounts of resources through common backgrounds, institutional governmentality and socialization.

The papers in this panel, however, invariably showed “light” communities often tied together by shared “niched” practices (Goffman’s “Encounters” can also inspire us here). These light communities, remarkably, are local – see the emphasis on locally grounded youth vernaculars in the papers here – but translocally infused and framed, which is why they are often seen and decried as “westernization” even while strictly local vernaculars and indexicals are used. The new globalized order, thus, with its intense physical and virtual mobilities, appears to stimulate and even privilege the formation of “light”, local communities whose orientation is not towards the nation-state but towards ideals and imageries drawn from the wider world, and involving specific spaces of deployment, specific actors and specific codes of meaningful practice. To return for a moment to the issue of structure: the “light” communities represent a “light”, flexible, volatile and fast-moving structure, interacting with and often only perceptible from within “thick” and slower-moving structures. Our disciplinary traditions have consistently emphasized the “thick” structures, while “light” ones tended to be dismissed as insignificant or superficial.

I’m afraid we can’t afford this any longer. The tremendous importance of “light” communities, and the fact that for those inhabiting them they often experientially, emotionally and socially prevail upon the traditional “thick” communities of family, religion, ethnicity or nationality, is perhaps the most pressing theoretical and descriptive issue in the study of globalization nowadays. From practices and their performers and performing conditions, over the kinds of communities they generate, to specific modes of social structure they propel: this to me sounds like a research program of considerable interest. The papers in this session provide excellent and substantial food for thought in this direction.


The conference program, including the panel lineup and the abstracts, can be accessed via


Language, behavioral scripts, and valuation: Comments on “Transnationalizing Chineseness”

Handwritten Chinese in Berchem

Jan Blommaert

Commentary, panel on “Transnationalizing Chineseness: language, mobility and diversity” (organizers: Shuang Gao & Xuan Wang; line-up: Xiaoxiao Chen, Shuang Gao, Hua Nie, Nkululeko Mabandla & Ana Deumert, Han Huamei – discussants Lionel Wee & Jan Blommaert). International Conference “The Sociolinguistics of Globalization: (de)centring and (de)standardization”, Hong Kong University, 4 June 2015.

Let me first point out that this panel was organized by two relatively junior scholars – Shuang Gao and Xuan Wang – and that they managed to bring together an exceptionally engaging and stimulating panel. They deserve a huge accolade for that. The points raised by them and by the authors of the papers in this panel are substantial and none of the participants can be accused of shoddy work. Neither can they be accused of irrelevance: the transformation of “Chineseness” as an effect of the global rise to economic and political prominence of the People’s Republic is a true globalization phenomenon of colossal scale, and each of the papers showed us how such terrifically large-scale transformations set down, so to speak, in actual small-scale situations, places and moments. The picture I added as a caption to this text shows that it also occurs in my own neighborhood in inner-city Antwerp, where a local Cantonese restaurant-owner, member of an older diasporic generation, addresses potential customers for renting a flat, and does so in an unstable hybrid of Cantonese and Mandarin and of simplified Chinese script and traditional character script. The global shift in “who is the Chinese in the world” forces him to readjust not only his linguistic and literacy repertoire, but to change his economic orientation, towards a new and very large community of immigrants from Mainland China – a population virtually unknown in most places in the world until the 1990s.

Such immigrants were also invisible in South Africa and Namibia (papers by Mabandla & Deumert and Han, respectively), and their contemporary presence in European societies can bump into uneasy and unpleasant walls of anachronistic stereotyping, as Hua Nie’s paper on ethnic stereotyping in “Holland’s Got Talent” so painfully showed. International tourists in China were also a rare commodity, and Chen as well as Gao document the ways in which Chinese state-governed media respond to the growing presence and politico-economic importance of the scores of international tourists presently flying into Chinese airports. For such tourists, the reassuring message is provided that the Chinese language is dauntingly difficult but nothing to worry about – multilingual celebrities such as Mama Moon (Gao) and travel writers in mainstream Chinese media (Chen) reassure the foreigners that they will have an easy time communicating with the Chinese. That, in the process, one of the world’s largest languages is made near-invisible is a price willingly paid, remarkably, by a reinvigorated Chinese nationalism.

In what follows, I will briefly review some general and analytical reflections prompted by these papers; together, they can perhaps serve as a heuristic for further empirical research; for me, they testify to the extent to which I was intrigued and captured by the exceptional scholarship presented in this panel.

1. An initial and seemingly trivial point is that the papers all addressed change, and the scale of the particular case of change – a global repositioning of “Chineseness” – has been mentioned above. In my view, studying globalization amounts to studying change – and not “flat”, linear change, but a terrifically complex array of different forms and modes of change, operating at different speeds, with different objects and instruments, mobilizing different forms of actors and resources, and with different (often unpredictable and nonlinear) outcomes. Thus put, the point that these papers address change loses its trivial ring, because we are traditionally not well equipped to address such complex and dynamic patterns of non-stability – our structuralist and synchronic toolkit has prepared us for precisely the opposite. As we shall see further, one important (and apparently constant) feature of such patterns of change is restratification: the “reordering” and “re-ranking” of cultural and symbolic capital such as language and forms of identity. The image in the caption shows precisely such restratification: the restaurant owner’s Cantonese and traditional script – until recently unchallenged in their hegemony as the linguistic emblems of “Chinese in the world” – are rapidly being overtaken by the codes of the People’s Republic, Putonghua and simplified script, forcing the author of the little advertisement to un-learn his usual codes and re-learn, problematically, the new codes. Almost all the papers in this panel showed aspects of such restratifications, in which people and their identity codes assume new, often contested, positions in the symbolic hierarchies that direct social life. I shall return to this issue below.

So we’re not really good at studying change; but the papers gave us a couple of perhaps useful leads into a productive way of addressing it. I picked up, in particular, three points that are best seen as three different aspects of one bundle – an “object of change”, so to speak, for a sociolinguistics of globalization. Let me review them one by one.

2. The first point, and again seemingly evident, is that we study the ways in which processes of change affect and operate on and in language, in processes of meaningful social interaction and the resources used for them. We see that language, for instance in the presentation of Mandarin in the China Daily (Chen) becomes emblematic – no longer a “linguistic” thing strictly speaking (not used to produce denotational meaning), but a cultural thing, even a political and moralized one, as Gao’s discussion of the multilingual celebrities in the Chinese media showed. Languages, different degrees of proficiency in them, and specific ways of using them, become emblems of legitimate belonging, exemplary citizenship, personal character (or lack thereof, as in the “devious” Chinese traders in Northern Namibia discussed by Huamei Han). This works in several directions: constructively as a way of identifying oneself, but also destructively as a way of stigmatizing the other and removing the social capital potentially attributed to the other’s linguistic resources – as when a jury member in Holland’s Got Talent imitates the “l-r confusion” stereotypically attributed to Chinese immigrants (“number 39 with lice?” instead of “with rice?”). Small, almost homeopathic specks of performed or displayed language can be, and are, turned into powerful identifiers (and stratifiers) in the sociolinguistic world of globalization.

3. But, importantly, language rarely occurs alone. What we have seen in the papers (and in other recent studies), is how language almost always comes with a sort of indexical “envelope”, so to speak, of behavioral scripts. Such scripts can best be described as imaginable situations in marked (i.e. nonrandom) spacetime, provoking enregistered (and therefore normative, expected and presupposed) modes of behavior. The little bits of Chinese provided by the travel writers in Chen’s paper occurred in discussions of cuisine, in itself marked as “exotic” but presented as part of the “experience” of heritage tourism in China.

To unpack the definition somewhat: the behavioral scripts assume the form of actual real-life situations which we can somehow imagine (e.g. working for a Chinese employer in a Namibian shop, Huamei Han; or Chinese people “typically” being involved in small-scale but transnational trading or catering, Mabandla & Deumert and Nie Hua, respectively), and onto which we project normative patterns of behavior and – thence – templates of character and identity (the Chinese employer being harsh and demanding, the Chinese tradesmen being relentlessly competitive, the restaurant owner speaking nonnative varieties of language, etc.). Note that I mentioned “marked spacetime” as part of the definition: a crucial element in all of this is the actual spatial and temporal frame in which these behaviors are suggested to occur, or are preferred to occur: they are, in that sense, fundamentally chronotopic.

Language is mixed into these behavioral scripts as, sometimes, the key emblem that points to and invokes – indexes – the entire script and its normative dimensions. So language is rarely alone, and even when it operates alone, it often emblematizes, as a metonym, the broader package of behavioral expectations and identity templates. As linguists, we tend to overemphasize the former and downplay the latter, while phenomenally – as phenomena – they usually co-occur.

4. The normative dimensions bring me to the final point. We already have language (in particular modes of occurrence) and the wider behavioral scripts within which they appear and operate. The third element is normative judgment, valuation. The two former features almost always appear wrapped in an evaluative frame – understood here in its Bakhtinian sense, as the social value attributed to “meaning” – and when we address change (here we come to our point of departure), it is the evaluative frames that might be of paramount concern. For all the papers (and other studies) showed that the global transformation of “Chineseness” (as with other forms of large-scale globalization-induced transformation) collapsed into actual real-life situations in which we saw an unfinished struggle between old and new evaluative frames. The jury member in Holland’s Got Talent projected the “old” evaluative frame of “Chineseness” onto the candidate – a “new” PhD student and accomplished opera singer from Mainland China; and Nie Hua’s data on internet debates on the incident showed partly coordinated but also partly very different orientations in the “old” Chinese community in the Netherlands, and the “new” community. Similarly, Mabandla & Deumert’s excellent historical overview of Chinese diasporas in South Africa showed how the “new” immigrants partly inscribe themselves into a slow and enduring structure of economic activity – small-scale trading and catering – but also move into more hybrid and dynamic forms of “entanglement” with the present conditions of economic, social and cultural life.

Thus, all the studies presented in this panel showed processes of social, cultural and political identity-formation developing in a polycentric environment in which various normative “cores” or “foci” could be identified – behavioral scripts, in short – but not necessarily in an equivalent way. Some behavioral scripts and their evaluative frames move slower than others; they affect other places and other social roles, and produce other sociocultural and political effects: different spacetime frames, activity modes, membership criteria, and forms of value attribution clash in often highly uncomfortable and sometimes densely conflictual actual situations. And this polycentric arena, I would argue, is the empirical engine of what we can observe in the way of change. Note, in passing, that when Nik Coupland emphasized reflexivity as the condition of globalization in his plenary, I assume that he had this normative and evaluative dimension firmly in mind; from what I see in our field, reflexivity in actual fact looks more like a highly effective sort of “pricing strategy”, something with a real bite in terms of power effects, rather than like a lofty meta-concern without social consequences. The same obviously goes for Penny Eckert’s “indexical fields”, which should also be understood as fields in which powerful evaluative effects prevail.

In conclusion: I have suggested three interlocked aspects of an object that might be useful in studying change in a sociolinguistics of globalization: the specific ways in which change settles down in and on language, the “packaging” of language in broader behavioral scripts, and the normative encoding and evaluation of this package in actual social practices developing in necessarily polycentric social arenas. Together, I do hope, they provide an empirical roadmap for an adequate study of phenomena of tremendous relevance and impact on our present world and the lives we lead therein.


The conference program, including the panel lineup and the abstracts, can be accessed via