When “scientific” became a synonym for “unrealistic”.


Jan Blommaert 

“From Adam Smith in 1776 to Irving Fisher in 1930, economists were thinking about intertemporal choice with Humans in plain sight. Econs began to creep in around the time of Fisher, as he started on the theory of how Econs should behave. But it fell to a twenty-two-year-old Paul Samuelson, then in graduate school, to finish the job”. (Richard H. Thaler, Misbehaving: The Making of Behavioral Economics, p.89; New York: Norton, 2015).

Richard Thaler, in this wonderful book, uses the terms “Humans” and “Econs” to distinguish between two kinds of actors: “Humans” are real people observed in real life, having real interests, attitudes and modes of thought and behavior that are often, let us say, suboptimal; “Econs”, by contrast, are fictional characters, ideal people who have no passions or biases, are always rational, possess a maximum of information and are able to convert this linearly into economic behavior. Thaler’s book is a powerful argument in favor of an Economics that keeps track of, and explains, Human behavior as, at least, a qualification of the kinds of fictional predictions of Econs’ behavior that are the Economics mainstream’s occupation.

In so doing, Thaler also directs our attention towards the small historical window in which this current mainstream doctrine emerged and flourished. For almost two centuries, Economics was preoccupied with real markets, customers, prices and policies – Adam Smith’s Theory of Moral Sentiments setting the scene for an Economics that dealt with the whims of human social behavior. The discipline abandoned this focus merely half a century ago, when Samuelson, Arrow and some others replaced muddled descriptions of reality with elegant mathematical “models”, supposed to be of absolute and eternal precision and capable of bypassing the uncertainties and historical situatedness of real human minds. When critics pointed towards such minds (and their tendency to violate the rules of such elegant models), the response was that, willingly or not, people in economic activities would behave “as if” they had done the intricate calculations captured in the models. Thaler’s book is a lengthy and pretty detailed refutation of that “as if” argument: if nobody actually operates in the ways laid down in mathematical models, why not take such deviations – “misbehaving” Humans – seriously? For someone such as myself, involved in ethnographic studies of Humans and their social behavior, this question is compelling and the arguments it provokes inescapable.

Thaler is not a nobody in his field – he is the 2015 President of the American Economics Association; he will be able to ask this question urbi et orbi and with a stentorian voice. There might be some obstacles, though. Interestingly, the kinds of Economics designed by Samuelson and his comrades were (and are) seen as truly “scientific”. The conversion of a science grounded in observations of actually occurring behavior into a science concerned with abstract mathematical modeling was seen as the moment at which Economics became a real science, a complex of knowledge practices not tainted by the fuzziness of actual social facts but aiming at absolute Truth – something invariably expressed not in prose but in graphics, tables and figures, in which a new abstract model could be seen as a major scientific breakthrough (just look at the list of Economics Nobel Prize winners since the 1960s, and read the citations for their selection). As for the teaching and training of aspiring economists, it was thought that they would now be truly “scientific”, since students would learn abstract and ideal frameworks suggested to be absolutely generative, in the sense that any form of real behavior could be measured against them and explained in their terms. No more nonsense, no more description – a normative theory such as Samuelson’s (sketching how ideal people should act) would henceforth be presented as a descriptive one as well (effectively documenting and explaining how they actually act) – an absolute theory, in other words. The shift away from “realism” – the aim of descriptive theories – towards ideal-typical modeling – the aim of normative theories – was seen as irrelevant. Economics became “scientific” as soon as it abandoned realism as an ambition.

It is interesting to see that in post-World War II academics, similar moves were made in different disciplines. Chomsky’s revolution in Linguistics (sparked by his Syntactic Structures, 1957) is an example. Whereas Linguistics until Chomsky was largely driven by descriptive aims and methods (go out and describe a real language), in which careful empirical description and comparison would ultimately lead to adequate generalization (Saussure’s Langue), Chomsky saw real Human language as propelled by an abstract formal and generative competence, describable as a finite set of abstract rules capable of generating every possible sentence in a language. This, too, was seen at the time as a major leap towards scientific maturity, and senior philosophers of science (already accustomed to seeing formalisms such as mathematical logic as the purest forms of meaning) argued that, with Chomsky, Linguistics had finally become a “science”. Linguists, from now on, would no longer do fieldwork – the interest in listening to what real people actually said was disqualified – but rely on “introspection”: one’s own linguistic intuitions were good enough as a base for doing “scientific” Linguistics. It took half a century of sociolinguistics to replace this withdrawal from realism with a renewed attention to actual variation and diversity in real language. Contemporary sociolinguistics, consequently, operates towards Linguistics very much like Thaler’s Behavioral Economics towards mainstream Economics: as a sustained attempt at making this “science” realistic again.

Similar stories can be told with respect to disciplines such as psychology and sociology, and later cognitive science, where the desire to become “scientific”, in the same era, led to a canonical “science” in which white-room experiments and quantifiable surveys replaced actual observation of situated social behavior and attention to what people really did and said about themselves and society.

There, too, the assumptions were the same: the actual social behavior of people is driven by a “deeper” abstract level of psychological, social and cognitive processes which can be captured and tested by detaching individuals from their real-life environments, submitting them to testing procedures that bear no connection whatsoever to any actual form of social and meaningful behavior. Thus, cognitive, psychic and emotional behavior can be accurately and “scientifically” studied by putting individuals into an MRI scanner, where they stay entirely immobile and cut off from any outside stimulus for 45 minutes. The outcomes of such procedures (quite paradoxically called “empirical” by practitioners) are presented, remarkably (or better, incredibly), as accurate accounts of real, situated and contextually sensitive social and mental activity. Abstract modeling of what we could call “Psychons” is, here as well, seen not just as a normative enterprise but as a descriptive one, predicting (with various degrees of accuracy) Human behavior. This study of Psychons, then, is the real “science”, often rhetorically opposed to and contrasted with the “storytelling” or “journalism” of research grounded in actual observation and description of Humans (turning one of the 20th century’s most influential intellectuals, Sigmund Freud, into a fiction writer). Senior sociologists and psychologists such as Herbert Blumer and Aaron Cicourel brought powerful (and never effectively refuted) methodological arguments against this shift away from realism and towards “science” – their arguments were dismissed as unhelpful.

So here we are: knowledge disciplines concerned with Man and society appear to be “scientific” only when they deliberately reject the challenge of realism – “reality talking back”, as Herbert Blumer famously called it – and engage in abstract formalization and modeling, regardless of whether or not such formal schemes and models stand the test of empirical reality checks. Such “science”, because it dismisses this kind of systematic reality check, also becomes incapable of describing change. Experiments need to be “repeatable” in order to be “scientific”, and consequently we continuously check and test things that have to remain stable in order to be scientifically testable. The fact that actual social processes and realities are not “experiments”, and display a strong tendency to change perpetually, precludes repeatability; such processes, consequently, can never be “scientifically” addressed. This feature – the bias towards stability and the incapability of addressing change – is a constant in all these “sciences”. And those who practice such “sciences” are actually proud of it. Strange, isn’t it?

We live by our mythologies, Roland Barthes famously said. One of the mythologies we live by is that of “science” being necessarily, because of its own criteria for validity, unrealistic, and therefore often outlandish and outrageous in its findings and conclusions. It would be good, therefore, to return to the old debates historically accompanying the shifts in the disciplines I mentioned here, and carefully examine the validity of critical arguments brought against these kinds of “science”. To the extent that people still believe that “re-search” means “looking again”, i.e. to be continuously critical of one’s own knowledge doctrines, this would be an eminently scientific practice.

PS (2017): Richard Thaler was awarded the Economics Nobel Prize in October 2017.



Rationalizing the unreasonable: there are no good academics in the EU


Jan Blommaert 

Attracting external funding has become, everywhere, one of the main priorities of academics, and writing funding applications has consequently also become one of their main tasks. The idea is “competitiveness”: quality will be evident when academics, individually or in teams, acquire funding after a strict and rigorously exclusive peer-review process. In addition, specific sources of funding are specified as benchmarks, suggesting that they are the “most competitive” ones, and therefore also the best and most objective indicators of quality: think of the ESRC in the UK or (the focus of this text) the European framework program Horizon 2020. In every form of performance management – for individual academics seeking promotion or tenure, for research teams, departments and entire universities – success in such benchmark external funding acquisition is given immense positive attention. Universities, consequently, impose quotas on their academic units – “you shall apply for at least five EU grants and obtain at least one this year!” – and turn grant acquisition into a compulsory, even key, activity of their staff. Professional grant writers and administrators are hired in academic departments or labs, and universities now employ EU-targeting lobbyists to “assist” and “facilitate” their bids for funding.

Well, my team just submitted a Horizon 2020 application last week, following a thematic call issued several months ago. In preparation for the application, we had set up an international consortium early on, did thorough content preparation, and one of our team members spent hundreds of hours and several international trips worth several thousands of Euros preparing the application.

After submitting, we heard that a total of 147 applications had been received by the EU. And that the EU will eventually grant 2 – two – projects. In a rough calculation, this means that the chance of success in this funding line is 1.3%; it also means that 98.7% of the applications – 145 of them, to be accurate – will be rejected. And here is the problem.
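The rough calculation above can be made precise; a quick sketch, using only the two figures reported here (147 applications, 2 grants; the text rounds the 1.36% success rate down to 1.3%):

```python
# Success and rejection rates for the Horizon 2020 call described above:
# 147 applications received, 2 projects to be funded.
applications = 147
awards = 2
rejected = applications - awards  # 145

success_rate = awards / applications * 100
rejection_rate = rejected / applications * 100

print(f"success:   {success_rate:.2f}%")    # 1.36%
print(f"rejection: {rejection_rate:.2f}%")  # 98.64%
```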

It would be interesting to see the grand total of labor and resources invested in the 145 applications calculated in Euros. My guess is that many millions’ worth of (usually) taxpayers’ money will have been used – wasted – in this massive and mass grantwriting effort. Several hundred researchers will have been involved, each spending dozens if not hundreds of their salaried working hours on preparing the application, and hundreds of university administrators will have been involved as well, also spending salaried working hours on the applications. These millions of Euros have not been used in creative and innovative research – they weren’t spent on doing fieldwork, experiments or tests, nor on writing papers and holding presentations in workshops and symposiums. They were spent on – nothing. For when a grant application is rejected, the time and energy invested in it evaporate, as if these hours of labor were never spent, and as if the academics who spent them had nothing else to do. Thus, while this Horizon 2020 funding line will disburse half a dozen million Euros to the two “winning” teams, it will have cost more millions to the EU academic community represented by the 145 others who were rejected. Money, thus, has been sucked out of an already fragile funding base for universities across the EU, in a vain attempt to “win” and “be competitive” – and therefore “good”.
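A deliberately crude back-of-the-envelope version of that guess can be sketched as follows; every parameter below is my own assumption, purely for illustration, not a figure reported by the EU or by any university:

```python
# Illustrative cost estimate for the 145 rejected applications.
# All parameters below are assumptions for the sake of the sketch,
# not figures from the call or from the institutions involved.
rejected = 145          # rejected applications (147 submitted, 2 funded)
people_per_bid = 5      # researchers + administrators per bid (assumed)
hours_per_person = 100  # salaried hours spent per person (assumed)
cost_per_hour = 50      # gross salary cost in EUR per hour (assumed)

wasted = rejected * people_per_bid * hours_per_person * cost_per_hour
print(f"~EUR {wasted:,} spent on rejected bids")  # ~EUR 3,625,000
```

Even with these modest assumed parameters, the estimate lands in the “many millions” range claimed above; more generous assumptions push it higher still.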

The attempt is futile, because if the rejection rate is 98.7%, the message given by the EU is, in effect, that almost all of the academic units participating in the funding call across the EU are not good enough. It is nonsense to try and argue that on grounds of pure academic quality just 1.3% will qualify, for the number of grants to be awarded is known before the peer review procedure takes place. In that sense, the peer review done by the EU panels is simply useless, for it has no impact on the number of awards granted by the EU – dozens of applicants will soon receive a letter stating that their project was evaluated as “excellent but not selected for funding”. The criteria determining the “selection for funding” are, needless to say, carefully guarded secrets, and not grounded in assessments of academic quality. The system of selection is, when all is said and done, simply irrational and unreasonable.

Still, and notwithstanding the previous remark, success or rejection is seen as an objective indicator of academic quality across the EU university system. By awarding just 1.3% of the applications, thus, a rather thoroughly absurd reality is shaped: almost 99% of the competing academics in the EU do not make the mark, and just 1.3% satisfy the EU benchmark. Now, we know that the 98.7% “losers” still have to compete in order to show that they are good enough; but when a selection bottleneck is that narrow, the effort, and the resources invested in it, are in effect simply wasted.

The paradox is clear: by going along with the stampede of competitive external funding acquisition, almost all universities across the EU will lose not just money, but extremely valuable research time for their staff. Little academic improvement will be achieved, and little progress in science, if doing actual research is replaced by writing grant proposals with an almost-zero chance of success. And as long as academics and academic units are told that success or failure in getting EU funding (with success rates such as the one mentioned here known in advance) is a criterion for determining their academic quality, gross injustice will be committed. People will be judged inadequate, mediocre or simply poor academics because they failed to get the benchmark funding – awarded, as we saw, on grounds that have little to do with academic quality assessments of applications. Heteronomy is the word that comes to mind here: academic practices and achievements are judged by means of non-academic standards, given a thin but hopelessly unconvincing veneer of “competitiveness”. And universities seeking to acquire external funding will be depleting their internal funding at extreme speed, the more they engage in this stampede for “competitiveness”.

I find this logic beyond comprehension. Those who rationalize the importance of acquiring benchmark external funding are rationalizing an unreasonable and heteronomic system that produces tremendous numbers of “losers” and a tiny number of “winners”. The losers can be put under increasing pressure to show that they are competitive – increasingly risking their careers and spending funds better used on research and other intellectual activities.

To sum up: if the number of grants to be awarded is established before the peer-review process, this kind of “competitive” benchmark funding is not competitive at all, and a benchmark for nothing at all – least of all for academic quality. If, however, results in this weird game are maintained as serious and consequential criteria for assessing academic quality, then the conclusion is that there are no good academics in Europe – 99% of them will fail to get ratified as good enough. And these 99% will have to spend significant amounts of taxpayers’ money to eventually prove – what?

The entire thing really, seriously, begins to look and feel like buying lottery tickets or betting on horses: one spends money hoping to win some – and at moments of lucidity, one is aware of the fact that the net outcome will be loss, not gain. In the meantime, beautiful arias are sung about the extreme importance of research and innovation by the EU, by its member states, and by its universities. The question, of course, is how such a great cause is served by the present system of benchmark external funding acquisition. The money spent on it, I would say, would be better spent on … research and innovation proper.


Indonesia, its youth and “light communities”


Jan Blommaert

Comments on the panel “Margins, hubs and peripheries in decentralizing Indonesia” (part 1), Conference on ‘The Sociolinguistics of Globalization”, Hong Kong University 5 June 2015. Panel convenor: Zane Goebel. Line-up: Michael Ewing, Dwi Noverini Djenar, Lauren Zentz, Meinami Susilowati.

All of the papers in this part of the panel focused on youth language and the sometimes problematic ways such “new” forms of speech clash with strong nation-state institutional cultures of standardization. Over and beyond this general focus, three points merit deeper engagement; let me review them briefly.

1. Youth language, universally, is an example of how societies (in spite of often very strong homogeneistic self-imaginations) in effect contain numerous “niches” developing at different speeds, occupying specific spatiotemporal arenas, and operating along specific normative frameworks projected onto behavioral scripts in which specific forms of language are part of what counts as accepted/acceptable behavior. It was Cicourel who stated that what people effectively do when they do the work of interpretation is to try and make sense of situations by reading social structure into them. I shall have more to say on social structure in a moment, but the point can already be made that social structure is manifestly plural: different structures interact and intersect, triggering often unbalanced confrontations of normative frames – what is “meaningful” and therefore socially and politically expectable – with often unexpected outcomes.

2. Furthermore, the papers all showed how such confrontations of different normative frames represent the experience of change. Indonesia, like any other place on earth, changes fast as an effect of globalization (and, in this case, also because of momentous national political shifts), and the on-the-ground experience of such change often takes the shape of conflictual discourses of normativity (again projected, concretely, into behavioral scripts encompassing specific forms of language usage). These normative frames provide a sense of “order” (recall Cicourel’s idea about understanding as reading social structure into situations), and it is the clash of different “orders” that creates the sense of insecurity, anguish and destabilization we often see and encounter in data on people’s actual social experiences. In our own jargon, it is the immersion in a polycentric social environment that constitutes the baseline experience of macro-changes triggered by globalization. It is the encounter with not one single, transparent and hegemonic social structure, but with multiple structures in competition over spaces, membership and socially ratified meaningfulness, with the potentially threatening effect of restratification, that constitutes the lived experience of “change” for many.

3. But even more importantly, what are these contrasting and conflicting “orders” like? In order to answer this question, we need a distinction between nation-state and globalized forms and representations of “community”. Remember that, in the tradition of Durkheim, Weber and Parsons, the nation state was typically a local, “thick” community – a community in which people shared vast amounts of resources through common backgrounds, institutional governmentality and socialization.

The papers in this panel, however, invariably showed “light” communities, often tied together by shared “niched” practices (Goffman’s “Encounters” can also inspire us here). These light communities, remarkably, are local – see the emphasis on locally grounded youth vernaculars in the papers here – but translocally infused and framed, which is why they are often seen and decried as “westernization” even while strictly local vernaculars and indexicals are used. The new globalized order, thus, with its intense physical and virtual mobilities, appears to stimulate and even privilege the formation of “light”, local communities whose orientation is not towards the nation-state but towards ideals and imageries drawn from the wider world, and involving specific spaces of deployment, specific actors and specific codes of meaningful practice. To return for a moment to the issue of structure: the “light” communities represent a “light”, flexible, volatile and fast-moving structure, interacting with and often only perceptible from within “thick” and slower-moving structures. Our disciplinary traditions have consistently emphasized the “thick” structures, while “light” ones tended to be dismissed as insignificant or superficial.

I’m afraid we can’t afford this any longer. The tremendous importance of “light” communities, and the fact that for those inhabiting them they often experientially, emotionally and socially prevail upon the traditional “thick” communities of family, religion, ethnicity or nationality, is perhaps the most pressing theoretical and descriptive issue in the study of globalization nowadays. From practices and their performers and performing conditions, over the kinds of communities they generate, to specific modes of social structure they propel: this to me sounds like a research program of considerable interest. The papers in this session provide excellent and substantial food for thought in this direction.

Links

The conference program, including the panel lineup and the abstracts, can be accessed via http://programme.exordo.com/slxg2015/

https://www.academia.edu/10789675/Commentary_Culture_in_superdiversity

https://www.academia.edu/8403164/Conviviality_and_collectives_on_social_media_Virality_memes_and_new_social_structures_Varis_and_Blommaert_


Language, behavioral scripts, and valuation: Comments on “Transnationalizing Chineseness”


Jan Blommaert

Commentary, panel on “Transnationalizing Chineseness: language, mobility and diversity” (organizers: Shuang Gao & Xuan Wang; line-up: Xiaoxiao Chen, Shuang Gao, Hua Nie, Nkululeko Mabandla & Ana Deumert, Han Huamei – discussants Lionel Wee & Jan Blommaert). International Conference “The Sociolinguistics of Globalization: (de)centring and (de)standardization”, Hong Kong University, 4 June 2015.

Let me first point out that this panel was organized by two relatively junior scholars – Shuang Gao and Xuan Wang – and that they managed to bring together an exceptionally engaging and stimulating panel. They deserve a huge accolade for that. The points raised by them and by the authors of the papers in this panel are substantial and none of the participants can be accused of shoddy work. Neither can they be accused of irrelevance: the transformation of “Chineseness” as an effect of the global rise to economic and political prominence of the People’s Republic is a true globalization phenomenon of colossal scale, and each of the papers showed us how such terrifically large-scale transformations set down, so to speak, in actual small-scale situations, places and moments. The picture I added as a caption to this text shows that it also occurs in my own neighborhood in inner-city Antwerp, where a local Cantonese restaurant-owner, member of an older diasporic generation, addresses potential customers for renting a flat, and does so in an unstable hybrid of Cantonese and Mandarin and of simplified Chinese script and traditional character script. The global shift in “who is the Chinese in the world” forces him to readjust not only his linguistic and literacy repertoire, but to change his economic orientation, towards a new and very large community of immigrants from Mainland China – a population virtually unknown in most places in the world until the 1990s.

Such immigrants were also invisible in South Africa and Namibia (papers by Mabandla & Deumert and Han, respectively), and their contemporary presence in European societies can bump into uneasy and unpleasant walls of anachronistic stereotyping, as Hua Nie’s paper on ethnic stereotyping in “Holland’s Got Talent” so painfully showed. International tourists in China were also a rare commodity, and Chen as well as Gao document the ways in which Chinese state-governed media respond to the growing presence and politico-economic importance of the scores of international tourists presently flying into Chinese airports. For such tourists, the reassuring message is provided that the Chinese language is dauntingly difficult but nothing to worry about – multilingual celebrities such as Mama Moon (Gao) and travel writers in mainstream Chinese media (Chen) reassure the foreigners that they will have an easy time communicating with the Chinese. That, in the process, one of the world’s largest languages is made near-invisible is a price willingly paid, remarkably, by a reinvigorated Chinese nationalism.

In what follows, I will briefly review some general and analytical reflections prompted by these papers; together, they can perhaps serve as a heuristic for further empirical research; for me, they testify to the extent to which I was intrigued and captured by the exceptional scholarship presented in this panel.

1. An initial and seemingly trivial point is that the papers all addressed change, and the scale of the particular case of change – a global repositioning of “Chineseness” – has been mentioned above. In my view, studying globalization amounts to studying change – and not “flat”, linear change, but a terrifically complex array of different forms and modes of change, operating at different speeds, with different objects and instruments, mobilizing different forms of actors and resources, and with different (often unpredictable and nonlinear) outcomes. Thus put, the point that these papers address change loses its trivial ring, because we are traditionally not well equipped to address such complex and dynamic patterns of non-stability – our structuralist and synchronic toolkit has prepared us for precisely the opposite. As we shall see further, one important (and apparently constant) feature of such patterns of change is restratification: the “reordering” and “re-ranking” of cultural and symbolic capital such as language and forms of identity. The image in the caption shows precisely such restratification: the restaurant owner’s Cantonese and traditional script – until recently unchallenged in their hegemony as the linguistic emblems of “Chinese in the world” – are rapidly being overtaken by the codes of the People’s Republic, Putonghua and simplified script, forcing the author of the little advertisement to un-learn his usual codes and re-learn, problematically, the new codes. Almost all the papers in this panel showed aspects of such restratifications, in which people and their identity codes assume new, often contested, positions in the symbolic hierarchies that direct social life. I shall return to this issue below.

So we’re not really good at studying change; but the papers gave us a couple of perhaps useful leads into a productive way of addressing it. I picked up, in particular, three points that are best seen as three different aspects of one bundle – an “object of change”, so to speak, for a sociolinguistics of globalization. Let me review them one by one.

2. The first point, and again seemingly evident, is that we study the ways in which processes of change affect and operate on and in language, in processes of meaningful social interaction and the resources used for them. We see that language, for instance in the presentation of Mandarin in the China Daily (Chen) becomes emblematic – no longer a “linguistic” thing strictly speaking (not used to produce denotational meaning), but a cultural thing, even a political and moralized one, as Gao’s discussion of the multilingual celebrities in the Chinese media showed. Languages, different degrees of proficiency in them, and specific ways of using them, become emblems of legitimate belonging, exemplary citizenship, personal character (or lack thereof, as in the “devious” Chinese traders in Northern Namibia discussed by Huamei Han). This works in several directions: constructively as a way of identifying oneself, but also destructively as a way of stigmatizing the other and removing the social capital potentially attributed to the other’s linguistic resources – as when a jury member in Holland’s Got Talent imitates the “l-r confusion” stereotypically attributed to Chinese immigrants (“number 39 with lice?” instead of “with rice?”). Small, almost homeopathic specks of performed or displayed language can be, and are, turned into powerful identifiers (and stratifiers) in the sociolinguistic world of globalization.

3. But, importantly, language rarely occurs alone. What we have seen in the papers (and in other recent studies), is how language almost always comes with a sort of indexical “envelope”, so to speak, of behavioral scripts. Such scripts can best be described as imaginable situations in marked (i.e. nonrandom) spacetime, provoking enregistered (and therefore normative, expected and presupposed) modes of behavior. The little bits of Chinese provided by the travel writers in Chen’s paper occurred in discussions of cuisine, in itself marked as “exotic” but presented as part of the “experience” of heritage tourism in China.

To unpack the definition somewhat: the behavioral scripts assume the form of actual real-life situations which we can somehow imagine (e.g. working for a Chinese employer in a Namibian shop, Huamei Han; or Chinese people “typically” being involved in small-scale but transnational trading or catering, Mabandla & Deumert and Nie Hua, respectively), and onto which we project normative patterns of behavior and – thence – templates of character and identity (the Chinese employer being harsh and demanding, the Chinese tradesmen being relentlessly competitive, the restaurant owner speaking nonnative varieties of language, etc.). Note that I mentioned “marked spacetime” as part of the definition: a crucial element in all of this is the actual spatial and temporal frame in which these behaviors are suggested to occur, or are preferred to occur: they are, in that sense, fundamentally chronotopic.

Language is mixed into these behavioral scripts as, sometimes, the key emblem that points to and invokes – indexes – the entire script and its normative dimensions. So language is rarely alone, and even when it operates alone, it often emblematizes, as a metonym, the broader package of behavioral expectations and identity templates. As linguists, we tend to overemphasize the former and downplay the latter, while phenomenally – as phenomena – they usually co-occur.

4. The normative dimensions bring me to the final point. We already have language (in particular modes of occurrence) and the wider behavioral scripts within which it appears and operates. The third element is normative judgment, valuation. The two former features almost always appear wrapped in an evaluative frame – understood here in its Bakhtinian sense, as the social value attributed to “meaning” – and when we address change (here we come to our point of departure), it is the evaluative frames that might be of paramount concern. For all the papers (and other studies) showed that the global transformation of “Chineseness” (as with other forms of large-scale globalization-induced transformation) collapsed into actual real-life situations in which we saw an unfinished struggle between old and new evaluative frames. The jury member in Holland’s Got Talent projected the “old” evaluative frame of “Chineseness” onto the candidate – a “new” PhD student and accomplished opera singer from Mainland China; and Nie Hua’s data on internet debates on the incident showed partly coordinated but also partly very different orientations in the “old” Chinese community in the Netherlands and the “new” community. Similarly, Mabandla & Deumert’s excellent historical overview of Chinese diasporas in South Africa showed how the “new” immigrants partly inscribe themselves into a slow and enduring structure of economic activity – small-scale trading and catering – but also move into more hybrid and dynamic forms of “entanglement” with the present conditions of economic, social and cultural life.

Thus, all the studies presented in this panel showed processes of social, cultural and political identity-formation developing in a polycentric environment in which various normative “cores” or “foci” could be identified – behavioral scripts, in short – but not necessarily in an equivalent way. Some behavioral scripts and their evaluative frames move slower than others; they affect other places and other social roles, and produce other sociocultural and political effects: different spacetime frames, activity modes, membership criteria, and forms of value attribution clashed in often highly uncomfortable and sometimes densely conflictual actual situations. And this polycentric arena, I would argue, is the empirical engine of what we can observe in the way of change. Note, in passing, that when Nik Coupland emphasized reflexivity as the condition of globalization in his plenary, I assume that he had this normative and evaluative dimension firmly in mind; from what I see in our field, reflexivity in actual fact looks more like a highly effective sort of “pricing strategy”, something with a real bite in terms of power effects, rather than like a lofty meta-concern without social consequences. The same obviously counts for Penny Eckert’s “indexical fields”, which should also be understood as fields in which powerful evaluative effects prevail.

In conclusion: I have suggested three interlocked aspects of an object that might be useful in studying change in a sociolinguistics of globalization: the specific ways in which change settles down in and on language, the “packaging” of language in broader behavioral scripts, and the normative encoding and evaluation of this package in actual social practices developing in necessarily polycentric social arenas. Together, I do hope, they provide an empirical roadmap for an adequate study of phenomena of tremendous relevance and impact on our present world and the lives we lead therein.

Link:

The conference program, including the panel lineup and the abstracts, can be accessed via http://programme.exordo.com/slxg2015/

https://www.academia.edu/10086732/Chronotopes_scale_and_complexity_in_the_study_of_language_in_society
