Be careful when researching antisemitism

Oof, this is another post about a post and its comments. What am I doing with my life, what a loser. But it’s so consuming.

So, my message is: be careful, because the topic is charged. Some people don’t want antisemitism to be recognised. A few sickos actually want it to exist. Some people set a very high bar of evidence. Some people are actively looking for evidence that Jews “cry wolf”. Some Jews and anti-racists are scared and uncritical.

For example, we need objective measurements about whether or not antisemitism constitutes a threat, and if it does we need evidence to convince policy-makers to worry about it. There is some stuff around which isn’t quite all it could be. For example, I would have been more satisfied with Joel Kotek’s book Cartoons and Extremism: Israel and the Jews in Arab and Western Media if it had included a comparison between the way Jews are dealt with in political cartoons and the way other social groups are dealt with by the same, or different, political cartoonists. Reading it left me with questions – are any other groups depicted so enduringly with tentacles, or brooding over the earth, or as vampires? I suspect not, but it would be helpful to know because the strategies for dealing with particular hatred of one social group would be different from those dealing with hatred aimed at minorities with less discrimination (no pun intended).

In The Boston Review, Malhotra and Margalit are disturbed by inadvertent stereotyping of Jews, and I think that although their study is valuable, they made a similar omission.

They recount an experiment to try to find out more about whether antisemitic attitudes shape individuals’ preferred measures to combat the financial crisis. After asking a question about how much to blame the Jews were for the financial crisis, they moved on to the next part:

“Participants in a national survey were randomly assigned to one of three groups. All three groups were prompted with a one-paragraph news report that briefly described the Madoff scandal. The text was the same for all three groups, except for two small differences: the first group was told that Bernard Madoff is an “American investor” who contributed to “educational charities,” the second group was told that Madoff is a “Jewish-American investor” who contributed to “educational charities,” and the third group was told that Madoff is an “American investor” who contributed to “Jewish educational charities.” In other words, group one did not receive any information about Madoff’s Jewish ties; group two was told explicitly that Madoff is Jewish; and group three received implicit information about Madoff’s religious affiliation. In a follow-up question, participants were asked for their views about providing government tax breaks to big business in order to spur job creation.”

Among non-Jewish respondents the variation was statistically significant, and you can probably guess in which direction – read on.

This piece proved highly controversial on Crooked Timber. Indeed there was no peer-reviewed research report, the methodology was incompletely set out, and the investigators didn’t ask about other ethnicities – for example, there is also a tendency within US society to blame African American borrowers with sub-prime mortgages for the financial crisis, but this was not accommodated in the questionnaire. So it is not possible from this research to form an impression about whether blame is mono-causal, whether one group is blamed more than another, whether political affiliation is associated with blame of one or other ethnicity, or whether blaming one group makes you more likely to blame another. I suppose my concerns are about demonstrating the specificity (singling-out) or otherwise of antisemitism. We also don’t know the response rate, the recruitment method, or sufficient demographic data.
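To be concrete about what “statistically significant” would involve here: the sketch below is entirely hypothetical – invented counts and condition sizes, not their data – but it shows the sort of two-proportion test that would back up a claim like theirs, and exactly the detail a published report would have to state.

```python
import math

def two_proportion_z(support_a, n_a, support_b, n_b):
    """Two-proportion z-test: is the support rate in group A
    significantly different from the rate in group B?"""
    p_a, p_b = support_a / n_a, support_b / n_b
    pooled = (support_a + support_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, abs(z) > 1.96  # significant at the 5% level, two-tailed

# Hypothetical counts: 300 respondents per condition.
# Condition A: "American investor"; condition B: "Jewish-American investor".
z, significant = two_proportion_z(120, 300, 90, 300)
print(f"z = {z:.2f}, significant: {significant}")  # → z = 2.57, significant: True
```

Condition sizes, response rates and the test statistic are precisely what the Crooked Timber thread wanted spelled out.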

Crooked Timber queried the methodology and then posted the authors’ reply. I gingerly ventured there (not a regular – it’s very clever but I don’t know what it’s for) and found, below the piece, depressing lethargy about antisemitism and a deep interest in burrowing into the methodological minutiae while ignoring the big picture. There was a lot of discussion about the formulation “the Jews”, and some about the mention of Jews at all:

“the only possible “answer” to a question about “the Jews” is f—off.

That’s exactly what I thought. How about adding an option saying “I don’t like loaded questions designed to make me look like an anti-semite” in the next survey.”

Alternative methodologies for surveying the worrying area of antisemitism were overwhelmed by cynicism and methodological head-shaking. I’d be inclined to pick my response items from the media and bury ‘the Jews’ in amongst them to avoid salience bias. Even so, if bias was introduced by the question format then some respondents were readily susceptible to that bias – you can’t elicit stuff that isn’t at least latent. As Malhotra points out to a commenter who complained that asking questions which implicate ethnic groups in unfavourable circumstances introduces bias:
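To make that burying concrete, here is a minimal sketch – the item wordings are my own invention, not from any real instrument – of mixing the sensitive item in among blame targets taken from media coverage and randomising the order per respondent:

```python
import random

# Hypothetical response items drawn from media coverage of the crisis;
# the sensitive item is mixed in rather than asked about on its own.
items = [
    "mortgage lenders",
    "credit rating agencies",
    "government regulators",
    "sub-prime borrowers",
    "the Jews",
    "hedge funds",
]

def questionnaire_for(respondent_id):
    """Return the items in a per-respondent random order, so any
    order effects are spread evenly across the sample."""
    rng = random.Random(respondent_id)  # reproducible per respondent
    order = items[:]
    rng.shuffle(order)
    return order

print(questionnaire_for(1))
```

Seeding on the respondent ID keeps the design auditable: anyone re-running the analysis can reconstruct exactly what order each respondent saw.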

“Henri, you might be right about what we would get if we asked that question, but surely it would reveal something disturbing about the survey respondents? What would you have said in response to the questions? I assume you would have said ‘no’ to both; if you hadn’t, I would think less of you. If all that is right, then I fail to see how this is garbage.”

Safe to say that their work did not go down at all well on Crooked Timber, with one commenter suggesting that some others coordinate the task of going round the blogs raising the problems with the findings. I don’t know this person, but it seemed very important to him either that bad science should not stick, or that nobody should think there is more antisemitism than he has decided exists. And I think it’s the latter – he’d be busy elsewhere with higher-stakes stuff otherwise. There’s more weirdness.

Crooked Timber commenters are far more sophisticated in their diminishment of antisemitism than I’m used to. CounterPunch contributor Tim Wilkinson devoted himself to discrediting Malhotra’s and Margalit’s findings, penning a vignette in which the researchers had an agenda to foment antisemitism where there was none. Somebody else cast aspersions on the motives of the researchers. I was reading for a long time before somebody said they were uncomfortable with the slurs, and somebody else asked whether the deep critics of Malhotra and Margalit had gone equally hard at Mearsheimer and Walt over their far flakier non-peer-reviewed thesis. I find Crooked Timber depressing: it’s so highbrow and sophisticated, but you get the same quota of crap below the post – you just have to spend more time deciphering it while the chill of the place eats into your bones.

(In contrast there’s a glow, warmth and rollicking that emanates from Harry’s Place like the brothel in the model village in Beetlejuice.)

What we know from Malhotra and Margalit is that a mention of a corrupt businessman’s Jewish ethnicity was associated with a significant drop in support for tax breaks for businesses, and this was enough to lead Malhotra and Margalit to feel disturbed, and make the modest conclusion that:

“The media ought to bear these findings in mind in their coverage of financial scandals such as the Madoff scam. In most cases, religious and ethnic affiliations have nothing to do with the subject at hand, and such references, explicit or implied, ought, then, to be avoided.”

My “be careful” is a caution to researchers of antisemitism to watch their backs as well as their methods. There is certainly a need for this kind of research, and Malhotra and Margalit deserve praise for undertaking it. I look forward to the next iteration. Just as a final thought, Malhotra is right to predict the end of peer review as we know it, but there’s a lot of leeway between a journalistic piece and a peer-reviewed one. I would really like to see the authors modify the study, conduct it again, produce the report and data as a wiki (non-editable), let peers review it on the Web, and adapt it accordingly. That way Neil Malhotra will never again have to defend being topical. Although, if they were British, this would butter them no parsnips in the confounded, evil Research Assessment Exercise.

Milgram’s findings reproduced

Stanley Milgram was the Yale psychologist who found that all but a few of the participants in his 1960s experiment inflicted what they believed to be painful punishment on other human beings when ordered to do so by an authority figure. His were seminal studies of social influence and its effects on behaviour.

Some people (I can’t remember who) have raised the possibility of methodological flaws around recruitment for these and subsequent studies. Possibly the newspaper ads, emails, posters and so on attracted people who were not after all ordinary and unremarkable, but in fact the type of people who would cooperate with investigators in any experiment.

The reporting of these experiments is so gappy, and research ethics have evolved so much, that I expected to keep this hope alive for some time to come.

However, today we learn (a year or so after it happened) that Jerry Burger and colleagues reproduced Milgram’s findings, as reported in the BBC, Time, and the Mail. I haven’t read the paper, so I don’t know about recruitment, or whether the participants were aware of Milgram’s famous work. The research ethics criteria for conducting the study involved many measures to safeguard the wellbeing of participants – they seem like an exceptionally sane group of people – but what drew them to participate we don’t know.

Setting out to investigate not obedience, as Milgram did, but the extent to which virtual characters can substitute for real humans in social situations, Slater and colleagues reproduced Milgram’s findings with a virtual female character as the learner back in 2006 at UCL. They told recruits that they wanted to find out whether discomfort helped the virtual character learn to associate words. Administering electric shocks to the virtual character – seen and heard by two-thirds of the participants and animated, as you can see from the vids, to seem very much present – aroused all sorts of sympathetic physiological responses in the participants, some of whom withdrew from the study and others of whom attempted to interact with her in unscripted ways.

“The Learner had a quite realistic face, with eye movements and facial expressions; she visibly breathed, spoke, and appeared to respond with pain to the ‘electric shocks’. Not only that but she seemed to be aware of the presence of the participant by gazing at him or her, and also of the experimenter – even answering him back at one point (“I don’t want to continue – don’t listen to him!”). Finally, of course, the electric shocks and resulting expressions of discomfort were clearly caused by the actions of the participants.”

There was a fair bit of early withdrawal in this one, but withdrawal wasn’t reliably predicted by displays of empathy, which was interesting. Although they were not studying obedience, the investigators comment:

“We argue that whether participants complied because of ‘obedience to authority’ or politeness, or respect for expertise does not really matter. The fact is that they continued to carry out a task that they found to be unpleasant, when there was no reason for them to do so. Unlike the situation in, for example, the military, there were no real negative consequences that would follow from withdrawal – indeed participants had been advised that they were free to withdraw at any time without giving reasons. Hence, our experiment shows that it is possible to set up a situation in virtual reality where people will comply with requests to follow instructions that appear to cause pain to another entity thus causing discomfort to themselves. Explicitly they know that there is no pain, but it may be that the totality of their perceptions in that situation results in an implicit knowledge that indeed their actions are causing another entity to suffer. This idea fits with the evidence that participants in the VC tended to wait a relatively long time before giving the shocks after the Learner had stopped responding. From the point of view of their explicit knowledge waiting made no sense, but it did make sense at the implicit level.”

It’s also kind of comforting to separate obedience from willingness to enact violence – also based on Milgram’s work, there was a study (sorry, no ref – I learnt about it in a documentary about our collective propensity to fascism) about willingness to give up seats on public transport when the person making the request on behalf of the (perfectly healthy-looking) person who wanted the seat was wearing a uniform. In that case the participants were randomly selected, but they were almost all prepared to give up their seat.

So I suppose we should always ask ourselves, “Why am I doing this?” and then, if we’re not satisfied with our own answer, ask the person making the request the same question. And if we’re not satisfied with their answer, then we change our behaviour accordingly. And either way, carry on examining ourselves (without making a sport of it) in case we’re ethically complacent. Which it is very easy to be. Our own conscience – our guiding light – like any lighthouse requires regular, careful cleaning and can go up for sale. But it’s definitely all we have.

It’s interesting about the participants in Slater’s study who refused to even go through the motions. Sometimes conscience is more about ‘us’ – our need to cohere morally to our own satisfaction, and how we interpret this – than about ‘them’.

How to corrupt your survey data (1)

The Guardian’s Polly Curtis on the National Student Survey.

“The latest controversy over the NSS follows reports that a lecturer at Kingston University told students: “If Kingston comes bottom [in the NSS], the bottom line is that no one is going to want to employ you because they’ll think your degree is shit.” His remarks to a class were recorded and made available on the internet.”

Clearly what we need now is a national student survey to ascertain how many students have come under this kind of pressure.

Vinegar, brown paper and an inadequate reading of Hanan Alexander

I found an online speed reading programme, called spreeder – http://spreeder.com – which I used to read the following (at 350 wpm, font-size=30, chunking and punctuation pauses): Alexander, H (2006) A view from somewhere: explaining the paradigms of educational research. Journal of Philosophy of Education;40:205-221.
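Those settings map onto a simple timing calculation. This is my own reconstruction of how such a reader might schedule chunks, not spreeder’s actual code: split the text into chunks, show each chunk for its words’ worth of time at the chosen wpm, and linger a little on punctuation.

```python
def rsvp_schedule(text, wpm=350, chunk_size=2, punct_pause=0.15):
    """Yield (chunk, seconds) pairs for rapid serial visual
    presentation at the given words-per-minute rate."""
    words = text.split()
    seconds_per_word = 60.0 / wpm
    for i in range(0, len(words), chunk_size):
        chunk_words = words[i:i + chunk_size]
        chunk = " ".join(chunk_words)
        delay = len(chunk_words) * seconds_per_word
        if chunk[-1] in ".,;:!?":  # punctuation pause at chunk end
            delay += punct_pause
        yield chunk, delay

for chunk, secs in rsvp_schedule("A view from somewhere, explaining paradigms."):
    print(f"{chunk!r} for {secs:.2f}s")
```

At 350 wpm each word gets roughly 0.17 seconds, so a two-word chunk displays for about a third of a second before the pause kicks in.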

The paper discusses the methodology wars in educational research, which have seen proponents of positivist methods pitted against those of constructivist ones. Alexander spends a lot of time addressing the opposition (which came up for me last week in Peter Boghossian’s basic paper on whether either constructivists or behaviourists could as such deploy Socratic dialogue in their teaching) – first on its own terms, and then questioning the validity of the opposition with reference to John Dewey. Aiming to avoid relativism or absolutism, he nudges towards what he calls “transcendental pragmatism” – educational research should seek to illustrate and provide insight according to its stated values, and avoid trying to generalise (or as he sees it, control). In this type of research, whether it be phenomenology, ethnography or whatever, the usual measures of validity and reliability do not apply – in fact they signal a queasiness about abandoning positivism which is at odds with the epistemological bases of a constructivist approach. Instead, in a way which reminds me of David Silverman’s call for “accurate accounting”, Alexander calls for lawyerly conduct in setting out, defending and substantiating a case. He ends with a caution to all educational researchers – we can only know very little. Very good.
Excerpt:

However, in contrast to Plato (1987) for whom illustrations facilitate communication of absolute truth, the position emerging here suggests that concrete cases are a good but imperfect means to articulate very limited understandings of what we can only assume lies beyond our complete grasp (Alexander, 2004). Truth is conceived in this view not as correspondence to objective reality, or as serving some theoretical function or purpose, but in the way descriptions embody and enable us to grasp the nuanced and dynamic form of transcendent ideals (Langer, 1954), the capacity of texts, symbols and stories to capture the contours of feelings in forms. Viewed from this perspective, even a large, random statistical sample is but an extended and elaborate case that outlines the conceptual shape of experiences common to a significant population of people (Feyerabend, 1996).

and from his conclusion:

Finally, the logic of illustration in educational research precedes the logic of generalisation. We come to understand ideals first through detailed examples of concrete cases, and only secondarily by means of abstract and universal covering laws. We have yet to articulate adequate canons of rigor to govern this logic. But it is undoubtedly a category mistake of the first order to model these canons on weak forms of empirical standards such as reliability, validity, and generalisability. D. C. Phillips’ (2005) reference to the legal analogy is especially apt in this connection. The task of a legal advocate, he reminds us, is to present the ‘facts’ of a case to those who sit in judgment with sufficient corroborative evidence as to warrant their assertability, a term he borrows from Dewey (1938). This evidence might come from a variety of witnesses, descriptions, documents, and measures. Yet a case based on ‘warranted facts’ will be meaningless without a strong argument concerning application and interpretation of the law in relation to those ‘facts’. If there is an ideal form of inquiry to inform and enhance educational policy and practice, in other words, it is more likely to resemble the practice of law than the discovery of statistical laws. Law involves the prescription of norms that actors must learn to follow based on proper reasoning, whereas statistical laws state regularities that control behavior regardless of human choices.

This account may be a disappointment to those whose preferred epistemology seeks control on the basis of explanation and prediction. But the fact that we can sometimes predict does not authorise us to control, and in all events we control much less than the positivists may have once supposed. It follows that we should be wary about what we think inquiry enables us to predict, since what we take to be true or right today may turn out to false or troubled tomorrow. Inquiry at its best endows us with insights to better control ourselves, not generalisations to more efficiently dominate others; and the surest path to self-governance lies in reaffirming Socrates realisation that genuine wisdom begins with the recognition of how little we really know.