Hesse-Biber, S. (2015). Mixed methods research: The "thing-ness" problem. Qualitative Health Research, 25(6), 775-788.
Welcome to the new academic year. I can't help feeling that advocates of mixed methodology remind me of the United Reformed Church. Set up to transcend the binary between Catholic and Protestant, it simply added another division rather than achieving a conclusive reconciliation. Mixed methodologists might reasonably claim to be pragmatists rising above the affray of the paradigm wars. This stance does not in itself mean that the deep differences surrounding methods and methodologies are resolved. It means that a new dimension has been added to the debate. Or rather, that some quite traditional ways of working have been rebranded and added to the debate.
Martyn Hammersley is proud of his capacity to irritate and unsettle.
The rather unexpected academic spat between him and Delamont et al. (2010) across successive issues of Qualitative Research illustrates the caustic exchanges this playful devil's advocacy can generate.
While I don't often agree with Hammersley (2008), I do like the insight he inspires. It is unsettling because Hammersley is an ethnographer. Surely he appreciates that positivism is dead? It has simply not survived the critical onslaught of post-positivist methodology, and a new orthodoxy has taken hold. The qualitative researcher has won and nobody counts anything anymore. This is a caricature: after all, what is the current policy inclination for 'what works' in education if not positivism? And surely one way of interpreting Abbott (2001) is to say that methods associated with qualitative research can be squeezed into a quantitative methodological frame. I'm inclined to wonder whether Hammersley is really, after all, a shy positivist in qualitative clothing.
But, despite being interested in his work, I am not a fan … Questioning Qualitative Inquiry was an interesting, illuminating, unsettling and enjoyable (re)read. Not only that, in bits it was funny. His introduction quotes a series of letters written by the American comedian Woody Allen. In what he calls a fantasy, Allen writes a piece entitled 'If the Impressionists Had Been Dentists': a series of imaginary letters from van Gogh to his brother Theo in which he recounts his troubles as a dentist. The exchange opens with this letter:
Will life never treat me decently? I am wracked by despair! My head is pounding. Mrs Sol Schwimmer is suing me because I made her bridge as I felt it and not to fit her ridiculous mouth. That’s right! I can’t work to order like a common tradesman. I decided her bridge should be enormous and billowing and wild, explosive teeth flaring up in every direction like fire! Now she is upset because it won’t fit in her mouth! She is so bourgeois and stupid, I want to smash her. I tried forcing the false plate in but it sticks out like a star burst chandelier. Still, I find it beautiful. She claims she can’t chew! What do I care whether she can chew or not! Theo, I can’t go on like this much longer! I asked Cezanne if he would share an office with me but he is old and infirm and unable to hold the instruments and they must be tied to his wrists but then he lacks accuracy and once inside a mouth, he knocks out more teeth than he saves. What to do?
The humour of this exchange emerges from the transposition of an attitude or temperament of the artist onto a patently functional activity: dentistry.
The underlying point Hammersley is making is connected to the role, purpose and audience of educational research. He is an ethnographer (with strong empirical leanings) but contrasts a ‘scientific’ approach to research with the advocacy of ‘Dadaist’ qualitative research.
I can't allow Hammersley to stand without challenge. I am, and remain, of the view that he has openly acknowledged fantasies of control and limitation (of what counts as quality in educational research). I suspect he would, by some unintentional fluke of coincidence, restrict entry to the academic field to those tightly bound by adherence to a set of rules in his own likeness. I am so much more interested in the wild profusion. My hunch is that we are as far away from anything resembling definitive answers as we are from defining appropriate questions, or, in the absence of questions, points of exploration.
Lather offers an amusing take on this. Hammersley addresses the question: if the researcher were an artist, what sort of artist would she be? Lather continues in this vein: if the researcher were a drink, what sort of drink would she be? And if not a drink, then a game; if not a game, a celebrated figure …
The researcher in each paradigm would drink:
Positivist: Scotch on the rocks (conventional; hard liquor for hard science; hegemonic)
Interpretivist: California white wine (natural, convivial, social and interactive)
Critical theorist: Vodka (the revolutionary's drink; fiery and subversive)
Deconstructivist: Zima (defies categorisation: neither wine, nor beer, nor hard liquor; trendy)
Abbott, A. (2001). Chaos of disciplines. University of Chicago Press.
Delamont, S., Atkinson, P., Smith, R., da Costa, L., Hillyard, S., & Pilgrim, A. N. (2010). Review symposium: Martyn Hammersley, Questioning qualitative inquiry. Qualitative Research, 10(6), 749-758.
Hammersley, M. (2008). Questioning qualitative inquiry: Critical essays. Sage.
When Martyn Hammersley describes himself as a troublemaker, we should believe him. So troubling is his approach that he seems to want an almost positivist approach to qualitative research. This is, of course, an exaggeration so extreme that it almost misrepresents him. Nonetheless, he certainly leaves me doubtful about the value of qualitative work, while not precisely encouraging me to change direction and embrace big magic numbers.
What he does offer, and what I think is worth considering, is some discussion of how to evaluate the quality of qualitative research. Putting aside whether it is either necessary or desirable to do this, there is value in the fairly clear set of criteria he suggests. There is little point in basing the credibility of interpretive research on, for example, the numbers involved or the replicability of the work undertaken, so what's left?
I don't agree with everything Hammersley says, but he is in my view a fantastic writer whom I enjoy reading, not least because of his clarity. He holds his reader's hand kindly throughout, mindful of wanting to ensure you understand him and appreciate the implications of his argument. But the stance he adopts in relation to research is not one I share. He would seem to prefer the application of a rigid, tightly defined, almost positivistic template to qualitative research. I've reproduced a brief overview of some aspects of what he says below … troublemaker he is, but these are not unreasonable.
Box 1: Considerations in assessing the adequacy of research reports
The following considerations cover both clarity and sufficiency as standards:
1) The clarity of writing:
- Consistency in use of terms
- Are definitions provided where necessary?
- Are sentences sufficiently well constructed to be intelligible and unambiguous?
- Is there use of inappropriate rhetoric?
2) The problem or question being addressed:
- Is this clearly outlined?
- Is sufficient rationale provided for its significance?
3) The formulation of the main claims:
- Are these made sufficiently clear and unambiguous?
- Are the relations with subordinate claims (including evidence) made sufficiently explicit?
- Is the character of each claim (as description, explanation, theory, evaluation, or prescription) indicated?
4) The formulation of the conclusions:
- Is there a distinction between main claims about the cases studied and general conclusions?
- Is the basis for the conclusions signaled?
5) The account of the research process and of the researcher:
- Is there sufficient, and not too much, information about the research process?
- Is there sufficient, and not too much, information about the researcher? (In other words, is what is necessary and no more provided for assessing the validity of the findings, the value of the methods, the competence of the researcher, according to what is appropriate?)
Hammersley (2008, p. 162)
Hammersley, M. (2008). Questioning qualitative inquiry: Critical essays. Sage.
While reading: Denzin, N. K. (2009). The elephant in the living room: or extending the conversation about the politics of evidence. Qualitative Research, 9(2), 139-160.
It is unlikely that anyone would argue that policy-making or education practice should be based on anything other than evidence. The issue is that, in referring to 'evidence', advocates of this approach tend towards a narrow belief in research evidence derived from narrowly defined notions of what does and does not count. Typically, research based on positivist methodology is preferred. I get a distinct feeling of déjà vu. Positivism and quantitative methodology are not equivalents as such; they just overlap with remarkable consistency.
The debate about quantitative versus qualitative method, in sociology at least, is a long-standing one. In 1926, Lundberg (p. 61) wrote:
The case method is not in itself a scientific method at all, but merely the first step in the scientific method … the statistical method is the best, if not the only scientific method … the only possible question … is whether classification of, and generalizations from the data should be carried out by random, qualitative, and subjective method … or through the systematic, quantitative and objective procedures of the statistical method.
Some years and several conversations later, Becker argued:
life history, when properly conceived and employed can become one of the sociologist’s most powerful observational and analytic tools. (Becker, 1966: xviii)
Denzin's narrative weaves these together more meaningfully. In 2014, the conversation continues. In response to the current popular debate around randomised controlled trials in education, Dylan Wiliam points out:
[…] it is worth noting that RCTs were not required to establish that smoking causes cancer. If we truly wanted “gold standard” evidence that smoking causes cancer, we would have to solicit volunteers for an experiment, divide them into two groups at random, prevent one group from smoking, and ensure that all the members of the other group smoked a certain number of cigarettes per day for a significant length of time (say around 20 years) and then compare the prevalence of cancer in the two groups. Needless to say, this was not the approach adopted. Instead, researchers looked for a way of establishing a causal relationship without an RCT (Hill, 1965).
Denzin's narrative, interrupted here by Dylan Wiliam, weaves these quotes together. And then, along comes the elephant:
Consider the parable of the blind men and the elephant, as Lillian Quigley retells it:
In my children's book, The Blind Men and the Elephant (1959), I retell the ancient fable of six blind men who visit the palace of the Rajah and encounter an elephant for the first time. Each touches the elephant, and announces his discovery. The first blind person touches the side of the elephant and reports that it feels like a wall. The second touches the trunk and says an elephant is like a snake. The third man touches the tusk and says an elephant is like a spear. The fourth person touches a leg and says it feels like a tree. The fifth man touches an ear and says it must be a fan, while the sixth man touches the tail and says how thin, an elephant is like a rope.
There are multiple versions of the elephant in this parable. Multiple lessons. We can never know the true nature of things. We are each blinded by our own perspective. Truth is always partial.
Truth One: The elephant is not one thing. If we call SBR* the elephant, then according to the parable, we can each know only our version of SBR. For SBR advocates, the elephant is two things, an all-knowing being who speaks to us, and a way of knowing that produces truths about life. How can a thing be two things at the same time?
Truth Two: For skeptics, we are like the blind persons in the parable. We only see partial truths. There is no God's view of the totality, no uniform way of knowing.
Truth Three: Our methodological and moral biases have so seriously blinded us that we can never understand another blind person's position. Even if the elephant called SBR speaks, our biases may prohibit us from hearing what she says. In turn, her biases prevent her from hearing what we say.
Truth Four: If we are all blind, if there is no God, and if there are multiple versions of the elephant then we are all fumbling around in the world just doing the best we can.
*Scientifically based research (SBR), or evidence-based movement (EBM)
My take on this is entirely personal. Given that I frequently refer to my preferred style of data analysis as akin to an 'analytical daydream', I can see little to gain from qualitative researchers even trying on the methodological straitjacket implied by SBR. More importantly, it misses the question. Teaching is not a technical process of pedagogic intervention leading to a predefined learning outcome. Likewise, at what point has any policy maker anywhere ever been interested in what actually works? Policy is politics. Evidence is also politics.
Denzin's paper aptly illustrates the extent to which all the reservations levelled at qualitative research are equally evident in quantitative approaches. It is only when truths are settled and accepted by all that flux, change, contestation and controversy subside.
In the meantime, the question that needs to be answered is not what works in education, but what matters?