The Problem with Randomised Controlled Trials (RCTs)

I have recently been debating the merits and problems of objective, quantitative research in mental health. (One of my interlocutors has posted a lengthy response here, arguing in favour of ‘objectivism’). RCTs are a methodological device imported into mental health from general medicine. Whilst they are merely problematic in the latter, they are outright misleading in the former.

Participants must be selected, or rather, must self-select. This process alone probably undermines the generalisability of RCTs, unless one presumes that those who self-select for outcome studies are the same as everybody else. Then, in the case of psychological treatments, the treatment must be standardised and quantified, as if it were a ‘dose’. (Of course, there can be no such thing as a ‘standardised’ treatment in psychology. Even if a clinician were to adhere unwaveringly to the same script in each session, they would have no way of ascertaining how their particular interventions might affect particular subjects, with a given history, context, language, etc).

The results, too, have to be quantified, in a process in which subjectivity is objectivised, reduced and repackaged as an easily digestible number, and which I have discussed elsewhere. The main assumption here is one of homogenisation, as if everybody’s depression, anxiety or other symptoms were essentially the same, and it were only a matter of measuring them. A further assumption is one of individualisation, namely, that a problem can be localised within one person, rather than understood as a symptom of a system or ensemble. Here, the fetish for quantification reveals a crude Platonist dualism, rather similar to that involved in the search for biomarkers. It is as if, beneath particular depressions (for instance), researchers believe they can access a context-free, alinguistic ‘Depression’ as such, which they can measure in the manner of water in a cup.

That these assumptions are entirely untenable shows that RCTs, in mental health at least, are more a matter of following convention than ‘objective’ science, since, at every step of the way, the methodology demands the reduction and construction of the objects in question. Hence, these procedures are not so much a matter of embracing objectivity, which is nowhere to be found in any case, as of refusing subjectivity, that of both the researcher and the patient.

Furthermore, it isn’t only psychoanalysts who have reservations about RCTs. Psychology has much in common with economics – a point to which I shall return – and, as one economist and mathematician puts it:

So while RCTs may be superior within their confines—“internally valid”—the process of generalizing from them remains fraught with precisely the difficulties RCTs are supposed to solve. We cannot avoid these difficulties because the derivation of general conclusions from specific results is essential if the RCTs are to be part of science. It’s also essential if they are to be useful.

In sum, while non-randomized methods have problems of comparability within, randomized methods have them beyond. RCTs avoid messy questions about who to equate to whom during implementation only to slam into those questions upon interpretation.

The convention of taking subjects en masse and reducing them to quantities is a good example of ideology infecting methodology. A psychologist can only ever take subjects one at a time. This is in contrast to government departments, or health insurance companies, for whom treatment is ultimately a numbers game, and whose frameworks have been unwittingly adopted by some practitioners themselves. Generalising from the outcomes of RCTs is rather like trying to fit individual shoes based on an average shoe size taken from a survey somewhere. As one critic put it:

“They don’t tell you the critical information you need, which is which patients are going to benefit from the treatment.”

To account for heterogeneity among participants, he explains, RCTs must be quite large to achieve statistical significance. What researchers end up with, he says, is the “central tendencies” of a very large number of people — a measure that’s “not going to be representative of much of anybody if you look at them as individuals.”
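The point about ‘central tendencies’ can be made concrete in a few lines of code. The sketch below uses invented symptom-change scores (purely hypothetical, not drawn from any actual trial) to show how a favourable group average can coexist with a substantial minority who deteriorate:

```python
# Hypothetical symptom-change scores for a treated group (invented data,
# purely for illustration). Negative values = improvement on some scale.
changes = [-9, -8, -7, -6, -5, 3, 4, 5, 6, -13]

# The "central tendency" an RCT would report:
mean_change = sum(changes) / len(changes)

# What the average conceals: how many individuals actually improved?
improved = sum(1 for c in changes if c < 0)
worsened = sum(1 for c in changes if c > 0)

print(f"mean change: {mean_change}")                  # -3.0: a clear average benefit
print(f"improved: {improved}, worsened: {worsened}")  # yet 4 of 10 got worse
```

The group mean suggests a benefit, yet it is ‘not representative of much of anybody’: it tells a clinician nothing about which of the ten individuals will be among the four who worsen.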

The same article notes that RCTs are particularly ‘ill-suited to psychological interventions versus medical ones’, since ‘medications…have a straightforward biochemical effect that’s unlikely to vary across individuals’, whilst ‘psychological interventions tend to interact with such factors as gender, age and educational level’. One critic cited suggests that a broader range of evidence be considered, such as “Phase II trial data, epidemiological data, qualitative data and reports from the field from clinicians using an intervention”.

So why, if RCTs in mental health provide limited to non-existent generalisability, use poor, even ludicrous measures, and have to concoct fictitious experimental variables, are they heralded as the ‘gold standard’ in clinical research? In part, there are some relatively benign answers to this. In psychology, researchers are determined to affirm the scientificity of their discipline precisely because it is lacking, thereby disavowing the heritage of the humanities, from which psychology ultimately derives. And it is not surprising that psychologists would choose the clean idealism of quantified, idealised ‘Depression’ (or ‘Anxiety’, or ‘Wellness’) over the material of language, and the schmutz of subjectivity. The conservatism of psychology as a discipline also contributes to this. Convention and dogma solidify into the illusion of methodological rigour. One can check virtually any academic psychology journal for proof of this – it is as if the adoption of certain (tedious) stylistic devices and methodological conventions (and not the content or reasoning itself) creates scientificity.

Yet the decision to choose number over language should not be mistaken for a neutral, technical consideration. The division here between objectivity and subjectivity is not merely a binary symmetry but is, in Derridean terms, a violent hierarchy, the more so when one ponders the ideological load that ‘objectivism’ is carrying*. No doubt it is easier for the coercive elements of psychology and psychiatry to operate with ease and good conscience with objects, not subjects. Moreover, the conventions of RCTs clearly lend themselves best to simplistic, short-term and authoritarian (and therefore more easily ‘standardised’) treatments.

And this is where the psy-disciplines find some affinity with their dismal cousin, economic ‘science’. In both cases, the science is used to launder ideology, to convince people that some highly partisan, politically-loaded intervention is in fact a neutral, ‘evidence-based’, apolitical, technical procedure. Thus, Reagan-era economists could point to the Laffer curve as ‘scientific’ justification for cutting tax rates for the rich at the expense of the poor. So too can authoritarian clinicians impose neoliberal ideals of productivity, individualism and ‘self-management’ on the suffering, all the while gesturing to the veneer of science lent by RCTs and similar forms of ‘evidence’. But to even accept RCTs as the gold standard of legitimate evidence means to have taken so many wild assumptions for granted as to have left science behind from the start.

As an aside, it is interesting to question the role of number in all this. In recent online discussions, I have been told by empiricists and objectivists that number is ‘pre-linguistic’ or extra-linguistic, or systematically linguistic, or that it has a relation to science that is lacking in language. It is true that, as Lacan argued, mathematical formalisation is the ideal of science. Yet if one examines the use of number in the psy-disciplines, it almost invariably takes the form of non-Bayesian statistics. In other words, number in psychology, for instance, is used in the least rigorous ways possible, rather akin to an opinion poll. (There are even comical attempts to construct pseudo-equations of psychological phenomena through the convoluted abuse of correlational statistics). This use of number is opposed not merely to language but to logic. At best, one might construe it as a crude metonym for ‘Depression’, ‘Anxiety’, etc. It misses the point that mathematical formalisation is not principally numerical in any case, but relies on letter, not number. It is founded on algebra, not correlational surveys. The sight of humanities disciplines – the human sciences – aping the mathematical sciences to justify the construction of evidential ‘objects’, in the service of dubious ideology, lends an air of grotesquerie to already troubled subject matter.
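The weakness of correlation as a foundation for ‘equations’ can be illustrated with Anscombe’s classic quartet, whose datasets share virtually identical correlation coefficients despite having entirely different structures. A minimal sketch, computing Pearson’s r by hand over the first two of Anscombe’s published datasets:

```python
# Anscombe's quartet, sets I and II: same x values, very different y-shapes
# (set I is roughly linear with noise; set II is a smooth curve).
x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

def pearson(xs, ys):
    """Pearson's product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

r1, r2 = pearson(x, y1), pearson(x, y2)
print(f"r (set I):  {r1:.3f}")   # approximately 0.816
print(f"r (set II): {r2:.3f}")   # approximately 0.816: same number, different shape
```

The two coefficients are numerically indistinguishable, yet no single functional ‘equation’ underlies both datasets: the number summarises, it does not formalise.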


* It is an interesting coincidence that, in some contexts, at least, ‘objectivism’ is a synonym for epistemological arrogance and stupidity, coupled with greed.
