As a final word on epistemology, it is worth noting that the prop which keeps CBT concepts upright, and which supports most of empirical psychology, is the area of psychometrics. Psychometrics is psychology’s proudest achievement, and perhaps the only body of knowledge unique to it. As with CBT, however, its epistemological base is as dubious as the uses to which it is put. To simplify things, we can divide psychometric tests into three broad categories: those that measure performance (the IQ tests are the most famous example), those that measure personality ‘traits’ (the Rorschach and the MMPI, for instance) and those that measure subjective ‘states’ (such as the Beck Depression Inventory, or BDI-2). Each suffers from similar problems. Each is predicated on a naïve scientific realism, in which the psychometrician presumes that his or her quantification corresponds to some underlying thing, which exists unmediated in nature, simply waiting to be measured. In every case, this process of reification is then propped up by correlative statistics (the various forms of validity and reliability), as if mere correlation were tantamount to proof. That the statistics are often bountiful and complex demonstrates only that methodological rigour is, as a rule in psychology, inversely proportional to the intricacy of one’s statistics. Human subjects are implicitly presumed to be mere bundles of data, waiting to be mined by psychometricians, who neglect to consider that their very exercise might change that which is supposedly being measured.
No doubt, one can measure performance – the question is whether (and how, and to what extent) said performance reflects some underlying factor. Even if it does not, perhaps one could argue for the benefits of performance testing (for instance, to assess a person’s cognitive decline). These benefits do not hold for the testing of ‘states’ and ‘traits’, both of which arguably do not even exist. To identify a ‘state’ in a system in perpetual flux is a conceptual and linguistic manoeuvre that allows for articulation of affect, but there is nothing scientific (or non-arbitrary) about it. Again, to use a BDI-2 to gauge a person’s depression is to force that person to shoehorn their subjective experiences into Beck’s language and categories. That it is a matter of indifference to Beck and his followers whether a depressive regards herself as ‘melancholic’, ‘broken-hearted’ or ‘flat’ does not make that language irrelevant altogether. Indeed, given that depression is a subjective experience par excellence, the language with which it is articulated is constitutive of it as an experience. There is no evidence to suggest that such ideas trouble our earnest quantifiers, or that there is anything to be gained from such testing that could not be better learnt through a discussion.
The pursuit of ‘traits’ is even more ridiculous, and is a clear return to nineteenth-century faculty psychology and phrenology. There is, after all, no agreed-upon definition of ‘personality’ in psychology, another fact which the psychometricians blithely ignore. The personality testers devote reams of statistical analysis to a series of hypothetical constructs, but not one iota of ontological analysis to the sort of being to whom such constructs might apply. The relentless focus on the quantitative effaces the possibility of a qualitative or historical assessment, both of which are ignored altogether. Imagine if a patient presented at an emergency ward with a pain in his right side. The staff assessing him may undertake tests, such as X-rays and blood samples, but may also wish to know something about the history of his pain, its phenomenology and subjective qualities, and whether it might be appendicitis, gallstones, a hernia, and so forth. If the medical staff in question were psychology researchers, their focus would, instead, be on having the patient articulate a subjective quantum of pain (i.e. ‘9 out of 10’), since this is the only datum of which mainstream psychology can make any use. Unsurprisingly, this is diagnostically useless, if not dangerous, and is another example of an alienated subjectivity being forced into the incoherent categories of a pseudoscience. This is precisely what occurs with a BDI-2, for example, which constructs a number to correspond with a person’s depression, but which does not and cannot distinguish between qualitatively different elements, such as the presence of an arrested grief, or a psychotic versus a neurotic depression, and so on. Where a subject’s particularities ought to be of interest to a psychologist, they are dispelled in favour of bland and useless generalities.
If psychometrics is, as I have argued, a bizarre and confused conceptual apparatus for assessment, why use it at all? To answer this question, we should, as ever, ask Cui bono? In this regard, Foucault is illuminating in Discipline and Punish (p. 193) –
All the sciences, analyses, or practices employing the root ‘psycho-’ have their origin in this historical reversal of the procedures of individualization. The moment that saw the transition from historico-ritual mechanisms for the formation of individuality to the scientific-disciplinary mechanism, when the normal took over from the ancestral, and measurement from status, thus substituting for the individuality of the memorable man that of the calculable man, that moment when the sciences of man became possible is the moment when a new technology of power and a new political anatomy of the body were implemented.
In this vein, psychometrics can be compared with Orientalism, except that where the latter provided the intellectual prop for colonialist endeavours, psychometrics concerns itself with the biopolitics of different subjected populations. This is clearly reflected in the uses to which most psychometric technologies are put. Above all, psychometric measures benefit the academic psychologist, lending researchers a veneer of scientificity (presumed, uncritically, to be tied to quantification). Moreover, there have long been perverse incentives that subjugate academic psychologists to the publishing machine, and this is inextricably linked to obtaining ‘positive’ results. The more measures one takes, the greater the opportunity for this, since there is greater ability to indulge in post hoc fishing for statistical significance. Since psychologists, as ‘scientist practitioners’, take their cues from the academics, this corruption pollutes the discipline as a whole, and not merely the tenured pseudoscientists.
And where else are such measurements required? We should look to the courts, which need quantification as a basis for coercion, discipline and surveillance following their judgements; the HR departments, which wish, pre-emptively, to eliminate potentially non-compliant workers; and the vast bureaus of health, in the form of insurance companies and government departments, for whom diagnosis and treatment are a financial burden, and, at bottom, a numbers game. It is for these entities and their power – the dominions of petty tyranny – and not for some vision of science that psychometrics serves as an ideological buttress.
Excellent… have you published this analysis anywhere else?
Thanks Carlos. I haven’t published this elsewhere, but may do so in the future.
This whole series on the critique of CBT is one of the best I have ever come across, including Eric Laurent’s excellent “Lost in Cognition.” It seems to me to be a matter of the utmost urgency that a beautifully reasoned and rigorously evidenced critique of scientism in psychology like this one gets published in journal and/or book form. In Britain and the United States, secretive tribunals known as the “Family Courts” routinely use psychometric assessments by charlatan clinical psychologists (who make a mint out of their trade as “expert witnesses”) to justify removing children forcibly from parents.
The parents “fail” the tests, effectively. Courts are seduced into assuming that standardised assessments, which assume that human beings are, as you brilliantly put it, data banks to be objectively mined, are somehow highly “scientific” and therefore trustworthy. They are kept oblivious to the effects on an anxious parent of an “expert” treating them as some kind of pathological specimen and “probing” them with these nefarious instruments – parents who know very well that their responses could deprive them of their children.
The same “instrument” has a radically different effect on the human subject depending on whether it’s used as part of an assessment in a mental health setting, where at least the individual may have agreed to undergo it, or as part of a court-mandated inquisition into their “parenting” abilities. Needless to say, the parents’ own speech is rendered invisible and inaudible. All that’s recorded are their responses to contrived questionnaires or lists of true-false statements that are fashioned from the language of the “experts” who devised them.
Please publish. Terrible injustices are being perpetrated not simply by transposing discredited positivist science into psychology, but into law courts via that psychology, as it migrates from its clinical moorings into the disciplinary-punitive system.
What you have just written about psychometrics is EXACTLY what I wanted to tell the “Race Realists” about their use of IQ testing as “scientific” “proof” that Blacks are biologically, genetically, and ontologically inferior to Whites. IQ tests are normed so that the mean score is 100 points and the standard deviation is 15 points. The average reported IQ score of African Americans is 85, one full standard deviation below that mean, while the average reported score of Whites is 100. Likewise, the average reported IQ score of women is said to be 5 points below that of men. And these are PRECISELY the results that male chauvinists NEED in order to “prove” that the female sex is biologically inferior to the male sex.
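Purely as arithmetic, the figures quoted above are statements about an assumed normal curve, not about any individual – which is itself an instance of the reification the post describes. A minimal sketch, assuming only the conventional norming (mean 100, standard deviation 15):

```python
from statistics import NormalDist

# IQ tests are conventionally normed to a mean of 100
# and a standard deviation of 15 (the figures quoted above).
iq = NormalDist(mu=100, sigma=15)

# A score of 85 sits exactly one standard deviation below the mean:
z = (85 - iq.mean) / iq.stdev
print(z)  # -1.0

# Share of the norming population scoring below 85 on such a curve:
print(round(iq.cdf(85), 4))  # 0.1587
```

In other words, roughly one in six members of the reference population scores below 85 on any normed test; the “gap” is a property of the curve-fitting, and says nothing about what, if anything, the score measures.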
If we can prove that psychometrics is NOT science, then the other branches of psychology and psychiatry will automatically collapse as well!
True science (in the sense in which physics and chemistry are true hard sciences) is material and reductionistic. Furthermore, it is INSENSITIVE to the removal of statistics. Even when the physical sciences like Mechanics, Thermodynamics, Electromagnetics, Relativity, and Quantum Mechanics DO use probability, the sample sizes they use are incredibly large (for example, on the scale of the number of milliseconds it takes for light to travel 10 billion light years, or the total number of individual stars in the known Cosmos): therefore the “statistical studies” employed in thermodynamics and quantum mechanics can make predictions and distinctions with uncommonly high accuracy and precision, whereas the statistical studies employed in sociology, politics, marketing research, and psychology use sample sizes on the order of magnitude of one thousand, or at most one million. The margin of error is commonly known (all other things being equal) to be inversely proportional to the square root of the sample size.
“But statistics is problematic when it comes to the social sciences. The first key issue is sample size.
Think of a political survey poll. Every one of these polls states a margin of error; surveys with a larger number of respondents have a correspondingly smaller margin of error. Most social research studies use sample sizes of tens, hundreds, and occasionally thousands. [This means that the margin of error would be on the order of 10^(-3/2), i.e. about 3 parts in 100.] That [sort of sample size] may sound [or even FEEL] like a lot [of accuracy and precision, and therefore a very small margin of error], but remember that statistical physics deals with sample sizes that can be described in unimaginable ways like this: one thousand trillion times more than the total number of stars in the Universe. Or, enough sample atoms that if each one were a grain of sand, they could build a sand castle 5 miles high. Or, a number of molecules greater than the number of milliseconds since the Big Bang. [In other words, a sample size greater than the number of milliseconds it would take light to travel 10 billion light years, or on the order of the number of molecules of H2O in 18 grams of water, which is on the order of 10^23: therefore the margin of error is less than 1 part in 10^11 equal parts.]
The next big difference is a bit more subtle: quantifiability.
Working with such variables as awareness, happiness, self-esteem, and other squishy concepts makes quantifiability hard. This is the sloppy language problem. Even when these ideas are translated into some more concrete measure (say how long it takes a test subject to push a button or eat a marshmallow), the simplicity and truth of this transformation is far from crystal clear or rock solid.
Precision of measurement is another big issue. A social science survey may measure ten subjects with a stopwatch for a handful of seconds and produce an error of a second or two. They may ask people to rate things on a 1-10 scale. How sure are you that your “8” is not another person’s “6.5”? The sorts of measurements chemists make have no such wiggle room. They ask molecules questions that have exact answers that cannot be fudged. What’s your temperature? How much kinetic energy [or angular momentum] do you possess? [What is the exact position of your center of mass? Or what is your instantaneous velocity at this current moment?] A scientist in Texas and a scientist in Alaska and a scientist on the moon and a scientist at the bottom of the sea and a scientist on poor icy demoted dwarf planet Pluto could all measure the same molecule under the same experimental conditions and get the same answer to five decimal places [or even more, if their experiments are designed cleverly enough].
Even the supposedly concrete measurements often fall vastly short of the rigor of true science. Photon-counting experiments often measure times in the range of nanoseconds. Timing subjects by hand with a stopwatch is quite literally one billion or even one trillion times less precise.”
Furthermore, statistical physics DERIVES and PREDICTS its probability distributions A PRIORI and AB INITIO from even more fundamental and reasonable axioms, whereas sociology and psychology cannot do so, but must discover them experimentally and A POSTERIORI.
I am glad that you have published all this. For more information, see: