The Dirty War on the NHS – a new documentary by John Pilger

The desperate plight of the National Health Service, much of which has already been privatised, is demonstrated by the terrible case of Trevor Moncrieff. As described in John Pilger’s latest documentary, Mr. Moncrieff, a councillor, had a heart attack in 2018. The ambulance that arrived to attend to him was one of many “NHS” vehicles that are actually part of a private fleet. It was carrying a second-hand defibrillator that had not been tested recently, and which failed to work. Mr. Moncrieff’s son Matt attempted, unsuccessfully, to provide CPR whilst the paramedics struggled with the equipment. The paramedics had not been given radios, so tried to reach their dispatcher by phone, but ended up leaving a message with a call centre. Eventually, they advised Matt Moncrieff to call the fire service, as they were likely to have a defibrillator. It was too late: Trevor Moncrieff could not be revived.

This instance belies the notion that private enterprise is always more efficient than a publicly-owned service. Indeed, the prime objective of private commercial ventures is to benefit their shareholders, which means cutting costs wherever possible. In 2010, Hinchingbrooke Hospital was handed over to be run by the Circle Health group, founded by a former Goldman Sachs executive, Ali Parsa. The hospital quickly began to accrue an increasing deficit, which only extreme savings could hope to address. Whereas hospital staff had always worked in cooperation, as would be expected in a vocation such as healthcare, the lead nurses were now expected to compete with each other to get patients out of the hospital as fast as possible. Morale plummeted, and a concerning report by the National Audit Office was followed by a damning one from the Care Quality Commission. In 2015, Circle Health withdrew from the contract, handing back control to the NHS.

Ali Parsa went on to found Babylon Health, which offers an NHS-funded chatbot GP service. In 2018, the Labour Party complained that Health Secretary Matt Hancock had breached the ministerial code when he praised Babylon’s app.

Meanwhile, Pilger visits the United States to show us the harsh consequences of a fully-privatised healthcare system. Healthcare costs are the leading cause of bankruptcy in the US, where swingeing excess charges often render insurance useless even for those who can afford it in the first place. People whose financial means run out whilst they are in hospital often then find themselves the victims of “patient dumping”. During the hours of darkness, some hospitals escort penniless patients out into the street and leave them there, perhaps outside the doors of a homeless shelter if the patient is lucky. Pilger notes that we already have a form of patient dumping in the UK, in that people whose mental health problems are not severe enough to meet certain criteria for treatment can find themselves out on the street.

He describes the creation of the NHS as a “revolutionary moment”. At the end of the Second World War, the public – and especially returning servicemen – were expecting something better than they had previously been used to. Some servicemen even went on strike. The decision to set up a National Health Service was opposed by the Conservatives, but they were out of touch with the public mood, and we see footage of Churchill being barracked by an audience at an election hustings in the north of England. Several decades later, however, influenced by a growing movement of American free marketeers, Oliver Letwin and John Redwood were developing plans to open up the NHS to market competition, plans that were then accelerated under Blair’s New Labour. The PFI (Private Finance Initiative) deals that were arranged, however, burdened hospitals with huge debts. Health Secretary Alan Milburn nonetheless went on to join Bridgepoint Capital, a private equity firm that specialises in financing private healthcare enterprises. Simon Stevens, a health advisor under the Blair government, went on to join the American healthcare provider UnitedHealth, before returning under the Conservative/Lib-Dem coalition to run the NHS.

What is clear from Pilger’s film is that the worsening plight of the NHS is not because it is an inefficient public service, but precisely the opposite: it is being deliberately starved of public funding in the hope that the public will be deceived into believing that further marketisation is what’s required. Yet it is quite clear that privatisation only works for shareholders, not for patients. Now, in the 2019 General Election, we finally have a chance to turn things around. Interviewed towards the end of the film, Shadow Health Secretary Jonathan Ashworth says that the Labour Party will reverse the trend towards privatisation – in essence, re-nationalising the NHS. Following this, Pilger informs us that he spent six months trying to arrange interviews with the politicians currently in charge of the NHS, as well as David Cameron and Alan Milburn; none of them responded.

Moby-Dick

To mark the 200th anniversary of Herman Melville’s birth (1st August 1819), the Guardian online carried an article by Philip Hoare about the author’s masterpiece, Moby-Dick. Hoare described it as “the Mount Everest” of literature, as many people apparently start but fail to finish the book. Having failed at this myself many years ago, I was spurred by Hoare’s article to revisit Moby-Dick with a refreshed determination to read it through to the end. It took me about three weeks to finish and I enjoyed it hugely. It is a very unusual book that raises many questions, not the least of which is ‘What is it all about?’ Presumably this is why the book was not successful during Melville’s lifetime. As with so many great works of art, though, the ambiguities, oddities and uncertainties are what give it its longevity, as people keep returning to unpick its mysteries. In this blog post, I give my own reflections on Moby-Dick (which are those of an enthusiastic reader, not of an academic expert in English literature).

I’ll begin with a couple of short, simple observations that might be of interest to people who have never so much as glanced at Moby-Dick. First, although it is a long book, most of the chapters are very short, some less than a page. This makes it easy to read over a series of short intervals, such as on a rail commute or during your lunch breaks at work, without having to abandon the text in the middle of a long section. Second, if you are expecting a thrilling adventure story you are likely to be disappointed. Perhaps this is why readers often don’t make it through to the end; they may be expecting a different kind of tale. There is action, but mostly towards the end of the book. The notion of the whale-hunt really just seems to be a device – the “MacGuffin”, as Hitchcock called it – to motivate the characters, and thereby allow certain themes to be explored.

The first third of the book consists largely of an introduction to characters and locations, with the first chapter and its famous opening sentence – “Call me Ishmael” – being about our narrator himself. Ishmael’s motivation for going to sea appears to be boredom: “It is a way I have of driving off the spleen, and regulating the circulation”. He rocks up at the Spouter Inn in New Bedford, where lack of spare accommodation means he has little alternative but to share a bed with Queequeg, a South Sea chieftain who has left his home to explore the world. Queequeg is an experienced harpooneer for whaling ships. Initially rather afraid of “The Pagan”, Ishmael begins to grow close to him in what seems a quite romantic fashion:

“I began to be sensible of strange feelings. I felt a melting in me. No more my splintered heart and maddened hand were turned against the wolfish world. This soothing savage had redeemed it […] Wild he was; a very sight of sights to see; yet I began to feel myself mysteriously drawn towards him”.

Ishmael, a Presbyterian, is invited to join Queequeg in his religious rituals, which he does. Eventually, the two of them depart New Bedford for Nantucket, where they join a whaling ship, the Pequod. Across several chapters we then get introductions to the crew of the ship, notably Captain Ahab; the mates Starbuck (chief mate), Stubb (second mate) and Flask (third mate); and the other harpooneers Tashtego and Daggoo. There is also Pip, the young black cabin boy, who is part prophet and part court jester, especially after he begins to lose his mind following a period of time alone in the sea.

Chapter 32 is titled ‘Cetology’ and concerns the different types of whales and their classification. Many subsequent chapters are devoted to the physiology of the sperm whale (its head, brain, tail, spout, and so on) and there are even chapters devoted to pictures of whales.

As the Pequod’s journey progresses, the crew meet a series of other whaling ships, each of which has had an encounter with Moby-Dick – each more serious than the last, but none fatal to the creature. The critical thing to know about Captain Ahab is that he lost a leg, on a previous voyage, to Moby-Dick. He is now obsessed with killing the whale, no matter what the dangers, and is not at all deterred by the reports from the other ships’ captains he meets. Ahab himself does not appear before the crew until several days into the ship’s voyage, thus adding to the air of mystery that surrounds him. This is compounded, on the first occasion that a whale is sighted, by the appearance of previously unseen shipmates: “five dusky phantoms that seemed fresh formed out of air”. Four of these men are of a “tiger-yellow complexion peculiar to some of the aboriginal natives of the Manillas”, while their leader – Fedallah, also referred to as “the Parsee” (a Zoroastrian) – is a dark-skinned man in a white turban. This latter figure is the subject of many rumours and is regarded by the crew with deep suspicion, not least because he is the source of some darkly prophetic comments.

What really struck me about the book is the contrast between Ishmael’s wish to understand others, whether they be people or whales, and Ahab’s obsessive pursuit of the whale which precludes any attempt to understand the creature beyond predicting its movements. A good example of Ishmael’s open-mindedness comes when his new friend Queequeg – a “wild idolator” – invites him to join his worship:

“But what is worship? – to do the will of God? – that is worship. And what is the will of God? – to do to my fellow man what I would have my fellow man to do to me – that is the will of God. Now Queequeg is my fellow man. And what do I wish this Queequeg would do to me? Why, unite with me in my particular Presbyterian form of worship. Consequently, I must then unite with him in his; ergo, I must turn idolator”.

Ishmael talks of whaling as a noble activity. Indeed, if he did not believe in whaling it would make no sense for him to be on board the Pequod. However, the numerous chapters that are devoted to understanding all aspects of the sperm whale, and other whales, creatures of no little intelligence, inevitably create an empathy for these extraordinary animals that sits uneasily with the vivid descriptions of them being hunted and harpooned until they expire, exhausted, in waters red with their own blood. In contrast to Ishmael’s empathy, Ahab puts the lives of his own men increasingly at risk, pushing them to the limit even when lives have already been lost, the dangers appear overwhelming, and Starbuck is urging him to give up the pursuit.

The narration of the story is unusual. Ishmael’s own presence as a character in the story is perhaps strongest in the first third of the book, especially where he describes the growing bond between himself and Queequeg. Ishmael himself is an actor within the story in these early chapters, which includes getting tossed into the water at one point during a whale hunt. Elsewhere, though, the narration shifts. When Starbuck is introduced, Ishmael’s narration becomes God-like, telling us about Starbuck’s thoughts. Chapter 37 is entirely Ahab’s thoughts, whilst alone in his cabin. Chapter 38 relates Starbuck’s thoughts, as he leans against the mainmast at night. Chapter 39 is Stubb’s thoughts, as he performs his duty as first night-watch. Chapter 40 is written in the form of a script, giving us the voices of numerous crew members who are on the forecastle at midnight. After this deviation in narrative voice, Chapter 41 returns us to the main narrator, with the opening sentence “I, Ishmael, was one of that crew…”. A little later, at the start of Chapter 45, Ishmael seems to highlight that his is not a conventionally told tale: “So far as what there may be of a narrative in this book…”.

In the last third of the book, although Ishmael continues to narrate, he himself mostly seems to disappear as a character who participates in events. One exception to this is Chapter 94, ‘A Squeeze of the Hand’, in which Ishmael describes his feelings as he bathes his hands in the sperm of the whale. To modern ears much of this chapter seems quite comical, and I wonder if this was how it was meant to read. Philip Hoare’s Guardian article referred to the “queerness” of Moby-Dick, and online dictionaries indicate that the term ‘sperm’ in the sense of spermatozoa, as distinct from spermaceti (the waxy substance from the sperm whale), has been in currency since the fourteenth century. What then do we make of the lyrical way in which Ishmael rhapsodizes about the act of washing his hands in sperm? –

“I felt divinely free from all ill-will… Squeeze! squeeze! squeeze! all the morning long; I squeezed that sperm till I myself almost melted into it… I found myself unwittingly squeezing my co-labourers’ hands in it… that at last I was continually squeezing their hands, and looking up into their eyes sentimentally… let us squeeze ourselves universally into the very milk and sperm of human kindness”.

Strangely, though, given the emotionality of the early chapters involving Queequeg, when his friend becomes seriously ill, Ishmael does not give any indication that he is especially troubled by this turn of events, beyond stating that all the crew were concerned. This seems a little odd, but then the main motivation of this chapter appears to be to set up certain events that happen later on. Thereafter, as the hunt for Moby-Dick begins to dominate the story and develops into outright action, the focus is on the other characters; although Ishmael is there, we do not know what he is doing. In fact, he largely disappears as an actor until, perhaps, the final page.

For me, one of the great pleasures of Moby-Dick is the quality of the writing, the beauty of the description, whether Melville is describing the anatomy of whales, the layout of the ship, or the characteristics of people. Turning at random to almost any page reveals such penmanship, as in this example:

“The starred and stately nights seemed haughty dames in jewelled velvets, nursing at home in lonely pride, the memory of their absent conquering Earls, the golden helmeted suns! For sleeping man, ’twas hard to choose between such winsome days and such seducing nights” (Chapter 29).

Melville must have been on a creative roll by the time he wrote Moby-Dick and surely had great confidence in what he was doing. It is a shame that, like many of the great artists, his best work was not appreciated in his own lifetime. This is a book that, once finished, really sticks in the mind. Like stepping ashore after a long period at sea and feeling as though the ground beneath you is swaying, so there seems to be a period of mental turbulence upon reaching the end of Moby-Dick, as the thoughts continue to splash around inside your head. It is a memorable literary voyage.

Fear is the key: A review of ‘Why Horror Seduces’ (by Mathias Clasen)

Looking back, I think the first horror movie I saw at the cinema must have been The Omega Man, the 1971 version of I Am Legend, Richard Matheson’s tale of modern-day vampires. I was only nine at the time, which was probably below the age certification for that film, though I think back then the ticket sellers at my local cinema weren’t always too bothered about checking and enforcing such matters. For a long time, The Omega Man remained my favourite film. I still love the scene where Charlton Heston sits alone in a cinema watching the documentary film of the 1969 Woodstock festival, listening to hippies talking about a world of peace and love, a wonderful juxtaposition with the world we know Heston is now living in – bereft of human beings during the daytime and besieged by malevolent vampires at night.

A little later I saw Jaws (1975). This was still a time when your ticket enabled you to enter at any point during the film and then stay for the next showing (hence the saying “This is where we came in”). Thus, my introduction to the film was seeing Quint disappear into the mouth of the shark, without any of the dramatic build-up to that point – a build-up that is arguably more fear-inducing than the final scenes themselves.

Both I Am Legend (the book) and Jaws (the film) are among the works included in a selective review of American horror fiction discussed in the 2017 book Why Horror Seduces, by Mathias Clasen, Associate Professor of Literature and Media at Aarhus University. Before we get to this review, though, Clasen addresses the wider questions of what horror is, how it works, and how it has been and should be studied. Horror is notoriously hard to define other than in terms of the reactions that a work of fiction elicits from the viewer or reader. Whilst some theorists date the origins of horror to the advent of Gothic fiction in the late eighteenth century, Clasen agrees with one of the genre’s most celebrated practitioners, Lovecraft, who wrote that “the horror tale is as old as human thought and speech themselves”.

Survey research shows that most people enjoy horror but, like Goldilocks’ porridge, it needs to be just right – it doesn’t work if it fails to provoke unease or a fear reaction, and likewise most people don’t want horror to be too frightening. But why do we want to be frightened at all by a work of fiction? Enter a plethora of theorists who want to tell us that the stories we love are actually about something other than an entity or situation that is scaring the crap out of us. There are Freudian, feminist, queer, Lacanian, Marxist, race studies, post-colonial and post-structuralist readings of horror fiction. Sometimes, a critical interpretation is simultaneously based on several of these approaches.

Whilst acknowledging that works of fiction may include multiple themes, and also expressing pleasure that the horror genre is taken seriously by these writers, Clasen believes that their various theoretical approaches invariably miss what is at the core of horror fiction. He is especially scathing, albeit in a polite fashion, about the psychoanalytic approaches to horror, which are so divorced from empirical evidence that they enable the shark in Jaws to be interpreted as both “a greatly enlarged, marauding penis” (Peter Biskind) and a “vagina dentata” (Jane Caputi), a giant vagina with teeth.

Critical interpretations are sometimes at odds with the explicitly expressed intentions of writers or directors. Thus, one Lacanian reading of The Shining insists that the novel is really about repressed homoerotic and Oedipal desires, despite author Stephen King’s insistence that the story is based on his own battle with alcohol. The early slasher movies provoked a range of critical reactions. One critic insisted that these films were a means for young people to assuage their guilt about their own hedonistic lifestyles, a claim that had no evidential basis whatsoever. Others claimed that slasher movies were inherently misogynistic, depicting their female victims as being punished for their sexually active lifestyles. Yet content analysis of slasher films has shown that men are just as likely as women to be victims. In Halloween, the character of Laurie Strode (Jamie Lee Curtis) is supposedly spared death because she adheres to socially conservative norms – an interpretation that overlooks the fact that Michael Myers does try to kill her, terrifying her in the process. Furthermore, writer/director John Carpenter explicitly rejects the moralistic interpretation: Laurie Strode survives because she is the only character to detect and adequately respond to the danger.

Carpenter’s explanation is aligned with Clasen’s own interpretation of how horror works and why we are drawn to it. The human capacity for emotion, he points out, is a product of evolution. Fear and anxiety are the most primal of emotions, as these help shape our responses to immediate and anticipated threats, respectively (more on the topic of evolution and emotions can be found in Randolph Nesse’s new book Good Reasons for Bad Feelings). Potentially threatening stimuli, such as strange noises in the house at night, tend to grab our attention, even if in fact there is no danger. It is better to be anxious about something that turns out to be harmless than to be unconcerned about something truly dangerous. Organisms that do not respond to potential threats are fairly quickly removed from the gene pool. For our hunter-gatherer ancestors, those threats included environmental hazards, non-human predators, other people (in the form of physical danger, loss of status, and potentially lethal social ostracization), and disease in the form of invisible pathogens such as bacteria and viruses (hence certain stimuli, such as excrement and rotting meat, universally lead to feelings of disgust).

Our hunter-gatherer ancestors faced regular challenges to their survival on a scale that most of us will never experience. Even contemporary hunter-gatherers mostly live shorter lives than the rest of us. Horror fiction enables us to experience emotional reactions to potentially threatening stimuli within a safe environment. As Clasen puts it (p.147):

The best works of horror have the capacity to change us for life – to sensitize us to danger, to let us develop crucial coping skills, to enhance our capacity for empathy, to qualify our understanding of evil, to enrich our emotional repertoire, to calibrate our moral sense, and to expand our imaginations into realms of the dark and disturbing.

In his review of several works of American horror fiction, Clasen not only skewers the inadequacy of many previous critical approaches to horror, but spells out precisely the behavioural challenges that are posed to the characters in these works. For example, a staple ingredient of many zombie films is the tension between acting self-interestedly versus cooperating with others to fight the encroaching threat (zombies themselves arouse feelings of disgust associated with contagion). Often goodness and selfishness are embodied in different characters, yet in some works of fiction they may represent a conflict within a single character. One such example is Jack Torrance in The Shining. His failing literary career represents a loss of status, which he hopes to address by focusing on his writing whilst at the Overlook Hotel. When it becomes clear that there is some kind of threat to his son, Danny, feelings of parental concern are aroused. However, the hotel itself – once the home to various gangsters and corrupt politicians – exerts an evil influence on Jack, a recovering alcoholic, poisoning him against his own son.

Elsewhere, in Rosemary’s Baby author Ira Levin “successfully targeted evolved fears of intimate betrayal, contamination of the body, and persecution by metaphysical forces of evil” (p.91), whilst The Blair Witch Project plays upon our tendency to attribute negative value to a place where something bad has happened, a tendency which is adaptive because it makes people avoid dangerous places. As Clasen notes (p.143):

The same psychological phenomenon is at work when people shun houses in which murders or other particularly violent or grisly forms of crime have taken place.

Although Clasen himself says that there is far more we don’t know about how horror fiction works than what we do know, the evolutionary psychology approach would appear to offer a far more promising prospect for our understanding than any other approach that has so far been proposed. It should also be more satisfying to those of us who enjoy horror fiction, because it is in line with our intuitive understanding: we simply enjoy the thrill of being scared whilst knowing that we are not really in danger.

Review – Meltdown: Why Our Systems Fail and What We Can Do About It

In the opening chapter of Meltdown, the authors Chris Clearfield and András Tilcsik describe the series of events that led to a near-disaster at the Three Mile Island nuclear facility in the United States. The initiating event was relatively minor, and occurred during routine maintenance, but as problems began to multiply the operators were confused. They could not see first-hand what was happening and were reliant on readouts from individual instruments, which did not show the whole story and which were open to misinterpretation.

The official investigation sought to blame the plant staff, but sociology professor Charles “Chick” Perrow argued that the incident was a system problem. In his book Normal Accidents: Living with High-Risk Technologies, Perrow characterises systems along two dimensions: complexity and coupling. Complex systems have many interacting parts, and frequently the components are invisible to the operators. Tightly-coupled systems are those in which there is little or no redundancy or slack, so a perturbation in one component may have multiple knock-on effects. Catastrophic failures, Perrow argues, tend to occur where high complexity combines with tight coupling. His analysis forms the explanatory basis for many of the calamities described in Meltdown. Not all of these are life-threatening. Some are merely major corporate embarrassments, such as when PricewaterhouseCoopers cocked up the award for Best Picture at the 89th Academy Awards. Others nonetheless had a big impact on ordinary people, such as the problems with the UK Post Office’s Horizon software system, which led to many sub-postmasters being accused of theft, fraud and false accounting. Then there are the truly lethal events, such as the Deepwater Horizon oil rig explosion. Ironically, it is often the safety systems themselves that are the source of trouble. Perrow is quoted as saying that “safety systems are the biggest single source of catastrophic failure in complex tightly-coupled systems”.

The second half of Meltdown is devoted to describing some of the ways in which we can reduce the likelihood of things going wrong. These include Gary Klein’s idea of the premortem. When projects are being planned, people tend to be focused on how things are going to work, which can lead to excessive optimism. Only when things go wrong do the inherent problems start to appear obvious (hindsight bias). Klein suggests that planners envisage a point in time after their project has been implemented, and imagine that it has been a total disaster. Their task is to write down the reasons why it has all gone so wrong. By engaging in such an exercise, planners are forced to think about things that might not otherwise have come to mind, to find ways to address potential problems, and to develop more realistic timelines.

Clearfield and Tilcsik also discuss ways to improve operators’ mental models of the systems they are using, as well as the use of confidential reporting systems for problems and near-misses.

They devote several chapters to the important topic of allowing dissenting voices to speak openly about their concerns. There is ample evidence that lack of diversity in teams, including corporate boards, has a detrimental effect on the quality of discussion. Appointing the “best people for the job” may not be such a great idea if the best people are all the same kind of people. One study found that American community banks were more likely to fail during periods of uncertainty when they had higher proportions of banking experts on their boards. It seems that these experts were overreliant on their previous experiences, were overconfident, and – most importantly – were over-respectful of each other’s opinions. Moreover, domination by banking experts made it harder for challenges to be raised by the non-bankers on the boards. Where there were higher numbers of non-bankers, however, the bankers more often had to explain issues in detail and their opinions were more often challenged.

Other research shows that both gender and ethnic diversity are important, too. An experimental study of stock trading, involving simulations, found that ethnically homogeneous groups of traders tended to copy each other, including each other’s mistakes, resulting in poorer performance. Where groups were more diverse, people were generally more sceptical in their thinking and therefore more accurate overall. Another study found that companies were less likely to have to issue financial restatements (corrections owing to error or fraud) where there was at least one woman director on the board.

Clearfield and Tilcsik argue that the potential for catastrophe is changing as technologies develop. Systems which previously were not both complex and tightly-coupled are increasingly becoming so. This can of course result in great performance benefits, but may also increase the likelihood that any accidents that do occur will be catastrophic ones.

Meltdown has deservedly received a lot of praise since its publication last year. The examples it describes are fascinating, the explanations are clear, and the proposed solutions (although not magic bullets) deserve attention. Writing in the Financial Times, Andrew Hill cited Meltdown when talking about last year’s UK railway timetable chaos, saying that “organisations must give more of a voice to their naysayers”. The World Economic Forum’s Global Risks Report 2019 carries a short piece by Tilcsik and Clearfield, titled Managing in the Age of Meltdowns.

I highly recommend this excellent book.

Review: ‘The Mind is Flat’ by Nick Chater

The nature of consciousness is a topic that psychologists and philosophers have spilt much ink and many pixels over. Outside of psychoanalytic circles, what has been less discussed is the nature of the ‘unconscious mind’. Claims made by some psychologists about the power of the unconscious mind to influence behaviour have proven controversial.

Now, in a book that will have psychoanalysts and many others protesting loudly, cognitive scientist Nick Chater has plunged a stake through the very concept of an unconscious mind. In The Mind Is Flat Chater argues that our minds have no depths, let alone hidden ones. His primary claim is that the brain exists to make sense of the world by creating a stable perception of it and ourselves; but the brain does not provide us with an account of its own workings. These perceptions are created from our interpretations of a limited number of sensory inputs, with the assistance of various memory traces (themselves based on our interpretations of past events).

Chater’s opening chapter, The Power of Invention, describes how we can create an apparently rich internal picture of a fictional person or location based on a limited description that may have gaps or inconsistencies (Chater discusses Anna Karenina and Gormenghast). So it is with our perceptions of the actual world and, indeed, ourselves. Most of our visual receptors are incapable of colour detection, yet we perceive the world in glorious colour. Our eyes are continually darting about all over the place, yet our perception of the world is smooth, not jerky. In short, much or most of what we perceive is an illusion foisted upon us by our brains.

For centuries, philosophers consulted their ‘inner oracle’ in order to determine how the world works. Yet, Chater points out, the inner oracle has consistently misled us about concepts such as heat, weight, force and energy. Early researchers in artificial intelligence (AI) attempted something similar: they tried to excavate the mental depths of experts, recover ‘common sense theory’ and then devise methods to reason over this database. However, by the 1980s it had become clear that this programme was going nowhere, and it was quietly abandoned.

As Chater puts it:

The mind is flat: our mental ‘surface’, the momentary thoughts, explanations and sensory experiences that make up our stream of consciousness is all there is to mental life. (p.31)

One reason why we are unaware of the fictional nature of our perceptions is precisely because our eyes are constantly moving about and picking up new sensory fragments. I may be unaware of the type of flower on the mantelpiece, but if you mention it my eyes go there automatically. In gaze-contingent eye tracking studies, the text on a screen changes according to where a person is looking. In fact, most of the text on the screen consists of Xs. As a participant’s eyes move across the screen the Xs that would have been in their fixation point change to become real words, and the area where they had been looking reverts to Xs. The participant, however, perceives that the entire page consists of meaningful text.
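
To make the paradigm concrete, here is a minimal sketch of the moving-window logic in Python. This is an illustration only, not the actual software used in such studies: gaze position is simplified to a character index and the window to a fixed span of characters, both invented for this example.

    # Minimal sketch of the "moving window" logic behind gaze-contingent
    # reading studies (real systems work from eye-tracker coordinates,
    # updating the display within milliseconds of each eye movement).

    def render_window(text: str, gaze_index: int, half_window: int = 7) -> str:
        """Show real text only around the current fixation; mask the rest with Xs."""
        start = max(0, gaze_index - half_window)
        end = min(len(text), gaze_index + half_window + 1)
        masked = ["X" if ch.strip() else ch for ch in text]  # preserve word spacing
        return "".join(masked[:start]) + text[start:end] + "".join(masked[end:])

    line = "The participant perceives a full page of meaningful text."
    for gaze in (4, 20, 40):  # simulated fixations moving left to right
        print(render_window(line, gaze))

Printed at successive ‘fixations’, each output line is mostly Xs, yet a reader whose gaze drives the window would experience a page of ordinary text.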

Likewise, when we construct a mental image it is never truly a ‘picture in the mind’. If we are asked to describe some details from the image, we simply ‘create’ those in our imagination in response to the question. Nothing is being retrieved from a complete image.

We often talk about a battle between ‘the heart and the head’, but Chater argues that we are in fact simply posing one reason against another reason. Citing the Kuleshov Effect, and the work of Schachter & Singer (1962) and Dutton & Aron (1974) on the labelling of emotional states, Chater concludes that “our feelings do not burst unbidden from within – they do not pre-exist at all” (p.98). Indeed:

The meaning of pretty much anything comes from its place in a wider network of relationships, causes and effects – not from within. (p.107)

Despite, or perhaps because of, our lack of inner depth, we are extremely good at dreaming up explanations for all kinds of things, including our inner motives. Perhaps my favourite example is from the work on choice blindness, in which participants were asked to choose the most attractive of two faces, each of which was presented on a card. After a participant made their choice, the researcher supposedly passed them the card they had chosen and asked them to explain why they had preferred that face. In fact, the researcher used sleight-of-hand to pass them the face they hadn’t chosen. Most people didn’t spot the discrepancy and readily provided an explanation as to why they preferred the face that they had not in fact chosen.

This research links to a wider body of work in decision making research, which shows that people’s preferences are constructed during the process of choice, depending on various contextual factors, as opposed to the conventional economic account that assumes people to have stable preferences that are revealed by the choices they make.

Chater also goes on to talk about people’s attentional limitations, arguing that – in almost all circumstances – our brains are only able to work on one problem at a time (where a problem is something which requires an act of interpretation on our part, rather than an habitual action such as putting one foot in front of the other when walking). This also fits with decades of work on human judgment, which has repeatedly found that people are unable to reliably integrate multiple items of information when trying to make a judgment.

Finally, Chater isn’t arguing that there are no unconscious processes. However, these unconscious processes aren’t ‘thoughts’. The mind isn’t like an iceberg, with a few thoughts appearing in consciousness and many others below the level of consciousness. Rather, the real nature of the unconscious is “the vastly complex patterns of nervous activity that create and support our slow, conscious experience” (p.175). Thus:

There is just one type of thought, and each thought has two aspects: a conscious read-out, and unconscious processes operating the read-out. And we can have no more conscious access to these brain processes than we can have conscious awareness of the chemistry of digestion or the biophysics of our muscles.

The Mind is Flat is a book that I wish I’d written, in that it expresses, with evidence, a viewpoint that I have held for some time. The writing is clear and entertaining, and I devoured the book in just a few days. Recommended.

British Library exhibition – James Cook: The Voyages

Captain James Cook, by William Hodges

For anyone looking for something to do in London before the end of August 2018, I thoroughly recommend a trip to the British Library (nearest rail/tube station: King’s Cross St. Pancras) to see this new exhibition about the voyages of Captain James Cook.

Born in North Yorkshire in 1728, Cook joined the Royal Navy in 1755, took part in the Seven Years’ War (seeing action in Canada) and made a name for himself by charting the coast of Newfoundland. The Admiralty then engaged Cook to lead an expedition to Tahiti in order to observe the Transit of Venus. Commencing in 1768, this was the first of Cook’s three voyages to the Pacific, and the exhibition is organised around these three voyages.

Cook’s first voyage didn’t stop at Tahiti. He and his men also spent six months circumnavigating New Zealand (given that name by the Dutch), where some of the encounters with the native population were violent. From there they went on to Australia, where they charted most of the eastern coast (the rest having already been charted by the Dutch). And from Australia they sailed to Batavia, the centre of the Dutch empire in the East Indies. Upon his return to Britain, Cook was promoted to Commander.

The second voyage, from 1772 to 1775, was in search of the Great South Continent. This turned out not to exist, but during the voyage Cook and his men became the first explorers to cross the Antarctic Circle, which they eventually did three times. The journey also took in Easter Island, Dusky Sound (in New Zealand), a sighting of South Georgia, and the New Hebrides (now Vanuatu).

The third voyage, from 1776 to 1780, was to the North Pacific, taking in Alaska and the Hawai’ian islands.

The various naturalists and artists that accompanied Cook on his expeditions amassed a valuable collection of plants and artistic renderings of people, animals and landscapes. Some wonderful examples of these are on display in the exhibition galleries. Especially noteworthy are the artistic works of William Hodges, described by Sir David Attenborough as the first academically-trained artist to go on such an expedition (“and it shows”). Hodges accompanied Cook on the second voyage and one of his pictures shows the expedition’s ships dwarfed by the vast icebergs of the Antarctic, something which the general public would never have seen before.

There is no doubt that the voyages must have been exceedingly arduous, and fatalities were numerous. Deaths through illness on the first voyage included Sydney Parkinson, famous for his drawings of Maori people and the first European to draw a kangaroo. Another artist, Alexander Buchan, died on Tahiti from an epileptic seizure. The surgeon William Monkhouse died during the stopover at Batavia, as did Tupaia – the High Priest of Tahiti – who had helped Cook chart the Tahitian islands (one present-day Tahitian describes Tupaia as a ‘traitor’, though others speak of him more admiringly).

Men were also lost in violent encounters. In 1773, ten men were killed in a dispute at Queen Charlotte Sound. Cook himself was killed by an angry crowd at Kealakekua Bay, on the island of Hawai’i.

What seems clear from the exhibition is that the scientific work carried out by the expeditions was secondary to the unstated goal of colonisation. Whilst the first voyage had the publicly-stated goal of observing the Transit of Venus, Cook in fact had secret orders to search for “convenient” land. The exhibition includes testimony from the native peoples of the territories visited by Cook, one of whom notes that the expeditions described Australia as “terra nullius” – “nobody’s land” – indicating that the non-white natives weren’t counted as people. In 1934, Cook’s house was transported from North Yorkshire to Melbourne, yet increasingly the indigenous people and their supporters are questioning the traditional view of Cook, and Australia Day has now become a day of protest for many.

It has been suggested that Cook was relatively egalitarian for a man of his time, yet the exhibition makes clear that he frequently took native chiefs hostage whenever some piece of naval property went missing. Such an event led to his own killing on Hawai’i, though the exact events of that day are unclear due to contradictory accounts. It was in fact a wealthy naturalist, Joseph Banks, who had paid for his place on the first voyage, who first proposed that Australia – specifically, Botany Bay – be used as the location for a penal colony. It is hard to visit this exhibition and not feel a great sadness at what befell the native peoples of the places Cook visited (much of the worst, of course, came in the wake of Cook’s voyages).

Eventually, some voices of the Enlightenment began to question the wisdom of such ventures and Adam Smith, in his 1776 book The Wealth of Nations, argued in favour of free trade rather than territorial expansion.

In a video display Sir David Attenborough describes Cook as the greatest naval explorer that has ever lived, which seems like a fair assessment in terms of distance travelled, lands explored, and hardships endured. However, his legacy is increasingly under the spotlight. This is an exhibition well worth visiting.

Destroying the soul, by numbers

I think the first time I became aware of metrics in the workplace was between 1990 and 1993, when I was studying for a PhD at the University of Wales, College of Cardiff (now simply ‘Cardiff University’). One day, A4 sheets of paper had appeared on walls and doors in the Psychology Department proclaiming “We are a five star department!” A friend explained to me that this related to our performance in the ‘Research Assessment Exercise’ (RAE), about which I knew nothing. He scoffed at the proclamation, clearly thinking that this kind of rating exercise had little to do with what really mattered in science. I didn’t realise then how right he was. The RAE, though, was used as a determinant of how much research income institutions could expect from government (via the funding councils).

A few years later, in my first full-time lecturing post, at London Guildhall University, I was put in charge of organising our entry to the next RAE. Part of this pre-exercise exercise was to determine which members of staff would be included and which excluded. Immediately this raised the question in my mind: “If the RAE is supposed to assess a department’s strengths in research, then shouldn’t all staff members be included?” Such was my introduction to the “gaming” of metrics. Every institution was, of course, gaming the system in this and various other ways. Those that could afford it would buy in star performers just before the RAE (often to depart not long afterwards), leading to new rules to prevent such behaviour.

At some point, universities also got landed with the National Student Survey (NSS), which consisted of numerous questions relating to the “student experience”, but with most of the impact falling on lecturing staff who, either explicitly or implicitly, were informed that they needed to improve. With the introduction of – and subsequent increase in – tuition fees, students were now seen as consumers, for whom league tables in research and the NSS were sources of information that could be used to distinguish between institutions when applying. The NSS has also led to gaming, sometimes not so subtly – as when lecturers or managers have warned students that low ratings could cost the institution income and thereby worsen the students’ own educational experience.

These changes within universities have been accompanied by another change: an expansion in the number of administrative staff employed and a shift in power away from academics. And academic staff themselves now spend considerably more time on paperwork than was the case in the past.

A new book by Jerry Z. Muller, The Tyranny of Metrics, shows that the experience of higher education is typical of many areas of working life. He traces the history of workplace metrics, the controversies surrounding them, and the evidence of their effectiveness (or lack thereof). As far back as 1862, the Liberal MP Robert Lowe was proposing that the funding of schools should be determined on a payment-by-results basis, a view that was challenged by Matthew Arnold (himself a schools inspector) for the narrow and mechanical conception of education that it promoted.

In the early twentieth century, Frederick Winslow Taylor promoted the idea of “scientific management”, based on his time-and-motion studies of pig iron production in factories. He advocated that people should be paid according to output in a system that required enforced standardisation of methods, enforced adoption of the best implements and working conditions, and enforced cooperation. Note that the use of metrics and pay-for-performance are distinct things, but often go together in practice.

Later in the century, the doctrine of managerialism became more prominent. This is the idea that the differences among organisations are less important than their similarities. Thus, traditional domain-specific expertise is downplayed and senior managers can move from one organisation to another where the same kinds of management techniques are deployed. In the US, Defence Secretary Robert McNamara took metrics to the army, where “body counts” were championed as an index of American progress in Vietnam. Officers increasingly took on a managerial outlook.

The use of metrics found supporters on both the political left and the right. Particularly in the 1960s, the left were suspicious of established elites and demanded greater accountability, whilst the right were suspicious that public sector institutions were run for the benefit of their employees rather than the public. For both sides, numbers seemed to give the appearance of transparency and objectivity.

Other developments included the rising ideology of consumer choice (especially in healthcare), whereby empowerment of the consumer in a competitive market environment would supposedly help to bring down costs. ‘Principal-Agent Theory’ highlighted that there was a gap between the purposes of institutions and the interests of the people who run them and are employed by them. Shareholders’ interests are not necessarily the same as the interests of corporate executives, and the interests of executives are not necessarily the same as those of their subordinates (and so on). Principals (those with an interest) were needed to monitor agents (those charged with carrying out their interests), which meant motivating them with pecuniary rewards and punishments.

In the 1980s, the ‘New Public Management’ developed. This advocated that not-for-profit organisations needed to function more like businesses, such that students, patients, or clients all became “customers”. Three strategies helped determine value for money:

  1. The development of performance indicators (to replace price).
  2. The use of performance-related rewards and punishments.
  3. The development of competition among providers and the transparency of performance indicators.

Critics of this approach have noted that not-for-profit organisations often have multiple purposes that are difficult to isolate and measure, and that their employees tend to be more motivated by the mission rather than the money. Of course, money does matter, but that recognition should come through the basic salary rather than performance-related rewards.

Indeed, evidence indicates that extrinsic (i.e. external to the person) rewards are most effective in commercial organisations. Where a job attracts people for whom intrinsic rewards (e.g. personal satisfaction, verbal praise) are more important, the application of pay-for-performance can undermine intrinsic motivation. Moreover, the people doing the monitoring tend to adopt measures for those things that are most visible or most easily measured, neglecting many other things that are important but which are less visible or not easily measured. This can lead to a distortion of organisational goals.

Many conservative and classical liberal thinkers have criticised such ideas, including Hayek, who drew a comparison with the failed attempts of socialist governments (notably the Soviet Union) at large-scale economic planning. Nonetheless, from Thatcher to Blair, from Clinton to Bush and Obama, politicians of different hues have continued to expand metrics further into the public domain.

Muller is not entirely a naysayer on metrics, noting that they can sometimes genuinely highlight areas of poor performance. In particular, he notes that in the US there have been some success stories associated with the application of metrics in healthcare. However, closer examination shows that these successes owe more to their being embedded within particular organisational cultures than to measurement per se. Indeed, these successes seem to be the exceptions rather than the rule, with other research showing no lasting effect on outcomes and no change in consumer behaviour. Research by the RAND Corporation found that stronger methodological design in studies was associated with a lower likelihood of identifying significant improvements associated with pay-for-performance.

What is clear – and Muller looks at universities, schools, medicine, policing, the military, business, charities and foreign aid – is that metrics have a range of unintended consequences. These include various ways in which managers and employees try to game the system: teaching to the test (education), treating to the test (medicine), risk aversion (e.g. in medicine, declining to operate on the most severely ill patients), and short-termism (e.g. police arresting the easy targets rather than chasing down the crime bosses). There is also outright cheating (e.g. teachers changing the test results of their pupils).

Incidentally, another recent book, The Seven Deadly Sins of Psychology (by Chris Chambers), documents how institutional pressures and the publishing system have incentivised a range of behaviours that have led to ‘bad science’. For instance, ‘Journal Impact Factors’ (JIFs) supposedly provide information about the overall quality of the research that appears in different journals. Researchers can cite this information when applying for tenure or promotion, or for their inclusion in the UK’s Research Excellence Framework (formerly the RAE). However, only a small number of publications in any given journal account for most of the citations that feed into the JIF. Another issue with JIFs concerns statistical power – the likelihood that a study will detect a genuine effect, which depends on the sample size, the size of the effect, and the significance threshold used. It turns out that there is no relationship between the JIF and the average level of statistical power within a journal’s publications. Worse, high impact journals have a higher rate of retractions due to errors or outright fraud.
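
To make that dependence concrete, here is a small illustration using the statsmodels Python library (the numbers are textbook examples, not figures from Chambers’ book):

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Power of a two-sample t-test with 20 subjects per group, assuming
    # a medium effect size (Cohen's d = 0.5) and significance level .05.
    power = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
    print(f"Power with n = 20 per group: {power:.2f}")  # roughly 0.34

    # Sample size per group needed to reach the conventional 80% power.
    n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
    print(f"n per group for 80% power: {n:.0f}")        # roughly 64

A study like the first example would miss a genuine medium-sized effect about two-thirds of the time, which is what makes the absence of any relationship between JIF and power so damning.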

A further impact of metrics is the expansion of resources (people, time, money, equipment) needed to do the necessary monitoring. Even the people being monitored must give up time and effort in order to produce the documentation that satisfies the system. And as new rules are introduced to crack down on attempts to game the system, so the administrative resources are expanded even further. This diversion of resources obviously works against the productivity gains that the application of metrics is supposed to produce.

I was less convinced by the penultimate chapter in Muller’s book, in which he addresses transparency in politics and diplomacy. He speaks scornfully of the actions of Chelsea Manning and Edward Snowden in disclosing secret documents, which he says have had detrimental effects on American intelligence. Undoubtedly, transparency can sometimes be a hazard – compromise between different parties is made harder under the full glare of transparency – and there is a balance to be struck, but I would argue that the scale of wrongdoing revealed by these individuals justifies the actions they took, and for which they have both paid a price. In the UK, as I write, there is an ongoing scandal over the related issues of illegal blacklisting of trade union activists in the construction industry and spying on political and campaigning groups (including undercover police officers having sexual relationships with campaigners). A current TV programme (A Very English Scandal) concerns the leader of a British political party who – in living memory – arranged the attempted murder of his former lover, and was exonerated following an outrageously biased summing up in court by the judge. And of course the Chilcot report into the Iraq war found that Prime Minister Blair deliberately exaggerated the threat posed by the Iraqi regime, and was damning about the way the final decision was made (of which no formal record was kept).

However, as far as the ordinary workplace is concerned, especially in not-for-profit organisations, the message is clear – beware of metrics!

Book review: Behind the Shock Machine (author: Gina Perry)

One evening in 1974, at a home in New Haven, the family of the late Jim McDonough gathered around their television to watch The Phil Donahue Show. To their horror, a piece of 1960s black and white footage was being shown in which Jim was having electrodes attached to his body. Jim was apparently the learner in an experiment whereby he would receive increasingly strong electric shocks whenever he failed to deliver a correct response to a question.

Bearing in mind that Jim had died of a heart attack in the mid-60s, his widow Kathryn must have been concerned that this extraordinary piece of research might have been connected to his death. She wrote to the show’s producer, asking to be put in touch with the man who’d run the experiment, Dr Stanley Milgram. Shortly afterwards, she received a phone call from Milgram, who provided reassurance that her late husband had not in reality received any electric shocks at all. He also sent her an inscribed copy of the book that had caused the media interest: Obedience to Authority.

The Milgram shock experiments are the subject of an enthralling book by psychologist Gina Perry, published in 2012: Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments. By sifting through Milgram’s archive material, as well as interviewing some of his experimental subjects and assistants (or their surviving relatives), Perry shows that the popular account of the shock experiments, as promoted by Milgram himself, is but a pale and dubious version of what really happened and what the research means.

The popular account goes as follows. Milgram wanted to know whether the behaviour of the Nazis during the Holocaust was due to something specific about German culture, or whether it reflected a deeper aspect of humanity. In other words, could the same thing happen anywhere? In order to investigate this question, Milgram created an experimental scenario in which people would be pressured to commit a potentially lethal act. His subjects were recruited through newspaper advertisements in which they were promised payment for taking part in a study of learning and memory. As each subject arrived at Milgram’s laboratory at Yale University, a second subject (actually a paid staff member) would also appear. The experimenter (also a paid confederate of Milgram’s) explained that they were to take part in a study of the effects of punishment on learning. One of them would be the teacher and the other the learner. The two men drew pieces of paper to determine which would be which, but this was of course rigged: the subject was always the teacher. The teacher was told that any shocks received by the learner would be painful but not dangerous. He would then receive a small shock himself as an illustration of what he would potentially be delivering to the learner. During the experiment, the teacher and learner would be in separate rooms, unseen to each other but connected by audio.

At the beginning of the experiment, the teacher would read out a list of word pairs to the learner. He would then read out each target word followed by four words, only one of which had been paired with the target. The learner would supposedly press a button corresponding to the word he thought was correct. If the learner picked the wrong word, the teacher had to flick a switch on a machine to deliver an electric shock to the learner. The level of shock increased with each wrong answer, rising in 15-volt steps from 15 volts to 450 volts. The highest settings on the shock machine were labelled ‘Danger: Severe Shock’, with the final two switches marked simply ‘XXX’. The experimenter was always present to oversee the teacher and, if the teacher began to show concern or balk at giving further shocks, would deliver an increasingly stern series of scripted commands requiring the teacher to carry on.

In the first version of the experiment, the teacher heard nothing from the learner, but in subsequent versions the learner would begin to call out in increasing distress once the 150-volt level was reached. There were further variations, too, such as having the learner and teacher in the same room, having the teacher force the learner’s hand onto the shock plate, changing the actors, moving the location to a downtown office building, having the learner mention heart trouble, and using female subjects. The experiments began in August 1961 and concluded in May 1962. During the last three days of the experiments, Milgram shot the documentary footage that would form the basis of his film Obedience.

Obedient subjects were defined as those who delivered the highest possible supposed shock of 450 volts. In the standard scenario, about 65% of subjects were classed as obedient, though some of the variations (such as having teacher and learner in the same room) produced markedly lower levels of obedience. By the time Milgram came to write up his research, the Nazi Adolf Eichmann had been tried and hanged in Israel, and Hannah Arendt had coined the phrase “the banality of evil”. The observation that dull administrative processes could lie behind the most atrocious war crimes was an ideal peg on which Milgram could hang his research. In an era when the Korean war had given rise to concerns about brainwashing, the concept of ‘American Eichmanns’ took hold.

Milgram’s first account of his work was published in October 1963 in the Journal of Abnormal and Social Psychology, but his famous book – still in print – did not appear until 1974. Both the original publication and the later book met with a mixed response from academics. Critics raised ethical concerns about the treatment of his subjects, pointed to the lack of any underlying theory, and wondered whether it all really meant anything. Wasn’t Milgram just showing what we all knew already, that people can be pushed to commit extreme acts? In response, Milgram pointed to a survey of psychiatrists who had predicted that almost none of his subjects would be willing to cause extreme harm to the learners. He also cited follow-up interviews with subjects conducted by a psychiatrist, Dr Paul Errera, which concluded that they had not been harmed and that most endorsed Milgram’s research.

In his 1974 book, Milgram provided the theory to explain the behaviour of his obedient subjects: the notion of the ‘agentic shift’, according to which the presence of an authority figure leads people to view themselves as the agents of another person, and therefore not responsible for their own actions. I can recall reading Obedience to Authority as a student in the late ’80s and being confused. To me, the agentic shift theory didn’t seem to explain anything; it simply raised the further question of why people might give up their sense of responsibility in the presence of an authority figure. Gina Perry points out that the theory also fails to explain the substantial proportion of people who didn’t obey, not to mention the discomfort, questions and objections of those who nonetheless ended up delivering the maximum supposed shock (objections that figured in Milgram’s earlier publications but much less so in his book). And in suggesting that ordinary Americans could behave like Nazis, Milgram was ignoring the entire counterculture movement, especially the widespread protest and civil disobedience over America’s involvement in the Vietnam war.

But Perry goes deeper than merely questioning Milgram’s theory, as many other academics have also done. Her research in the archives revealed that, over time, Milgram’s paid actors began to depart from their script. The experimenter was provided with a series of four increasingly strict commands to give when faced with a subject who was reluctant to continue; if the subject still refused, the experimenter was expected to call a halt. But John Williams, Milgram’s usual paid experimenter, began to extemporise commands of his own and to cycle back through the list of four. In other words, some subjects were classed as obedient when in fact they should have been classed as disobedient.

It also turns out that many or most of Milgram’s subjects were not told straight away that the study they had taken part in was a hoax: in a relatively small community like New Haven, Milgram didn’t want word to get about. Despite this, in his published reports he referred to “dehoaxing” the subjects at the end of the study. Subjects were eventually sent a report on the study – including the fact that the procedure had been a hoax – some while after the entire series had been completed. However, for whatever reason, some of the people Gina Perry tracked down said they had never received such a report. They had gone most of their lives not knowing the truth.

Worse than this, and contrary to what Milgram claimed, it is clear that some subjects were unhappy about the nature of his research, either at the time (Williams appears to have been assaulted on more than one occasion) or later on. Some appear to have been adversely affected by their participation. In some cases, Milgram did manage to mollify people by taking them into his confidence, and he then cited them as evidence that subjects were happy to endorse his studies. Some of Milgram’s subjects were Jewish – an irony, given Milgram’s linkage of his research to the Holocaust (Milgram himself was Jewish, though this was not something he disclosed in his earlier writings).

It also turns out that the clean bill of health given to Milgram’s research by the Yale psychiatrist Paul Errera was not quite what it seemed. In fact, Errera’s interviews with some of Milgram’s subjects had taken place at the insistence of Yale University, after complaints had been made. Only a small proportion of subjects were contacted, and an even smaller number agreed to be interviewed, yet in his book Milgram referred to these – against Errera’s wishes – as the “worst cases”, who had nonetheless endorsed his work. Milgram actually watched the interviews from behind a one-way mirror and, in some instances, revealed himself to the subjects and interacted with them. Perry suggests that Errera’s endorsement of Milgram’s work may have been influenced by a reluctance to derail the career of a young psychologist who clearly had so much riding on his controversial research. In any case, Milgram’s presence at the interviews was hardly ideal.

Milgram moved to Harvard University in July 1963. Perhaps mindful of the controversy surrounding his work, he avoided personal contact with subjects in his research there. In 1967, having been denied tenure at Harvard, he left for a job at the City University of New York. Perry notes that, with both staff and students, Milgram could alternate between graciousness and rudeness, and she wonders whether his mood swings might have been influenced by his drug use. This doesn’t feature heavily in the book, but Milgram had used drugs since his student days, including marijuana, cocaine and methamphetamine. When writing Obedience to Authority he used drugs to help overcome writer’s block, and occasionally kept notes on the influence of his intake on the creative process.

Did his research ultimately tell us much at all? It seems unlikely that it really sheds light on the Holocaust, an event involving the actions of people working in groups and in the grip of a specific ideology. By contrast, Milgram’s subjects were acting as individuals in a highly ambiguous context. On the one hand, they believed they were being instructed by a scientist, a highly trusted figure whom they would have been reluctant to let down. On the other hand, the setup didn’t make sense: why was it necessary for a member of the public to play the role of the teacher? Why didn’t the experimenter do this himself? Moreover, some of Milgram’s subjects were aware that punishment is not an effective method for making people learn, something well established by the time he ran his studies. One of Milgram’s research assistants, Taketo Murata, conducted an analysis showing that the subjects who delivered the maximum shock were more often those who expressed disbelief in the veracity of the setup. Whilst Milgram argued that subjects’ post-study responses couldn’t be trusted, he was nonetheless happy to use them when it suited him.

Gina Perry shows that in private Milgram often shared many of the doubts that critics voiced about his work, including their ethical concerns. Publicly, though, he defended it strongly, and increasingly so with the passage of time. He wanted to be ranked among the greats of social psychology, including his own mentor Solomon Asch, whose work on conformity was an obvious precursor to Milgram’s. It seems, though, that Asch eventually stopped responding to Milgram’s letters, presumably having grown uncomfortable with the ethical issues surrounding the shock experiments. Another famous psychologist, Lawrence Kohlberg, had watched some of the experimental trials alongside Milgram behind the one-way mirror, and subsequently regretted his own passivity in the face of unethical research. In a letter to the New York Times he described Milgram as “another victim, another banal perpetrator of evil”.

What about Milgram’s paid actors, Williams and McDonough? Were they, too, culpable of perpetrating evil? Perry is sympathetic to these men. Like the subjects, they had been duped: they needed the money and had responded to an advertisement for assistants in a study of learning and memory. Possibly, as the trials proceeded, they became desensitised to what was happening; certainly, they received two pay rises from Milgram in recognition of their efforts on his behalf. Another actor, Bob Tracy, took part in some trials but quit after an army buddy arrived at the lab and he found he couldn’t go through with the deception. But what kind of pressure were Williams and McDonough under? We know that Williams was assaulted more than once in the lab. And both men were dead of heart attacks within five years of the research ending – an irony, given that many of the experiments featured the learner stating at the outset that he had a heart problem. There is also evidence that McDonough experienced a heart ‘flutter’ during one of the trials. Did Milgram know about his heart condition and deliberately incorporate it into the experimental scenario?

In conclusion, it is undeniably true that human beings, under certain circumstances, can do terrible things. But Gina Perry has done us a great service by showing that the behaviour of authority figures does not automatically turn us into unthinking automata who will commit atrocities. Through an exemplary piece of detective work, she has shown that the people who served as Milgram’s subjects were, by turns, concerned, questioning, rebellious and even disbelieving. Some, though, were affected by the experiments for years afterwards. After all, if you had been pressured into delivering very painful – and possibly lethal – shocks in the name of science, only to be told that you were the person being studied, or perhaps never told that no real shocks had been delivered, how would you feel about yourself later on?

Note: Gina Perry is also the author of a new book ‘The Lost Boys’, which I hope to write about in due course.