Money Matters

The risk of harm and the greater good

By Tim Harford
29 June 2020

While the world celebrated the discovery that the steroid dexamethasone was an effective treatment for Covid-19 patients on ventilators, my physician friend was unimpressed. It was obvious that dexamethasone would work, she opined; intensive care units should have been using it as a matter of course.

Perhaps. But that is what doctors thought about using similar steroids to treat patients with head injuries. Steroids seemed so obviously beneficial that many regarded a clinical trial as unethical. Overcoming these objections, the Corticosteroid Randomisation After Significant Head Injury trial put the steroids to the test — only to discover that, far from being lifesavers, they raised the risk of death.

From steroids to social policy, what works and what doesn’t is often surprising. That is why rigorous experiments in real-world settings are invaluable.

This was the true contribution of the much-vaunted “behavioural insight” teams that became fashionable about a decade ago in the UK, US and elsewhere. Behavioural scientists have some useful ideas, but like doctors they are often wrong. More useful than any “insight” was the increased use of randomised trials in policymaking.

It is surprising how far one can push the idea, as Ben Goldacre describes in the International Journal of Epidemiology. Should we have had a randomised trial, in the 1960s, of whether beating boys with canes discouraged them from smoking? The idea of caning children is repugnant today, and rightly so. But then it was commonplace — so it might have been worth checking if it worked as advertised.

A non-randomised study was in fact conducted in 1962. But as Archie Cochrane, a pioneer of evidence-based medicine, wrote, “when one thinks about it, the results do not tell us anything at all”.

There are, of course, examples of randomised trials that clearly risked harm to the participants. One 1958 experiment lured 200 children into simulated refrigerators rigged with internal video cameras; the idea was to watch what the children tried to do to escape.

It is an unnerving study that distressed some children and for which no meaningful consent could have been obtained. On the other hand, it informed improvements in fridge safety that have plausibly saved several hundred lives.

A modern parallel would be a trial that deliberately infected healthy volunteers with coronavirus to see whether there was a way to trigger an immune response with a low-risk dose. Exactly this idea was proposed to me in March by a senior adviser to the UK government, who grumbled that doctors refused to approve the scheme. Unlike the toddler-in-a-fridge study, informed consent would have been easy to obtain. But otherwise the ethics are similar: a clear risk of harm to the study group, with the greater good in mind.

The idea is an old one; before we had a true vaccine for smallpox, people were “variolated” with a controlled exposure to the deadly virus. Variolation was truly dangerous, but broadly effective.

Trials of caning, fridge-escapes and variolation worry us not because of the trial but because of what is being tested. But often experiments make us uneasy for no good reason. A recent study by Michelle Meyer and others described hypothetical cases to survey respondents and asked them if the behaviour was appropriate.

Imagine, for example, a clinical director who tried to reduce hospital infections by putting up posters with a safety checklist for medical staff. No problem, right? Or imagine he or she instead puts the checklist on the back of the badges worn by doctors and nurses. Also, surely, no problem.

Now imagine instead that the clinical director decides to run an experiment, randomly assigning patients to be treated in a room with the poster or by a doctor wearing the badge.

When Ms Meyer and her colleagues described one of these scenarios, few people were concerned about either the poster or the badge, but a substantial minority who were told about the randomised experiment raised objections.

It is unclear why we have this aversion to randomising between two unobjectionable alternatives.

The most straightforward explanation is that people either object to the idea of being arbitrarily manipulated, or they are unnerved by the realisation that the clinical director doesn’t know what he or she is doing.

But while understandable, these are not good arguments against experimentation. If decision makers are fallible — which they are — then randomised trials are a solution to, not a symptom of, that problem. So researchers should work hard to demonstrate the trustworthiness of their experiments. Securing real consent is ethically invaluable, but it is also good public relations.

And policymakers should embrace randomisation. Steroids were surprisingly effective in treating ventilated Covid-19 patients, and surprisingly harmful in the head injuries trial. There are plenty of policy interventions with a similar capacity to surprise.

The equivalent of dexamethasone for crime, or early-years education, or tax compliance, may be out there. Randomised trials, however queasy they may make some of us feel, are a good way to find out.

Copyright The Financial Times Limited 2020