Children and incompetent adults
The spectre of experimentation on people who cannot validly give or refuse consent is terrifying. The Declaration seeks to exorcise it by tight controls. Such potential subjects cannot be used (my word, designed to raise Kantian hackles) in a research study that has no likelihood of benefiting them personally, unless it is intended to promote the health of the population from which the subject comes (children, say, or adult patients with Down syndrome); the research cannot be done with competent subjects instead; the study entails only minimal risk and burden; and informed consent has been obtained from the relevant proxy decision-maker, where one exists (Article 27).
What happens, though, where the test for lawful proxy consent is the ‘best interests’ of the individual patient, and there’s no conceivable benefit to the individual subject? This is very common indeed. Take studies of normal child physiology, for instance. Typically blood samples will be taken. That poses only minimal risk, but it’s uncomfortable, and confers no benefit whatever on the screaming child. To say that participation is in the child’s best interests is to take an unfashionably wide, communitarian view of ‘best interests’. The result has been massive under-testing of products for use in children. Many of the drugs used routinely in paediatric medicine are, for this reason, not specifically licensed for use in children. The studies that would be necessary to get the regulator’s rubber stamp haven’t been thought to be ethically or legally appropriate.
Everyone has seen adverts for medical research participants. Depending on the country, they might say ‘Remuneration of expenses’, or, more brazenly, ‘Generous compensation for your time and trouble’. But everyone knows what’s being offered. Money is given as an inducement to allow bodies to be experimented on. And people are induced.
The Declaration itself is silent about the payment of money to research subjects. It contents itself with saying that researchers must ensure that consent is real. There are many things that might taint the reality of consent. Money is one of them, says the orthodoxy. That orthodoxy is embodied in many local protocols, which typically insist that payment is restricted to the reimbursement of expenses and compensation for the time taken. Those protocols are ignored. The law, whose main concern is coercion of a darker kind, turns a Nelsonian blind eye. Across the world there’s payment for participation. Research on healthy volunteers would stop if there weren’t. There aren’t enough genuine altruists out there, happy, just for the love of science and their fellow men, to give up their afternoons and their blood.
There’s nothing offensive about this pragmatism. The relationship between money and voluntariness is too complex to be summarized in one or even many paragraphs of a code. Most people wouldn’t go to work if they weren’t paid, and yet rarely is it suggested that there should be laws to stop them working. Workers in dangerous occupations tend to get paid more: again it is rarely suggested that compensation for risks is contrary to public policy.
Project review and the outlawing of mavericks
The amateurish, ad hoc ‘let’s-start-it-and-see-how-it-goes’ maverick is dead. The Declaration has certainly done its best to see him off. It seeks to entrench its own principles by demanding that research is done systematically and thoughtfully, with due regard to the ethical issues at stake. Its main entrenching tool is the research ethics committee, to which a research protocol must be submitted, by which the protocol must be approved, and which has the right to monitor the progress of the research.
Each country has a slightly different way of arranging this regulatory oversight. All too often the policemen of the ad hoc and amateurish are ad hoc and amateurish themselves: sometimes failing to understand the science they are supposed to be regulating, sometimes too cavalier about the ethics.
Perhaps the real problem is a crisis of identity: they don’t know what they are meant to be. Are they meant to be the eyes and conscience of the public? Or the guardian of the rights of the research subjects? Is there a difference between these functions? How should the interests of researchers and potential beneficiaries be represented? Does a committee discharge its function if it goes through a thorough, transparent, and consistent procedure, or is there a notional ‘right answer’ to each problem it faces? There is no consistency within nations, let alone worldwide, about these central questions.
It’s not surprising that academic and political observers of the committees haven’t been deafening in their applause.
Perhaps the most embattled committees are the Institutional Review Boards (IRBs) in the US, and the most damaging criticisms levelled against them are of conflicts of interest. Many have spent a lot of time in bed with big pharmaceutical and medical device manufacturers, and have come away with some nasty ethical diseases. A 2006 study of IRB members at university medical centres showed that over a third rarely or never disclosed their conflicts of interest to other members of the IRB, and over a third had financial ties to relevant industries. The US government has promised to clean up the IRBs. It remains to be seen how successful it is.
There is an infinite amount of suffering in the world. There is a distinctly finite amount of resources to deal with it. How do we decide who gets what? The dilemmas are agonizing. One man’s treatment is another man’s denial of treatment. To save X is to condemn Y. Medical creativity has made the problem worse. If the options available to doctors were now what they were 100 years ago, we would stand a reasonable chance of being able to give everyone the treatment available. But so much more can now be done. Each new advance generates a new moral dilemma.
Each dilemma is politically explosive. Should life-saving and life-enhancing innovation be available only to the rich who can afford to pay for it themselves? If there’s a state health service, should the government say frankly: ‘We’ll provide the basics. If you want anything exotic, you’d better go private’? Or should it say instead: ‘We can’t give everyone everything for free, but to show that we’re true democrats, we’ll give some patients the world-class, cutting-edge treatment’? But if so, which patients? And if the obligation is to provide the basics, what are those basics?
We tend to look at these problems through exclusively western, or at least narrowly national, eyes. About 40,000 children died today of hunger. Tens of thousands more died of malaria, and tens of thousands more of waterborne infectious diseases. Almost all of these were preventable. The money spent on a few heart transplants in elderly westerners would have saved almost all those lives. Who’d get involved in all this if they could possibly help it? Well, not the judges. They’ve made that very clear. Wherever in the world an advocate stands up to suggest that health-care resources have been deployed unlawfully, there’s a sharp intake of judicial breath, followed by the noise of Pilatian hand-washing. When the courts are pressed to give reasons for their non-intervention they come over all democratic, insisting that interfering with health-care resource allocation policy would be to usurp unacceptably the function of the legislature, or, where the question is whether a particular treatment should be provided to A rather than B, that this turns on clinical judgment, and accordingly it would be inappropriate to interfere.
They are not consistently deferential, of course. Legislative decisions are often pre-empted or struck down in the course of judicial review, usually by reference to a constitution or a set of human rights principles. And in many jurisdictions that bulwark of judicial deference to clinical opinion, the Bolam test, has been demolished or eroded. Judicial reluctance to say ‘Yes’ or ‘No’ to life-sustaining treatment has more to do with human reluctance to make hard decisions than with legal principle.
Judges want to be able to sleep at night. And who can blame them? The people who do blame them are the desperate litigants. There are few more desperate than the parents of 10-year-old ‘Child B’. She suffered from leukaemia. The prognosis didn’t look good, but there was one possible treatment, which had a 20 per cent chance of success. The health authority refused to fund it, and the parents challenged that refusal in the English Court of Appeal. They failed. Unless the authority could be shown to have acted irrationally in its decisions about resource allocation, the court could not interfere (R v Cambridge Health Authority ex p B (1995)).
That’s how it works in most places in the world.
It’s hard to be irrational. Most claims of irrationality in the arena of health-care funding are framed not as an outright assault on the outcome (such assaults are generally hopeless), but as attacks on the decision-making process. Suppose that an authority decides not to fund gender reassignment surgery for transsexuals. If its reason is simply that it thinks it more important to buy kidney dialysis machines with the money that it would otherwise spend on the surgery, the decision is likely to be unimpeachable. But if the same decision is reached because the authority thinks that, as a matter of public policy, sex change ought to be discouraged, the outcome might be very different.
Discrimination itself isn’t unlawful. It’s essential. The law simply requires discrimination to be transparent and reasonable. In most jurisdictions, for instance, it wouldn’t be unlawful to deny cardiac bypass surgery to smokers as long as the decision was justified carefully. The justification wouldn’t be hard. The success rate for the surgery is significantly lower for smokers. Put another way (the utilitarian way beloved of hospital accountants), you get fewer Quality Adjusted Life Years per dollar when you spend your dollars on smokers. The surgery is not good value for money.
Human rights legislation has had surprisingly little impact on the law of health-care resource allocation. One might have thought that, in countries that apply the European Convention on Human Rights, Article 2 (which imposes a liability on states to institute measures to protect life), Article 3 (which prohibits inhuman and degrading treatment), and Article 8 (which, broadly, protects autonomy and gives people a qualified right to live their lives as they wish) might have made decisions about health-care funding rather more justiciable. But it hasn’t happened.
Article 14 prohibits discrimination in the way that the other Convention rights are recognized or effected, but it’s hard to point to a case where Article 14 would give a claimant a remedy but the domestic law would not. The transsexuals whose surgery had been denied on the basis of public policy would be able to frame their claim in terms of Article 14, but why bother? In most western countries discrimination of that sort is irrational and unlawful without Article 14’s help. Some countries, of course, aren’t so enlightened, and would endorse a public policy of discrimination. But although the Article 14 challenge might succeed against them in the European Court of Human Rights, the victory would be a pyrrhic one. The errant country would just be less candid in future about its reasons for denying the surgery.
So much for policymaking. What about decisions about individual patient care?
A patient in permanent vegetative state (PVS) lies on the ward, being fed through a nasogastric tube. If the diagnosis is right, by definition she does not get, and cannot ever get, anything at all out of life (although her family may get some comfort from visiting her). It’s very expensive to keep her there. She might lie there for years. Her existence is killing and disabling lots of perfectly salvageable patients. She is, in particular, a lethal parasite on the patient in the bed next to her. That patient, a 35-year-old mother of four, has an entirely curable type of cancer. But the hospital doesn’t have the money to pay for the necessary drugs.
Should one kill the PVS parasite (who many would say was really dead already) so that the mother can live? Well, perhaps. But the law in most jurisdictions (with a wary eye on the consequences of saying that one human life is worth more than another) will not say so. Indeed the UK House of Lords, in Airedale NHS Trust v Bland (1993), said that in deciding whether to withdraw life-sustaining treatment from the PVS patient, it was illegitimate to take into account the funds that would be released to treat others. Most other countries agree.
Let’s examine that. It’s lawful to have in place a policy that says that one will treat a class of patient with condition X, but not one with condition Y. It’s lawful for the clinicians on the ward with the PVS and the cancer patient to decide on clinical grounds to maintain the PVS patient but not the cancer patient, or even to say to the cancer patient: ‘Sorry. We’ve run out of money in this year’s budget. You’ll have to die.’ If the cancer patient says to a court: ‘It’s irrational to condemn me and save the PVS patient,’ she’ll get short shrift. The court won’t interfere.
Some judges are unhappy with this abdication of responsibility. Judges are, after all, paid to judge. They already make awesome decisions about the withdrawal of life-sustaining treatment; they sometimes (for instance in cases involving the separation of conjoined twins, where the separation will kill one but save the other) weigh one life against another (although they typically protest, unconvincingly, that that’s not what they’re doing). Hospital funding committees have to take decisions about whom to treat and whom to deny. They don’t do that with the benefit of much more information or skill than a judge could, with the help of expert evidence, bring to bear on the same questions. Individual clinicians no doubt take financial considerations into account when deciding whether to treat X rather than Y; it’s just that the law as it presently stands requires them to deny they’re doing it. If clinicians can do it, why can’t judges? And wouldn’t it be healthier if clinicians were encouraged to be honest about the basis of their decisions?
The problem, of course, is not just a lack of judicial will or expertise. It’s not, either, that it would be hard to devise a system of substantive law that did the job. The real problem is the old one of the floodgates. Make it too easy to challenge funding decisions, and the courts would be swamped by patients and their relatives scrabbling for the money. Very often the substantive law is shaped by practical considerations: health-care resource allocation is a classic example.
The end of life
A terminally ill patient lies in a hospital bed. A doctor comes in and stands beside her bed. He draws the curtain around, so that no one can see what he’s doing. He takes an instrument out of the bag he’s holding. He does something to the patient using the instrument. Whatever he does causes the patient to die.
What should the law do?
The answer, of course, is that it depends on many, many things. Lots of crucial information is missing. We’d need to know where the hospital was. If this is a case of euthanasia or assisted suicide, there are some jurisdictions where it would not be unlawful. If it was euthanasia, and took place somewhere where euthanasia was lawful, we’d need to know whether the procedures prescribed by the euthanasia law had been followed. Those procedures might include certification by independent practitioners that the patient was terminally ill within the meaning of the relevant law; that the patient had voluntarily requested euthanasia, having been fully informed of the prognostic and palliative facts; that there was no undue influence on the patient from relatives; that there had been a ‘cooling-off’ period since the request for euthanasia; that the request was signed and duly witnessed; and so on. We’d want to know what instrument the doctor used. If it was a syringe, and he’d given an injection that caused death, we’d want to know what was in the syringe. If it was potassium chloride, we’d probably want to call the police. A bolus of potassium chloride stops the heart immediately. It has no therapeutic use at all, unless you think that death is a type of healing. But even if the agent were potassium chloride, we couldn’t immediately conclude that the doctor was guilty of murder (which is killing someone, intending to kill them or to cause them serious bodily harm). The doctor might have injected the drug thinking that it was something benign. In that case we’d be thinking about gross negligence manslaughter, and we’d have to ask questions such as: Did the doctor draw up the drug himself? And if so did he check the ampoule sufficiently carefully? (It sounds as if we’d have trouble establishing that.) Was he handed the syringe, ready filled, by a nurse? 
If so, does the law say that it’s fine for him to assume that the nurse will have done her job properly, or is it grossly negligent (or merely negligent) to have delegated that duty? If the nurse handed him a syringe full of potassium chloride knowing that he would inject it into the patient with fatal consequences, is she guilty of murder?
If the syringe was full of morphine, some of the same questions would be asked, but there might, depending on the dose and the patient’s condition and clinical history, be some others. The doctor might be able to take refuge in the doctrine of double effect, which distinguishes between intention and foresight, saying that if you do an action with intention A, but knowing that B might happen too, then you may be able to escape criminal liability for B.
Here, if the doctor injected morphine with the intention of relieving the patient’s pain, but knowing that the dose required for proper pain relief might tip the patient into the grave, he would not be guilty of murder.
Perhaps, though, the instrument that the doctor took out of his bag was simply a pair of scissors. Perhaps he had used them to cut the tape around a tracheostomy tube that connected the patient to the ventilator that was keeping her alive. Perhaps the doctor had then removed the tube, causing the patient to die.
Those facts would generate a host of other questions. Some we’ve met already. This might be murder or gross negligence manslaughter. It may be, though, that the doctor would have been guilty of assault if he had not removed the tube. The patient might have been entirely mentally capacitous, and might have insisted on the removal. Or the patient might have been incapacitous, but she might have made a binding advance directive (living will), saying that if she got into the state that she was in when the doctor arrived, she wanted to refuse all life-sustaining treatment. It may be that the doctor was withdrawing the feeding tube that was keeping her unlawfully alive, or the catheter through which she was getting the antibiotics that were staving off the deadly bacteria that, were she capacitous, she would see as her merciful friends.
In short, death is a complicated business. Even saying what it is is difficult.
The definition of death
The heart of a hanged person sometimes continues to beat for 20 minutes after the plunge through the trapdoor. Some cells can continue to function for a long time after the body of which they are a part has ceased to breathe and pump blood. We all die slowly and incrementally. Some injuries can wipe out a person’s cerebral cortex. The person will never again be capable of pain, pleasure, communication, or any other sensation. Their relatives often talk about them as if they were dead. And yet the patient’s heart will be beating and their chest rising and falling as they breathe unaided. Is there anything wrong about burying such a patient?
The law has two broad concerns. It wants to ensure that people are irrevocably dead before they are buried, cremated, or their vital organs are harvested for donation. And it wants to protect the sensibilities of relatives and friends. However biologically certain it is that a person will never recover, the traumatized family is unlikely to be happy about shovelling earth onto a heaving chest.
The brain stem-the evolutionarily ancient, vegetative part of the brain-contains, among other things, the respiratory centres which drive ventilation. If the brain stem is knocked out, not only is it immensely unlikely that there will be any higher cortical function (making many people unwilling to distinguish between brain-stem death and whole-brain death), but unaided respiratory function is also impossible. The heart may, however, continue beating for a while. If the patient is ventilated, it may continue to beat for a long time.
There are undoubted advantages in adopting a definition of death based only on demonstration of brain-stem death. It may mean, for instance, that organs can be taken from a patient when they are still being perfused by the patient’s own beating heart, and when the organs are therefore in optimal condition for transplantation. It may mean that resources are not spent ventilating a patient who is certainly doomed and who has no conceivable interest in remaining, in a narrow, biological sense, alive.
These are the sort of considerations that have led the UK, amongst many other jurisdictions, to adopt a definition of death based on brain-stem death. The protocols to be followed in reaching the diagnosis of death are tightly controlled. They always involve demonstration of a patient’s inability to breathe spontaneously, and may be supplemented by other investigations such as cerebral angiograms. The difficulty with legislating in this area is that legislation has to cover all possible cases. We’ve already noted that the heart of a brain-stem-dead patient on a ventilator may beat happily for a long time. Legislators, therefore, have tended to hedge their bets. The US Uniform Determination of Death Act 1980 provides, for instance, that a dead person is one who has sustained irreversible cessation of either ‘circulatory and respiratory functions’ or ‘all functions of the entire brain, including the brain stem’.
Deadly acts and deadly omissions
At the heart of much legal thinking about death and dying is the distinction between acts and omissions. Tony Bland was crushed in a football stadium. Much of his cerebral cortex was wiped out. He went into a persistent vegetative state (PVS). He was insensate, and always would be. He knew nothing of the devoted relatives who, for years, came to sit beside him in hospital. His heart worked, he could breathe, and he had a functioning gut. But that was about the limit of his life. He was kept alive by being fed and hydrated through a tube.
Eventually his family decided that enough was enough. It was time to acknowledge that the Tony they loved had gone. His doctors agreed. The best way to deal with this, they all decided, was to withdraw his feeding tube. Deprived of food and fluids he would soon be dead. By definition, if the diagnosis of PVS was right, he would have no idea what was happening to him. But there was a problem. If the withdrawal of food and fluids amounted to an act, it would be an act done with the intention of causing death. If it in fact caused death, his doctors would be guilty of murder. If, instead of pulling out a feeding tube, they, with identical intent, pressed the plunger of a syringe containing a lethal drug, they would certainly be guilty of murder. What was the difference? The difference, said the UK House of Lords (Airedale NHS Trust v Bland (1993)), was that the withdrawal of food and fluids was an omission.
This has caused lots of brow-furrowing. It doesn’t take much legal sleight of hand to transform an act into an omission and vice versa. If I starve a child to death by refusing to feed it, I should expect a frosty reception to my submission at my murder trial that I was only omitting to do something. And there are various thought experiments devised by philosophers that seek to indicate that there is no distinction of substance between acts and omissions. Perhaps the most famous is the ‘Trolley problem’; perhaps the most accessible is the story of the two wicked uncles. Uncle A stands to gain a huge inheritance if baby C dies. He offers to bath the baby. He pushes its head under the water. It drowns. He is guilty of murder.
Uncle B, too, will be massively enriched by baby C’s death. He too gives the baby its bath. Just as he is reaching out his hand to push C’s head beneath the water, C accidentally knocks her own head on the side of the bath and sinks beneath the water. It would require no effort at all for B to raise C’s head and save her. But of course he’s delighted by this windfall. He stands and watches, rubbing his avaricious hands, as C drowns.
In most jurisdictions B will have committed no criminal offence at all. A few jurisdictions impose a duty of rescue in these circumstances, but they don’t regard failure to rescue as murder. And yet the action of Uncle A is ethically identical to the omission of Uncle B. Isn’t the law being absurd in failing to acknowledge it?
Well, possibly. But many feel that, however many anomalies can be pointed out by imaginative philosophers, there is a distinction of great emotional weight and intellectual utility between acts and omissions. One of the uses was demonstrated in Bland: well-meaning doctors don’t get locked up for refusing to continue pointless treatment.