Medical Law

Causation

The basic rule: ‘but-for’ causation

However rankly negligent a defendant has been, the claim will fail unless some loss has been caused. The basic test sounds common-sensical: it is for the claimant to show that, but for the defendant’s negligence, she would have been spared the injury that she has in fact suffered. So: a patient attends her family doctor, worried about a lump in her breast. The doctor examines the lump and wrongly reassures her that it is benign. In fact she should have been referred for further investigation. Had she been, breast cancer would have been diagnosed, and treatment would have been started. Her chances of complete cure at that point would have been 49 per cent. By the time her breast cancer is diagnosed, the chance of cure has dropped to 5 per cent.

The doctor admits breach of duty, but the claim, insofar as it is a claim for loss of cure, fails. Because her chance of cure was below 50 per cent even at the time of the first consultation, she cannot show on the balance of probabilities that the negligence cost her that cure: statistically she was doomed from the outset. The doctor’s negligence is causally irrelevant.
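To make the arithmetic concrete, here is a short illustrative sketch in Python of the difference between the orthodox all-or-nothing rule and the loss-of-a-chance measure discussed in the next section. The percentages come from the example above; the £200,000 ‘full award’ is a hypothetical figure invented for the sketch, not drawn from any actual case.

```python
# Illustrative only: the percentages come from the breast cancer example above;
# the 200,000 "full award" is a hypothetical figure invented for this sketch.

chance_with_prompt_referral = 0.49   # chance of cure had she been referred at once
chance_at_actual_diagnosis = 0.05    # chance of cure by the time the cancer was found
full_award = 200_000                 # hypothetical damages for the lost cure

# The orthodox all-or-nothing rule: she recovers only if, on the balance of
# probabilities (i.e. more than 50 per cent), the negligence cost her the cure.
orthodox_award = full_award if chance_with_prompt_referral > 0.5 else 0

# A loss-of-a-chance measure (not the English rule in clinical negligence)
# would instead value the claim at the reduction in her chance of cure.
loss_of_chance_award = (chance_with_prompt_referral - chance_at_actual_diagnosis) * full_award

print(orthodox_award)               # 0 -- the claim for the lost cure fails
print(round(loss_of_chance_award))  # 88000 -- 44 per cent of the hypothetical full award
```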

 

 Loss of a chance

 

 This conclusion troubles many. Hasn’t the doctor deprived the patient of something that is capable of grounding a claim in tort? The law commonly compensates claimants who, because of breach of contract, lose a chance of gaining a benefit or avoiding a detriment. In an old English case, a girl paid money to a newspaper in order to enter a beauty contest. Due to the paper’s administrative incompetence (in breach of its contract with her), she was not entered into the contest, and was therefore deprived of her chance (which would not have been better than evens) of winning it. She was entitled to damages: Chaplin v Hicks (1911). If a solicitor’s incompetence robs a client of a (say) 30 per cent chance of succeeding in litigation, it is no answer for the solicitor to say: ‘You’d probably have lost anyway, so there’s no claim against me.’ And yet that is precisely the assertion that lets the negligent doctor escape.

Why the discrepancy? Some would say that there is no discrepancy, pointing to the fact that the newspaper and the solicitor are acting under a contract, and that one might construe those contracts as contracts to take reasonable care to give the client the very chance of which she has been deprived.

But many medical services are provided under contract. Can those contracts not be read in an identical way? And if so, is it acceptable to deprive a National Health Service patient of a remedy when a private patient, in identical circumstances, would get compensation? Some would contend that there’s a difference between the chance of obtaining a benefit (as in the newspaper and litigation case) and the chance of avoiding a detriment (as in the breast cancer case). But that doesn’t work either, for many reasons, one of which is that it is legal child’s play to transform a benefit into a detriment, and vice versa. Who would like to tell the cancer victim that she hadn’t really been deprived of the benefit of living and seeing her children grow up, but had instead merely failed to avoid the detriment of death?

Nothing of any moral substance (and surely the breast cancer case has moral substance in spadefuls) should turn on such distinctions. It brings the law into disrepute.

What is operating here is not logic, but policy, and it would be better for the law’s reputation if that were frankly admitted. The policy, in fact, is a sound, pragmatic one. We’ve seen it already in other contexts: the courts would be hopelessly clogged if damages for a lost chance were routinely allowed in clinical negligence cases. Many (arguably most) medical mistakes cause a patient to lose a chance of something. If loss of a chance were sufficient to allow a claimant to recover damages, one might effectively be doing away with the requirement to prove causation at all in clinical negligence cases: a breach of duty would be followed by judgment for damages to be assessed. Perhaps that sounds just.

But the vagaries of biology being what they are, quantifying loss is often nightmarishly difficult and expensive. Whether the difficulties of some cases should defeat justice in all is a literally moot point.

 

Material contribution to injury and risk of injury

 

The law doesn’t always run scared from biological uncertainty. Sometimes it recognizes that defendants shouldn’t always be able to shelter behind the statement ‘This situation is terribly complex’, effectively getting a windfall from the sometimes banal simplicity of the ‘but-for’ test.

Take an industrial disease case. Over a working lifetime, during the course of his work for several employers, the claimant has inhaled noxious dust. Some employers were negligent in letting him inhale it; some were not. He gets a disabling lung disease, and seeks compensation. He sues the negligent employers. They respond: ‘The but-for test applies. You cannot prove that but for our “guilty” dust you would not have the disease. Who knows exactly what the trigger was, or when the threshold was reached? The innocent dust might have triggered the disease.’

The law, generally, doesn’t like this response. Most jurisdictions have developed a way of compensating a claimant in this position, often by saying that it is sufficient for a claimant to prove that the defendant’s negligence has materially contributed to his injury (Bailey v Ministry of Defence (2009)). Sometimes it goes further, holding that, in some circumstances, a material increase in the risk of a condition will be enough (if the claimant has in fact developed that condition): McGhee v National Coal Board (1973).

Many situations in medical law are analogous to these industrial disease cases. A vulnerable brain might have been exposed to negligent and non-negligent periods of hypoxia, for instance. Or the state of medical knowledge might be such that the experts cannot say that a negligent act probably caused the injury, but are happy to say that it increased the risk of it. While the policy considerations that dictate caution in loss-of-chance cases make the courts wary of accepting analogies with industrial disease, the law in many places is evolving towards acceptance.

 

Consent cases

 

From the point of view of causation, consent cases might look easy. Suppose that a surgeon negligently fails to warn about the risks of a proposed operation. Had the patient been appropriately warned, she would not have consented to the operation. The operation goes ahead, and in the course of it, or afterwards, as a result of it, the patient suffers injury.

The but-for test has no problem with such a case. Whether or not the injuries were those about which the surgeon should have warned the patient, they wouldn’t have occurred but for the negligence, and causation is usually established.

But wait a moment. Take the following case. Its facts are not unusual. A patient goes to see a consultant spinal surgeon. She needs a laminectomy to decompress her spinal cord. The surgeon fails to warn her about a 1 per cent risk of urinary incontinence. She agrees to the operation. The operation is done entirely competently, but the 1 per cent risk eventuates, and the patient is left incontinent. She sues the surgeon. The court finds that had she been properly warned, she would have consented to the operation, but would have pondered for a bit before consenting, and accordingly would not have had the procedure on the day that she had it. Since the 1 per cent risk hovers over every patient undergoing the procedure (and just happened to alight on her when she in fact had the operation), a short delay would probably have meant that she avoided it. So: same surgeon, same counseling, same operation, same operating table, different day. Does she succeed? And if so, should she?

The but-for test, narrowly applied, says that she does and should. And indeed she did in a controversial English case: Chester v Afshar (2005) (although not simply on a ‘but-for’ basis). But many are outraged by this result. The connection between the surgeon’s negligence and the damage suffered by the patient is mistily metaphysical.

There’s a better way to see such cases. The patient did indeed suffer harm, but it was harm not to the nerves supplying her bladder, but to her right to be properly informed: her autonomy right. Violations of human rights, even when they don’t involve physical harm, are routinely compensated.

 

Hypothetical causation

 

Very often clinical negligence claims involve omissions rather than acts. A clinician negligently fails to attend a patient, or fails to arrange a particular investigation. The court then has to determine what would have happened had the clinician attended or the investigation been performed. Usually these questions turn on the but-for test, or a gloss on that test along the lines of material contribution. But commonly things are more complicated.

An example. A paediatrician negligently fails to answer her bleep. As a result a baby suffers an episode of hypoxic brain damage which leaves it irreversibly disabled. The expert evidence is that the only intervention that would have prevented the damage would have been the immediate insertion of a tube into the baby’s trachea. But the paediatrician says (and the court accepts) that, had she attended, she would not have intubated the baby. The court finds that a responsible body of paediatricians would not have intubated: in other words, that it would not have been negligent for the paediatrician, had she come, to have failed to give the only treatment that would have prevented the injury.

Does the claimant succeed? In England, and many other places which hallow Bolam, she does not: see Bolitho v City and Hackney Health Authority (1998). The question is whether, but for the negligence, injury would have been avoided, and negligence is defined according to the Bolam test. The negligence here is causally irrelevant: nothing different would have happened had the doctor rushed diligently to the ward.

This is not the only way of looking at it. The pressure on the non-attending doctor to convince herself (and therefore the court) that she would not have given the only effective treatment must be intense. One might argue that the law should compensate for that pressure by assuming that the doctor would give effective treatment. That argument is easier in a jurisdiction that is not in thrall to Bolam. If Bolam rules absolutely in the law of breach of duty, it is hard to stop it holding sway over causation too.

 

Damage of a sort recognized by the law

 

There is a lot of negligence by doctors, but there are few clinical negligence cases. One of many important reasons for this is that most of the negligence doesn’t cause loss of a type that the law thinks should be compensated: it is usually hurt feelings, upset, or inconvenience.

A patient goes into hospital for a corneal graft. After the operation the surgeon comes to see him on the ward. The good news, says the surgeon, is that the operation went very well, and the patient’s sight will be restored. The bad news, however, is that the hospital’s tissue bank has just told him that the cornea came from a patient who, many years before, had had syphilis. It’s unfortunate that this wasn’t picked up by the tissue bank: it should have been. But, the surgeon goes on, there’s no need to worry.

There is no case reported in the literature of anyone contracting syphilis this way, but just to be absolutely safe, the patient will be given a course of prophylactic antibiotics.

The patient walks out of hospital (for the first time in years not bumping into things as he does so), and sues the hospital.

But what’s the loss? The patient doesn’t have syphilis, and will never get it. He’s just been rather shaken up by the whole experience of being told about the origin of the cornea. No psychiatrist will give that ‘shaking up’ the label of a recognized psychiatric illness.

In most jurisdictions this claim would fail. There’s negligence, the negligence has caused something, but the something is not in one of the categories of compensable damage.

This isn’t because upset, worry, and so on are too nebulous to be quantified. Pain, loss of amenity, and loss of reputation are no easier to value, but are routinely valued in the courts. Rather it is, again, policy. A minimum threshold of severity is arbitrarily imposed to stop the courts from being swamped by claims.

 

Assessing quantum

 

Compensation in negligence cases aims to put the claimant in the position in which she would have been had the defendant not been negligent. In a typical clinical negligence claim there are several elements.

‘Pain, suffering, and loss of amenity’ are valued by reference to published guidelines and reported cases. In England, for instance, quadriplegia was typically valued in 2013 at between £255,000 and £317,500, and the loss of a leg below the knee at between £104,500 and £177,000.

Then there are the heads of claim that are, in theory, capable of more scientific quantification: loss of earnings, travel expenses, the cost of aids and appliances, and the cost of care. Life expectancy (assessed either in relation to the individual patient or, if the patient is likely to die at the time predicted by the actuaries, by reference to population mortality data) is used to calculate the number of years over which future loss will run, a figure that is then discounted to reflect assumptions about the return that the damages, once invested, will yield. It’s a laborious process. Perhaps clinical negligence lawyers deserve their fees after all.
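As a rough sketch of the discounting exercise just described, and nothing more, the following Python fragment turns an invented annual care cost into a lump sum over an assumed number of years at an assumed rate of return. The figures are illustrative and are not taken from any actual case or from the discount rate prescribed in any jurisdiction.

```python
# A minimal sketch of the discounting exercise described above. The annual cost,
# the number of years, and the discount rate are all invented for illustration.

def present_value(annual_loss: float, years: int, discount_rate: float) -> float:
    """Sum each future year's loss, discounted back to today's money."""
    return sum(annual_loss / (1 + discount_rate) ** year for year in range(1, years + 1))

annual_care_cost = 30_000   # hypothetical yearly cost of care
expected_years = 25         # years over which the future loss is expected to run
assumed_return = 0.025      # assumed net yield on the invested lump sum

lump_sum = present_value(annual_care_cost, expected_years, assumed_return)
print(round(lump_sum))      # roughly 550,000 -- noticeably less than 25 * 30,000 = 750,000
```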

 

Research on human subjects

 

The world looked at what Mengele and the other Nazi doctors had done, and said ‘Never again.’ In fact the Nazis were (as some of them pointed out in their subsequent trials) in a long, dishonorable, and well-established tradition of abusive medical research. J. Marion Sims, in the 1840s, repeatedly operated on unanesthetized slave women; Leo Stanley, in the first half of the 20th century, implanted pig, goat, and sheep testicles into prisoners; and countless patients in many countries were deliberately infected with deadly and disabling diseases in the name of science.

 

From Auschwitz, to Helsinki, to the African bush

 

The immediate response to the revelations from Auschwitz, Buchenwald, and elsewhere was the Nuremberg Code (1947), which declared that the consent of all participants to all research on them was essential, and the Declaration of Geneva (1948), which set out doctors’ ethical duties towards their patients.

The Nuremberg Code gave way to the World Medical Association’s Declaration of Helsinki (1964). This has gone through six revisions, the latest in 2008. It has no legal force in itself, but has had a profound influence on national and international research ethics and law. As we’ve seen, authoritative ethical guidance has a way of becoming actual or de facto medical law. Most countries have given at least a nod to the Declaration, or at least to some version of it, and so the best way of identifying the international legal consensus is to look at it.

It has done a lot to change the thoughtlessly utilitarian or downright callous ethos of pre-war medical research. But not enough. Mengele-esque abuses continued. In Tuskegee, Alabama (1932-1972), syphilis patients were told that they were being treated for syphilis. They weren’t, although they could have been. Many died; many fathered children with congenital syphilis. In Staten Island, until 1966, mentally disabled children were secretly infected with viral hepatitis. Until 1971, women attending a contraception clinic in San Antonio were given placebos instead of effective drugs, without their consent. There were several pregnancies. Between 1960 and 1971 the US Department of Defense and the Defense Atomic Support Agency funded whole-body irradiation experiments on non-consenting patients. A 1994 study showed that a drug called zidovudine was extremely effective in reducing mother–infant HIV transmission. Some subsequent trials withheld the drug from patients in Third World countries, while ensuring that US patients had it. And so it went on.

The Declaration, in its present form, emphasizes the importance of autonomy and informed consent, and insists that the subject’s welfare outweighs the welfare of society or the march of science.

The Declaration is a well-meaning document. It’s an honest attempt to protect the rights of individuals while at the same time acknowledging that scientific progress is crucial, and that a communitarian perspective sometimes makes sense. But it’s hard to be philosophically consistent if you’re that ambitious, and the Declaration isn’t consistent. It’s a patchwork quilt, not seamlessly woven from the yarn of one principle. That in itself isn’t a criticism. Some of the best law is made from several philosophical materials. But since the Declaration does pay explicit lip service to autonomy, it’s worth observing that it outlaws research on humans unless the importance of the objective outweighs the inherent risks and burdens to the subject (Article 21). That’s a highly paternalistic restriction. If a pharmaceutical company wants to pay me a huge amount of money to participate in potentially dangerous research that might lead to the development of a new brand of shampoo, why shouldn’t it be able to do so? Isn’t my body my own (unless I choose to engage in sado-masochistic bondage)?

The Declaration has evolved very significantly through its many revisions. Many of the revisions have generated hot and anxious debate. Here’s why. Imagine that you’re the head of a multinational pharmaceutical company. You have the patent for a very promising drug for the treatment of malaria. Malaria is very important. It kills millions each year, almost all of them in the poorer countries of the world. If the drug works, your share price and your bonus will rocket.

The efficacy of the drug needs to be established. That means clinical trials in hot, poor places. The most scientifically satisfactory results (which would lead to the fastest acceptance of the drug by the market) would be obtained by a trial in which a large cohort of infected patients in a remote part of Africa, all of whom need treatment, is divided into two: half receive the drug; half receive a pharmacologically inert placebo. It would be a ‘double blind’ trial: neither the patients nor those administering the drug or the placebo would know who was getting which.
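Purely as an illustration of the allocation mechanism behind such a design, here is a brief Python sketch: participants are randomly split between the two arms, and clinicians and patients see only an opaque kit code, with the key held elsewhere. The participant identifiers, kit codes, and numbers are all invented; this is not a description of any real trial protocol.

```python
# An illustrative sketch of a double-blind allocation: participants are randomly
# split between drug and placebo, and everyone at the bedside sees only a coded
# kit label. All identifiers and numbers are invented.

import random

def blinded_allocation(participant_ids, seed=0):
    """Randomly assign each participant to an arm and hide the arm behind a kit code."""
    rng = random.Random(seed)
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    # The key linking participants to arms is held by an independent party,
    # not by the patients or the clinicians administering the kits.
    key = {pid: ("drug" if i < half else "placebo") for i, pid in enumerate(shuffled)}
    codes = rng.sample(range(10_000, 100_000), len(shuffled))
    kit_labels = {pid: f"KIT-{code}" for pid, code in zip(shuffled, codes)}
    return key, kit_labels

key, kit_labels = blinded_allocation([f"P{i:03d}" for i in range(200)])
print(kit_labels["P001"])  # the clinician sees only something like 'KIT-54321'
```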

But there’s a problem. Patients receiving the placebo are likely to die. That’s a shame. It’s also avoidable. There are already drugs available which would stop them dying.

 

What should be done?

 

The answer’s not obvious. If this trial is vetoed on the grounds that it’s unethical, no one at all will get treatment. Many of them will die. Certainly (if the drug is as good as it is thought to be), more will die than would die if the trial happened. Wouldn’t the potential participants prefer a 50 per cent chance of salvation to a 0 per cent chance? And, since the alternative trial methods, not involving the ethically dubious placebo group, won’t produce such a clear result, there will be a longer delay before the drug is commercially available, so leading to the loss of more lives.

If those essentially utilitarian arguments hold water, is there anything to be said against entirely nonconsensual research on the same cohorts? Every participant might think that they’re receiving, say, a free drink of lemonade, but in fact they’re getting either the drug or the placebo. The science would be good, lives would be saved. What’s the problem?

I don’t seek to adjudicate.

The Declaration is unhappy about either version of the malaria trial. It disapproves of placebo-controlled trials except where there is no intervention that is known to work, or where, for ‘compelling and scientifically sound methodological reasons’, a placebo control is necessary and patients who receive a placebo or no treatment will not be subject to any risk of serious or irreversible harm (Article 32).

The Declaration cautions that ‘[e]xtreme care must be taken to avoid abuse of this option’. The US Food and Drug Administration has refused to be bound by these limitations, essentially on the grounds that they hamstring science, which isn’t in the world’s ultimate interest.

Subsequent revisions sought to meet these concerns by providing that research is only justified if there is a reasonable likelihood that the ‘population or community’ in which the research is carried out stands to benefit from the results of the research (Article 17), and that when a study is completed participants should be provided with whatever the study has shown to be the best thing for their situation (Article 33).

 

When the only hope is in the unknown

 

The Declaration is rightly conservative. It talks a lot about the proper communication of risks to research subjects. Usually, when a trial of a novel product begins, there will be some evidence to suggest that it will work. But that’s not always so: there’s still some genuine pharmacological mystery in the world.

A drug company has been working for years on a new drug for colon cancer. The drug has done exciting things in mice. It shrinks their metastases to nothing, bringing them back from the grave. But then we’re not mice, or not necessarily so. The next step is to test it in humans. But the pharmacologists are worried: it might cure, but it might kill or maim. Can it be tested? Or must the clinical trial go on hold unless and until the researchers can show that the risk of death or serious injury is as low as you’d expect it to be in a trial of healthy volunteers?

The Declaration is pragmatic. We’d want it to be. If the options are certain death from cancer or a 50 per cent chance of cure associated with a 50 per cent chance of death, many would unhesitatingly opt for the potential magic bullet. The Declaration deals specifically with this situation (Article 35), although it hardly needs to do so. It says that the trial can be done. The other Articles, taken together, produce the same result. Mix informed consent with the principle that the inherent risks should be outweighed by the importance of the research, add the centrality of the individual research subject, and you’ve got Article 35. If the subject will die anyway if you don’t give the magic bullet, the Declaration isn’t offended if the magic bullet ends up going through the roof of the subject’s mouth.

 

When should you stop?

 

You start your malaria study. The wonder drug works wonderfully-so wonderfully that it is very quickly clear that it’s better than the competition. You’re going to earn a lot of money from it, and so you’re anxious to rush it onto the market. But too much rush is a bad idea. To annihilate the competition you want to continue the study for a little longer. Then the statistics buttressing your product will be unassailable.

But again, there’s a problem. To continue the study in these circumstances might mean even more robust science (you’ll prove your case to an extreme level of probability), but it also means more deaths. People in the control group will die in order to refine your scientific (and hence commercial) case. Is it worth it?

The Declaration (Article 20) demands that a study is stopped immediately when ‘the risks are found to outweigh the potential benefits or when there is conclusive proof of positive and beneficial results’. This, of course, raises many questions. When, statistically, can one say that the risks outweigh the potential benefits? When does proof of a positive result become ‘conclusive’? The Declaration gives no guidance. Presumably, if these issues were litigated by people who had been harmed by, or had failed to benefit from, a trial, they would turn on the relevant domestic law. Was it, for instance, Bolam-negligent to fail to stop the trial at a particular time? In theory such issues will have been decided in advance and built into the diligently vetted trial design. But it’s not always so.
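The statistics are therefore left to the trial protocol. By way of illustration only, here is a Python sketch of the kind of pre-specified early-stopping check a protocol might contain, using a conservative Haybittle–Peto-style boundary (a common convention, not something the Declaration itself prescribes); the interim figures are invented.

```python
# A sketch of a pre-specified early-stopping check of the kind a trial protocol
# might contain. It uses a conservative Haybittle-Peto-style rule: stop early
# for benefit only if an interim two-proportion z-statistic exceeds 3.
# All interim figures below are invented.

from math import sqrt

def z_statistic(cured_drug, n_drug, cured_placebo, n_placebo):
    """Two-proportion z-test for the difference in cure rates."""
    p1, p2 = cured_drug / n_drug, cured_placebo / n_placebo
    pooled = (cured_drug + cured_placebo) / (n_drug + n_placebo)
    se = sqrt(pooled * (1 - pooled) * (1 / n_drug + 1 / n_placebo))
    return (p1 - p2) / se

def stop_for_benefit(cured_drug, n_drug, cured_placebo, n_placebo, boundary=3.0):
    """True if the interim result crosses the pre-specified stopping boundary."""
    return z_statistic(cured_drug, n_drug, cured_placebo, n_placebo) > boundary

# Hypothetical interim look: 180 of 200 cured on the drug, 120 of 200 on placebo.
print(stop_for_benefit(180, 200, 120, 200))  # True -- the boundary is crossed
```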
