The drugs don't work: a modern medical scandal
The doctors prescribing the drugs don't know they don't do what they're meant to. Nor do their patients. The manufacturers know full well, but they're not telling.
Reboxetine is a drug
I have prescribed. Other drugs had done
nothing for my patient, so we wanted to try something new. I'd read the
trial data before I wrote the prescription, and found only well-designed, fair
tests, with overwhelmingly positive results. Reboxetine was better than a
placebo, and as good as any other antidepressant in head-to-head comparisons.
It's approved for use by the Medicines and Healthcare products Regulatory
Agency (the MHRA),
which governs all drugs in the UK. Millions of doses are prescribed every year,
around the world. Reboxetine was clearly a safe and effective treatment. The
patient and I discussed the evidence briefly, and agreed it was the right
treatment to try next. I signed a prescription.
But we had both been
misled. In October 2010, a group of researchers was finally able to bring
together all the data that had ever been collected on reboxetine, both from
trials that were published and from those that had never appeared in academic
papers. When all this trial data was put together, it produced a shocking
picture. Seven trials had been conducted comparing reboxetine against a
placebo. Only one, conducted in 254 patients, had a neat, positive result, and
that one was published in an academic journal, for doctors and researchers to
read. But six more trials were conducted, in almost 10 times as many patients.
All of them showed that reboxetine was no better than a dummy sugar pill.
None of these trials was published. I had no idea they existed.
It got worse. The
trials comparing reboxetine against other drugs showed exactly the same
picture: three small studies, 507 patients in total, showed that
reboxetine was just as good as any other drug. They were all published.
But 1,657 patients' worth of data was left unpublished, and this unpublished
data showed that patients on reboxetine did worse than those on other drugs. If
all this wasn't bad enough, there was also the side-effects data. The drug
looked fine in the trials that appeared in the academic literature; but when we
saw the unpublished studies, it turned out that patients were more likely to
have side-effects, more likely to drop out of taking the drug and more likely
to withdraw from the trial because of side-effects, if they were taking reboxetine
rather than one of its competitors.
I did everything a
doctor is supposed to do. I read all the papers, I critically appraised
them, I understood them, I discussed them with the patient and we made a
decision together, based on the evidence. In the published data, reboxetine was
a safe and effective drug. In reality, it was no better than a sugar pill and,
worse, did more harm than good. As a doctor, I did something that, on the
balance of all the evidence, harmed my patient, simply because unflattering
data was left unpublished.
Nobody broke any law
in that situation; reboxetine is still on the market, and the system that
allowed all this to happen is still in play, for all drugs, in all countries in
the world. Negative data goes missing, for all treatments, in all areas of
science. The regulators and professional bodies we would reasonably expect to
stamp out such practices have failed us. These problems have been protected
from public scrutiny because they're too complex to capture in a soundbite.
This is why they've gone unfixed by politicians, at least to some extent; but
it's also why it takes detail to explain. The people you should have been able
to trust to fix these problems have failed you, and because you have to
understand a problem properly in order to fix it, there are some things
you need to know.
Drugs are tested by
the people who manufacture them, in poorly designed trials, on hopelessly small
numbers of weird, unrepresentative patients, and analysed using techniques that
are flawed by design, in such a way that they exaggerate the benefits of
treatments. Unsurprisingly, these trials tend to produce results that favour
the manufacturer. When trials throw up results that companies don't like, they
are perfectly entitled to hide them from doctors and patients, so we only ever
see a distorted picture of any drug's true effects. Regulators see most of the
trial data, but only from early on in a drug's life, and even then they don't
give this data to doctors or patients, or even to other parts of government.
This distorted evidence is then communicated and applied in a distorted
fashion.
In their 40 years of
practice after leaving medical school, doctors hear about what works ad hoc,
from sales reps, colleagues and journals. But those colleagues can be in the
pay of drug companies – often undisclosed – and the journals are, too. And so
are the patient groups. And finally, academic papers, which everyone thinks of
as objective, are often covertly planned and written by people who work directly
for the companies, without disclosure. Sometimes whole academic journals are
owned outright by one drug company. Aside from all this, for several of the
most important and enduring problems in medicine, we have no idea what the best
treatment is, because it's not in anyone's financial interest to conduct any
trials at all.
Now, on to the
details.
In 2010, researchers
from Harvard and Toronto found all the trials looking at five major classes of
drug – antidepressants, ulcer drugs and so on – then measured two key features:
were they positive, and were they funded by industry? They found more than 500
trials in total: 85% of the industry-funded studies were positive, but only 50%
of the government-funded trials were. In 2007, researchers looked at every published
trial that set out to explore the benefits of a statin. These
cholesterol-lowering drugs reduce your risk of having a heart attack and are
prescribed in very large quantities. This study found 192 trials in total,
either comparing one statin against another, or comparing a statin against a
different kind of treatment. They found that industry-funded trials were 20
times more likely to give results favouring the test drug.
These are frightening
results, but they come from individual studies. So let's consider systematic
reviews into this area. In 2003, two were published. They took all the studies
ever published that looked at whether industry funding is associated with
pro-industry results, and both found that industry-funded trials were, overall,
about four times more likely to report positive results. A further review in
2007 looked at the new studies in the intervening four years: it found 20 more
pieces of work, and all but two showed that industry-sponsored trials were more
likely to report flattering results.
It turns out that this
pattern persists even when you move away from published academic papers and
look instead at trial reports from academic conferences. James Fries and Eswar
Krishnan, at the Stanford University School of Medicine in California, studied
all the research abstracts presented at the 2001 American College of
Rheumatology meetings which reported any kind of trial and acknowledged
industry sponsorship, in order to find out what proportion had results that
favoured the sponsor's drug.
In general, the
results section of an academic paper is extensive: the raw numbers are given
for each outcome, and for each possible causal factor, but not just as raw
figures. The "ranges" are given, subgroups are explored, statistical
tests conducted, and each detail is described in table form, and in shorter
narrative form in the text. This lengthy process is usually spread over several
pages. In Fries
and Krishnan (2004), this level of detail was unnecessary. The
results section is a single, simple and – I like to imagine – fairly
passive-aggressive sentence:
"The results from
every randomised controlled trial (45 out of 45) favoured the drug
of the sponsor."
How does this happen?
How do industry-sponsored trials almost always manage to get a positive
result? Sometimes trials are flawed by design. You can compare your new drug
with something you know to be rubbish – an existing drug at an inadequate dose,
perhaps, or a placebo sugar pill that does almost nothing. You can choose your
patients very carefully, so they are more likely to get better on your
treatment. You can peek at the results halfway through, and stop your trial
early if they look good. But after all these methodological quirks comes one
very simple insult to the integrity of the data. Sometimes, drug companies
conduct lots of trials, and when they see that the results are unflattering,
they simply fail to publish them.
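To see how much damage that last trick alone can do, here is a minimal sketch in Python, using made-up numbers rather than data from any real drug: it simulates a run of small placebo-controlled trials of a pill with no true effect, then compares the full set of results with the flattering subset that gets published.

```python
# Illustrative only: hypothetical numbers, not data from any real trial.
import random

random.seed(1)

def run_trial(n_per_arm=50, true_effect=0.0):
    # Each patient "improves" with probability 0.4 on placebo;
    # the drug adds true_effect (here zero, i.e. the drug does nothing).
    placebo = sum(random.random() < 0.4 for _ in range(n_per_arm))
    drug = sum(random.random() < 0.4 + true_effect for _ in range(n_per_arm))
    return (drug - placebo) / n_per_arm  # observed difference in improvement rates

all_trials = [run_trial() for _ in range(100)]   # every trial that was actually run
published = [d for d in all_trials if d > 0.1]   # only flattering results are released

print(f"mean effect across all {len(all_trials)} trials: {sum(all_trials)/len(all_trials):+.3f}")
print(f"mean effect in the {len(published)} published trials: {sum(published)/len(published):+.3f}")
```

The full set of results hovers around zero; the published slice looks like a real benefit.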
Because researchers
are free to bury any result they please, patients are exposed to harm on
a staggering scale throughout the whole of medicine. Doctors can have no
idea about the true effects of the treatments they give. Does this drug really
work best, or have I simply been deprived of half the data? No one can tell. Is
this expensive drug worth the money, or has the data simply been massaged? No
one can tell. Will this drug kill patients? Is there any evidence that it's
dangerous? No one can tell. This is a bizarre situation to arise in medicine, a
discipline in which everything is supposed to be based on evidence.
And this data is
withheld from everyone in medicine, from top to bottom. Nice, for example, is
the National Institute for Health and
Clinical Excellence, created by the British government to conduct
careful, unbiased summaries of all the evidence on new treatments. It is unable
either to identify or to access data on a drug's effectiveness that's been
withheld by researchers or companies: Nice has no more legal right to that data
than you or I do, even though it is making decisions about
effectiveness, and cost-effectiveness, on behalf of the NHS, for millions of
people.
In any sensible world,
when researchers are conducting trials on a new tablet for a drug company, for
example, we'd expect universal contracts, making it clear that all researchers
are obliged to publish their results, and that industry sponsors – which have a
huge interest in positive results – must have no control over the data. But,
despite everything we know about industry-funded research being systematically
biased, this does not happen. In fact, the opposite is true: it is entirely
normal for researchers and academics conducting industry-funded trials to sign
contracts subjecting them to gagging clauses that forbid them to publish,
discuss or analyse data from their trials without the permission of the funder.
This is such a
secretive and shameful situation that even trying to document it in public can
be a fraught business. In 2006, a paper was published in the Journal of the
American Medical Association (Jama), one of the biggest medical
journals in the world, describing how common it was for researchers doing industry-funded
trials to have these kinds of constraints placed on their right to publish the
results. The study was conducted by the Nordic
Cochrane Centre and it looked at all the trials given approval
to go ahead in Copenhagen and Frederiksberg. (If you're wondering why
these two cities were chosen, it was simply a matter of practicality: the
researchers applied elsewhere without success, and were specifically refused
access to data in the UK.) These trials were overwhelmingly sponsored by the
pharmaceutical industry (98%) and the rules governing the management of the
results tell a story that walks the now familiar line between frightening and
absurd.
For 16 of the 44
trials, the sponsoring company got to see the data as it accumulated, and in
a further 16 it had the right to stop the trial at any time, for any
reason. This means that a company can see if a trial is going against it, and
can interfere as it progresses, distorting the results. Even if the study was
allowed to finish, the data could still be suppressed: there were constraints
on publication rights in 40 of the 44 trials, and in half of them the contracts
specifically stated that the sponsor either owned the data outright
(what about the patients, you might say?), or needed to approve the final
publication, or both. None of these restrictions was mentioned in any of the
published papers.
When the paper
describing this situation was published in Jama, Lif, the Danish pharmaceutical
industry association, responded by announcing, in the Journal of the Danish
Medical Association, that it was "both shaken and enraged about the
criticism, that could not be recognised". It demanded an
investigation of the scientists, though it failed to say by whom or of what.
Lif then wrote to the Danish Committee on Scientific Dishonesty, accusing the
Cochrane researchers of scientific misconduct. We can't see the letter, but the
researchers say the allegations were extremely serious – they were accused of
deliberately distorting the data – but vague, and without documents or evidence
to back them up.
Nonetheless, the
investigation went on for a year. Peter Gøtzsche,
director of the Cochrane Centre, told the British Medical Journal that only
Lif's third letter, 10 months into this process, made specific allegations that
could be investigated by the committee. Two months after that, the charges
were dismissed. The Cochrane researchers had done nothing wrong. But before
they were cleared, Lif copied the letters alleging scientific dishonesty to the
hospital where four of them worked, and to the management organisation running
that hospital, and sent similar letters to the Danish medical association, the
ministry of health, the ministry of science and so on. Gøtzsche and his
colleagues felt "intimidated and harassed" by Lif's behaviour.
Lif continued to insist that the researchers were guilty of misconduct
even after the investigation was completed.
Paroxetine is a
commonly used antidepressant, from the class of drugs known as selective
serotonin reuptake inhibitors or SSRIs. It's also a good example of
how companies have exploited our long-standing permissiveness about missing
trials, and found loopholes in our inadequate regulations on trial disclosure.
To understand why, we
first need to go through a quirk of the licensing process. Drugs do not simply
come on to the market for use in all medical conditions: for any specific use
of any drug, in any specific disease, you need a separate marketing
authorisation. So a drug might be licensed to treat ovarian cancer, for
example, but not breast cancer. That doesn't mean the drug doesn't work in
breast cancer. There might well be some evidence that it's great for treating
that disease, too, but maybe the company hasn't gone to the trouble and expense
of getting a formal marketing authorisation for that specific use. Doctors can
still go ahead and prescribe it for breast cancer, if they want, because the
drug is available for prescription, it probably works, and there are boxes of
it sitting in pharmacies waiting to go out. In this situation, the doctor will
be prescribing the drug legally, but "off-label".
Now, it turns out that
the use of a drug in children is treated as a separate marketing
authorisation from its use in adults. This makes sense in many cases, because
children can respond to drugs in very different ways and so research needs
to be done in children separately. But getting a licence for a specific use
is an arduous business, requiring lots of paperwork and some specific studies.
Often, this will be so expensive that companies will not bother to get a
licence specifically to market a drug for use in children, because that market
is usually much smaller.
So it is not unusual
for a drug to be licensed for use in adults but then prescribed for children.
Regulators have recognised that this is a problem, so recently they have
started to offer incentives for companies to conduct more research and formally
seek these licences.
When GlaxoSmithKline
applied for a marketing authorisation in children for paroxetine, an
extraordinary situation came to light, triggering the longest investigation in
the history of UK drugs regulation. Between 1994 and 2002, GSK conducted nine
trials of paroxetine in children. The first two failed to show any benefit, but
the company made no attempt to inform anyone of this by changing the "drug
label" that is sent to all doctors and patients. In fact, after these
trials were completed, an internal company management document stated: "It
would be commercially unacceptable to include a statement that efficacy had not
been demonstrated, as this would undermine the profile of paroxetine." In
the year after this secret internal memo, 32,000 prescriptions were issued to
children for paroxetine in the UK alone: so, while the company knew the drug
didn't work in children, it was in no hurry to tell doctors that, despite
knowing that large numbers of children were taking it. More trials were conducted
over the coming years – nine in total, counting those first two – and none showed that the drug was
effective at treating depression in children.
It gets much worse
than that. These children weren't simply receiving a drug that the company knew
to be ineffective for them; they were also being exposed to side-effects. This
should be self-evident, since any effective treatment will have some side-effects,
and doctors factor this in, alongside the benefits (which in this case were
nonexistent). But nobody knew how bad these side-effects were, because the
company didn't tell doctors, or patients, or even the regulator about the
worrying safety data from its trials. This was because of a loophole: you have
to tell the regulator only about side-effects reported in studies looking
at the specific uses for which the drug has a marketing authorisation. Because
the use of paroxetine in children was "off-label", GSK had no legal
obligation to tell anyone about what it had found.
People had worried for
a long time that paroxetine might increase the risk of suicide, though that is
quite a difficult side-effect to detect in an antidepressant. In February 2003,
GSK spontaneously sent the MHRA a package of information on the risk of suicide
on paroxetine, containing some analyses done in 2002 from adverse-event data in
trials the company had held, going back a decade. This analysis showed that
there was no increased risk of suicide. But it was misleading: although it
was unclear at the time, data from trials in children had been mixed in with
data from trials in adults, which had vastly greater numbers of participants.
As a result, any sign of increased suicide risk among children on
paroxetine had been completely diluted away.
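To make the arithmetic of that dilution concrete, here is a toy calculation in Python; the numbers are entirely hypothetical, mine rather than GSK's. A doubled risk in a small group of children, pooled with a much larger adult dataset in which there is no excess risk, yields a combined figure that looks almost perfectly reassuring.

```python
# Hypothetical numbers only, to show the arithmetic of dilution.

def risk_ratio(events_drug, n_drug, events_placebo, n_placebo):
    return (events_drug / n_drug) / (events_placebo / n_placebo)

# Children: a genuine doubling of risk, but in a small dataset.
child = dict(events_drug=10, n_drug=1000, events_placebo=5, n_placebo=1000)
# Adults: no difference between arms, but ten times as many patients.
adult = dict(events_drug=50, n_drug=10000, events_placebo=50, n_placebo=10000)

print("children only:", risk_ratio(**child))                           # 2.0
pooled = {k: child[k] + adult[k] for k in child}
print("children and adults pooled:", round(risk_ratio(**pooled), 2))   # about 1.09
```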
Later in 2003, GSK had
a meeting with the MHRA to discuss another issue involving paroxetine. At the
end of this meeting, the GSK representatives gave out a briefing document,
explaining that the company was planning to apply later that year for a
specific marketing authorisation to use paroxetine in children. They mentioned,
while handing out the document, that the MHRA might wish to bear in mind
a safety concern the company had noted: an increased risk of suicide among
children with depression who received paroxetine, compared with those on dummy
placebo pills.
This was vitally
important side-effect data, being presented, after an astonishing delay,
casually, through an entirely inappropriate and unofficial channel. Although
the data was given to completely the wrong team, the MHRA staff present at this
meeting had the wit to spot that this was an important new problem. A flurry of
activity followed: analyses were done, and within one month a letter was sent
to all doctors advising them not to prescribe paroxetine to patients under the
age of 18.
How is it possible
that our systems for getting data from companies are so poor, they can simply
withhold vitally important information showing that a drug is not only
ineffective, but actively dangerous? Because the regulations contain ridiculous
loopholes, and it's dismal to see how GSK cheerfully exploited them: when the
investigation was published in 2008, it concluded that what the company had
done – withholding important data about safety and effectiveness that doctors
and patients clearly needed to see – was plainly unethical, and put children
around the world at risk; but our laws are so weak that GSK could not be
charged with any crime.
After this episode,
the MHRA and EU changed some of their regulations, though not adequately. They
created an obligation for companies to hand over safety data for uses of a drug
outside its marketing authorisation; but ridiculously, for example, trials
conducted outside the EU were still exempt. Some of the trials GSK conducted
were published in part, but that is obviously not enough: we already know that
if we see only a biased sample of the data, we are misled. But we also
need all the data for the more simple reason that we need lots of data: safety
signals are often weak, subtle and difficult to detect. In the case of
paroxetine, the dangers became apparent only when the adverse events from all
of the trials were pooled and analysed together.
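Here is a back-of-the-envelope sketch of why pooling matters, again using hypothetical rates rather than the actual paroxetine figures: a rare harm that affects 0.5% of patients on placebo and 1% on the drug produces roughly one event per arm in a single small trial, which is indistinguishable from chance; only when several trials are combined do the expected counts separate enough to analyse.

```python
# Hypothetical rates, not the actual paroxetine figures.
baseline_rate = 0.005     # 0.5% of placebo patients have the rare event
drug_rate = 0.010         # assume the drug doubles that to 1%
patients_per_arm = 100    # one small trial
n_trials = 9              # pooled across several trials

single_placebo = baseline_rate * patients_per_arm   # 0.5 expected events
single_drug = drug_rate * patients_per_arm          # 1.0 expected events
pooled_placebo = single_placebo * n_trials          # 4.5 expected events
pooled_drug = single_drug * n_trials                # 9.0 expected events

print(f"one trial:   about {single_drug:.1f} events on drug vs {single_placebo:.1f} on placebo")
print(f"nine pooled: about {pooled_drug:.1f} events on drug vs {pooled_placebo:.1f} on placebo")
# In a single trial the counts are indistinguishable from chance; pooled,
# the doubling starts to emerge as a signal that can actually be analysed.
```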
That leads us to the
second obvious flaw in the current system: the results of these trials are
given in secret to the regulator, which then sits and quietly makes a decision.
This is the opposite of science, which is reliable only because everyone shows
their working, explains how they know that something is effective or safe,
shares their methods and results, and allows others to decide if they agree
with the way in which the data was processed and analysed. Yet for the safety
and efficacy of drugs, we allow it to happen behind closed doors, because drug
companies have decided that they want to share their trial results discreetly
with the regulators. So the most important job in evidence-based medicine is
carried out alone and in secret. And regulators are not infallible, as we shall
see.
Rosiglitazone was
first marketed in 1999. In that first year, Dr John Buse from the University of
North Carolina discussed an increased risk of heart problems at a pair of
academic meetings. The drug's manufacturer, GSK, made direct contact in an
attempt to silence him, then moved on to his head of department. Buse felt
pressured to sign various legal documents. To cut a long story short, after
wading through documents for several months, in 2007 the US Senate committee on
finance released a report describing the treatment of Buse as
"intimidation".
But we are more
concerned with the safety and efficacy data. In 2003 the Uppsala drug
monitoring group of the World Health Organisation contacted GSK
about an unusually large number of spontaneous reports associating
rosiglitazone with heart problems. GSK conducted two internal meta-analyses of
its own data on this, in 2005 and 2006. These showed that the risk was real,
but although both GSK and the FDA had these results, neither made any public
statement about them, and they were not published until 2008.
During this delay,
vast numbers of patients were exposed to the drug, but doctors and patients
learned about this serious problem only in 2007, when cardiologist Professor
Steve Nissen and colleagues published a landmark meta-analysis. This showed a
43% increase in the risk of heart problems in patients on rosiglitazone. Since
people with diabetes are already at increased risk of heart problems, and the
whole point of treating diabetes is to reduce this risk, that finding was a very big deal. Nissen's findings were confirmed in later work, and in 2010 the drug
was either taken off the market or restricted, all around the world.
Now, my argument is
not that this drug should have been banned sooner because, as perverse as it
sounds, doctors do often need inferior drugs for use as a last resort. For
example, a patient may develop idiosyncratic side-effects on the most effective
pills and be unable to take them any longer. Once this has happened, it may be
worth trying a less effective drug if it is at least better than nothing.
The concern is that
these discussions happened with the data locked behind closed doors,
visible only to regulators. In fact, Nissen's analysis could only be done at
all because of a very unusual court judgment. In 2004, when GSK was caught
out withholding data showing evidence of serious side-effects from paroxetine
in children, its bad behaviour resulted in a US court case over allegations
of fraud, the settlement of which, alongside a significant payout, required GSK
to commit to posting clinical trial results on a public website.
Nissen used the
rosiglitazone data, when it became available, and found worrying signs
of harm, which he and his colleagues then published, alerting doctors – something the regulators
had never done, despite having the information years earlier. If this
information had all been freely available from the start, regulators might have
felt a little more anxious about their decisions but, crucially, doctors
and patients could have disagreed with them and made informed choices. This is
why we need wider access to all trial reports, for all medicines.
Missing data poisons
the well for everybody. If proper trials are never done, if trials with
negative results are withheld, then we simply cannot know the true effects of
the treatments we use. Evidence in medicine is not an abstract academic
preoccupation. When we are fed bad data, we make the wrong decisions, inflicting
unnecessary pain and suffering, and death, on people just like us.
• This is an edited
extract from Bad Pharma, by Ben Goldacre, published next week by Fourth Estate
at £13.99. To order a copy for £11.19, including UK mainland p&p, call 0330
333 6846, or go to guardian.co.uk/bookshop.