Bad Pharma: How Medicine is Broken, And How We Can Fix It. Ben Goldacre

      Suicide is a difficult side effect to detect in an antidepressant, because people with depression are at a much higher risk of suicide than the general population anyway, as a result of their depression. There are also some grounds to believe that as patients first come out of their depression, and leave behind the sluggish lack of motivation that often accompanies profound misery, there may be a period during which they are more capable of killing themselves, simply because the depression is slowly lifting.

      Furthermore, suicide is a mercifully rare event, which means you need a lot of people on a drug to detect an increased risk. Also, suicide is not always recorded accurately on death certificates, because coroners and doctors are reluctant to give a verdict that many would regard as shameful, so the signal you are trying to detect in the data – suicide – is going to be corrupted. Suicidal thoughts or behaviours that don’t result in death are more common than suicide itself, so they should be easier to detect, but they too are hard to pick up in routinely collected data, because they’re often not presented to doctors, and where they are, they can be coded in health records in all sorts of different ways, if they appear at all. Because of all these difficulties, you would want to have every scrap of data you could possibly cobble together on the question of whether these drugs cause suicidal thoughts or behaviour in children; and you would want a lot of experienced people, with a wide range of skills, all looking at the data and discussing it.
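
      Just how many people you need is worth spelling out. Here is a rough sample-size sketch in Python (standard library only). The rates are invented purely for illustration – a background rate of suicidal behaviour of 0.2 per cent, doubled to 0.4 per cent on the drug – and the formula is the standard two-proportion sample-size calculation, not anything taken from the paroxetine trials.

    # Rough sample size needed to detect a doubling of a rare adverse
    # event, using the standard two-proportion formula (invented rates).
    from statistics import NormalDist

    p_placebo = 0.002          # assumed background rate: 0.2%
    p_drug = 0.004             # assumed doubled rate on the drug: 0.4%
    alpha, power = 0.05, 0.80  # conventional significance and power

    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_b = NormalDist().inv_cdf(power)          # ~0.84

    p_bar = (p_placebo + p_drug) / 2
    n = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_b * (p_placebo * (1 - p_placebo)
                   + p_drug * (1 - p_drug)) ** 0.5) ** 2
         / (p_drug - p_placebo) ** 2)
    print(f"about {n:,.0f} participants per arm")  # about 11,737

      On these invented rates you would need more than eleven thousand children in each arm of a trial to detect even a doubling of risk reliably – far more than any single paediatric trial ever recruits.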

      In February 2003, GSK spontaneously sent the MHRA a package of information on the risk of suicide on paroxetine, containing some analyses done in 2002 from adverse-event data in trials the company had held, going back a decade. This analysis showed that there was no increased risk of suicide. But it was misleading: although it was unclear at the time, data from trials in children had been mixed in with data from trials in adults, which had vastly greater numbers of participants. As a result, any sign of increased suicide risk among children on paroxetine had been completely diluted away.
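
      A toy calculation makes the dilution easy to see. The numbers below are invented purely for illustration – they are not GSK’s actual trial data – but the arithmetic is the same: a threefold difference among a thousand children disappears once it is averaged into ten thousand adults.

    # Invented counts showing how mixing a small paediatric dataset into
    # a much larger adult one dilutes a risk signal (not GSK's real data).
    child_drug, child_placebo = (6, 1000), (2, 1000)      # (events, n)
    adult_drug, adult_placebo = (20, 10000), (20, 10000)  # (events, n)

    def rate(events, n):
        return events / n

    print(f"children alone: {rate(*child_drug):.2%} vs "
          f"{rate(*child_placebo):.2%}")   # 0.60% vs 0.20% - stands out

    pooled_drug = (child_drug[0] + adult_drug[0],
                   child_drug[1] + adult_drug[1])
    pooled_placebo = (child_placebo[0] + adult_placebo[0],
                      child_placebo[1] + adult_placebo[1])
    print(f"mixed together: {rate(*pooled_drug):.2%} vs "
          f"{rate(*pooled_placebo):.2%}")  # 0.24% vs 0.20% - vanished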

      Later in 2003, GSK had a meeting with the MHRA to discuss another issue involving paroxetine. At the end of this meeting, the GSK representatives gave out a briefing document, explaining that the company was planning to apply later that year for a specific marketing authorisation to use paroxetine in children. They mentioned, while handing out the document, that the MHRA might wish to bear in mind a safety concern the company had noted: an increased risk of suicide among children with depression who received paroxetine, compared with those on dummy placebo pills.

      This was vitally important side-effect data, being presented, after an astonishing delay, casually, through an entirely inappropriate and unofficial channel. GSK knew that the drug was being prescribed in children, and it knew that there were safety concerns in children, but it had chosen not to reveal that information. When it did share the data, it didn’t flag it up as a clear danger in the current use of the drug, requiring urgent attention from the relevant department in the regulator; instead it presented it as part of an informal briefing about a future application. Although the data was given to completely the wrong team, the MHRA staff present at this meeting had the wit to spot that this was an important new problem. A flurry of activity followed: analyses were done, and within one month a letter was sent to all doctors advising them not to prescribe paroxetine to patients under the age of eighteen.

      How is it possible that our systems for getting data from companies are so poor that a company can simply withhold vitally important information showing that a drug is not only ineffective, but actively dangerous? There are two sets of problems here: firstly, access for regulators; and secondly, access for doctors.

      There is no doubt that the regulations contain ridiculous loopholes, and it’s dismal to see how cheerfully GSK exploited them. As I’ve mentioned, the company had no legal duty to hand over the information, because prescription of the drug in children was outside paroxetine’s formally licensed uses – even though GSK knew such prescribing was widespread. In fact, of the nine studies the company conducted, only one had its results reported to the MHRA, because that was the only one conducted in the UK.

      After this episode, the MHRA and the EU changed some of their regulations, though not adequately, and created an obligation for companies to hand over safety data for uses of a drug outside its marketing authorisation, closing the paroxetine loophole.

      This whole incident illustrates a key problem, one that recurs throughout this section of the book: you need all of the data in order to see what’s happening with a drug’s benefits and risks. Some of the trials that GSK conducted were published in part, but that is obviously not enough: we already know that if we see only a biased sample of the data, we are misled. But we also need all the data for the simpler reason that we need lots of data: safety signals are often weak, subtle and difficult to detect. Suicidal thoughts and plans are rare in children – even those with depression, even those on paroxetine – so all the data from a large number of participants needed to be combined before the signal was detectable in the noise. In the case of paroxetine, the dangers only became apparent when the adverse events from all of the trials were pooled and analysed together.
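
      Again, a sketch with invented numbers shows why pooling matters. In each of the four small trials below the events are too few to distinguish from chance; but the pooled risk ratio – calculated here with a simple Wald confidence interval, which is only one of several ways to combine trials, and not the method the regulators actually used – shows a clear excess.

    # Invented counts for four small trials: no single trial has enough
    # events to show the signal, but pooling them all does.
    from math import exp, log, sqrt
    from statistics import NormalDist

    trials = [  # (events on drug, n on drug, events on placebo, n on placebo)
        (3, 93, 1, 89),
        (3, 180, 1, 184),
        (3, 91, 0, 88),
        (4, 170, 1, 175),
    ]

    a = sum(t[0] for t in trials)   # 13 events among 534 on the drug
    n1 = sum(t[1] for t in trials)
    c = sum(t[2] for t in trials)   # 3 events among 536 on placebo
    n2 = sum(t[3] for t in trials)

    rr = (a / n1) / (c / n2)             # pooled risk ratio
    se = sqrt(1/a - 1/n1 + 1/c - 1/n2)   # standard error of log(rr)
    z = NormalDist().inv_cdf(0.975)
    lo, hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
    print(f"pooled risk ratio {rr:.1f} (95% CI {lo:.1f} to {hi:.1f})")
    # pooled risk ratio 4.3 (95% CI 1.2 to 15.2)

      None of those trials, analysed alone, could have told you anything; the danger only emerges when every trial is on the table.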

      That leads us to the second obvious flaw in the current system: the results of these trials – the safety data and the effectiveness data – are given in secret to the regulator, which then sits and quietly makes a decision. This is a huge problem, because you need many eyes on these difficult problems. I don’t think that the people who work in the MHRA are bad, or incompetent: I know a lot of them, and they are smart, good people. But we shouldn’t trust them to analyse this data alone, in the same way that we shouldn’t trust any single organisation to analyse data alone, with nobody looking over its shoulder, checking the working, providing competition, offering helpful criticism, speeding it up, and so on.

      This is even worse than academics failing to share their primary research data, because at least in an academic paper you get a lot of detail about what was done, and how. The output of a regulator is often simply a crude, brief summary: almost a ‘yes’ or ‘no’ about side effects. This is the opposite of science, which is only reliable because everyone shows their working, explains how they know that something is effective or safe, shares their methods and their results, and allows others to decide if they agree with the way they processed and analysed the data.

      Yet for the safety and efficacy of drugs, one of the most important of all analyses done by science, we turn our back on this process completely: we allow it to happen behind closed doors, because drug companies have decided that they want to share their trial results discreetly with the regulators. So the most important job in evidence-based medicine, and a perfect example of a problem that benefits from many eyes and minds, is carried out alone and in secret.

      This perverse and unhealthy secrecy extends way beyond regulators. NICE, the UK’s National Institute for Health and Clinical Excellence, is charged with making recommendations about which treatments are most cost-effective, and which work best. When it does this, it’s in the same boat as you or me: it has absolutely no statutory right to see data on the safety or effectiveness of a drug if a company doesn’t want to release it, even though the regulators have all of that data. For ‘single technology appraisals’, covering one treatment, NICE asks the company to make available the information the company itself thinks is relevant. For ‘guidelines’ on treatment in a whole area of medicine, it is more vulnerable to what is published in journals. As a result, even NICE can end up working from distorted, edited, biased samples of the data.

      Sometimes NICE is able to access some extra unpublished data from the drug companies: information that doctors and patients aren’t allowed to see, despite the fact that they are the people making decisions about whether to prescribe the drugs, or are actually taking them. But when NICE does get information in this way, it can come with strict conditions on confidentiality, leading to some very bizarre documents being published. On the opposite page, for example, is the NICE document discussing whether it’s a good idea to use Lucentis, an extremely expensive drug costing well over £1,000 per treatment, which is injected into the eye for a condition called age-related macular degeneration.

      As you can see, the NICE document on whether this treatment is a good idea is censored. Not only is the data on the effectiveness of the treatment blanked out by thick black rectangles, in case any doctor or patient should see it; absurdly, even the names of some trials are missing, preventing