E-Cigarettes & Smoking Cessation: The Real World According to an Aeronautical Engineer

By now you’ve probably heard of, or seen, the latest attempt from Stanton A. Glantz to discredit e-cigarettes as a viable method for cessation. He and co-author Sara Kalkhoran performed a systematic review and meta-analysis of research published over a set period of time to try to identify whether e-cigarettes are indeed a viable tool for cessation. So what did our illustrious aeronautical engineer come up with?

Well, before I begin, it’s worth pointing out two key phrases:

  • Systematic Review

A systematic review is a research study that collects and looks at multiple studies. Researchers use methods that are determined before they begin to frame one or more questions, then they find and analyse the studies that relate to that question.

So: perform a search for a particular topic, such as e-cigarettes, then filter out potentially erroneous research based on a specified set of criteria. That bit is important: the researchers set parameters on exactly what to look for in the studies they are going to review.

  • Meta Analysis

Meta-analysis is a statistical technique for combining the findings from independent studies. Meta-analysis is most often used to assess the clinical effectiveness of healthcare interventions; it does this by combining data from two or more randomised controlled trials.

So, using some statistical fudgery, the results from the filtered studies are combined to give some fancy-looking tables and graphs, which the researchers then try to make sense of. Here’s the key thing: meta-analysis is the most popular method for assessing the clinical effectiveness of healthcare interventions. You know, medicines and pharmaceuticals. Remember that randomised controlled trials are the “gold standard” for determining if a drug is effective; meta-analysis combines studies of such drugs to give a wider overview of that effectiveness (or lack of it).

Although in this case the term “synthetic meta-analysis” would be more appropriate – that is, the results are synthesised into a single result. A meta-analysis alone just means an analysis of analyses.
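If you want a feel for what “synthesised into a single result” actually means, here is a minimal sketch (in Python, with numbers invented purely for illustration, not taken from any real study): each study contributes an effect estimate and a standard error, and the pooled figure is simply a weighted average in which the more precise studies count for more.

```python
# A minimal sketch of an "analysis of analyses": pool several study results
# into one number by weighting each study by the inverse of its variance.
# The effect estimates and standard errors below are made up for illustration.

# (effect estimate, standard error) for three hypothetical studies
studies = [(0.60, 0.25), (0.90, 0.30), (1.20, 0.40)]

weights = [1 / se ** 2 for _, se in studies]        # precise studies weigh more
pooled = sum(w * est for w, (est, _) in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled estimate: {pooled:.2f} (standard error {pooled_se:.2f})")
```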

Now that you are familiar with these terms (and I’ll freely admit I am neither a scientist nor a statistician; I leave those tasks to folks far cleverer than me), let us delve a little deeper into this analysis, which the media are calling a study.

Background

Smokers increasingly use e-cigarettes for many reasons, including attempts to quit combustible cigarettes and to use nicotine where smoking is prohibited. We aimed to assess the association between e-cigarette use and cigarette smoking cessation among adult cigarette smokers, irrespective of their motivation for using e-cigarettes.

In summary, Glantz & Kalkhoran aimed to assess whether e-cigarettes help smokers quit combustible cigarettes, while disregarding the motivations for using e-cigarettes. So if a study concluded that the primary motivation for e-cigarette use was “I wanted to quit smoking”, that study was most likely filtered out during the systematic review as “not matching the specified criteria”.

Methods

PubMed and Web of Science were searched between April 27, 2015, and June 17, 2015. Data extracted included study location, design, population, definition and prevalence of e-cigarette use, comparison group (if applicable), cigarette consumption, level of nicotine dependence, other confounders, definition of quitting smoking, and odds of quitting smoking. The primary endpoint was cigarette smoking cessation. Odds of smoking cessation among smokers using e-cigarettes compared with smokers not using e-cigarettes were assessed using a random effects meta-analysis. A modification of the ACROBAT-NRSI tool and the Cochrane Risk of Bias Tool were used to assess bias. This meta-analysis is registered with PROSPERO (number CRD42015020382).

Here we see a top-level overview of how the dynamic duo approached this research. Using a small time-frame (52 days to be precise, and we can most likely discount the weekends as papers are rarely published over a weekend, so only 38 working days) the pair performed the scientific equivalent of a Google search using two tools – PubMed and Web of Science. The key factor they were looking for was smoking cessation – as in, how many participants in each study actually quit smoking using an e-cigarette, compared with smokers not using the devices.
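For the curious, the sketch below shows roughly what a “random effects meta-analysis” of cessation odds involves: each study’s quit versus still-smoking counts become a log odds ratio with a variance, a between-study variance (tau squared) is estimated, and the studies are averaged with weights that fold that extra variance in (the DerSimonian-Laird approach usually meant by “random effects”). Every count here is a hypothetical placeholder; none of these figures come from the paper.

```python
# A hedged sketch of a random effects pooling of cessation odds ratios.
# All counts are hypothetical placeholders, purely to show the mechanics.
import math

def log_or(quit_ecig, smoke_ecig, quit_none, smoke_none):
    """Log odds ratio of quitting (e-cig users vs non-users) and its variance."""
    lor = math.log((quit_ecig / smoke_ecig) / (quit_none / smoke_none))
    var = 1 / quit_ecig + 1 / smoke_ecig + 1 / quit_none + 1 / smoke_none
    return lor, var

# Three hypothetical studies: quit / still-smoking counts for users and non-users
data = [log_or(40, 160, 70, 230),
        log_or(15, 85, 25, 75),
        log_or(30, 120, 20, 130)]
y = [lor for lor, _ in data]            # log odds ratios
v = [var for _, var in data]            # within-study variances

# Fixed-effect pooling, then the DerSimonian-Laird between-study variance tau^2
w = [1 / vi for vi in v]
y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))      # Cochran's Q
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects weights fold tau^2 into every study's variance
w_re = [1 / (vi + tau2) for vi in v]
y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"pooled odds ratio: {math.exp(y_re):.2f}")
print(f"95% CI: {math.exp(y_re - 1.96 * se_re):.2f} to {math.exp(y_re + 1.96 * se_re):.2f}")
```

A pooled odds ratio below 1 is what gets reported as “less quitting among e-cigarette users” – but, as with any weighted average, it is only as good as the studies fed into it.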

Each study was subsequently assessed for “bias” using a well-established method from the Cochrane Collaboration (specifically the Cochrane Bias Methods Group and the Cochrane Non-Randomised Studies Methods Group), but this method was used with a “modification” which is not explained in detail in the paper. As specified in the ACROBAT-NRSI information:

“risk of bias is assessed within specified bias domains, and review authors are asked to document the information on which judgements are based.”

No such documentation appears with the paper; no doubt it needs to be requested from the authors.

Using two Cochrane-developed methods for assessing risk of bias would normally add a certain weight to any meta-analysis, as the Cochrane Group is renowned for impartiality in its assessment of evidence.

Interpretation

As currently being used, e-cigarettes are associated with significantly less quitting among smokers

Based on the studies selected (a combination and analysis of a large, distorted pool of information), the authors interpret the data as being negative towards e-cigarette use as a method of tobacco cessation.

But is that interpretation really correct? As many former smokers turned vapers know, it is entirely incorrect. But let’s not be too blasé here: there are indications that e-cigarettes don’t work for everyone; they are, after all, only one option among many. But to single out e-cigarettes in the manner that Glantz has done makes it appear that they are the only method people are using to stop smoking, which we know is not true at all.

The problem with this meta-analysis, and with meta-analysis in general, is selection bias and the compounding of errors. In this case:

One investigator (SK) did the search, data extraction, and risk of bias assessment, which was subsequently reviewed by a second investigator (SAG)

Sara Kalkhoran is an internist. Internists are Doctors of Internal Medicine. You may see them referred to by several terms, including “internists,” “general internists” and “doctors of internal medicine.” Internal medicine physicians are specialists who apply scientific knowledge and clinical expertise to the diagnosis, treatment, and compassionate care of adults across the spectrum from health to complex illness.

So the “doctors’ doctor” ran the search of PubMed and Web of Science, pulled the relevant studies that the search turned up, and performed the risk of bias assessment. All of that was then ‘reviewed’ by Glantz. To be fair, this happens a lot: one or two researchers perform the search and data extraction, which is then reviewed by another researcher.

An interesting little titbit of information:

We included studies of participants who were interested in quitting cigarette smoking and studies of all smokers irrespective of interest in quitting.

Yet they excluded one study because the measured outcome was e-cigarette use, not cessation. That particular study measured the effects of alternative tobacco use by smokers, and concluded that while alternative tobacco products are “attractive to smokers” they didn’t “promote cessation”. Which is a little odd:

Smokers (and particularly those who tried unsuccessfully to quit) are especially interested in using e-cigarettes. Those trying to quit smoking and younger smokers were most interested in alternative tobacco products, but use of these products was not associated with having made a successful quit attempt. This result calls into question whether these products aid cessation (as some claim) and whether the pattern of use is consistent with harm reduction (when one would expect use by inveterate smokers, not those interested in quitting).

This would suggest that the use of e-cigarettes is more prevalent in those who had tried (and subsequently failed) to quit smoking using other methods (such as NRT for example), and that other alternative products (snus for example) are not specifically associated with cessation attempts.

That study was excluded because, as stated in the next quote, smoking cessation was not the primary measured outcome, yet quit attempts, both successful and failed, were indeed measured in the paper.

Studies that included cigarette smoking cessation as a primary outcome were evaluated for inclusion

Many studies on e-cigarettes, especially in the last year, have focussed on their effects on the human body rather than investigating their use as a cessation method, but that doesn’t deter the intrepid duo, who carry on regardless, going as far as misrepresenting data:

The authors of this meta-analysis had been previously informed by the authors of the Adkison paper that they were misreporting the findings.

So not only do we have a bad case of selection bias on the part of Stanton A. Glantz, we also have misrepresentation of study findings.

But it goes further than that. The PubMed and Web of Science search identified 577 studies containing the keywords “electronic cigarette”, “e-cigarette”, “electronic nicotine delivery”, “stop”, “quit”, “cessation”, “abstain”, and “abstinence” (among others), but only 38 were included as part of the systematic review, with only 20 included in the analysis.

According to the dynamic duo, studies were excluded from the analysis for “lacking a control group that did not use e-cigarettes” or because they used the same dataset as other studies already marked for inclusion. Of the final 20 analysed, 15 were longitudinal (long-term); 10 of those assessed e-cigarette use at the start of the study (baseline), while the other 5 assessed e-cigarette use only at follow-up.

Three of the 20 analysed were cross-sectional, the study type most preferred by Glantz, as he tends to take snapshot “studies” and bludgeon them into something resembling cause and effect, which he can then use as part of the “activism” for which he is so infamous. The final two studies included in the analysis were clinical trials.

True, the studies by themselves may have poor measurements, small sample sizes and biases of their own, but the meta-analysis conducted by the dynamic duo doesn’t account for any of that; it simply lumps the errors together, making them more pronounced.

Take this study as an example (thanks to Clive Bates for the detail):

In this study, the authors divided a sample of smokers at baseline into those who had ever used e-cigarettes (even just one) and those who said they never would use e-cigarettes. Amazingly, this was somehow regarded as a reliable proxy for trying to quit with and without e-cigarettes. They then measured smoking behaviour 12 months later and drew conclusions about the impact of e-cigarettes on quitting behaviour. They didn’t check whether e-cigarettes actually had been used during the 12 months or whether the smokers were actually trying to quit. So we (and the authors) have no idea who was actually using e-cigarettes, and how much, if at all, or whether they were trying to quit, and if they were, whether they were using e-cigarettes in the attempt. Apart from that, it is perfect!!

But this has found its way into an analysis of studies that purportedly tell us something about whether e-cigarettes help people to quit. Then it has been aggregated with studies with completely different designs, with different but equally misleading inferences drawn from them.

The fact remains that the primary, overriding flaw with the analysis is one of selection bias, as clearly highlighted (with a great analogy) by Peter Hajek:

Imagine you recruit people who absolutely cannot play piano. There will be some among them who had one piano lesson in the past. People who acquired any skills at all are not in the sample, only those that were hopeless at it are included. You compare musical ability in those who did and those who did not take a lesson, find a difference, and report that taking piano lessons harms your musical ability. The reason for your finding is that all those whose skills improved due to the lessons are not in the sample, but it would not necessarily be obvious to readers.
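Hajek’s analogy is easy to reproduce with a toy simulation. The sketch below is entirely hypothetical: it assumes e-cigarettes genuinely do boost quitting, but that the boost happens before the study recruits its sample of current smokers, just as the piano lesson happened before the hopeless pianists were rounded up. Even though ever-users quit more overall, the recruited ever-users look worse at follow-up, because everyone the product already helped never made it into the sample.

```python
# Toy simulation of the selection bias described above. All parameters are
# invented for illustration; nothing here comes from any real study.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Each person has a latent quit propensity; half have ever used e-cigarettes.
propensity = rng.uniform(0.0, 0.6, n)
ever_user = rng.random(n) < 0.5

# Period 1 (before the study exists): ever-users get a one-off boost to quitting.
p_quit_1 = np.where(ever_user, np.clip(propensity + 0.4, 0.0, 1.0), propensity)
quit_1 = rng.random(n) < p_quit_1

# The study then recruits only people still smoking at "baseline".
recruited = ~quit_1

# Period 2 (follow-up): everyone quits with their base propensity only.
quit_2 = rng.random(n) < propensity

# True picture across both periods: ever-users quit more.
print("overall quit rate, ever-users :", round((quit_1 | quit_2)[ever_user].mean(), 2))
print("overall quit rate, never-users:", round((quit_1 | quit_2)[~ever_user].mean(), 2))

# Apparent picture inside the study sample: ever-users look worse, because
# the people the boost helped were never recruited in the first place.
print("follow-up quit, recruited ever-users :", round(quit_2[recruited & ever_user].mean(), 2))
print("follow-up quit, recruited never-users:", round(quit_2[recruited & ~ever_user].mean(), 2))
```

None of this proves anything about real smokers, of course; it simply shows that the design Hajek describes can report the opposite of what is actually happening.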

What this analysis really tells us is that those who have so far not quit by any means available are the smokers most committed to continuing to smoke, with a large dash of wilful data misinterpretation thrown in for good measure.

Really, I shouldn’t be taking the time to write about this here; I should be contacting The Lancet editorial team to point out the flaws. But, as I have said before, I am not a scientist, and any comment I make to The Lancet will likely go ignored. In the meantime, the immediate furious reaction from real experts with real qualifications suggests the Glantz paper is a bad, and possibly fraudulent, study. But what will the world’s so-called “science” correspondents do? Will they investigate its claims, or just uncritically repeat its click-bait conclusions? Sadly, I think we can guess.
