Cysteine and hangover: Finland fakes it first

Two marketing-related studies about the effects of “hangover cure” products containing vitamins and l-cysteine were published just weeks apart. The first showed that a certain Australian combination called Rapid Recovery does not alleviate hangover. The second allegedly showed that a Finnish product (Catapult Cat) works, although for some reason the authors claimed to have proven an effect and a mechanism for l-cysteine itself.

There was quite a media buzz about the latter, mostly BS, so I’ll analyze these claims critically; the “upside” has already been presented to the lay public. The news stories had no foundation. Instead, we have a typical case of junk that passed through peer review.

But now in chronological order:

Rapid Recovery

The Australian study was published in the Journal of Clinical Medicine, which is quite new but already has an impact factor of around 6, remarkable and still rising.


It enrolled 23 participants (planned: 25), 3 of whom discontinued, leaving a sample of 20, 65% (13) of them female. For design details, see the article. It was frustratingly difficult to find out which l-cysteine dose was given: according to the corresponding trial registration (>>), it was 320 mg, substantially lower than in the Finnish trial below. However, the study is described as a trial of a specific supplement, and it is correct to report it that way.

A bunch of hangover scales, including the standard Hangover Severity Scale (HSS), and multiple laboratory parameters were measured, mainly on the mornings after two drinking sessions: one session ended with ingestion of the supplement, the other with placebo.

Because l-cysteine is often easily distinguishable from an inert placebo (smell! stomach!), the participants were asked afterwards whether they could guess which treatment they had received. This procedure is a quality marker for clinical trials with “soft” outcomes, i.e. questionnaires about mood, stress, etc. These are highly susceptible to an amplified placebo effect: if the person knows which treatment was given, the answer is probably biased.

There was no significant difference between placebo and the supplement in hangover severity or any other measure. 60% (12 of 20 participants) guessed correctly which treatment they were given, but that is well within what would be expected by chance and did not influence the results.
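As a quick sanity check on the blinding question, here is a minimal sketch (my own, using SciPy; not code from the paper) of testing whether 12 correct guesses out of 20 are distinguishable from pure 50:50 guessing:

```python
# Two-sided binomial test: are 12 correct guesses out of 20 more than chance?
# The counts are taken from the Rapid Recovery paper as described above.
from scipy.stats import binomtest

result = binomtest(k=12, n=20, p=0.5, alternative="two-sided")
print(f"p-value: {result.pvalue:.2f}")  # ≈ 0.5, i.e. fully compatible with guessing
```

Nothing suspicious there. Keep this check in mind for the Finnish trial below, which skipped it altogether.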

The authors noted that the physiology of hangover is largely unknown and concluded that this specific supplement does not mitigate any of its symptoms. Accordingly, either “administration of l-cysteine combined with B and C vitamins does not improve acetaldehyde metabolism”, or acetaldehyde does not cause alcohol hangover. I would add: at the very least, no effect was detectable with such a study.

Catapult Cat

Six weeks later, another study about a vitamin-cysteine supplement was published, this time in the smaller, lower-impact journal Alcohol and Alcoholism, which nonetheless has a good standing in the field. Its impact factor has dwindled over the years to around 2. I certainly like it for its short citation name: Alcohol Alcohol looks reliably strange.

Interestingly, one of the “equally contributing” authors, Maria Palmen, is not a researcher but a professional writer from an agency in Helsinki, where she is listed as “editor, medicines”.


The supplement tested here contains even more vitamins than Rapid Recovery, which had thiamine, pyridoxine and vitamin C added to l-cysteine. Catapult Cat additionally contains riboflavin (B2), niacin (B3), biotin (B7), folic acid (B9) and cobalamin (B12).

The main author, CJ Peter Eriksson, is neither a psychiatrist nor a physiology or addiction expert, but an associate professor of public health at the University of Helsinki who started his career in the labs of the state-monopoly ALKO chain. Eriksson has maintained for decades that acetaldehyde is the main culprit behind hangover. Thus he was the natural choice for Catapult Cat Oy, the maker of this supplement. Eriksson announces straight away that he knows “the truth”, as opposed to mainstream research, and the source he cites for this bold claim is the man himself.

Industry-sponsored trials are known to produce results that please the sponsor. In this case, Eriksson et al. employed such unusual measures to make their output look “positive” that they had to explain them away one after the other, or at least try to. With a bit of background in clinical trial design, it is relatively easy to spot the weaknesses.

The first difference from the Australian trial is the lack of measurements: none of the standard scales were used; a single Likert scale had to do the job. Acetaldehyde was supposed to be detected with a breath test, but this somehow failed (see full text).

The sample size was similar to that of the Rapid Recovery trial above, which means both were severely underpowered, and with numbers this small any “significant” finding carries a high risk of being a false positive. The most eye-popping difference is that the Finnish study excluded women, something they “forgot” to tell the media and that is also missing from the abstract: one has to read the paywalled full text to learn this.
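How little a crossover with about 20 subjects can actually detect is easy to sketch. The following is my own rough calculation, assuming a paired (within-subject) comparison and purely hypothetical effect sizes; neither value comes from either paper:

```python
# Rough power sketch for a paired (crossover) comparison with about 20 subjects.
# The effect sizes below are hypothetical illustrations, not values from the trials.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()  # one-sample / paired t-test power
for d in (0.3, 0.5):     # small and medium standardized within-subject effects
    power = analysis.power(effect_size=d, nobs=20, alpha=0.05)
    n_needed = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: power with n = 20 ≈ {power:.2f}, n for 80% power ≈ {n_needed:.0f}")
```

Even for a medium-sized within-subject effect, 20 subjects give power somewhere around 50–60%, well short of the conventional 80%, and smaller effects would require far larger samples.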

The reasons for excluding the women are unclear. The authors give a number of post-hoc considerations, which are impossible to check because the protocol is not available and the trial was not registered. It seems odd that “participating in different phases of the menstrual cyclus” was sufficient to exclude a woman after the fact, and the exclusion of at least two women (possibly six) was not justified by any criterion. So the trial reports only on 19 men, further diminishing the statistical power and casting doubt on the validity: why not report the male-female ratio, even if it is 90%, instead of dropping a third of the data?

One could surmise, also in light of the negative trial above (which was 65% female), that including the female data would have rendered the results non-significant, and that this prompted Eriksson’s team to drop that part after the data came in. Left with so few data points, they came up with another bad idea: one-tailed testing.

This problem is best described here by the University of California. In short: a one-tailed test looks for an effect in only one, pre-specified direction, which lowers the bar for “significance” in that direction.


The researchers justify their choice of a one-tailed test with the expected direction of the results. They “knew” beforehand that Catapult Cat works by lowering acetaldehyde levels, but exactly THIS is what they had to prove in the first place. And the acetaldehyde levels turned out to be difficult to measure with the equipment they used.
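To make the lower bar concrete, here is a minimal illustration of my own, assuming a paired test on the 19 reported men (18 degrees of freedom):

```python
# Critical t-values at alpha = 0.05 for a paired test with 19 subjects (df = 18).
from scipy.stats import t

df = 18
print(f"one-tailed critical t: {t.ppf(0.95, df):.2f}")   # ≈ 1.73
print(f"two-tailed critical t: {t.ppf(0.975, df):.2f}")  # ≈ 2.10
```

Equivalently, when the effect happens to point in the “expected” direction, the one-sided p-value is exactly half of the two-sided one, so a result that would sit just above 0.05 in a conventional analysis slips under the threshold.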

But what data did they really analyze? That’s, well, confusing.


There is no explanation of “response”, so one is left guessing. A closer look at the graphics makes me lose my last illusions about this study.


The individual data used for the different analyses change with every item; it is always a different group of participants that is statistically tested. The researchers select, item by item, whichever data fit. And even after dropping yet another large portion of the data, Eriksson et al. managed to get only one significant result, barely below the 0.05 threshold. The other is not even significant.

The same happens again with the rest of the items.

To use a simple picture: if you spray a playground full of children with water and then ask only the smiling children whether that was great, you are not testing the hypothesis “all children like mud”. The results are useless, even though there is plenty of anecdotal evidence and “experience” that children actually like getting dirty and jumping into puddles. Some may dislike it; some may fear the reaction of the adults. But if you do not test the whole picture and instead look the other way, the effort is worthless.
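The same point can be made numerically. Below is a toy simulation, entirely my own and with made-up data, not the study’s: even when there is no true effect at all, letting the analysis use a slightly different subset of roughly 19 subjects for each item and reporting the best-looking result pushes the false-positive rate of a one-sided test clearly above the nominal 5%.

```python
# Toy simulation: selective subsetting with no true effect inflates false positives.
# All numbers here are hypothetical; this is not the study's data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(42)
n_subjects, n_items, n_sims = 19, 5, 2000
false_positives = 0

for _ in range(n_sims):
    placebo = rng.normal(size=n_subjects)
    active = rng.normal(size=n_subjects)      # null: the supplement does nothing
    best_p = 1.0
    for _ in range(n_items):
        keep = rng.random(n_subjects) < 0.8   # drop a different ~20% for each "item"
        p = ttest_rel(active[keep], placebo[keep], alternative="less").pvalue
        best_p = min(best_p, p)               # report only the best-looking analysis
    false_positives += best_p < 0.05

print(f"false-positive rate ≈ {false_positives / n_sims:.2f}")  # clearly above 0.05
```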

Needless to say, Eriksson et al. did not ask their research subjects whether they knew which treatment they had received. So this possibility is not ruled out: the participants could have guessed when Catapult Cat was given and responded in the expected way.

If that weren’t enough failure for one study, I did something I shouldn’t have done: I looked up the editorial board of the journal. I did not expect to find anything like this.

Eriksson is on the “Editorial Advisory Board”, and his long-time colleague and frequent co-author Anders Helander is an Associate Editor.

Before this last point, I thought that maybe the dwindling journal wanted to land a scoop with this publication (and maybe that was indeed part of the motivation). And after all, faked studies have made it through even the most prestigious journals on the planet (e.g. the Covid-19 articles in the Lancet and the NEJM). Why should a small journal do better?

But unfortunately, it looks like Eriksson sneaked the paper into publication via well-worn channels, and a buddy helped make sure it went through.

Ironic footnote:

The sales boost from this study probably won’t rescue the company that manufactures the supplement. It has freshly appeared on the public list of insolvent debtors (protestilista.com) and owes money above all to Pharmia Oy, its contract manufacturer. Catapult Cat Oy has not filed any balance sheet for the last 2.5 years and is set to be struck off the trade register, or to go into liquidation, by November 2020.
