This latest news story has been making its rounds, as it has been released at a time when pro-vaccine propaganda/hysteria has reached a fever pitch.
Fortunately, not everyone has been swept up by the propaganda, as detailed by this excellent response to Big Pharma’s propaganda machine:
The night before a Congressional Hearing we have another “Danish Study” that will invariably be all the talk tomorrow. It’s too bad no one reads (or understands) the details about these studies that are both funded and researched by vaccine companies (this one is funded by the Novo Nordisk Foundation and research completed by Danish vaccine maker Statens Serum Institut).
I won’t bore you with how consistently this particular vaccine maker has helped publish bogus studies used here in the U.S. to prove “vaccines don’t cause autism”, but it’s a long, sordid history.
So, I’m just going to make five quick points about why this study doesn’t change anything about the debate (but will most certainly be used by Paul Offit and others to “slam the door” once again):
ONE: We’ve still only studied a single vaccine, even though children receive 11 vaccines, and the MMR isn’t given until 12 months of age, long after many other vaccines. Here’s the table. This has been true for 20 years now. The media will say, over and over, that this “proves vaccines don’t cause autism.” No, it doesn’t. If you prove Vioxx causes heart attacks, that doesn’t mean all drugs cause heart attacks.
TWO: The most compelling data in the study will never get covered: why is the autism rate in this study only 1 in 100? Here in the U.S. we’re at 1 in 36! Shouldn’t CDC researchers rush to Denmark to figure out why their autism rate is so much lower than ours? For every 1,000 Danish kids, only 10 have autism. But here in the U.S., we have roughly 28 per 1,000; that’s about 178% more autism! I thought Paul Offit wanted everyone to believe the autism rate was the same everywhere? What gives? (This chart is from the study.)
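The per-1,000 comparison above is just arithmetic on the two quoted prevalences (1 in 100 for the Danish cohort, 1 in 36 for the U.S.), and can be checked in a few lines:

```python
# Convert the two reported autism prevalences to a common per-1,000 scale
# and compute the relative difference. Figures are the ones quoted above.
danish_rate = 1 / 100      # 1 in 100, as reported in the study
us_rate = 1 / 36           # 1 in 36, the current U.S. estimate

per_1000_dk = danish_rate * 1000   # 10.0 per 1,000
per_1000_us = us_rate * 1000       # ~27.8 per 1,000

pct_more = (us_rate - danish_rate) / danish_rate * 100
print(f"Denmark: {per_1000_dk:.1f} per 1,000")
print(f"U.S.: {per_1000_us:.1f} per 1,000 ({pct_more:.0f}% more)")
```

Using the exact fractions rather than the rounded per-1,000 figures, the difference comes out to roughly 178%.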
My own personal theory as to why the Danes have a lower autism rate (and it’s just a theory): they do not give the Hepatitis B vaccine. No kids in this study received that vaccine, and the Chinese recently showed Hep B vaccine causes brain damage in mice. I personally think it’s a huge part of the problem. (The Danes also do not give the rotavirus vaccine or the flu vaccine; this much lower vaccination requirement for Danish children versus American children is NOT mentioned anywhere in the study.)
[Quick note: Paul Offit, who was cited by the CNN article, is one of the most notorious pro-vaccine apologists around, and personally profited millions of dollars from his rotavirus vaccine…the conflict of interest in repeatedly touting him as an unbiased vaccine “expert” is revolting.]
THREE: They abuse the word “unvaccinated” and the media will, too. The study authors throw around the word “unvaccinated” but at least in the study, they make clear this ONLY means “didn’t get the MMR.” Said differently, the children received EVERY OTHER vaccine. Watch as people try to say this is a vaccinated versus unvaccinated study. It isn’t. Here’s an example.
FOUR: This doesn’t tell us anything about aluminum adjuvant as a trigger for immune activation events
I wrote an entire book about the emerging science showing how the aluminum adjuvant used in vaccines is likely triggering immune activation events in babies and, in certain vulnerable children, causing autism. Obviously, this study does nothing to prove or disprove that theory. Nothing.
FIVE: Finally, and this is actually the most important point but also the most confusing: the study doesn’t take into account “Healthy User Bias”
This is the most important and most confusing point, and it’s the same trick The Lewin Group used when doing their MMR study a few years ago. Luckily, my favorite website, Vaccine Papers, has discussed the abuse of “HUB,” so I will use them to explain it first:
“Healthy user bias (HUB) is a serious problem in studies of vaccine safety. HUB is created when people with health problems avoid vaccination. When this occurs the unhealthy, unvaccinated subjects are used as controls. Consequently, the vaccinated group has better health at the outset. The better health of the vaccinated is erroneously attributed to the vaccine. The vaccine gets credit for improving health, when in fact it is causing harm.”
Let me try to explain. A hypothetical Danish kid (“Kid B”) has an older brother with autism. Kid B gets all his vaccines before the MMR (which isn’t given until 15 months in Denmark), and by age 12 months he is not doing well, missing all his milestones (remember, his brother has autism, so he’s likely at much higher risk, but he’s gotten his shots so far). The parents are now really worried, so they skip the MMR vaccine. They stop vaccinating.
But it’s too late: he goes on to develop autism. But he never got the MMR. In this study, he proves the “MMR does not cause autism.” Get it? The parents avoided MMR because he was already doing so poorly. But he becomes the data these authors want most to find: a kid with a sibling with autism, who didn’t get the MMR, who still has autism. If you don’t account for this healthy user bias, your data starts to become meaningless. Of course the CDC knows this, because they have written all about it.
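The mechanism described in the “Kid B” example can be shown with a toy Monte Carlo simulation. All the rates below are invented for illustration only, and the simulated vaccine has, by construction, zero causal effect on the outcome; the point is purely to show how selective avoidance of vaccination among at-risk children distorts a naive group comparison:

```python
import random
random.seed(0)

# Toy simulation of confounding by indication ("healthy user bias").
# All probabilities are made up; the vaccine has NO effect in this model.
N = 100_000
cases = []
for _ in range(N):
    high_risk = random.random() < 0.05   # 5% of kids are "high-risk" (e.g. affected sibling)
    # High-risk kids develop the outcome more often; the vaccine plays no role at all.
    outcome = random.random() < (0.20 if high_risk else 0.005)
    # Parents of high-risk kids are assumed more likely to skip the shot.
    p_vax = 0.50 if high_risk else 0.95
    vaccinated = random.random() < p_vax
    cases.append((vaccinated, outcome))

def rate(group):
    return sum(outcome for _, outcome in group) / len(group)

vax = [c for c in cases if c[0]]
unvax = [c for c in cases if not c[0]]
print(f"outcome rate, vaccinated:   {rate(vax):.4f}")
print(f"outcome rate, unvaccinated: {rate(unvax):.4f}")
# The unvaccinated group shows a much higher outcome rate even though the
# vaccine has zero effect here -- the difference comes entirely from who
# chooses (not) to vaccinate.
```

With these invented numbers, the unvaccinated group’s outcome rate comes out several times higher than the vaccinated group’s, despite the null effect; this is the directional distortion the quoted Vaccine Papers passage describes.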
Vaccine safety studies typically compare health outcomes in vaccinated and unvaccinated people. In order to obtain accurate results, the two groups must be ‘matched’, meaning they have similar health and lifestyle characteristics. Matching groups is straightforward if the researchers have control over who gets the vaccine and who doesn’t. If researchers do not have this control (known as an ‘observational’ study), it is impossible to assure the groups are matched. The resulting group differences can cause biases that severely distort the study outcome. Poor matching can cause the study to be totally wrong.
Most vaccine safety studies are observational, and accordingly, do not include researcher control of vaccine exposure. For example, studies are often performed with “administrative data”, which is health data collected by insurance companies or governments. Researchers can use administrative data to compare health outcomes in vaccinated and unvaccinated people. A big problem is that vaccinated and unvaccinated people are not matched. Critical differences include:
1) Healthy people are more likely to choose to be vaccinated. People with chronic diseases or health issues tend to avoid the risk of vaccination.
2) People that choose vaccination tend to have other “health seeking” behaviors, such as having a better diet and exercising, or getting regular screenings and medical tests.
These differences create “healthy user bias” (HUB) or the “healthy user effect” in vaccine studies. Flu vaccine studies appear to be strongly affected by healthy user bias. People that receive the flu vaccine have dramatically lower (50% lower) mortality and better health when it’s NOT flu season (i.e. in the summer).
This is not plausibly due to the vaccine; rather it is because people that choose to receive the flu vaccine have better baseline health and more “health seeking” behavior. Dr Peter Doshi of Johns Hopkins University describes the healthy user bias problem in the British Medical Journal:
“Since at least 2005, non-CDC researchers have pointed out the seeming impossibility that influenza vaccines could be preventing 50% of all deaths from all causes when influenza is estimated to only cause around 5% of all wintertime deaths. So how could these studies—both published in high impact, peer reviewed journals and carried out by academic and government researchers with non-commercial funding—get it wrong?
“Consider one study the CDC does not cite, which found influenza vaccination associated with a 51% reduced odds of death in patients hospitalized with pneumonia (28 of 352 [8%] vaccinated subjects died versus 53 deaths among 352 [15%] unvaccinated control subjects).
“Although the results are similar to those of the studies CDC does cite, an unusual aspect of this study was that it focused on patients outside of the influenza season—when it is hard to imagine the vaccine could bring any benefit. And the authors, academics from Alberta, Canada, knew this: the purpose of the study was to demonstrate that the fantastic benefit they expected to and did find—and that others have found, such as the two studies that CDC cites—is simply implausible, and likely the product of the “healthy-user effect” (in this case, a propensity for healthier people to be more likely to get vaccinated than less healthy people).
“Others have gone on to demonstrate this bias to be present in other influenza vaccine studies. Healthy user bias threatens to render the observational studies, on which officials’ scientific case rests, not credible.” -Dr Doshi of Johns Hopkins U., 2013
Healthy user bias is a specific type of “selection bias.” Selection bias is well known. For example, a commonly used textbook on epidemiology and statistics states the following:
“Selection bias results when subjects are allowed to select the study group they want to be in. If subjects are allowed to choose their own study group, those who are more educated, more adventuresome, or more health-conscious may want to try a new therapy or preventive measure. Differences subsequently found may be partly or entirely due to differences between the subjects rather than to the effect of the intervention. Almost any nonrandom method of allocation of subjects to study groups may produce selection bias.”
CDC Researchers Study Healthy User Bias: In 1992, CDC researchers Dr. Paul Fine and Dr. Robert Chen published an important paper describing evidence for HUB in studies of the DPT vaccine and sudden infant death syndrome (SIDS). They derived a mathematical model for calculating the strength of HUB. Their paper states:
“…individuals predisposed to either SIDS or encephalopathy are relatively unlikely to receive DPT vaccination. Studies that do not control adequately for this form of “confounding by indication” will tend to underestimate any real risks associated with vaccination.”
“Confounding…is a general problem for studies of adverse reactions to prophylactic interventions, as they may be withheld from some individuals precisely because they are already at high risk of the adverse event.”
“If such studies are to prove useful, they must include strenuous efforts to control for such factors in their design, analysis and interpretation. Whether this is possible at all may be open to discussion. The difficulty of doing so is indisputable.”
So, a simple question about this new MMR study: Does the phrase “Healthy User Bias” appear anywhere in the study? No, of course not, because this isn’t real epidemiology; this is corporate epidemiology designed to generate a headline the night before a Congressional Hearing, and it will probably work.
They don’t take Healthy User Bias into account in a situation where that behavior will massively impact the results. Without accounting for it, the data really is meaningless. The authors brush by this topic in the conclusion, but don’t give it anywhere near the attention an honest vaccine epidemiologist knows it deserves.
As you already learned, “HUB” will have a massive impact on results, especially when only 1% of subjects have the thing you are measuring for (autism). But why let details get in the way of a good story?
My detractors love to point out that I’m not a scientist, so how dare I write these essays talking about science? So, here’s a great blog post about this study from a scientist, Dr. James Lyons-Weiler. If you have a friend who is an epidemiologist, I’d send them to this link:
Here’s a response to the study from a medical professional:
JUST IN TIME to be sandwiched between two one-sided Senate Hearings, a new cohort study by Hviid et al. has all of the hallmarks of a completely well-done study. Well done as in overcooked. Here is my initial assessment.
The burnt ends on this brisket are obvious. Just like all the past studies on the MMR/autism question, this study focuses on one vaccine. This is a problem because the variable they call “genetic risk” (having an older sibling), which is the most significant variable, is confounded with healthy user bias (there is no control over vaccine cessation).
It’s an important variable, but genetic risk of what? Of autism? Or of autism following vaccination? It’s impossible to tell because the study never tests a VACCINE x FAMILY HISTORY interaction term. Or any other interaction term that includes vaccines.
Were it not such an important question, around which so many “science-like activities” have occurred, we could just shrug our shoulders; one could argue that defining the data analysis strategy is just a matter of how one likes to season one’s meat. But there is real evidence that Hviid (who did the data analysis) appears to be up to some real data cookery here.
(1) The smoking gun is the study-wide autism rate of 0.9-1%. The rate of ASD in Denmark is 1.65%. Where are the missing cases of ASD? Given past allegations of this group’s malfeasance and fraud, the rest of the study cannot be accepted based on this disparity alone: the study group is not representative of the population being studied.
(2) They did not consider anything about >1 vaccine per visit when the MMR was given. Cumulative vaccine exposure is the variable that might reflect risk better, as would “>1 vaccine received on date of MMR vaccination”. It is meaningless to study a single vaccine exposure in a population that is being vaccinated so many times before the MMR.
(3) Apparently vaccine risk in immigrants does not matter, because the study required that individuals have a valid entry in the Denmark birth registry. Why would that matter? Because the odds of receiving many vaccines at once upon entry into Denmark are very, very high. Oddly, without explanation, the study excluded 11 people with autism. To avoid translational failure, the MMR should not be used on any of the clinical groups that were excluded from the study.
(4) While they appear to have learned how to combine risk variables into risk covariates, they did not test models that combine different risk variables (such as vaccine and parents’ age). Single-variable, 2-variable, 3-variable, etc. models should all have been trained on a training set (66% of the data), optimized via internal cross-validation to maximize prediction accuracy, with ROC curves produced, and then their generalizability tested on a set-aside (RANDOMLY set-aside) test set (33% of the data) using my Weighted ACE optimization, given the high imbalance between the two study groups (ASD vs. no ASD) (see Cures vs. Profits; it’s published in there and will prevent nonsense results).
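One piece of the protocol described above — randomly setting aside a test split while preserving a rare outcome class — can be sketched in a few lines. This is an illustration only, with entirely hypothetical data; it is not the study’s code, and `stratified_split` is a made-up helper name:

```python
import random

def stratified_split(records, label_fn, test_frac=1/3, seed=42):
    """Randomly split records into train/test while preserving the
    proportion of each class -- important when one class (here ASD)
    is rare. Illustration only, not the study authors' method."""
    rng = random.Random(seed)
    by_class = {}
    for r in records:
        by_class.setdefault(label_fn(r), []).append(r)
    train, test = [], []
    for group in by_class.values():
        rng.shuffle(group)
        n_test = round(len(group) * test_frac)
        test.extend(group[:n_test])
        train.extend(group[n_test:])
    return train, test

# Hypothetical data: a 1% positive class, mirroring a rare outcome.
data = [{"id": i, "asd": i % 100 == 0} for i in range(10_000)]
train, test = stratified_split(data, lambda r: r["asd"])
print(len(train), len(test))          # roughly a 2/3 vs 1/3 split
print(sum(r["asd"] for r in test))    # positive cases preserved in the test set
```

A plain random split on data this imbalanced can leave the test set with almost no positive cases; stratifying by class avoids that, which is why imbalance-aware evaluation matters here.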
(5) Association studies do not test causality. Had this study reported a positive association, it would have fallen short, under IOM standards, of providing sufficient evidence for causality. Thus, it cannot be used to rule out causality. It’s not testing that hypothesis.
(6) COIs abound: “Jaya K. Rao, MD, MHS, Deputy Editor, reports that she has stock holdings/options in Eli Lilly and Pfizer. Catharine B. Stack, PhD, MS, Deputy Editor for Statistics, reports that she has stock holdings in Pfizer, Johnson & Johnson”.
Once again, epidemiology is the WRONG TOOL for studying vaccine risk.
To read my objective evaluation of past association studies, see: