After applying various statistical models to subsets of 2016 primary voting data, several academic researchers have concluded that Hillary Clinton's win was only possible through widespread vote fraud.
Widespread allegations of election fraud and voter suppression across the United States during the 2016 Democratic Primary have sparked the interest of several academic researchers, and what they discovered is disturbing.
The researchers each performed independent studies in which a few different statistical models were applied to various subsets of vote data, and all of the studies came to the same conclusion.
Namely, that Hillary's win could only have been the result of widespread election fraud.
In fact, one of the statistical models applied by Stanford University researcher Rodolfo Cortes Barragan to a subset of the data found that the "huge discrepancies," "nearly all" of which "are in favor of Hillary Clinton by a huge margin," were "statistically impossible" and that "the probability of this happening is 1 in 77 billion."
Furthermore, the researchers found that the election fraud only occurred in places where the voting machines were hackable and did not keep a paper trail of the ballots.
In these locations, Hillary won by massive margins.
On the other hand, in locations that were not hackable and did keep paper trails of the ballots, Bernie Sanders beat Hillary Clinton.
Analysis also showed repeated irregularities and statistically impossible reversals in reported live vote counts in several locations across the country.
In commenting on the research, Barragan stated that some of the models are rock solid and 59 years old, and that the results seen here have never been witnessed in a non-fraudulent election during that time period.
To summarize, at least four independent studies were conducted, each applying different statistical models to:
- Actual vote counts as they were reported
- Discrepancies in polling data versus actual vote counts
- Various subsets of demographic polling data versus actual vote counts
The results of each study corroborated the results of the others, and some of the researchers have reviewed the others' work and gone on to confirm the findings of those studies.
It will take months for the studies to undergo peer review.
However, all of their research statistically showed that there must have been widespread fraud to create the discrepancies in the vote counts that exist in all 3 subsets of the data analyzed.
The research of Barragan was done collaboratively with Axel Geijsel of Tilburg University in the Netherlands, and it corroborates independent mathematical research conducted by Richard Charnin.
Further independent research was conducted by Beth Clarkson of Berkeley, who not only corroborated the two previous studies but also reviewed them and confirmed their results.
A PDF Summary of the Barragan/Geijsel study “Are we witnessing a dishonest election? A between state comparison based on the used voting procedures of the 2016 Democratic Party Primary for the Presidency of the United States of America” can be found here.
Looking at the discrepancies between the exit polls and the final tally, nearly all are in favor of Hillary Clinton by a huge margin. This is statistically impossible (“The probability P of this happening is 1 in 77 billion”).
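The source does not show how the "1 in 77 billion" figure was derived, but probabilities of that order can arise from a simple sign test: if each state's exit-poll discrepancy were equally likely to favor either candidate, the chance that nearly all of them land on the same side is a binomial tail. The sketch below uses purely illustrative counts, not the study's actual data.

```python
from math import comb

def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """Probability that at least k of n independent discrepancies favor
    one named candidate, if each direction is equally likely (p = 0.5)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 36 of 36 discrepancies all in one direction.
prob = binom_tail(36, 36)
print(f"probability ~ 1 in {1 / prob:,.0f}")  # 1 in 68,719,476,736
```

With 36 coin-flip discrepancies all favoring one candidate, the one-sided probability is 1/2^36, roughly 1 in 69 billion, the same order of magnitude as the quoted figure; the study's exact method and counts may differ.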
“A discrepancy between the declared vote (recorded vote) and the vote extrapolated from the exit polls is an indication of fraud when it is above a margin of error of 2% within a confidence level of 95%.
Here is how it works. When statisticians try to measure the ‘real vote’ they not only estimate the final vote count but they also analyze the entire distribution of the data they gathered from the exit poll voter sampling in order to determine the reliability of their final determination. When fluctuations in the data are due to randomness they will follow a statistical distribution that follows the shape of a bell curve, the Gaussian curve. The reliability or unreliability of the sample data doesn’t depend so much on the trustworthiness of those who collect the exit poll voter sampling, but it’s rather intrinsic to the shape of the distribution. From this shape an ‘interval of confidence’ is determined within which we can unquestionably claim our confidence that we got it right with a probability of 95%–always 95%. This interval of confidence is also called ‘margin of error’ (MoE).
Poorly informed ‘experts’ frequently argue that the statistical analysis of exit polls can be misleading because it assumes that real life data is randomly distributed (as in the Gaussian curve) when that’s not always the case. And here is where they are missing a central point. The expectation that sample data will be randomly distributed ALREADY takes into account all possible relevant factors in a practical observation in real life. When extraneous factors intervene, a discrepancy will make the recorded value fall outside of the interval of confidence signaling only one possibility: a systematic error. When this occurs statisticians make further analysis to determine the causes, and either remove the cause or include it into the ‘margin of error’. After 59 years of fine-tuning this process in countless elections around the world statisticians have reached a point where exit polls have become extremely reliable. If the final ‘Recorded Vote’ falls outside the interval of confidence one can assume with a high degree of certainty that the systematic error is intentional. This is why we say that we have a high probability of fraud.”
Retrieved from : www.democracyintegrity.org/ElectoralFraud/just-doing-the-math.html
– by Giovanni and Marcello Pietrobon; Berkeley, June 3rd, 2016
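The rule quoted above can be sketched in code using the standard simple-random-sample formulas (an assumption on my part; the authors' exact computation, including any design effect for cluster sampling, is not shown in the source): compute the 95% margin of error for an exit-poll share, then flag a recorded vote that falls outside poll ± MoE. All numbers below are illustrative.

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion p_hat from n respondents
    (simple random sampling assumed; z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

def outside_interval(exit_poll: float, recorded: float, n: int) -> bool:
    """True if the recorded share lies outside the poll's 95% interval."""
    return abs(recorded - exit_poll) > margin_of_error(exit_poll, n)

# Illustrative only: a 1,500-respondent poll giving 48%, versus a
# recorded 54% -- a 6-point gap against a ~2.5-point margin of error.
print(f"MoE = {margin_of_error(0.48, 1500):.3f}")   # MoE = 0.025
print(outside_interval(0.48, 0.54, 1500))           # True
```

Note that real exit polls use clustered precinct samples, which widen the margin of error beyond this simple formula; the quoted 2% threshold presumably already accounts for that.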