Friday, August 16, 2013

Adjusting Exit Polls? Assumptions Make All The Difference

A friend of mine from Healthcare 4 All PA sent me a post arguing that the unadjusted exit polls in the presidential elections from 1988 to 2008 show voter fraud in favor of the Republican candidate, and that the polls were then adjusted to match the official election results. The post can be read here:

Election Fraud: An Introduction to Exit Poll Probability Analysis

Blog author and mathematician Richard Charnin claims that the graph above proves massive election fraud, as the margin of error in state exit polls seems to have decreased over the last 20 years. A total of 126 exit polls exceeded the margin of error over this period, and the GOP won 123 of those races. He also argues that the winning Republican's share of the official popular vote is consistently greater than his share in the exit poll.
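For context on what "exceeded the margin of error" means here, a poll's margin of error for a proportion can be sketched as below. This is a minimal illustration, not Charnin's actual calculation: the sample size of 2,000 is an assumed, typical figure for a state exit poll, and real exit polls use a cluster design that inflates this simple-random-sample formula by a design effect.

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical state exit poll of ~2,000 respondents with a candidate near 50%
print(round(moe(0.5, 2000) * 100, 1))  # → 2.2 (percentage points)
```

So an official result more than a couple of points away from the exit poll estimate would count as exceeding the margin of error under these assumptions.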

He relies on the Roper Center Public Opinion Archives for the raw poll data, which are unadjusted, and seems to assume that the raw data are more reliable and more accurate than the adjusted numbers. Charnin does a decent job of discussing the normal, binomial, and Poisson probability distributions and uses them to argue that the Democrat should have won every presidential election from 1988 to 2008. He does not seem to consider that the totals are adjusted for population differences and/or sampling error. To some this sounds like fudging the numbers, but there are theoretically valid ways to adjust sample values to estimate population values.
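The flavor of Charnin's binomial argument can be reproduced from the numbers quoted above. The sketch below computes the chance that at least 123 of 126 margin-of-error exceedances would favor one party if each were a fair coin flip; the key hidden assumptions, which are exactly what his critics dispute, are that the polls are independent and free of any systematic (non-fraud) bias toward Democrats.

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability that 123 or more of 126 exceedances favor one side by chance,
# assuming independent polls and no systematic polling bias.
prob = binom_tail(126, 123)
print(prob)  # astronomically small, on the order of 10**-33
```

The tiny probability is only as strong as the independence and no-bias assumptions feeding it; if exit polls share a common skew (for example, differential response rates), the coin is not fair and the calculation proves much less.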

I have written many posts that have taken exit poll results at face value, except for one for Healthcare 4 All PA/PUSH where I noticed an inconsistency between the 2012 national and PA exit poll results and previous poll results on whether the public wants a better health care law. Does this support Charnin's claim that exit polls are skewed to mask voter fraud? Not necessarily. It takes a lot more data to prove that there is systematic skewing of the data. The assumptions one makes can invalidate the best of statistical methods and the most beautiful of graphics.

Dean Chambers of UnSkewed Polls argues exactly the opposite of Charnin, having predicted a Romney win with 51%, and now insinuates that there was massive voter fraud against Romney. His methods page has since been taken down, but I saw it before and critiqued it here. He made assumptions similar to Charnin's and used methods that fit his beliefs. All of this may confirm to the layman the adage that "there are lies, damned lies, and statistics." A more careful reading of the numbers can separate the signal from the noise (as Nate Silver would say) and find the relevant information.

**Related Posts**