Numerous polls are coming out in the final weeks of the presidential election, and it can be hard to make sense of what they’re all telling us about the state of the race. One way we at The Washington Post look at these polls is by averaging the highest-quality ones, which gives us a gauge of candidates’ support both nationally and in contested states.
The Post will continue to conduct surveys with ABC News in the battleground states (see recent polls in Pennsylvania, Florida, Arizona, Wisconsin and Minnesota), which provide a deep look at voters’ opinions on key issues and how those are shaping their choices for president. The ongoing averages offer a continuously updated gauge of overall support for each candidate. We will use only polls that follow the best practices (more on that below), but even among these surveys, there can be a wide range in results, and an average helps to smooth those out. It’s important to remember that neither individual polls nor poll averages are a forecast or prediction of a candidate’s chance of victory — they’re a snapshot of voters’ support in recent days or weeks.
There are various ways to average polls over time, and results can differ depending on the types of polls that are included, the time frame over which polls are averaged and statistical adjustments aimed at correcting for systematic differences between pollsters.
The Post’s average is unique in that it is curated to focus on polls that are transparent about how they were conducted and employ research methods that have demonstrated accuracy.
What types of polls are included?
Transparency is the first and most critical threshold. When an election poll is released, The Post gathers a variety of details about how it was carried out, such as how the sample was drawn, how interviews were administered, how questions were asked and how the sample was weighted. If details are unclear in a survey’s release, The Post contacts pollsters to gather this information. Post averages do not include surveys from pollsters that fail to provide details considered mandatory by the American Association for Public Opinion Research (AAPOR). Similar details for The Post’s surveys are available in our polling archive.
The Post considers each survey’s methods in detail. Did the survey rely on a random or probability-based sample in which the vast majority of voters had a chance of being selected? Research has found such methods are generally more accurate than other methods, though some non-probability samples — which draw from a broad array of sources — have established a track record of accuracy in pre-election polls and are included in Post averages.
The method of interviewing is also important, particularly as it can affect a poll’s ability to reach a representative sample. For instance, 6 in 10 U.S. adults used only cellphones in late 2019 — not landlines — and an additional 19 percent relied mostly on cellphones, so telephone surveys that rely heavily or entirely on calling landlines are at a disadvantage. Web-based surveys face their own challenge in ensuring respondents are not dominated by frequent Internet users; some address this by completing a portion of interviews by phone.
Another important consideration is how a survey is weighted or stratified to accurately represent population groups with differing voting patterns. Unweighted samples rarely match the population exactly, and weighting samples to match population estimates from sources such as the Census Bureau has long been a best practice in surveys because it ensures that demographic groups are not over- or underrepresented. The 2016 election demonstrated the importance of careful weighting protocols, as many state-level surveys did not weight samples by educational attainment. This led surveys to underrepresent White voters without college degrees, a group that favored President Trump by a wide margin.
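The weighting idea described above can be sketched in a few lines: each respondent gets a weight equal to their group’s population share divided by its share of the sample, so underrepresented groups count for more. This is a minimal illustration with hypothetical numbers, not The Post’s actual weighting procedure (real pollsters weight on several variables at once).

```python
from collections import Counter

def post_stratification_weights(sample, population_shares):
    """Give each respondent a weight so the weighted sample matches
    known population shares for one variable (here, education).
    weight = population share of group / sample share of group."""
    counts = Counter(sample)
    n = len(sample)
    return [population_shares[g] / (counts[g] / n) for g in sample]

# Hypothetical: non-college voters are 60% of the electorate but only
# 40% of a 10-person sample, so they are weighted up (weights > 1)
# while college voters are weighted down (weights < 1).
sample = ["college"] * 6 + ["non_college"] * 4
weights = post_stratification_weights(
    sample, {"college": 0.40, "non_college": 0.60}
)
```

The weights sum to the sample size, so weighted averages stay on the same scale as unweighted ones; a survey that skipped this step would implicitly treat the 60/40 sample split as if it were the true population split.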
The Post also considers the sponsors of surveys, including results only from nonpartisan outlets. While many private and campaign polls use sound methodologies, results are often released in a selective manner and have historically exhibited a sizable bias toward the sponsor’s party.
How is the average calculated?
The Post’s polling averages are calculated as a simple average of polls that meet standards over a set time frame, which may range from the past seven days to one month, depending on the number of polls available. A minimum of three polls is used for any one average; if a polling firm has released more than one poll in that period, only its most recent poll is included. In states where fewer polls have been conducted, the average will reflect a longer time period. As the frequency of poll releases rises closer to Election Day, the time frame for the average will decrease.
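The rules above — a simple average, one poll per firm, and a window that widens until enough polls qualify — can be sketched as follows. The data structure and the widening steps are assumptions for illustration; this is not The Post’s actual code.

```python
from datetime import date, timedelta

def poll_average(polls, end, window_days=7, min_polls=3, max_window=30):
    """Average candidate support over a trailing window, keeping only
    the most recent poll from each firm. If fewer than `min_polls`
    firms qualify, widen the window (up to `max_window` days)."""
    while True:
        start = end - timedelta(days=window_days)
        latest = {}  # firm -> that firm's most recent in-window poll
        for p in sorted(polls, key=lambda p: p["date"]):
            if start <= p["date"] <= end:
                latest[p["firm"]] = p
        if len(latest) >= min_polls or window_days >= max_window:
            break
        window_days += 7
    kept = list(latest.values())
    avg = sum(p["support"] for p in kept) / len(kept)
    return avg, kept

# Hypothetical polls: firm A has two releases, so only the Oct. 25
# poll counts; the average is over three firms, not four polls.
polls = [
    {"firm": "A", "date": date(2020, 10, 20), "support": 50.0},
    {"firm": "A", "date": date(2020, 10, 25), "support": 52.0},
    {"firm": "B", "date": date(2020, 10, 24), "support": 48.0},
    {"firm": "C", "date": date(2020, 10, 22), "support": 49.0},
]
avg, used = poll_average(polls, end=date(2020, 10, 26))
```

Sorting by date before filling the dictionary is what guarantees each firm’s latest poll wins; the widening loop mirrors the article’s point that sparsely polled states get a longer time period.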
Our graphics will note the specific polls included in each state and national average.
How The Post describes differences in the average
Poll averages can reduce random error that occurs in every survey, but they are far from precise instruments. Given this, our graphics will say a candidate has a “lead” in a state only if the average shows them with an advantage of six points or more; a four-to-five-point race will be described as a “slight lead,” while any narrower margin will be described as a close race.
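The labeling rule described above amounts to a simple set of thresholds. A minimal sketch of the stated rule (the function name is hypothetical, not The Post’s code):

```python
def describe_margin(margin_points):
    """Label a poll-average margin per the stated thresholds:
    6+ points is a "lead," 4 to 5 points a "slight lead,"
    and anything narrower is a close race."""
    m = abs(margin_points)  # sign only says who leads, not how clearly
    if m >= 6:
        return "lead"
    if m >= 4:
        return "slight lead"
    return "close"
```

Using the absolute value means the same cautious language applies no matter which candidate is ahead.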
These descriptions might seem overly cautious, considering that if Trump or Democratic nominee Joe Biden were truly leading by five points in a critical battleground state, that would be a strong position. Yet state polls have been less accurate historically — in part because they are conducted less frequently and population benchmarks are less precise — which should temper confidence in the size of any candidate’s advantage.
In 2016, our analysis found final-week national polls overall underestimated Trump’s vote margin against Hillary Clinton by one percentage point on average but by larger margins in states: Wisconsin polls underestimated Trump’s margin by seven points, North Carolina polls underestimated Trump by four points, and Michigan and Pennsylvania polls did so by three points each.
The way polls miss is unpredictable from election to election. While state polls tended to underestimate Trump in 2016, they overestimated Republican nominee Mitt Romney in 2012, and in 2008, state polls showed little systematic error in either party’s direction, according to an AAPOR task force report.
Whatever its limitations, The Post’s average provides a simple way to understand where voters stand at this point in time.