poll-of-pollsters-3.tsv

Please enter your information, and your polling organization's information.
How do you determine how likely one of your respondents is to vote? Do you weight them, or is each respondent either a likely voter or not one? Please provide as much detail as possible.
Do you weight by party affiliation? If so, how do you determine what weights to use? Please provide as much detail as possible.
Do you ever poll from registered voter lists rather than call at random?
Why?
When you weight poll results, is there a maximum weight you use to increase the count of a demographic subgroup?
Why or why not, and if yes, what is that weight?
Do you weight by race and party together? (Example: weighting African-American Democrats instead of African-Americans and Democrats separately.)
In what circumstances do you do so, and why?
Do you ever deliberately call back prior poll takers [http://fivethirtyeight.com/features/oct-16-can-polls-exaggerate-bounces/]?
If you do sometimes or always do that, under which circumstances, how do you do so, and why? If not, why not?
Do you have off-the-record conversations with other pollsters to compare results before publishing them?
Why or why not?
Do you use Bayesian methods or frequentist methods?
Why?
Do you find traditional reporting of statistical margin of error to be credible?
If so, why? And if not, how do you think margin of error should be reported?
In what languages other than English do you ever ask your political polls?
What percentage does it add to the cost of a poll to add one language?
If you field polls in Spanish, how do Hispanic respondents who choose to answer in Spanish differ from those who answer in English?
How much does it cost you to poll one state's Senate race, on average?
How much did it cost you to poll one state's Senate race, on average, in 2010?
How do you account for the change, if any?
Do you always disclose who is funding your polls?
Why or why not?
Do you ever conduct and publish political polls without sponsors?
If not, why not and would you ever do so? If so, under what circumstances and why?
How many total employees (full-time or part-time) does your polling organization have?
How many are men?
How many are women?
How many are white?
How many are African-American?
How many are Hispanic?
How many are Asian American?
Any comments on the demographics of your staff?
Do you ever poll using an online panel?
Do you cap the number of polls that panel members can take in a given time period?
Why or why not? And if you do, at what level do you cap it?
What percentage of your panel members leave or become inactive annually?
Any comments on your panel turnover?
Do you ever poll by phone?
Have decreasing response rates required you to change your techniques by using increased weighting or supplementing with different technologies?
Please elaborate on your answer above.
How much do you pay your interviewers per hour?
What percentage of your interviewers are male?
What is your interviewers' average age?
How many hours of training do you require them to have before they can conduct interviews?
How are your interviewers trained to handle invective from people who hate being called?
What percentage of interviews do you monitor for quality?
Do you ever interview by phone using Interactive Voice Response (IVR)?
For what percentage of IVR polls do you use male voices?
For what percentage of IVR polls do you use female voices?
Whose voices do you use? i.e. actors, local TV personalities?
How much do you pay the people whose voices you record, by poll or by hour?
How many seats do you expect Republicans will control in the Senate in 2015? (Yes, we're asking again.)
Why?
What question or questions would you want us to ask your fellow pollsters in future rounds of this poll?
Name | Polling Organization | Open-Ended Response | Open-Ended Response | Response | Open-Ended Response | Response | Open-Ended Response | Response | Open-Ended Response | Response | Open-Ended Response | Response | Open-Ended Response | Response | Sometimes or Other (please specify) | Open-Ended Response | Response | Open-Ended Response | Spanish | Mandarin or other Chinese | Tagalog | French | Vietnamese | German | Korean | Russian | Arabic | None | Other (please specify) | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Response | Open-Ended Response | Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Response | Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Response | Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response | Open-Ended Response
Shachi KurlAngus Reid GlobalAngus Reid Global reports two perspectives on the Canadian electorate. The first, as has been our practice for almost 40 years, involves examination of the intentions and attitudes of eligible Canadian voters. The second, in light of declining electoral turnout rates, particularly among younger voters, includes a separate commentary in our analysis on the intentions of Canadians most likely to vote. Our data has been analyzed through two sets of filters: those of all respondents who are eligible Canadian voters, and those of survey respondents who are most likely to vote. The data from all respondents uses standard census-based targets to ensure a national sample that is representative of the adult Canadian population as a whole by key demographics such as gender, age, education and region. Data from likely voters applies a weighting structure that further adjusts our sample to reflect known variations in voter turnout -- specifically across age groups -- while also filtering based on respondents' own self-reported past voting patterns and habits. We have developed this approach because we feel strongly that it is the responsible thing to do when reporting on electoral projections. With declining voter turnout, there exists an increasingly important divergence between general public opinion -- which still includes the still valid views of the almost 40 percent of Canadian adults who don't vote -- and the political orientation of the 60 percent of likely voters whose choices actually decide electoral outcomes.YesTransparency is absolutely 100% key.YesNo
John AnzaloneAnzalone Liszt Grove ResearchFirst off, we are driving our samples based off of voter history. We have an equation of what percent of the sample is only 2010 voters and then allow in a certain but small percent of new registrants since 2011 who voted in 2012 primaries and general, as well as new registrants since the 2012 election. But we start with 80% hardcore mid-term voter history. Then we screen down verbally on the interview between very likely and somewhat likely to vote.Naturally we ask self-identified party affiliation and we record party registration if the state has that on the voter file. Early in the cycle, we will let party ID float, but when it gets late in the cycle, you should not be seeing big shifts in party ID, and we will at times weight it depending on what we see with the other demos. You have a model and you need to be consistent with that model.YesWe almost exclusively use sample from voter files that have voter history and have been very successful with both our modeling and predictions. In midterms you have to use a voter file because it is such a restricted voting universe. You can use RDD [random-digit dialing] in a presidential cycle but voter files update new voter registration so fast these days that using a voter file is superior to RDD because with RDD you are counting on the respondent to be truthful about whether they are going to vote. And that is not realistic. We see in our drop-off studies (those who voted in 2012 but not 2010) in 2014 that so many of those who did not vote in midterms say they are going to vote in 2014. We know that is not true, and it is not true when using an RDD of midterms.YesFirst of all, we run partial data every day and then make sure the phone bank has quotas on demographic groups if they get out of whack. Our goal is to do as little weighting as possible and get the right interviews. You can do that by reviewing partials and seeing where you are low on groups. You then have to double-down on getting those interviews. It may be more expensive, but it is the right way to do it. You can also help this by calling more evenings. Too many pollsters (and clients) are in a rush to get data. With caller ID, no call list, cellphones, etc., you need to take more days doing proper call-back procedures and that will help you get the correct demographics instead of just overly weighting.YesIt just depends. If you are in a state like Florida where you have whites, blacks and Hispanics (and Cubans and non-Cuban Hispanics), you may have to do multivariable weighting. You don't usually have to do that in homogenous Iowa. In Florida you have white progressive Democrats in south Florida but conservative cultural Democrats in the Panhandle who call themselves Democrats and vote Republican. It is never as simple as it seems.SometimesPanel backs [i.e. call backs] are difficult to pull off unless you start with a very large sample size. We rarely do it.NoJust don't.Sometimes or Other (please specify)Are you fucking kidding me?Are you fucking kidding me?YesThat probably should be a yes-and-no answer reserved for a two-hour panel discussion.SpanishMandarin or other ChineseWhen using a bilingual bank for Hispanics, you also have high cost, because Hispanic households are at nearly 50% cellphone-only. You also have translation cost. In total, it can add 20%.Spanish-speaking respondents are much more likely to be Democrats. But again, it depends. 
In Florida, we have a universe who take interviews in Spanish who are older Cubans and then younger, non-Cuban Hispanics.There is no average because there are differences in sample size, length of questionnaire and whether or not you are using bilingual banks.again no avgPhone-bank costs are definitely going up and they have been for over a decade. The combination of Caller ID, no call list, cell phones, marketing calls, robo calls reminding people of appointments, etc. all contribute to costs.NoDepends on the state laws. We follow the laws.YesYes, we internally do test polling and on occasion put parallel polls in the field to check our numbers on important races. But we never publish them.22, 9, 13, 18, 2, 0, 2We are proud of our diversity. We also have one LGBT staffer. You should ask that.YesYesThe panel company does that as wellYesYesWe really are strict on our call-back procedures and extending the number of days of our dialing. Again, we look at partial results every day and if any demographics get too out of whack, we will put a quota on it for the phone bank.No
Gabriel JosephccAdvertisingNo weight. We just ask them all. For example, we are surveying 107,555 homes in a county for a client today.No weight. We just survey a very big universe. Once the sample is big enough, weighting does not matter. We also target the mobile-phone channel. There are more mobile phones than people. This automatically provides us with a diverse and balanced response.YesClients want us to.NoWe do not weight surveys. We survey the entire universe.NoWe do not weight polls.NoWe call an entire universe. If that includes respondents from past surveys, that is great. If not, it does not matter. When the sample size is big enough, no weighting is needed.NoNever have been approached by one.Sometimes or Other (please specify)Our process does not require methods.Our process does not require methods. When you talk to everyone, why do you need a method?NoWhen your sample size is big enough it makes margins of error ridiculously low. Why mention something no one would believe?SpanishMandarin or other ChineseFrenchGerman25%YesMinimum fee $1,500Minimum fee $1,500N/ANoWe always disclose who is responsible for the survey.YesBecause we can. It feeds a thirst for information. The public and our clients want to know. If we see a need sometimes it helps us get business as well.NoYesNoWe have not seen decreasing response rates. We target the mobile-phone channel that has increasing response rates.Yes50%50%All of the above, including clients.They are staff.54DataHow off are your polls from what happens on Election Day?
Darrel RowlandColumbus DispatchWe are the weird one -- a mail poll. Over the past several decades, we have determined that those who choose to fill out and mail back our poll form make up likely voters. If the demographic composition of those respondents is significantly different from U.S. census demographics, we will consider weighting the results. Again, with a mail poll, our weighting technique is somewhat unique. The polls are coded for each of Ohio's media markets. If one of those markets is heavily over- or under-represented, we will first weight on a geographical basis and determine whether that pulls the other demographic factors into line. It usually does.No. Party affiliation has been shown to be highly volatile, especially in a swing state such as Ohio. We regard it as a variable, not a constant.YesOur mail-poll sample is drawn from the state's official list of seven-million-plus registered voters.NoObviously, the higher the weights, the more reluctant we become to use them. Far more often than not, we use unweighted figures.NoAgain, we never weight on party.YesWe call respondents who volunteer contact information to get comments elaborating on their answers. We use some of those comments in the stories we publish, and put the remainder on the web to give us a qualitative element along with the quantitative element of a standard poll.NoNot to polish our own apple, but our poll has been demonstrated to be the most accurate in Ohio in recent years, plus we poll on far more races and issues than anyone else.YesGenerally so. I would be more comfortable if pollsters would refer to a plus-or-minus "sampling" error, since other types of error are always possible.noneN/AN/AYesIt's a simple matter of transparency that adds credibility.YesThis is the ONLY way we do it -- we conduct and fund the poll totally ourselves (The Columbus Dispatch).We are a newspaper that conducts polls. All employees are part of our operation.N/AN/AN/AN/AN/AN/ANoNoFor those who use party ID [identification] in their LV [likely-voter] turnout scenarios, how can you project this in a statistically -- even historically -- valid way?
MaryEllen FitzGeraldCritical InsightsWe have a battery of four questions, gauging past participation in on- and off-year elections. We then score the respondents based on that.
Stuart ElwayElway ResearchWe sample from registered-voter lists, which have the vote history on the record. In off years, we define a likely voter as one who has voted in at least two, sometimes three, of the last four elections. Depending on the proximity to the election, we also include a screening question about certainty to vote. In presidential years, we don't typically screen. Our objective is to explain the entire electorate, including those choosing not to vote -- not necessarily to predict the election outcome.No. There is no party registration in Washington state, and party identification fluctuates month-to-month. There is no reliable anchor to weight to.YesIt's a more efficient and reliable way to contact registered voters. Plus, there is other information available, such as voting district, vote history and party registration that is useful in both the sampling and the analysis.NoNoI can't recall having done that in an election poll, which I presume is what you are asking about. When we do polls for media outlets, we get numbers for reporters to call back respondents for in-depth interviews, but I don't think that is what you are asking.NoI have had conversations after publication in some instances (rarely), to discuss variance and possible reasons (timing, too many older voters, etc.).Don't use them.YesBy "sponsors" I assume you mean single entities. All of my political polls are done for The Elway Poll, which is funded by subscribers, or by media outlets.NoYesNoNot for election polling, except that we have to order larger samples. We sometimes supplement non-election surveys with an online component.Yes
Bernie PornEPIC-MRANoYesYes
Mark DiCamilloField Research Corporation (Field Poll)We use a scaled intention-to-vote question and couple this with their actual voting history from their voting record. (This information is available to us since we sample voters from the registered voter rolls.)No. We weight by party registration, which has a population value to which we can project the sample. Each voter's registration comes directly from the voter file (again since we are sampling from the voter rolls). It is not a question asked in the survey.YesThis is the standard method used by The Field Poll in its pre-election surveys.YesYes, we will trim weights when necessary, although this is fairly uncommon. When trimming is employed, we usually limit individual weights to 3.SometimesWhen we do so, we weight to the party registration of the ethnic subgroups (not party ID), again since the registered-voter population value of each ethnic subgroup is known.SometimesWe did this once when one of the candidates included in our pre-election poll dropped out and we called back voters who chose that candidate to ask who their second choice would be.NoWe keep our poll results strictly confidential prior to their publication. After publication, we are open to discuss them with other pollsters and polling analysts.We calculate and report each poll's margin of sampling error in all reports mainly because it is the usual practice in our industry. But I must confess that when reading the polls of other pollsters, I don't really pay much attention to their reported margin of error.SpanishMandarin or other ChineseTagalogVietnameseKoreanKoreanIncluding Spanish is standard when polling Californians. Since all of our polls over the past 15 years have included Spanish, I can't really say what our cost would actually be without Spanish. Over the past four years, we have increasingly included Asian languages, and probably have conducted about a third of them in the Asian languages noted above. When this is done, we typically include an over-sampling of each Asian American population to enable the poll to compare and contrast each of the Asian American voter segments, since they are quite distinct. This typically adds 20% to 25% to the total cost of the poll.Spanish-speaking Latinos are quite different demographically from English speakers. They tend to be much more downscale, with lower levels of income and education. This creates greater affinity to the policy positions of the Democratic Party, since they place greater value on government-provided services, like health care, education and the schools, jobs, and, more recently, the minimum wage. However, we have found them to be more conservative in their views on many hot-button social issues, like same-sex marriage, marijuana legalization and abortion.Not applicable since each of our polls covers numerous poll topics and multiple election contests.Same answer.YesWe tell voters that our polls are nonpartisan and are funded in part by many of the state's leading news organizations.NoYesYesIncreasingly, we stratify our voter samples by age to ensure a proper representation of voters of all ages. Sampling from the voter rolls makes this relatively straightforward since the age of the voter is included on their voting record.
We believe stratifying the sample in this way is superior to applying larger weights to the younger voter segments that can otherwise be underrepresented in our surveys.No51The odds still favor the Republicans winning enough of the contested races to put them over the top.To the automated voice robopollsters: Since 90%+ of Americans now have cellphones and more Americans choose to cut the cord and do without access to a residential landline phone (it now includes fewer than two in three), what are the long-term prospects of this interviewing method since automated voice message calls are prohibited by law from dialing cellphones?
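DiCamillo above says individual weights are usually limited to 3 when trimming is employed. As a minimal sketch of that step (not Field's actual code; the function name and the renormalize-to-mean-1 convention are assumptions), one way to cap weights and restore the weighted total:

```python
import numpy as np

def trim_weights(weights, cap=3.0):
    """Cap each survey weight at `cap`, then rescale so the mean weight
    returns to 1 (weighted total = sample size). Production trimming
    often iterates cap-and-rescale until no weight exceeds the cap."""
    w = np.minimum(np.asarray(weights, dtype=float), cap)
    return w * len(w) / w.sum()

# Toy example: two respondents carry weights well above the cap of 3.
print(trim_weights([0.6, 0.9, 1.0, 1.2, 4.8, 6.5]).round(2))
```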
Berwood YostFranklin & Marshall CollegeWe use several different models, one based on prior voting history and the other based on perceived electoral interest and self-described intention to vote. These are discussed in our releases.We do weight by party affiliation (as well as region and gender) since all of our samples are designed to include all registered voters. Likely-voter determinations are then made as discussed previously. The state of Pennsylvania provides detailed voter-registration files that we use to determine the appropriate proportions. PA voters must be registered to vote 30 days prior to the election, so we use the final registration file for surveys that take place after that date.YesWhy not? There are several considerations. First, the voter files include data that can be used to assess the quality of the interviewing and how representative the sample is. Second, voting history can be used to build likely voter profiles. Finally, it is more cost-efficient to use voter lists than RDD [random-digit dialing] samples. We switched from RDD samples to RV [registered-voter] lists in 2012.YesWe use a raking algorithm to adjust the sample results. If any weight is >= 3, it would be trimmed.NoNoNoFrequentistNoneYesI don't have that information immediately available. I'll try to pull it together before your deadline.NoYesYesNo
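Yost mentions a raking algorithm with weights trimmed at >= 3. A bare-bones sketch of raking (iterative proportional fitting) under those assumptions -- the toy margins, targets and the final trim-then-rescale step are illustrative, not Franklin & Marshall's actual procedure:

```python
import numpy as np

def rake(weights, margins, targets, iters=25, cap=3.0):
    """Iteratively adjust weights so each weighted margin matches its
    population target, then trim at `cap` and rescale to mean weight 1.
    margins: name -> per-respondent category labels (np.array)
    targets: name -> {category: population share} (shares sum to 1)"""
    w = np.asarray(weights, dtype=float)
    for _ in range(iters):
        for name, labels in margins.items():
            total = w.sum()
            for cat, share in targets[name].items():
                mask = labels == cat
                cat_sum = w[mask].sum()
                if cat_sum > 0:
                    w[mask] *= share * total / cat_sum
    w = np.minimum(w, cap)            # trim extreme weights at the cap
    return w * len(w) / w.sum()       # rescale to mean weight of 1

# Hypothetical toy sample raked to party and gender targets.
party = np.array(["D", "D", "R", "I", "I", "I"])
gender = np.array(["F", "M", "F", "M", "F", "M"])
w = rake(np.ones(6),
         {"party": party, "gender": gender},
         {"party": {"D": 0.45, "R": 0.40, "I": 0.15},
          "gender": {"F": 0.52, "M": 0.48}})
print(w.round(3))
```

Note that trimming after convergence can pull the margins slightly off target; some implementations instead trim between iterations.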
Doug KaplanGravis MarketingWe ask them if they plan on voting. We weight likely, very likely, and somewhat likely.YesYes, we constantly update our lists to cover new voters. We call registered-voter lists unless otherwise noted.SometimesNoYesYesOn larger races yes. On smaller races with low populations it becomes more of an issue. Example: only 200 completes in a school board race.YesAAPOR rulesYesYesYes$10/hour20%23, 10They have a manager always available and an internal do-not-call list.15%Yes75%25%Professional voice actors$5 per message. We record thousands of messages.55The Republicans will win Kentucky, Georgia, Iowa, Montana, Arkansas, Colorado, Alaska and North Carolina. The Republicans will hold Kansas (possible they lose governor's race there). I am less confident in New Hampshire but believe the Republicans will win there, as well.
Matt ToweryInsiderAdvantageOur samples are from Aristotle's registered voter files. [http://aristotle.com/] We then have questions to screen for likely voters and only include and weight responses from those who indicate they are likely to vote (or in the instance of early voting, if they have already done so).NoAbsolutely not!YesYesYes51There is a slight GOP wave in non-southern contested states such as Colorado and enough southern states will either hold GOP or go GOP to give Republicans a slim majority. But perhaps not until January (Georgia runoff possible).Do you seriously believe that legitimate and representative voters answer 70 questions on a cellphone?
Julia ClarkIpsosWe utilize a likely-voter model. This contains questions on: registered voter, past vote at various elections, self-reported likelihood of voting on a 10-point scale, and interest in the election. They are turned into a summated index from which we make cuts based on turnout assumptions. For more granularity, we also run a regression index. Weights are applied to the whole population (either at the total level or registered-voter level, depending on the type of survey) before the likely-voter model is applied.NoNoWe do not do campaign polling, and so we almost never need to target very small geographic regions over short periods of time, which is why lists are generally used. Plus we do primarily online polling, and lists do not have email addresses.NoWe do not set fixed max weights. If a weight is high, we assess it and determine the best way to address the situation.NoNoWe don't do phone polling.NoIt would seem like a very strange thing to do. And there is never time, even if we were so inclined... the data needs to get out the door very quickly.BayesianFor reporting ONLINE poll results, we use Bayesian methods to account for the non-probability nature of our recruiting processes for online samples. In many studies, we use outside information to help create informed priors to calibrate our online results. (Generally speaking for our business as a whole, we use a combination of methods depending on mode and client.)YesOur real response is 'yes and no'! Traditional reporting of margin of error accounts for sampling error, but it does not account for other sources of error that may affect a poll's results and its variability far more than the sampling error calculation using the sample variance. Sampling error does not adequately account for the changing nature of telephone research in the age of cell phones, caller ID, and respondent disaffection with participating in surveys. For the MoE to be credible, it would need to account for total survey error within a mean square error calculation. Traditional use of margin of error is often noted as inappropriate for online surveys, but this is a simplification. Traditional use of margins of error can be used for online surveys with the use of an underlying model. Very often, this argument is dismissed. Our organization's response has been to adopt a Bayesian framework/model. This allows us to calibrate our results using informed priors to account for the differences in the population for technologically adept versus non-adept people. We have promoted this approach in print and at conferences to work within the system rather than outside of it. In all honesty, we still report traditional margins of error for telephone research, and we use Bayesian for our online polling work. To report our telephone results using traditional margins of error, we too have to implicitly assume an underlying model -- that the percent of respondents we can reach and get to cooperate respond similarly to those that we could not contact or refused to cooperate. For our online polls' results, we provide Bayesian Credibility Intervals. This provides a probabilistic measure of error in some ways analogous to Confidence Intervals and a margin of error.YesWe are supporters of the AAPOR Transparency Initiative [http://www.aapor.org/Transparency_Initiative.htm], and also fundamentally believe that this is important information for a poll consumer to have.NoPolling is an important part of Ipsos's identity in the U.S. and globally.
If we did not have a media partner, it is likely we would continue to publish political polls under our own brand. Happily, we have always had the fortune to work with media partners on our polling work.Our polling team is a small group of people within Ipsos Public Affairs, which is a part of Ipsos Group (2100+ U.S. employees and 15,500+ global). At any given point in time, between two and 15 people work on the U.S. polling work, and no one works on it exclusively (even me). Our core team has more women than men, but it varies enormously depending on the work at hand and the time of year.YesYesWe want to avoid frequent responders for quality reasons. No more than once a month. Non-panel participants are tracked by IP address.YesYesAll our political polling this year and in 2012 has been online, although we do a great deal of phone research too (just not for political work). Across the industry, it is now standard to include at least 25% -- we usually do more -- cellphones in a phone sample.No
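Clark's Bayesian credibility intervals with informed priors can be illustrated with the standard Beta-binomial setup. This is a generic sketch, not Ipsos's calibration model; the flat Beta(1, 1) default and the example numbers are assumptions:

```python
from scipy import stats

def credibility_interval(k, n, prior_a=1.0, prior_b=1.0, level=0.95):
    """Equal-tailed Bayesian credibility interval for a poll proportion:
    Beta(prior_a, prior_b) prior + binomial likelihood -> Beta posterior.
    An informed prior built from outside data would replace the flat
    Beta(1, 1) used here by default."""
    posterior = stats.beta(prior_a + k, prior_b + n - k)
    return tuple(posterior.ppf([(1 - level) / 2, (1 + level) / 2]))

# Example: 520 of 1,000 online respondents back a candidate.
print(credibility_interval(520, 1000))  # approx. (0.489, 0.551)
```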
Barbara CarvalhoMarist CollegeLikely voters are defined by a probability turnout model. This model determines the likelihood every respondent who indicates they are registered to vote (or plan to register if the poll is being done prior to the deadline for registration in their state) will vote based upon their chance of vote, interest in the election, and past election participation.NoNoWe prefer RDD [random-digit dialing]. Lists add another layer of uncertainty and non-response given the quality of registered-voter lists is not consistent across all states. We also try to measure new voters.YesMultipliers are capped to be no greater than 2. We prefer not to make statistical adjustments to make up for sampling that is not representative. Increasing the proportion of cellphone sample, increasing callbacks, scheduling callbacks, allowing respondents from the sample to return calls to the survey center at their convenience, contacting respondents at different times of the day, training and monitoring of interviewers -- all improve representativeness. The need for adjustments greater than 2 for our weighting demographics in samples is rare.NoWe never weight by party ID.Sometimes1 -- For research purposes, we have called back respondents to pre-election polls after Election Day to compare their intention to support a candidate with the candidate they actually voted for. 2 -- We ask permission for recontact at the end of our surveys to have a reporter from our media partner(s) call them to do a follow-up interview or, in some circumstances, request participation in a focus group about their views.NoNo need to.Sometimes or Other (please specify)Not enough time or space to get into the Bayesian/frequentist debateNot enough time or space to get into the Bayesian/frequentist debate.NoThere is generally a misunderstanding as to what MOE [margin of error] is and for which types of surveys it may be calculated. It doesn't provide very much insight into the value or quality of the research although that is often the inference.SpanishMandarin or other ChineseFrenchGermanIt depends upon the language and the population incidence of those who speak the language.It depends upon the issue/topic surveyed. Sometimes there are significant differences and sometimes there are not.We're an educational research center, not a business.In 2010, it took fewer interviewing hours to get the same number of completed survey interviews.Proportion of cellphonesYesPrinciple of transparencyYesWe're a research center at Marist College and an educational program. So we often conduct surveys without sponsors.12 and about 450 interviewersStaff 3 Interviewers 45%Staff 9 Interviewers 55%Staff 12 Interviewers 70%Interviewers 15%Interviewers 10%Interviewers 5%YesYesWe have worked with a probability-based outside vendor.N/AN/AYesNoThe proportion of cellphone interviews has increased.It is a scale based on experience.45%College ageAbout 10 hoursPart of their training is in human subjects research. They are trained and evaluated in these specific skills, they role-play, have pat phrases, and a hierarchy of coaches and managers to assist.All interviewers are monitored and provided feedback during their shift. All interviews are recorded.No51Unlike 2012, when Obama had many paths to winning, the GOP this time has expanded the playing field and has multiple paths to gaining a majority.Nothing at this time
Brad CokerMason-Dixon Polling & Research, Inc.NoOur contracts specifically prohibit us from discussing a poll with any third party until the client publishes the results. Breaching a contract is not a smart business practice. Over the past 25+ years, we have developed cordial relationships with many of the major campaign polling firms. They know our situation and that about the only thing we can give them is an approximate day that the results will be released. However, once our poll results are in the public domain, we will discuss and compare notes with campaign consultants that have a degree of gravitas in the business.NoSimple business principle -- make more money than you spend or else you'll go out of business. It's a better use of resources to do nothing and go fishing than it is to spend money and staff resources to conduct a poll no one is paying for.NoYesNo54Most are saying 51-53, so I'm going bold. It feels a lot like 1994, when Democratic incumbents polling in the mid-40s all lost.
Mark MellmanMellman GroupYesYesYes
Seth RosenthalMerriman River GroupNo.NoIt's simply not the type of work we do.Spanish25%Lower socioeconomic status, more likely to identify as a Democrat, less likely to vote, more likely to be undecided in down-ballot races.NoYesYes
Steve MitchellMitchell Research & CommunicationsWe first ask if they are registered voters. If registered, are they definitely going to vote, probably going to vote, not sure yet, or definitely not going to vote? We accept only the top two: definitely or probably. Probably is usually less than 2 percent. After absentee ballots are mailed, we ask: "Thinking about the upcoming November General Election for U.S. Senate and Governor, have you voted by absentee ballot, are you planning to vote by absentee ballot, are you definitely voting on Election Day, probably voting on Election Day, not sure yet if you are voting, or definitely not voting?" If they answer not sure yet or definitely not, we do not poll them.Yes, we weight. We look at past election results in similar races. However, the closer we get, the less we weight by party.YesMichigan has an excellent list vendor and our results in all types of races have been very accurate using his lists. We have found that to be true in other states, as well.NoYesIn the past, we used to do a panel-back [call-back] poll the night before the election to see how the race would turn out the next day. We don't do that any more since we use automated phoning now. It was for internal use only.NoOur numbers are our numbers. Polling cannot be done in a vacuum because we all look at what is happening in other states and in the states we are polling. But I don't want my results tainted by talking to another pollster.Arabic50%It depends on whether it is automated or operator-assisted.Our costs are down.YesYesYesYesWe do not use online panels for political polling.We use them very rarely.YesYesWe have to weight for voters under 29 and for African-Americans.$9 per hourAbout 33%21, 10To be polite, thank them and hang up.33%Yes25%75%Local actors$50 per poll52Watching all states and blogs like FiveThirtyEight carefully.None
Patrick MurrayMonmouth UniversityNoWhile there may be some pollsters who would like to be able to do that, they would be hard-pressed to find a willing accomplice. P.S. You guys really need a lesson or two on question wording.Sometimes or Other (please specify)Neither, really, if you are basing this on a strict definition of the terms. Using RBS [registration-based sampling] rather than RDD [random-digit dialing] allows for a little more Bayesian application in the model, but even those who take samples "as they lie" apply some Bayesian thinking -- it's just not quantifiable.This question is a good reminder that few pollsters -- especially the good ones -- are primarily statisticians. My first job in polling was as a telephone interviewer in college. I learned a lot more about the practical application of polling methodology from that experience than any of my stats classes. Good polling requires an understanding of how to communicate with individual respondents (either by phone or in writing) as much as how to weight and model databases.YesThe idea of a cap is not applicable for a true panel study. Your question asked if we use an "online panel" -- that could mean a variety of things now. You have to remember that many of us do non-election polling. For me a panel study is tracking individual changes in attitudes and behavior over time. That's different from a so-called cross-sectional "panel" which is modeled to "look like" a population at a specific point in time. You need to clarify what you are actually asking about.YesYes
Christopher P. BorickMuhlenberg CollegeWe are using an RBS [registration-based sampling] frame that utilizes past voting behaviors to determine preliminary screening for interviewing. An individual qualifies for interviewing if they have demonstrated voting patterns such as voting in two of the last three midterm elections. During interviewing we screen out individuals who express a low likelihood of voting.Yes. The weighting varies depending upon whether we are making inferences about registered or likely voters. For registered voters, we weight our sample to the most recent party-registration statistics for the population of registered voters as reported in the most current figures provided by the Pennsylvania Secretary of State. For likely voter samples, we incorporate previous party voting results in similar elections (e.g. midterms) as reported in exit polling during those elections.YesWe have transitioned in recent years from RDD [random-digit dialing] to RBS [registration-based sampling] samples only in our election polling because we feel that these sampling frames can provide us more validated voter-behavior information.YesIn election polls, we cap our weights for a demographic subgroup at 2.5. Obviously as weights increase for any subgroup, there are risks that additional survey error may be introduced, so we have opted for a cap at a weight of 2.5.NoSometimesIf we are conducting an experiment where individual-level analysis is necessary, we have called respondents back again. For example, we have done tests where we call back respondents after an event (e.g. a debate, a high-profile ad) to see what changes may have occurred at the individual level. We wouldn't be able to test these individual changes with a new group of respondents. In our election polling that is reported publicly, we have only utilized new samples without calling back prior respondents. Our experiments with call backs have been done for academic studies.YesYes. I have shared results in advance of publication with other pollsters as a courtesy to give them a heads up of what we will be releasing. Other established pollsters in the state are often called on to comment on our release so I like to share what we find with them earlier so that they can get a sense of what will be coming out.FrequentistWe are getting more interested in Bayesian approaches but at this point don't feel comfortable using Bayesian techniques as a primary component of our methods. To be honest, we need to learn more about what Bayesian techniques can do to improve our survey quality and I imagine we will be thinking more about this issue in coming years.I'm not really sure what you mean by traditional reporting. If you are talking about the difference between the standard MOE and TSE [total survey error], there is room for discussion about what should be shared.SpanishThe cost increase from English-only surveys ranges from about 15 percent to 20 percent. We don't do Spanish-language interviewing often (only about three times in the last four years), so we don't have a big N to look at.We haven't done enough research on this with our samples to be able to identify if there are any differences.YesThis should be a requirement for any publicly released poll.
A pollster should give full disclosure of the funding of a poll upon request.YesWe do so as a core element of our institute's mission.about 50 total, 6 who are not interviewersapproximately 22, 1 who is not an interviewerapproximately 28, 5 who are not interviewersabout 37, 6 who are not interviewersabout 8, 0 who are not interviewers2, 0 who are not interviewers1, 0 who are not interviewersYesNoWe have done some survey experiments using online panels, but only occasionally. Thus repeating panel members have not been a significant concern for us.YesYesWe have had to increase the number of call backs we conduct.About 40%Around 21It depends on the survey projectTo always remain courteous and to anticipate that they will regularly deal with individuals who are upset with being contacted.We monitor interviewing at all times during calling hours.No54Latest polls (I check daily!) and other indicators such as the president's continued low approval ratings.I'm not sure but this was a really long survey. I really like your work at FiveThirtyEight so I hung in there, but I think I'm tapped out for a while.
Scott KeeterPew Research CenterWe build a likely-voter index from a set of seven or eight questions. The scale includes measures of engagement in the campaign and in politics more generally, past voting, and intention to vote. We add one or two bonus points for young people. We then establish a likely-voter turnout percentage based on historical averages and some estimate of whether the current election is likely to be higher or lower than average. We adjust the prediction to take account of the fact that survey samples tend to be a bit biased toward more engaged people. Then we use the adjusted turnout percentage to cut the likely-voter scale. It is usually the case that the target percentage does not correspond cleanly to a set of categories on the scale, and so we have to take a percentage of the people at a particular point, plus all of the people in categories higher than that. So, for example, if our scale is seven points, we may take all of the 7s and a portion of the 6s. The apportionment is done using weighting, rather than arbitrarily splitting the group.NoNoPast concerns about bias in voter lists. This may be changing, and we are likely to experiment with voter lists in the future.NoWe are mindful of the impact of extreme weights, but using a typical RDD [random-digit dialing] design, it is usually not necessary to adjust weights, other than the usual trimming that is done as a part of the normal weighting process.NoWe don't weight on party in our RDD [random-digit dialing] surveys.SometimesOccasionally we do this to determine whether voter intentions have changed. In non-political surveys, we sometimes call back respondents to gather additional information.NoDoesn't seem like a useful exercise. We have many conversations with our peers, but not at the point of deciding to publish a poll.FrequentistYesBut only if pollsters are calculating margin of error with design effects properly included.SpanishI don't have that information close at hand.Yes, but the differences vary across topics.We don't poll in states.We don't poll in states.YesWe are a nonprofit organization whose mission is public polling.NoOur funder is a sponsor in the sense that they fund us to conduct research, but they are not a client in the usual sense of sponsorship.YesYesOur probability-based panel was designed to yield one survey per month.We haven't been through a full year yet.After the initial attrition following recruitment, turnover has been very low.YesNoDeclining response rates could certainly lead to increased weighting, but at the same time we have been increasing the cellphone percentage in our samples (now 60%). That has had the effect of reducing the need for higher weights.No
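Keeter's description -- take everyone above the cutpoint and a weighted fraction of the respondents at the cutpoint so the likely-voter share hits the turnout target -- maps onto a simple computation. A sketch under those assumptions (function and variable names are illustrative, not Pew's code):

```python
import numpy as np

def likely_voter_weights(index, target_share):
    """Cut a likely-voter index (e.g., 0-7) at a target turnout share.
    Respondents above the cutpoint count fully; respondents at the
    cutpoint get a fractional weight so the weighted likely-voter
    share equals the target, rather than arbitrarily splitting them."""
    s = np.asarray(index)
    lv = np.zeros(len(s))
    need = target_share * len(s)          # likely voters still needed
    for score in sorted(set(s.tolist()), reverse=True):
        mask = s == score
        size = mask.sum()
        if size <= need:
            lv[mask] = 1.0                # take everyone at this point
            need -= size
        else:
            lv[mask] = need / size        # fraction of the cutpoint group
            break
    return lv

# A 7-point scale with a 45% turnout target: all of the 7s, 2.5/3 of the 6s.
scores = [7, 7, 6, 6, 6, 5, 4, 3, 2, 0]
print(likely_voter_weights(scores, 0.45))  # [1, 1, 0.833, 0.833, 0.833, 0, ...]
```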
Evans WittPrinceton Survey Research Associates InternationalYesCertainly. For certain surveys, especially state-level surveys on elections, RBS [registration-based sampling] can be an excellent choice.NoYesCall-back surveys are a valuable survey tool in a variety of circumstances.YesThere is a decades-long debate about the reporting of sampling margins of error. They should be reported because they provide one valuable piece of information about the survey.YesDisclosure is central to credibility. Please check out "20 Questions a Journalist Should Ask About Poll Results." [http://www.ncpp.org/?q=node/4]YesYesNo
J. Ann SelzerSelzer & CompanyNo. There is a wonderful chart at Pollster.com that shows how widely polls vary when they ask party ID. It is not a fixed variable, therefore inappropriate for weighting. [http://elections.huffingtonpost.com/pollster/party-identification#!showpoints=yes&estimate=custom]NoWhen I first joined The Register in 1987, we were calling about the 1988 caucus. A strategy in place was to bank "warm" caucus-goers. We'd simply call back a random sample of people we had already polled and mix them with new caucus-goers. Once I ran the crosstab and saw that George H. W. Bush was winning with one group and Bob Dole with the other, we halted that practice immediately. We had no explanation for the difference, but there it was.YesWell, the reporting of the margin of error is okay. It's how people interpret it. More than 90% of the time, if it is mentioned, it is to say that the race could be closer, or the other person could have the lead. It is equally likely the race is farther apart. It seems a matter of bias that one possibility is reported without the other, equally likely, possibility.YesIt's a principle of disclosure. If the respondent is going to give us time, what we can give in return is transparency.NoIt's not part of our business model.YesYesNo
Don LevySiena CollegeYesYesNo
Jay H. LeveSurveyUSANoNever in 22 years. Never even occurred to me to do so until this minute.NoPublic opinion polls should not be "loss leaders." A media sponsor is a critical second set of eyeballs on every questionnaire we draft and on every set of research results we release. The journalists who review our questions before we launch and who review our findings before we publish create an essential "checks and balances" system that is wholly absent for those pollsters who act unilaterally.YesYesYesTelephone polling died in 2008. See article by Bialik in WSJ 11/06/08: "2008 is to telephone polling what 1948 was to passenger rail: The end of the line." [That was a quote by Leve in this article: http://online.wsj.com/articles/SB122592455567202805]Yes
Andrew SmithUniversity of New HampshireWe ask several questions about voter registration, interest in the election and voting in past elections, to create a context in which it is easy for a respondent to say they are NOT going to vote. We continue only with respondents who say they will definitely vote or who will vote unless an emergency comes up. In addition, we read an option to each election question that allows respondents to say they will skip that particular race. We do not weight voters on their propensity to vote.No, this is nuts! There is no parameter to weight to.NoThe states we most typically poll in (New Hampshire, Massachusetts, Maine) have same-day registration. In New Hampshire, this typically is between 5 percent and 15 percent of the electorate. Using registration lists systematically excludes these people. Also, lists never have contact information for all voters. This systematically excludes those for whom no information is available.NoHas never been an issue.NoYesQuality control.NoWhat would be the point? We publish all of our results regardless of what others look like.FrequentistNoMSE [margin of sampling error] is the least important source of error. Non-response, particularly for IVR [interactive voice response] polls, is a much greater source of error, as are question-form effects. Focusing on MSE is misleading. Also, MSE should be reported with design effects.NoneYesReaders have to know the sponsor so they can judge their interest in the results.YesAbout 100, 5 full-time staffAbout 40 percent, 3 full-time staffAbout 60 percent, 2 full-time staffAbout 90 percentAbout 5 percentAbout 3 percentAbout 2 percentNot too many minorities in New Hampshire.NoYesNoBetween $9 and $15 per hour depending on level and experienceAbout 40%30, 8-12 hoursThank them and hang upAbout 10%No52Bad year for Democrats
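Smith's point that margins of error should be reported with design effects corresponds to the standard adjustment below -- a generic sketch using the common Kish approximation, not UNH's reporting code; the example n and deff values are made up:

```python
import math

def margin_of_error(p, n, deff=1.0, z=1.96):
    """95% margin of sampling error for a proportion, inflated by the
    design effect of weighting (Kish approximation: deff = 1 + CV^2
    of the weights)."""
    return z * math.sqrt(deff * p * (1 - p) / n)

# Naive vs. design-effect-adjusted MoE for p = 0.5, n = 800, deff = 1.3:
print(margin_of_error(0.5, 800))       # ~0.0346 (naive)
print(margin_of_error(0.5, 800, 1.3))  # ~0.0395 (design-effect adjusted)
```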
Gregg DurhamWe Ask AmericaNoYesYes