THE AMERICAN ASSOCIATION of Public Opinion Research is a rather staid institution. Its annual conference is four days of panel discussions on complex statistical problems in the field. (Sample conference T-shirt slogan: “The Weighting is the Hardest Part.”)  Many association members are researchers and statisticians who conduct important government surveys such as the US Census, or work for major research companies such as Gallup and Pew. It is not a group prone to bomb-throwing.

That’s why it was so surprising when, in August of last year, the association issued a scathing critique of the New York Times and CBS for, among other things, collaborating with the online survey firm YouGov to conduct state-level elections polling. Such online polls use respondents who have “opted-in” to take online surveys, as opposed to polls that utilize more traditional “probability-based” sampling.

An opt-in panel, the association argued, violated the Times’s previously published standards for survey research, which were replaced with a new, more vaguely worded set of standards around the same time the new polls were posted. What’s more, the association’s statement said, “these methods have little grounding in theory and the results can vary widely based on the particular method used.” While the removal of the standards and a perceived lack of transparency on the new methods were arguably key points of the association’s statement, the comments about online polling amounted to “shots fired” in the polling world.

The association’s statement set off a firestorm of criticism. One of the most colorful comments came from Columbia University statistics professor Andrew Gelman, who compared the organization to the “Association of Buggy Whip Manufacturers” criticizing the automobile. The controversy highlights the challenges facing survey researchers as Americans change the way they communicate, and it illustrates how fast the field is changing.

Since George Gallup pioneered the field of public polling in the 1930s, the underpinning of survey research has been “probability-based” sampling. A survey can only be said to be representative of a larger population if everyone in that larger population has a chance of being selected to take part. That chance might not be equal for everyone, but if the differences are known, steps can be taken to make the final sample match the overall population. For decades, the practical application of this idea has been to conduct polls by mail, in person, or over the telephone, since most everyone who could be surveyed has an address and a phone number.
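As a rough illustration of how that adjustment works, the Python sketch below weights each respondent by the inverse of a known selection probability, so under-sampled groups count more heavily in the final estimate. The respondents and probabilities are invented for the example, not drawn from any actual poll.

```python
# Toy illustration of probability-based weighting: each respondent
# counts as the inverse of his or her known chance of selection,
# so under-sampled groups are scaled back up. All figures are invented.

respondents = [
    # (answered_yes, selection_probability)
    (1, 0.02),  # from a group with a 2 percent chance of selection
    (0, 0.02),
    (1, 0.01),  # from an under-sampled group with a 1 percent chance
    (1, 0.01),
]

# Design weight = 1 / probability of selection
weights = [1 / prob for _, prob in respondents]

# Weighted share answering "yes"
weighted_yes = sum(w * answer for (answer, _), w in zip(respondents, weights))
estimate = weighted_yes / sum(weights)

print(f"Weighted 'yes' share: {estimate:.1%}")  # 83.3% in this toy example
```

In a real poll the probabilities come from the sampling design and are usually combined with benchmarks such as Census figures, but the arithmetic of the adjustment is the same.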

No sooner had Al Gore invented the Internet than surveys began sprouting up online. The primary challenge of online surveys has been consistent: There is no way of even attempting to reach everyone on the Internet like there is with phone, snail mail, or door-to-door surveys. Instead, online surveys rely on a pool of respondents who have “opted-in” to participate. Typically, opt-in surveys are sent to the estimated 3 to 5 percent of Americans who have signed up to take some kind of online survey, according to figures produced by Braun Research. In other words, not everyone in the target population has a chance of taking part, only those who have chosen to participate.

And yet, many online surveys seem to be yielding accurate results, despite the fact that they don’t share the theoretical bedrock that telephone surveys have rested on for so long. While the early days of online surveys were marked by a fairly inauspicious collection of catch-as-catch-can sampling methods, the methodologies are evolving. The survey providers now drawing the most attention are working studiously to overcome the barriers that come with abandoning the assumption that everyone can be reached. SurveyMonkey, the company better known for cheap (or free) do-it-yourself online surveys, conducted experimental political polling during the 2014 election cycle, and has now partnered with NBC for a limited number of surveys. The company’s results were both relatively accurate and free of partisan bias. YouGov, the Times’s partner, conducted pre-election polling in each competitive statewide race and every congressional district. The results were comparable to what other pollsters found.

At the same time, fewer and fewer people are responding to traditional surveys over the telephone. On a good day, with a lot of effort, a pollster can get a response rate of around 10 percent. The low response rate means 90 percent of would-be respondents choose not to take the survey by not answering the phone or refusing to cooperate. The remaining 10 percent, who do opt to participate, are then used to generalize to the entire population.

Truth be told, even a 10 percent response is generous these days. A recent Stanford University survey tried to reach respondents 14 times and still converted only 12 percent of them. Most political surveys do not have a budget that allows for that many contact attempts, and those that do would be well-served to consider when they cross the line from survey to harassment.

Polling traditionalists are grappling with a tough question: Is a phone survey with such a low response rate still a probability survey? It’s easy to see why an online survey is not, since only the 3 to 5 percent of Americans who have signed up for a panel can possibly participate. But are the 9 or 10 percent who do answer the phone effectively “opting in,” much as online survey takers do? After all, many Americans have caller ID and choose whether or not to answer. These questions are serious enough to call into question the theoretical superiority of telephone polling over online polling.

This debate is now seeping out from the halls of polling industry conferences as media organizations grapple with how to report on polls and even how to conduct their own. For decades, many news outlets would only report on probability-based polls, and they would certainly only put their own names on probability polls. Times are changing. Practices that used to earn the scarlet U, for “unscientific,” are now accepted and routinely reported by major news outlets. In addition to the Times’s collaboration with YouGov, NBC has partnered with SurveyMonkey. Even the venerable Gallup firm, whose namesake pioneered the use of probability-based sampling for polling in the 1930s, announced in March that it would be doing more online polling.

At the MassINC Polling Group, our elections polling has been conducted via telephone, both because the approach has worked well for us and for practical reasons. There are not yet enough online polling providers with reliable samples and methods to support much online election polling in Massachusetts. The main purveyor of online election surveys, YouGov, is already working with UMass Amherst. We do use online surveys for some of our non-elections work. But for elections, we still do live-interview telephone surveys and plan to continue doing so.

The truth is, the debate over telephone vs. online is likely irresolvable, since both have virtues and drawbacks. Nor does it need to be resolved: there is clearly room, and a need, for both in today’s media environment. Practitioners of each method are working hard to overcome the challenges they face, whether that is nonresponse or the lack of a theoretical foundation. The American Association of Public Opinion Research itself is holding a conference titled “Reassessing Today’s Survey Methods” later this spring, focused on just this issue.

At the moment, both telephone and online polling seem to be working. Yes, each has its challenges. Even so, as Nate Silver, of ESPN’s FiveThirtyEight blog, writes, “But all of this must be weighed against a stubborn fact: We have seen no widespread decline in the accuracy of election polls, at least not yet. Despite their challenges, the polls have reflected the outcome of recent presidential, Senate, and gubernatorial general elections reasonably well. If anything, the accuracy of election polls has continued to improve.”

But how can that be? Low response rates call into question the basic theory underlying survey research. And if researchers no longer have that bedrock theory to stand on when explaining why telephone polls work, can they continue to hold them up as statistically superior to online surveys? As Columbia’s Gelman notes, “Whether your data come from random-digit dialing, address-based sampling, the Internet, or plain-old knocking on doors, you’ll have to do some adjustment to correct for known differences between sample and population.”
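The kind of adjustment Gelman describes can be sketched in a few lines. The example below, with invented figures, re-weights an age-skewed sample so that each age group’s share of the weighted total matches its known share of the population, a simple post-stratification step of the sort pollsters apply whatever the data source.

```python
# Toy post-stratification adjustment: whatever the data source,
# each group is re-weighted so its share of the weighted sample matches
# its known share of the population. All figures are invented.

sample_counts = {"18-34": 100, "35-64": 500, "65+": 400}        # raw sample skews old
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # e.g., Census benchmarks
support = {"18-34": 0.60, "35-64": 0.50, "65+": 0.40}           # candidate support by group

total = sum(sample_counts.values())

# Weight for each group = population share / sample share
weights = {g: population_share[g] / (sample_counts[g] / total) for g in sample_counts}

unweighted = sum(sample_counts[g] * support[g] for g in sample_counts) / total
weighted_total = sum(sample_counts[g] * weights[g] for g in sample_counts)
weighted = sum(sample_counts[g] * weights[g] * support[g] for g in sample_counts) / weighted_total

print(f"Unweighted support:      {unweighted:.1%}")   # pulled toward the over-sampled groups
print(f"Post-stratified support: {weighted:.1%}")
```

The catch, for opt-in panels and low-response phone samples alike, is that weighting can only correct for differences the pollster knows about and measures.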

Which is another way of saying that no survey sample perfectly represents the population when it is collected, no matter how hard we close our eyes and pretend. The good news is that, despite the flaws in each, there are more methods that work for what we need them to do than ever before.

Steve Koczela is president of the MassINC Polling Group and president of the New England Chapter of the American Association of Public Opinion Research. Rich Parr is research director of the polling group. The opinions in this article are theirs alone.