The other answerers have scratched the surface – funding/bias, electoral college, polling vs. voting, etc. I think I have the answer to your question if you will bear with me.
First off, a poll is a statistical measuring tool, nothing more, nothing less. The simplest rule of polling and surveys is that a given sample size carries a particular margin of error of plus or minus so many points for each answer. Say you ask 600 people a yes-or-no question, 53% say yes, 47% say no, and the margin of error of the poll is plus or minus 3%. That means both the 53 and the 47 could be off by up to 3 points, so what that poll really tells you about everybody it is supposed to represent (not just the people sampled) is that the split could be anywhere from 50/50 to 56/44.
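If you're curious where that plus-or-minus figure comes from, here's a minimal sketch in Python using the textbook formula for a simple random sample. The 95% confidence level and the sample sizes are just illustrative; real pollsters also fold their weighting adjustments into the math.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.

    p is the observed proportion; p=0.5 gives the worst case (widest margin).
    z=1.96 is the standard normal critical value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 600 works out to roughly +/- 4 points at 95% confidence;
# you need roughly 1,000 interviews to get down near +/- 3.
for n in (600, 1000):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
```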
So the next thing you have to look at, as I alluded to above, is everybody the poll is supposed to represent; in other words, what you're trying to come up with is a "representative sample." In theory you try to get a "random" sample, but the problem with political polling over the phone is that you can't really get a truly random sample of Americans, North Dakotans or whoever. Some have phones, some don't, some will or won't answer. And certain areas might be heavily Republican where others might be heavily Democratic, so if you oversample one area in an attempt to be "random," you actually introduce more error into the poll. So what pollsters do with public opinion polling is attempt to correct for certain factors to be as accurate as possible.
One thing they need to correct for is the fact that a random sample of all Americans wouldn't necessarily be the same as a random sample of the people who are actually going to vote. Remember that about 40% of Americans don't bother to vote, so to get a "representative sample" you have to try to determine who is a "likely" voter. One of the best ways to do this is to look at who has voted before and sample people who, based on past activity, are likely to vote this time. So you look at voter registration lists. Which is all well and good, but then you also have to ask what percentage of voters were registered Democrats, what percentage were registered Republicans, and what percentage were registered or unregistered independents. If it was 40/40/20, then you try to make sure that out of every 100 people you interview, 40 are Democrats, 40 are Republicans and 20 are independents. And of course you also have to adjust for new registrations, which can be pretty hard. For example, if a pollster were using 2004 election data, well, in 2004 there may have been more registered Republicans than registered Democrats. Today the opposite may be true given the state of the economy, the war and the fact that Obama's campaign has been turning out new voters left and right. And then you have to ask how many of these new registrants are actually going to turn out, and there is one big complexity: if they made it to the primary, you can probably count them in, but if Obama gets someone to register as a Democrat today, you have to make some assumptions. Different pollsters will make different assumptions, and this is one place bias can come in, or just plain sloppy methodology, or even assumptions that are well intentioned but wrong.
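To make that weighting step concrete, here's a minimal sketch in Python of weighting a sample by party registration. The 40/40/20 target and the raw interview counts are made-up numbers, just to show how a skewed phone sample might get reweighted to match an assumed likely-voter mix; actual likely-voter models are far more involved.

```python
# Assumed turnout mix among likely voters (hypothetical).
target_share = {"DEM": 0.40, "REP": 0.40, "IND": 0.20}

# Raw sample: suppose the phone sample came back skewed toward one party.
raw_counts = {"DEM": 450, "REP": 350, "IND": 200}   # 1,000 interviews
total = sum(raw_counts.values())

# Each respondent gets a weight so the weighted sample matches the target mix.
weights = {
    party: target_share[party] / (raw_counts[party] / total)
    for party in raw_counts
}

for party, w in weights.items():
    print(f"{party}: each respondent counts as {w:.2f} of a 'likely voter'")

# If the 40/40/20 assumption is wrong, every answer in the poll inherits that error.
```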
Then we have a problem nowadays in that pollsters are having to figure out how to incorporate the surge in cell phones into their polling. They didn't used to call cell phones at all, but now they do; the problem is that some people don't even have land lines anymore. And of course they've always had to adjust their samples for people who screen or block calls or have unlisted numbers. And yet phone polling is probably still the most representative way, because internet polling and polling by mail just give you the people who bother to send the results in, and in-person polling, depending on where you do it, can be biased by the demographics of the area and whatnot.
So polling is very complex and inherently imprecise; even those stated margins of error only hold if the pollster's assumptions and sample adjustments really do produce a pool that represents the people who will vote in November. The fact that pollsters have to make adjustments based on assumptions has been termed PIE, or pollster-introduced error, by a website I like to visit called www.fivethirtyeight.com, which is a blog and an attempt to make predictions from polling data. They actually adjust the polls themselves based on the historical accuracy of each pollster (the difference between their poll and the real result), so they know that pollsters like Gallup and Rasmussen have very little PIE, but something like Zogby Interactive, which polls people on the internet, has VERY high PIE. This site looks at all the polls that are done, with one exception…internal polls.
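Here's a minimal sketch in Python of what correcting for that kind of house effect could look like. The leads and house-effect numbers are invented for illustration only; they are not fivethirtyeight.com's actual figures or their actual method.

```python
# Positive house effect = this pollster has historically leaned toward
# candidate A by that many points compared with the real results.
# (Numbers are hypothetical.)
house_effect = {"Gallup": 0.2, "Rasmussen": -0.3, "Zogby Interactive": 4.0}

polls = [
    {"pollster": "Gallup", "candidate_a_lead": 5.0},
    {"pollster": "Rasmussen", "candidate_a_lead": 3.5},
    {"pollster": "Zogby Interactive", "candidate_a_lead": 9.0},
]

# Subtracting the house effect pulls each poll back toward what that
# pollster's past misses suggest the "true" number would have been.
for poll in polls:
    adjusted = poll["candidate_a_lead"] - house_effect[poll["pollster"]]
    print(f"{poll['pollster']}: raw lead {poll['candidate_a_lead']:+.1f}, "
          f"adjusted {adjusted:+.1f}")
```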
What is an internal poll? It's a poll commissioned and used by a campaign, and therefore it can't be deemed reliable enough; yes, the campaigns know what they're doing and they want to be accurate, but when you're paying for something, it's hard not to get some bias in your favor. So they ignore those polls and look only at the independently financed polls that are out there, weighting each one by its margin of error, how recent it is, the reliability of the particular pollster, and so on. They then use regression analysis to adjust for trends in both state and national polling. In this way they can come up with a fairly accurate overall picture of "if the election were held today, this is what we could expect." And even this comprehensive method, which has the benefit of the collective sampling of ALL pollsters, has its limitations.
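To show just the weighting idea (not the regression part), here's a minimal sketch in Python that averages a few hypothetical polls, discounting older polls and less reliable pollsters. The one-week half-life and the reliability scores are my own assumptions, not anything the site publishes.

```python
polls = [
    # (candidate_a_lead_in_points, days_old, pollster_reliability 0..1)
    (4.0, 2, 0.9),
    (1.0, 10, 0.6),
    (7.0, 1, 0.3),
]

def poll_weight(days_old, reliability, half_life=7.0):
    # Older polls count for less (exponential decay with an assumed
    # one-week half-life), and less reliable pollsters count for less.
    return reliability * 0.5 ** (days_old / half_life)

weights = [poll_weight(days, rel) for _, days, rel in polls]
weighted_avg = sum(lead * w for (lead, _, _), w in zip(polls, weights)) / sum(weights)
print(f"weighted average lead: {weighted_avg:+.1f} points")
```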
One thing this method also does is filter out noise. When something big happens on the campaign trail (say McCain introduces an unknown, unqualified person as his running mate who turns out to be a darling of the Republican base he's had a hard time connecting with), a lot of noise happens on those days; it introduces volatility, a gap between what people say that week about what they might do in November and what their real long-term impressions are going to be.
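As a toy illustration of why averaging over time damps that kind of bounce, here's a minimal sketch in Python with made-up daily numbers; it isn't anyone's actual model, just a plain trailing average.

```python
# Daily poll readings with a one-week bounce around a big campaign event.
daily_lead = [3, 3, 4, 3, 9, 8, 7, 4, 3, 3]

def trailing_average(series, window=5):
    # Average each day with the previous (window - 1) days.
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

print(trailing_average(daily_lead))
# The bounce shows up as a couple of extra points in the average,
# not a six-point swing, which is closer to the longer-term impression.
```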
So, because of all the factors that can affect a poll's accuracy, it's not all that rare to see two polls on the same day that say completely different things. And yes, traditional GOP strongholds WILL fall these days. But take North Dakota as an example: it's ONE poll. If you look at the 538 website, you'll see that taken together, all the polling in ND still shows it as a red state, but a weak red state. If a few more polls say the same thing, then it starts to look like a real trend toward ND flipping blue. I don't suspect ND will be a swing state, but Virginia, Nevada and Colorado, for example, are looking mighty good for Obama.
Bottom line: take any single poll with a grain of salt. That doesn't mean it's "useless," but it's really not all that meaningful unless you look at it in the context of all similar polling.