they cannot be trusted at all. It is surprising they came as close as they did. It is not JUST a problem that "weighting" has become more important; polling has ALWAYS had to make the implicit assumption that people in a given demographic cell who answer the phone are similar to their counterparts in the same cell who do not answer. Even when the response rate was in the 40s, that bothered me.
Now the wizards aggregated and weighted the polls as their models suggested, and they pulled in other variables that they had reason to believe were relevant, improving an "accuracy" that cannot actually be measured.
The funny thing is that the one thing that might have given more insight was something I always questioned even while I did it -- door-to-door canvassing. Having done it, I know that even if canvassers return to a neighborhood a few times, there are people who fail to come to the door or are not there. There are also some who answer but will not say how they will vote. (Here, it would be interesting to know whether the same people were willing to say in a prior year.) A few analyses I have read noted that less canvassing was done this year, including in the critical Rust Belt states. It should also be noted that this is work better done by people "from around here."
I hope that whoever gets the DNC job assigns a good team to look at how this is done in various places and how the data is compiled, both for use in that campaign and to help build a database so we know who our people are. Combining the voter lists, the record of who voted in which elections, and any response to a canvasser or phone banker in a prior election would give any future campaign - for any office - a wealth of information. (If you want to go there, they could also get various demographic information if they were willing to pay for it.)
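To make the idea concrete, here is a minimal sketch of what "combining" those sources might look like: one record per voter, keyed on a voter ID, merging the voter file, vote history, and canvass contacts. All field names and response codes here are invented for illustration; real voter files differ by state.

```python
# Hypothetical sketch: merge a voter file, vote history, and canvass
# log into one profile per voter. All names/fields are made up.

voter_file = {101: {"name": "A. Smith", "precinct": "12"}}
vote_history = {101: ["2020-general", "2022-general"]}
canvass_log = {101: {"2022": "definite_yes"}}

def build_profiles(voter_file, vote_history, canvass_log):
    """Combine the three sources, keyed on voter ID.

    Voters missing from a source simply get an empty history or
    an empty canvass record rather than being dropped.
    """
    profiles = {}
    for voter_id, info in voter_file.items():
        profiles[voter_id] = {
            **info,
            "elections_voted": vote_history.get(voter_id, []),
            "canvass_responses": canvass_log.get(voter_id, {}),
        }
    return profiles

profiles = build_profiles(voter_file, vote_history, canvass_log)
print(profiles[101]["canvass_responses"])  # {'2022': 'definite_yes'}
```

In practice the hard part is not the merge but record linkage -- matching the same person across lists that spell names and addresses differently -- which is why keying everything to the official voter-file ID matters.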
Consider how that database could give you a red flag long before an election. Say you observe that you are NOT getting the definite yeses from a set of people that you got in a prior election; that is a warning that something may be wrong. Having someone ask them about likely issues might determine whether there is a systemic problem. Then, if the campaign thinks the issue is a misunderstanding, it could be addressed to try to secure those votes.
I am not naive enough to think that data collected this way is necessarily good on its own -- but it could be very good as a sanity check on a telephone poll.