From Scientific American:
The most frequently quoted example of the crowd wisdom phenomenon comes from a 1987 study in which researchers asked 56 students to estimate the number of jelly beans in a jar. The average of the guesses (871) was closer to the true number (850) than all but one of the individual guesses. This approach doesn’t work in all cases, however.
Previous research aimed at improving accuracy has often involved obtaining confidence ratings. Giving more weight to higher-confidence answers can increase accuracy, but it still fails in some situations, such as when deliberately misleading questions are used. For example, this new study shows that, when asked whether Philadelphia is the capital of Pennsylvania, most people incorrectly answer “yes” because they know it is a large, historically significant city in Pennsylvania, even though the correct answer is Harrisburg. Confidence ratings don’t solve this problem, as people are often as confident in an incorrect answer as in the correct one. “Conceptually there’s something missing in confidence,” Prelec says. “You want people to express whether their information draws on common knowledge or not—it’s really how confident they are that they have unique information.”
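Confidence weighting amounts to a weighted majority vote. A minimal sketch (function and data names are illustrative, not from the study) shows why it fails on the Philadelphia question: when the confident majority is wrong, the weighted tally is wrong too.

```python
def confidence_weighted_answer(responses):
    """Return the answer with the largest total confidence.

    responses: list of (answer, confidence) pairs, confidence in [0, 1].
    """
    totals = {}
    for answer, confidence in responses:
        totals[answer] = totals.get(answer, 0.0) + confidence
    return max(totals, key=totals.get)


# "Is Philadelphia the capital of Pennsylvania?" A confident but
# misinformed majority still dominates the weighted vote.
responses = [("yes", 0.9), ("yes", 0.8), ("yes", 0.85), ("no", 0.95)]
print(confidence_weighted_answer(responses))  # prints "yes" (incorrect)
```

The correct minority answer “no” loses even though its lone supporter is the most confident respondent.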
The team devised a clever yet simple solution they call the “surprisingly popular” method. In addition to providing answers and confidence ratings, participants were asked to predict how others would respond. The researchers show that selecting the answer that is more popular than predicted outperforms both the “most popular” and “most confident” methods. Both the misguided majority and the correct minority predict that people will give the incorrect response, so the minority (but correct) response is given much more often than predicted. “Minorities can be wildly off-base, but there are many situations where you have a hierarchy of knowledge, and the people with more knowledge often know other people won’t share their information,” Prelec explains. “In …
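For a binary question, the selection rule described above reduces to comparing the actual frequency of an answer with its average predicted frequency. Here is a hedged sketch of that rule (the function name and data are mine, not the authors’ code); it picks “yes” only when “yes” occurs more often than the crowd forecast.

```python
def surprisingly_popular(votes, predicted_yes_fractions):
    """Select the surprisingly popular answer to a yes/no question.

    votes: list of 1 ("yes") or 0 ("no"), one per respondent.
    predicted_yes_fractions: each respondent's forecast of the fraction
    of the crowd that will answer "yes".
    """
    actual_yes = sum(votes) / len(votes)
    predicted_yes = sum(predicted_yes_fractions) / len(predicted_yes_fractions)
    # "yes" exceeding its forecast implies "no" falls short of its own,
    # so one comparison decides the binary case.
    return "yes" if actual_yes > predicted_yes else "no"


# Philadelphia example: 65% answer "yes", but nearly everyone
# (majority and knowledgeable minority alike) predicts ~90% will say "yes".
votes = [1] * 13 + [0] * 7
predictions = [0.9] * 20
print(surprisingly_popular(votes, predictions))  # prints "no" (correct)
```

Because the correct minority answer “no” appears more often than anyone predicted, the rule recovers it even though it is unpopular and held with no special confidence.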