CrowdSmart vs. Surveys
Please do not confuse CrowdSmart with a survey.
While both collect information, CrowdSmart is an outcome-driven, AI-facilitated collaboration platform. Participants share their own ideas and interact with ideas from other contributors, stimulating thought and surfacing the answers that matter most to the group.
Treating CrowdSmart as a survey will likely result in a poor experience for participants and poor results for the effort owner. Think of CrowdSmart instead as a massive round table at which you ask a group of people to collaborate on a specific topic.
From asking your team to identify the next best step on an emerging issue to conducting a nationwide town hall on a civic question, CrowdSmart enables idea sharing and collaboration at an unprecedented scale.
| CrowdSmart | Traditional Surveys |
| --- | --- |
| Ideas are crowdsourced from your audience, inspiring knowledge sharing and collaboration; the best ideas are recognized and elevated | Survey designers craft questions and options with an outcome in mind, frequently introducing bias |
| Respondents can enter any response, and as many responses as they like, so options are not artificially limited | Options per question must be limited (best practice caps them at eight), forcing people to pick something |
| AI ensures diverse ideas are shared among participants, and interactions with ideas are tracked | No collaboration or idea sharing; no cross-discussion or cross-pollination of ideas |
| Can still test assumptions and ensure robust ideation with seeded ideas | If the survey designer omits an important topic or option, the gap is difficult to detect and the results are hard to salvage |
| Designed for adaptive conversations: quick collaborative experiences allow follow-up with the next logical question, and participants direct the conversation | Usually involves anticipating answers to craft the next logical question, leading to very long surveys and survey fatigue |
| Ideation of complete ideas provides clarity, with no need to play ‘20 questions’ | Little room for nuance without asking very similar, duplicative questions |
| Idea generation before idea review eliminates ‘me too’ groupthink | Presenting ‘answers’ without ideation produces ‘me too’ groupthink and anchoring on the first reasonable answer |
| Answers can evolve: if a participant thinks of a new answer or a twist on an existing idea, it can still be added | One-and-done: if a participant thinks of something important after clicking ‘Submit’, it is too late; answers do not evolve |
| Ideation requires some thought and introspection, rather than simply clicking a button for a score | Ratings-based questions are quickly clicked through with little thought; responses are disconnected from behavior-based decisions or consideration of real trade-offs |
| Idea ranking establishes explicit alignment around the most important and most popular ideas | The importance of scored answers is usually derived from correlation analysis, frequently based on an assumed relationship |
| Can get very meaningful results from 15-20 participants, yet can scale to thousands with little to no additional effort | Verbatims must be read and coded by hand, which limits scale |
| Encourages creative and unexpected results | Analysts find what they expect to find in the results |