For a respondent, the procedure looks pretty much the same as a standard survey: (1) A small subset of the responses of others is presented to each respondent, who assesses whether each of those responses is acceptable to him/her. (2) Afterward, the respondent writes down his/her own response, if he/she has one. That's it!
What's important here is that the responses that are presented to each respondent are chosen randomly. The record of this procedure constitutes a graph (or a network) of opinions. Then, the responses can be summarized into typical responses with the help of graph clustering algorithms.
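As a rough illustration of the pipeline above, here is a minimal sketch in Python: acceptability votes are turned into a graph of opinions, and responses are then grouped by traversing that graph. All the data are made up, and the simple connected-components grouping is only a stand-in for the graph clustering algorithms the framework actually relies on.

```python
# Hypothetical sketch of the voteclustering pipeline. The data and the
# connected-components grouping below are illustrative stand-ins only.
from collections import defaultdict

# Records: each respondent saw a random subset of responses and marked
# the ones acceptable to him/her.
acceptances = {
    "respondent 1": ["cats are great", "dogs are great"],
    "respondent 2": ["cats are great", "pets reduce stress"],
    "respondent 3": ["taxes are too high", "cut public spending"],
    "respondent 4": ["taxes are too high", "cut public spending"],
}

# Build the graph of opinions: link two responses whenever the same
# respondent accepted both.
graph = defaultdict(set)
for accepted in acceptances.values():
    for a in accepted:
        for b in accepted:
            if a != b:
                graph[a].add(b)

def clusters(graph):
    """Group responses by traversing the opinion graph (a minimal
    stand-in for a proper graph clustering algorithm)."""
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            group.add(n)
            stack.extend(graph[n])
        groups.append(group)
    return groups

for group in clusters(graph):
    print(sorted(group))
```

On this toy input, the responses fall into two groups of "typical responses": one about pets and one about taxes. In a real deployment, the random assignment of response subsets and a statistically grounded clustering method replace the toy pieces here.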
The high degree of freedom afforded by free-format responses is an important feature of voteclustering. On the other hand, whenever a survey ends up with responses of little diversity, voteclustering simply functions like a classical multiple-choice survey. (In this sense, our survey is nonparametric.) Therefore, voteclustering can always be a safe choice.
The result of our survey is democratic. That is, the responses are classified on the basis of information input by the respondents themselves, rather than by an analyst's subjective criterion or a purely objective distance measure between words.
For more details, please take a look at our paper:
Tatsuro Kawamoto and Takaaki Aoki, "Democratic classification of free-format survey responses with a network-based framework," Nature Machine Intelligence (2019).