# Participants ≠ Population in on-demand services, and an example from architecture

In a recent co-authored paper, we formalize, both conceptually and technically, the simple idea that the participants in any on-demand crowdsourcing setting choose whether to participate and compete only against the other agents who also choose to participate (as opposed to the whole population). Since participation is a decision made simultaneously by the players themselves, the agents do not know ex-ante how many other agents will eventually participate. That is, in every on-demand crowdsourcing setting, the agents play a game against an uncertain (and endogenously determined) number of competitors.

The actual participants are fundamentally different from the potential participants (which we refer to as the population). Conceptually, the comparison is similar to “sales” vs. “demand”: one observes sales (realized demand), but never demand itself (a more “theoretical” notion). Technically, if there is a population of $n$ potential agents, the number of actual participants is a realization of a non-negative random variable $N$ with support $\left\{ 0,1,\ldots,n\right\}$.
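To make the distinction concrete, here is a minimal simulation sketch. It assumes, purely for illustration (the paper's equilibrium model need not take this form), that each of the $n$ potential agents participates independently with some probability `p`; the realized number of participants $N$ then follows a Binomial$(n, p)$ distribution on $\{0, 1, \ldots, n\}$, while the population size $n$ stays fixed.

```python
import random

def simulate_participants(n, p, trials=10_000, seed=0):
    """Draw the number of actual participants N out of a population
    of n potential agents, assuming (illustratively) that each agent
    participates independently with probability p."""
    rng = random.Random(seed)
    return [sum(rng.random() < p for _ in range(n)) for _ in range(trials)]

counts = simulate_participants(n=50, p=0.1)
avg = sum(counts) / len(counts)
# The population is always 50; the realized participant count varies
# around its mean n*p = 5 and is what an empiricist would observe.
print(f"population n = 50, mean participants = {avg:.2f}")
```

The point of the sketch is that a dataset records only `counts` (realized participation), never `n` itself, which mirrors the sales-vs.-demand analogy above.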

This key distinction has important implications for empirical work on crowdsourcing and marketplaces as well. One only observes data from the (endogenously determined) participating agents, as opposed to the potentially participating agents (the population), which could even be assumed to have infinite size for all practical purposes. Further, although a firm can affect the size and characteristics of the agents’ population, the number of participants is a random variable whose distribution is shaped by the choices of the agents and the design of the marketplace. Hence, any interpretation of empirical results on “participants” cannot directly extend to what is under the control of the firm, i.e., the “population”.

In this post, I follow the paper one step deeper and ask whether participants know how many competitors they face when they exert effort. To focus on the most conservative case, we assume that participating agents do not know how many other participants exist (nor their skill) when they choose how much effort to exert, which in turn determines their output. In what follows, I describe a modern use of crowdsourcing in the architecture and space-design industry and highlight how well the above assumptions apply in such a setting.

# GoPillar – design contests for space

Simply put, GoPillar is a marketplace where project creators host contests for architects and interior designers to propose design solutions for a range of properties. Designers participate on-demand, and only the top five solutions (ranked by the project creator according to certain criteria) receive a payment in the form of a “reward”. Note that all contests must reward five solutions by design (see here).

Let’s consider a concrete example of a contest hosted on GoPillar. Here, there is a deadline roughly 33 days away to submit a design that satisfies the criteria in the “briefing” and “requirements” tabs, respectively. The best design submitted is rewarded with 528 EUR, the second-best with 176 EUR, and so on, down to the fifth-best design, which receives 44 EUR (no other design is compensated).

And here are the initial options of a prospective designer who chooses to participate in a contest (possibly among multiple ones).

Even more interestingly, observe that the number of participants is not shown to a prospective designer, and note the “Go Premium” tab.

That is, a prospective designer has to pay a fee to resolve the participation uncertainty he/she faces. For basic users, who make up the largest part of the designers’ population, our theoretical framework applies. In particular, when a basic user chooses to submit a design, he faces uncertainty over the skill of the other participants (participation quality) and over their number (participation quantity). Further, a firm hosting a contest employs only the best design, and thus cares only about the quality of the best design among the ones submitted (as opposed to the best in the population). For related discussion, refer to the working paper mentioned above and to its new version, to be uploaded soon.
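The gap between the best submitted design and the best design in the population can be sketched with a quick simulation. The assumptions here are illustrative and not from the paper: designers’ skills are i.i.d. Uniform(0, 1), and each of the `n` potential designers independently submits with probability `p`. The firm observes only the maximum among the random number of submissions, which is, on average, below the population maximum.

```python
import random

def best_quality(n, p, trials=10_000, seed=1):
    """Compare the average best quality among actual participants with
    the average best quality in the whole population, assuming
    (illustratively) i.i.d. Uniform(0,1) skills and independent
    participation with probability p."""
    rng = random.Random(seed)
    best_participant_sum = 0.0
    best_population_sum = 0.0
    for _ in range(trials):
        skills = [rng.random() for _ in range(n)]
        submitted = [s for s in skills if rng.random() < p]
        best_population_sum += max(skills)
        # If nobody participates, the firm gets no design at all.
        best_participant_sum += max(submitted) if submitted else 0.0
    return best_participant_sum / trials, best_population_sum / trials

part_avg, pop_avg = best_quality(n=50, p=0.1)
print(f"E[best of participants] = {part_avg:.2f}, "
      f"E[best of population] = {pop_avg:.2f}")
```

Under these toy assumptions the population maximum is close to 1 (with 50 uniform draws, its expectation is $50/51$), while the participant maximum is markedly lower, which is exactly why a firm that cares about the best submitted design should care about participation, not just the population.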