As an alternative to online labor markets, several platforms recruit unpaid online volunteers to participate in behavioral experiments that provide personalized feedback. These platforms rely on word-of-mouth sharing by previous participants to recruit new ones. We analyzed how performance feedback provided at the end of an experiment affected the sharing behavior of 81,131 participants. We show that higher-performing participants share significantly more. We also show that self-verification has a moderating effect: people who expected to do poorly are not affected by a high score, whereas people who expected to do as well as others or better are. In a second experiment, we evaluated three distinct social comparison designs for presenting the results. As expected, the design that most emphasized participants' relative success led to the most sharing. Contrary to our expectations, people who expected to do poorly benefited more from the most optimistic social comparison than participants who expected to do better than others did.