The paper associated with these slides analyses the theoretical and practical implications of crowdsourcing two different kinds of text: transcriptions and annotations. Two projects that adopt the crowdsourcing model for these respective purposes are Transcribe Bentham and Ossian Online. They exhibit differing motivations for choosing this model, and each aims to crowdsource tasks whose requirements and biases place particular demands and restrictions on participants. As a consequence, the accuracy of the term "crowdsourcing" must be questioned for more subjective tasks that require the generation of original intellectual content.