A Classic Crowd Working Paper

Crowdsourcing is definitely one of the flavors of the month this year. Enthusiasm is high, and people are ready to try crowdsourcing darn near everything. Let a thousand crowds bloom!

But we are still learning which tasks (and which parts of tasks) should be crowdsourced, and we do not yet have good ways to integrate crowdsourced knowledge into workflows. Such integration is particularly needed when we want to add crowdsourced advice or help rather than simply substitute a crowd for a worker.

Several years ago, Michael Bernstein and colleagues experimented with how to augment the task of writing, adding crowdsourced assistance into Microsoft Word. They call this Soylent (because “it’s people”). “Using Soylent is like having an entire editorial staff available as you write.”

This month they look back on what has been learned [2].

Aside: It is interesting (and frightening) to see how dated this work seems, even though it was done only five years ago (original paper [1]). The researchers used Mechanical Turk, which was then the only game in town; now there are many other crowdsourcing platforms. For that matter, “Microsoft Word” isn’t the same animal any more.

Anyway, this work is still interesting for a few reasons. First, they had to tackle the question of decomposing the overall writing process so that crowd support could be added (relatively) seamlessly. They identified three tasks the crowd should be able to do: shortening text (preserving as much meaning as possible), proofreading, and repetitive tasks (which they term “the Human Macro”). Anyone familiar with producing text will readily agree that these are somewhat difficult tasks, but they do not necessarily require strategic understanding of the overall writing task. In other words, they are well chosen for crowdsourcing.

A second question is how to organize the crowd work. Mechanical Turk workers are paid (tiny amounts), so the writer must be allowed to dispatch work as desired. But how should the work be done? The Soylent group defines a generic workflow pattern they call “Find-Fix-Verify”, which has since become rather widely used. Basically, one set of workers is charged with identifying text that may deserve change, a second set changes the selected text, and a third group checks the changes. Each stage collects “votes” from several (3-5) workers, which is designed to smooth out flaky quality.
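The three-stage pattern can be sketched in code. This is only an illustrative toy, not Soylent’s actual implementation: the worker functions below stand in for independent crowd workers (the real system dispatched paid HITs to Mechanical Turk), and the `agreement` threshold is a hypothetical stand-in for their voting rules.

```python
from collections import Counter

def find_fix_verify(paragraph, find_workers, fix_workers, verify_workers,
                    agreement=2):
    """Toy sketch of the Find-Fix-Verify pattern (not Soylent's real code)."""
    # Find: each worker flags a (start, end) span that may deserve change,
    # or returns None. Keep only spans at least `agreement` workers flagged.
    votes = Counter(w(paragraph) for w in find_workers)
    spans = [span for span, n in votes.items() if span and n >= agreement]

    results = []
    for span in sorted(spans):
        snippet = paragraph[span[0]:span[1]]
        # Fix: a second set of workers proposes rewrites of the flagged span.
        candidates = {w(snippet) for w in fix_workers}
        # Verify: a third group votes on each candidate rewrite; keep the
        # one with the most approvals, provided it clears the threshold.
        scored = [(sum(v(snippet, c) for v in verify_workers), c)
                  for c in candidates]
        best_score, best = max(scored)
        if best_score >= agreement:
            results.append((snippet, best))
    return results
```

With toy workers that all flag the duplicated word in “The cat sat sat on the mat.”, propose “sat”, and approve it, the function returns `[("sat sat", "sat")]`. The point of the redundancy is visible in the structure: no single worker’s finding, rewrite, or approval is trusted on its own.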

Other problems include getting the timing to be acceptable (waiting overnight for suggestions would be stupid), and the nasty mechanics of presenting the suggested changes in the WYSIWYG text.

They raise a number of issues in the original paper. For one thing, their financial model was questionable. Not only is paid copyediting a luxury for most writers, but the design of Soylent also required payment even for changes that were rejected. As they comment, “it remains an open question whether the gains in productivity for the author are justified by the expense.” Given that this concept has not seen general use, I think we know the answer is “no”.

On the other hand, there are now many alternatives to Mechanical Turk, so there is no need to be tied to its payment scheme.

They also commented that legal ownership is not a problem, because the Turk workers contractually waive their rights. This issue is actually more complex in some cases, when moral authority is at issue. For example, a doctoral candidate would be wise to be very cautious about using this sort of advice, lest he or she either lose credit for the originality of the work or accidentally incorporate others’ work into his or her own.

In his introduction to this reprint, Aniket Kittur remarks that the question for the future of crowd work is whether it is “capable of scaling up to the highly complex and creative tasks embodying the pinnacle of human cognition, such as science, art, and innovation.” [3]

  1. Bernstein, Michael S., Greg Little, Robert C. Miller, Björn Hartmann, Mark S. Ackerman, David R. Karger, David Crowell, and Katrina Panovich, Soylent: a word processor with a crowd inside, in Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology. 2010, ACM: New York, New York, USA. p. 313-322.
  2. Bernstein, Michael S., Greg Little, Robert C. Miller, Björn Hartmann, Mark S. Ackerman, David R. Karger, David Crowell, and Katrina Panovich, Soylent: a word processor with a crowd inside. Commun. ACM, 58(8):85-94, 2015.
  3. Kittur, Aniket, Corralling crowd power: technical perspective. Commun. ACM, 58(8):84, 2015.
