Let me join in the hue and cry about the Facebook study published last week, “Experimental evidence of massive-scale emotional contagion through social networks”, by Adam D. I. Kramer and colleagues.
The results weren’t especially earth-shattering; most of us believe that biasing the news can bias what people talk about. After all, isn’t this what advertising is all about?
The uproar was about the fact that the experimenters manipulated the feed contents of thousands of ordinary Facebook users and then observed their postings. This was done without informing the users or obtaining their permission.
As has been pointed out many times, this project could not have been done at a legitimate university or research facility, which requires strict protection of the rights of human subjects. In particular, any research on humans requires legally operative informed consent. Absent that, it is against the law and will result in discipline, firing, fines, lawsuits, and/or prosecution.
As a historical lesson, the legal foundation for these laws is built on the legacy of the Nuremberg trials, which for the first time established that experimenting on humans without consent is not just a civil matter (e.g., assault or battery), but a crime against humanity. Serious freaking stuff.
In the US this has been implemented by so-called “Institutional Review Boards” (IRBs) at every major institution, which review all research beforehand and check that proper process was followed. The IRBs consider whether collecting personal information is justified by the research in question, whether the consent procedures are adequate, and seek to ensure that data will be protected and not used for any purpose other than the specified research. Researchers must also consider the welfare of the participants, very broadly construed: provide them with information about the results and findings, and follow up if there is any distress or harm.
Note that all research must be reviewed, even “harmless” questionnaires; the IRB decides whether you are exempt or not. (There is an exemption for basic software testing. Mercifully for everyone, IRBs don’t have to review every time you ask someone to try your new web page.)
Quaintly, videotaping people for research is considered particularly intrusive, and is rigorously policed. Permission must be obtained, and the recordings must be saved in secure, encrypted vaults and never used except for the research project. No “long tail” on this data; it’s not legal.
This particular restriction is increasingly ironic: the participants were no doubt videoed multiple times by security cameras, snoops, and their friends on the way to the experiment, and may well be videoing the experiment themselves with their own phones. The only thing you have to get permission for is the science.
Needless to say, the research in question would never have passed an IRB review, on several grounds.
So how did this happen?
Obviously, the rules don’t apply to big companies. Actually, they do, but since the research wasn’t publicly funded, it isn’t routinely policed.
I note that in both advertising and software engineering, it is routine to collect behavioral data without explicit permission. After all, advertising is all about manipulating people without their consent. I have no doubt that it never occurred to the Facebookers that this kind of monkeying with people could be wrong; it’s their entire business model. (When a news source slants the news, it is evil; when an aggregator selects which slants to present, it is what?)
In fact, the official “apology” actually says, “The goal of all of our research at Facebook is to learn how to provide a better service,” explicitly acknowledging that this is not legitimate scientific research (but also diving into the “software testing” loophole).
One more point. In the public sector, the research would also be reviewed for financial conflicts of interest, yet another way this research might have failed to pass muster. An academic researcher should not provide “objective science” dressing for commercial research, at least not without scrutiny. In this case, it appears harmless enough, but there is clearly a potential conflict between the interests of the funder and the interests of the users who are exploited in the study.
Overall, this study might have been allowed by an IRB, once reviewed and carefully justified. It might have been considered legitimately public information, and harmless. Also, being a software service, it might pass muster as algorithm testing. Maybe.
But you can’t just blow off ethical tests in the name of “improving our (for profit) service”.
The inexplicable thing for me is how Guillory and Hancock (of UCSF and Cornell) could justify participating without getting IRB approval from their institutions. They should have known better, and should be in hot water for embarrassing their schools with such visible misbehavior.
One last thing: regardless of this case, the IRB rules themselves certainly deserve to be reviewed. The supposed “intrusiveness” of video and other data needs to be reconsidered to bring it in line with everyday reality. Routine questionnaires do not need to be treated as if they were experimental brain surgery. Other factors should perhaps be added, particularly about the responsibilities of privately funded organizations: privatizing crimes against humanity isn’t OK just because the government wasn’t involved.
And engineers and business majors should be taught about research using human subjects, and that the principles apply to them.