[personal profile] eirias
That Facebook Experiment has gotten me thinking about ethics in data science. Well, OK, I'd been thinking about it anyway, so this fifteen minutes of moral panic was well timed.

I've been thinking that there is really no code of ethics for this young profession, so maybe I need to develop my own, where by "develop" I mean "steal cleverly from others." I've got a start.

I. First, tell no lies.
II. All models are wrong; some are useful.
III. Punch up, not down.

(no subject)

Date: 2014-07-03 06:03 pm (UTC)
From: [identity profile] ukelele.livejournal.com
This is a topic on which I would like to hear a great deal more being said.

(One of my particular ethical interests is watching how differently people used to A/B testing and people used to IRBs react, and pondering how all the techniques of usability can be used for good and for evil. There isn't actually a clear line between "A/B testing" (or whatever other sort of site performance & conversion & usability tests you do) and "experimenting on human subjects." Yet many people's intuitions about lots of usability testing are warm and fuzzy, even though they do not feel similarly about human subjects testing in general. Anyway. The blurriness of the lines there. It's interesting.)

(no subject)

Date: 2014-07-03 07:25 pm (UTC)
From: [identity profile] eirias.livejournal.com
I'm fascinated by this too, because my intuitions are similar. I think this is an area that does not admit clean distinctions.

The comments I have most loved in this brouhaha are not actually about ethical outrage per se but about the amusingly, stupidly small effect size. The comments I have most hated have been in one or more of the following categories:

- "it's not illegal / the IRB approved it / the IRB said it didn't need to approve it, therefore you should shut up"
- "everybody's doing it, therefore you should either shut up or quit the internet forever"
- "if they don't publish these things they'll still be doing studies and we'll never know about them, therefore you should shut up"

Nobody should shut up! What happens to our data is extremely important stuff to be talking about. All the more so given what we know state actors are doing with it. Given how companies are buying and selling it. Given how data security is imperfect everywhere and the only thing about breaches that seems to change is that they keep getting bigger. Given how much of our lives we're talking about, now: not just our personal or just our professional or just our business lives, but all of them.

The fact that no rules have been broken does not mean we shouldn't be talking about this. It means that we are at a critical juncture of examining the rules, written in another time with other affordances, and deciding whether we still like them, now, for our own time.

(I'm debating un-friends-locking this post; d'you mind? Will keep it locked if you do mind.)

(no subject)

Date: 2014-07-03 07:39 pm (UTC)
From: [identity profile] ukelele.livejournal.com
Go right ahead. Nothing I wouldn't say in other venues. Thanks for asking though, particularly in context :)

(no subject)

Date: 2014-07-04 01:22 am (UTC)
From: [personal profile] paperkingdoms
I do not particularly care that it was approved; I think the outrage is a sign that we need to not shut up about this until we figure out what sort of new schema would leave us more comfortable, or at least less outraged.

(no subject)

Date: 2014-07-07 10:08 pm (UTC)
From: [identity profile] eirias.livejournal.com
Yes, exactly!

(no subject)

Date: 2014-07-04 08:00 pm (UTC)
From: [personal profile] kirin
To be honest, while I think the ethical questions are quite interesting, personally I'm mostly just annoyed at yet another example of FB hiding things from me for completely opaque reasons. This is essentially the last of many straws for me even *attempting* to read my FB timeline - I'm not deleting my account and I'll still use it for specific narrow purposes, but other than that I'm feeling pretty done.

I want control over what subset I see out of the data that I'm allowed to see. I realize that at some scale micro-managing that gets overwhelming, and I don't object to the *existence* of automated non-trivial algorithms for that task, but they should (a) be optional, and (b) give at least a general idea of what criteria they're using. Ideally peer-reviewable code would be nice but I understand that's often at odds with commercial competitiveness...

(no subject)

Date: 2014-07-05 03:45 pm (UTC)
From: [identity profile] drspiff.livejournal.com
I had been intrigued by this as an academic who does not do human subject research but who at the same time knows what an IRB is and who sometimes, in an administrative capacity, reviews or approves protocols. The articles I read did not mention an IRB process, and I wondered if the project had even been reviewed before it was conducted. I gather from the comments to your post that there was an IRB review and approval. But at least in my experience it is uncommon (because it is more difficult) for an IRB to approve a protocol that does not require that subjects be informed and give consent before they are included in the study.

Was FB claiming that their EULA was sufficient to satisfy "inform and consent"? To me that would be fishy and ethically gray. In that case FB deserves to get its toenails singed, because its actions are very self-serving.

But if there was no requirement imposed for the protocol to include inform and consent, that seems like an area that should be intensely discussed and debated. That would seem to address most of the public outrage that I've heard tell about.

The other loophole I see here is that the Feds enforce most IRB regulations with the carrot of making compliance a condition of applying for federal money and the stick of taking that money away if you violate the regulations. But private companies and concerns don't need to follow federal regulation if they don't care about federal money. Practically speaking, professional ethics don't have any teeth without federal regulation. So what is one to do?

(no subject)

Date: 2014-07-05 08:59 pm (UTC)
From: [personal profile] paperkingdoms
http://www.theatlantic.com/technology/archive/2014/06/even-the-editor-of-facebooks-mood-study-thought-it-was-creepy/373649/

The update in italics indicates that it did go through an IRB approval process (at Cornell), but that it got approval as a pre-existing data set.

(no subject)

Date: 2014-07-06 02:04 pm (UTC)
From: [identity profile] drspiff.livejournal.com
Thanks!

I can think of plenty of instances where an IRB might say "no" to an existing dataset because the way it was collected was reprehensible and has the potential to poison the entire effort. So I don't think that closes the issue.

I also find it kind of odd that Cornell is deferring to Facebook to answer questions, since it is the reputation of their IRB that is on the line. There are also the points I (and that article) bring up about private concerns not being bound to follow the rules in the same way. It sounds like a cop-out to make Facebook answer the questions.

(no subject)

Date: 2014-07-06 06:29 pm (UTC)
From: [personal profile] paperkingdoms
I agree with what you've said here... both about oddness, and the concerns about private endeavors.

(no subject)

Date: 2014-07-07 10:09 pm (UTC)
From: [identity profile] eirias.livejournal.com
"It sounds like a cop-out to make Facebook answer the questions."

Quite.
