I will have to make up for that somehow this weekend, but it's going to be a challenge with our first paper for the Rhetoric of Science and Technology class due on Monday, and three grueling reading assignments, for which discussion board entries have to be devised by midnight on Sunday.
Speaking of the discussion board entries, I'm counting this as an affirmation even though to the rest of the class it's going to appear as a "teacher's pet" sort of thing.
One of the questions that the professor posted about one of the three readings for Monday's class is:
How might you use Feenberg's four moments of secondary instrumentalization to recontextualize technology use? That is, to paraphrase John's concern, how might this reading be of practical use to you?
I met Joe at Helios at a little bit after 1:00, where he was a "second coder" for my Verbal Data Analysis project. A second coder is a person who uses your codes to mark your data so that you can then do a comparison on how each of you marked things, the theory being that the more you mark the same things the same way, the better your coding definitions are.
You calculate what's called Cohen's kappa, which factors in the probability of both of you agreeing on a code simply by chance:

κ = (Pr(a) − Pr(e)) / (1 − Pr(e))

where Pr(a) is the relative observed agreement among raters, and Pr(e) is the probability that agreement is due to chance. If the raters are in complete agreement, then κ = 1. If there is no agreement among the raters (other than what would be expected by chance), then κ ≤ 0.
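To make the formula concrete, here's a minimal sketch of computing kappa for two coders' yes/no decisions. The data is made up for illustration; it's not my actual coding results, and any real project would have far more than ten decisions.

```python
def cohens_kappa(coder1, coder2):
    """Compute Cohen's kappa for two raters coding the same items."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    # Pr(a): relative observed agreement among the raters
    pr_a = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Pr(e): probability of agreeing purely by chance, from each
    # rater's marginal frequency of each label
    labels = set(coder1) | set(coder2)
    pr_e = sum(
        (coder1.count(lab) / n) * (coder2.count(lab) / n)
        for lab in labels
    )
    return (pr_a - pr_e) / (1 - pr_e)

# Hypothetical example: ten yes/no coding decisions per coder
c1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "no", "yes"]
c2 = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(c1, c2), 2))  # → 0.6
```

Here the coders agree on 8 of 10 items (Pr(a) = 0.8), but since each marked "yes" half the time, they'd agree on half by chance alone (Pr(e) = 0.5), so kappa works out to (0.8 − 0.5) / (1 − 0.5) = 0.6.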
Here's what the results of Cohen's kappa mean:

| Kappa | Interpretation |
|-------|----------------|
| 0 – .20 | Abysmal; you and the 2nd coder agreed on very few things. |
| .20 – .40 | Slightly above abysmal, but still quite pathetic. |
| .40 – .60 | You done good; make a few tweaks to your codes and carry on. |
| .60 – .80 | Teacher's pet; or, your codes are so easy a two-year-old could use them. |
| .80 – 1.0 | You're a coding-definitions god; put it on your resume. |
Not that I'm bitter, but my Cohen's kappa was:
Yes, that would be in the your-coding-definitions-suck-you-moron range. To convey the gravity of the situation: instead of each of us spending an hour and a half poring through my data marking things, we could both have marked each item without even reading it (it's basically a yes-or-no decision: does the code apply or not?) and, simply by chance, we would have agreed at about the same rate. Lovely.
Going into class tonight, I felt pretty crappy about this, but after applying some disagreement-analysis techniques together, Jason and I came up with a couple of strategies that we think will really help. Yay!
At home after class, I spent the rest of the night completing my disagreement analysis and finalizing the work that led to all of these results, to turn in to Jason.
Exhausted, I actually got to bed at a reasonable hour. That is, if you call 1:30 in the morning a reasonable hour.