Talk:Fleiss' kappa

From Citizendium
Revision as of 11:41, 26 September 2007 by imported>Subpagination Bot (Add {{subpages}} and remove checklist (details))
This article is developing and not approved.
 Definition Statistical measure for assessing the reliability of agreement between a fixed number of raters when assigning categorical ratings to, or classifying, a number of items.
Checklist and Archives
 Workgroup category Mathematics [Categories OK]
 Talk Archive none  English language variant British English

Comments

The first section states

"It is a measure of the degree of agreement that can be expected above chance. Agreement can be thought of as follows, if a fixed number of people assign numerical ratings to a number of items then the kappa will give a measure for how consistent the ratings are."

This seems a bit awkward to me. Would it be equivalent to say something like the following?

"It measures to what extent the raters are more in agreement than would be expected if they assigned ratings randomly."

Later, you say "A K value of 1 means complete agreement". Complete agreement between the raters or complete agreement with what would be achieved by chance? I'm pretty certain it's the former, but it's not entirely clear.

I leave this here as I'm not certain this is what you want to say. (I don't know enough statistics to be much more than a copyeditor on this.) Simen Rustad 15:12, 21 November 2006 (CST)
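For what it's worth, the point under discussion can be checked numerically. Below is a minimal sketch of the standard Fleiss' kappa computation (the function name and the example data are my own, not from the article): per-item agreement is averaged and then corrected for the agreement expected by chance, and complete agreement between the raters — the former reading of "a K value of 1" — does indeed give κ = 1, provided the raters do not all use a single category (in which case chance agreement is also 1 and κ is undefined).

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for counts[i][j] = number of raters assigning item i to category j.

    Assumes every item is rated by the same number of raters.
    """
    N = len(counts)            # number of items
    n = sum(counts[0])         # raters per item (fixed by assumption)
    k = len(counts[0])         # number of categories

    # Observed agreement: mean over items of the pairwise agreement P_i
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P) / N

    # Chance agreement: sum of squared overall category proportions p_j
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    # Kappa: agreement above chance, scaled by the maximum attainable above chance
    return (P_bar - P_e) / (1 - P_e)

# Three raters, three items, two categories, all raters in complete agreement:
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))  # 1.0
```

With disagreement in the counts (e.g. `[[2, 1], [1, 2], [2, 1]]`) the value drops below 1, and it can go negative when the raters agree less than chance would predict.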