How can we better develop assessment center exercises, or better train evaluators, so that inter-rater reliabilities come out much higher than those reported in this study? Additionally, couldn't something as simple as videotaping the exercises allow evaluators to go back and review participants, to check that their ratings are reasonably reliable and accurate?
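For anyone curious what "inter-rater reliability" actually looks like as a number, here is a minimal sketch computing Cohen's kappa for two raters. The ratings are made up purely for illustration, not from the study.

```python
# A minimal sketch of quantifying agreement between two raters with
# Cohen's kappa. All ratings below are hypothetical.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters rating the same candidates."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of candidates rated identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal rating distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(rater_a) | set(rater_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical six-point-scale ratings for ten candidates on one dimension.
rater_1 = [4, 5, 3, 2, 6, 4, 3, 5, 2, 4]
rater_2 = [4, 4, 3, 2, 5, 4, 2, 5, 3, 4]
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")
```

One caveat: Cohen's kappa treats ratings as unordered categories, so for an ordinal six-point scale a weighted kappa or an intraclass correlation (ICC) would be the more standard choice; the idea of correcting observed agreement for chance is the same.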
Has anyone ever considered building breaks for the raters into assessment centers, to avoid cognitive overload and fatigue? That alone would likely improve inter-rater reliability.
How much weight do you think organizations would put on the results of an AC?
This took me back to Dr. T's class and writing scales with examples of behaviors at the top, middle, and bottom of the six-point scale. I would think this would be very challenging for dimensions like 'drive for results,' 'leadership,' 'persuasiveness,' and 'teamwork,' but also very helpful for raters. Does anyone have experience with writing these? Has anyone had a good experience using a rating scale for this kind of behavioral observation? I have not been impressed by the scales I've seen at work (during interviews). Again, a lot of thought and reference back to the job analysis (JA) needs to go into these. I am still learning this.
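To make that concrete, here is a rough sketch of how behaviorally anchored scale points for one dimension might be laid out. The dimension and behaviors are invented for illustration, not taken from any real job analysis.

```python
# A hypothetical behaviorally anchored rating scale (BARS) for one
# dimension on a six-point scale. All anchor text is illustrative.

teamwork_bars = {
    6: "Actively solicits quieter members' input and integrates it into the plan.",
    5: "Builds on others' ideas and credits contributors by name.",
    4: "Shares relevant information without being prompted.",
    3: "Responds when asked but rarely volunteers input.",
    2: "Interrupts or dismisses others' suggestions.",
    1: "Works around the group, withholding information it needs.",
}

def anchor_for(rating: int) -> str:
    """Return the behavioral anchor for a given scale point."""
    return teamwork_bars.get(rating, "No anchor defined for this rating.")

print(f"A rating of 5 means: {anchor_for(5)}")
```

The point of anchoring each scale point to an observable behavior is that two raters watching the same exercise are judging against the same standard, rather than their private sense of what a "5" means.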
I agree with Vicki; the cognitive-load problem could be addressed with video/audio recording. Also, to answer Amy's question: from this article, it seems we should not automatically trust the ratings and conclusions we get from ACs. After reading this material, I don't know if I like ACs, although I'm sure we're reading a lot of the criticisms.
There seems to be a trade-off between rating quality, which suffers when assessors are given too many individuals to observe and rate, and the cost of bringing in additional assessors. How might this be resolved without increasing costs? I think Vicki's idea of video/audio recording might hold promise as one solution.