Wednesday, January 19, 2011

Arthur, W., Edwards, B. D., & Barrett, G. V. (2002) - General Ability

Multiple choice and constructed response tests of ability: Race based subgroup performance differences on alternative paper-and-pencil test formats.

6 comments:

  1. •Critique – their within-subjects study seems to lack the power to detect significant results because of the small sample.
    •Cognitive psychology actually describes a difference in the cognitive strategies and activation involved in multiple choice versus fill-in-the-blank tests. Multiple choice tests depend primarily on familiarity (i.e., does this look familiar?) and test-taking strategy, whereas fill-in-the-blank tests require accurate knowledge and recall. This may explain why the study found lower levels of adverse impact: both groups could answer purely from recall and knowledge rather than from test-taking strategy, where whites may hold an advantage over blacks, partly due to cultural differences.

  2. The authors suggest using methods other than tests to help reduce adverse impact. The examples they provide (e.g., assessment centers, performance tests) make sense to me because they are less like a testing environment. However, I was not totally clear on why a constructed response test would have less adverse impact than a multiple choice test. Why do you think this is?

  3. This article suggests that simply changing the method of administering a test can help reduce group differences in test performance. However, one major limitation the authors noted was that the multiple choice and constructed response tests did not contain exactly the same items, making it difficult to compare the outcomes. Could using other testing methods really help reduce group differences? How might organizations respond to using more costly methods that might reduce adverse impact?

  4. Is adding more subjectivity to the equation really the best way to protect against adverse impact? These write-in answers obviously need to be scored by raters. Is this even practical, considering how labor intensive the scoring of these selection tests would be? Especially with the large number of candidates/applicants per job that the current economy produces.

  5. I wondered the same, Shane, both about the expense of scoring these kinds of tests and also the higher expense of implementing methods like assessment centers that might reduce adverse impact. This is pretty challenging. Researchers in this country have been working on this issue for a long time. I am glad to see creative alternatives like this (constructed response tests) continuing to be developed, but we have not found the magic solution yet. I'm discouraged.

  6. The authors manipulated test construction to study group differences on multiple choice versus constructed response items. The content of the test, however, was not general cognitive ability, but questions pertaining to knowledge of the job. The results yielded some evidence suggesting that group differences may be reduced when constructed response items are used rather than multiple choice items. Still, does this really address the core problem of cognitive ability tests and adverse impact when the content is domain specific rather than general?
