Wednesday, January 19, 2011

VG - Schmidt et al.

Forty Questions

8 comments:

  1. Question 6 addresses the issue of adverse impact when tests such as cognitive ability tests are used as a basis for selection. The commentators express concern over the legality of practices such as quotas and separate hiring lists for subgroups. The rebuttal addresses neither the legality question nor the reasons why these types of tests might produce adverse impact, nor what could be done about it. If one of the goals in selection is to create a fair selection process (leaving aside the need for a legally defensible one), isn't the use of tests that create adverse impact, regardless of their validity as predictors of performance, a serious concern? (See the first sketch after the comments for how adverse impact is typically screened.)

  2. Question 8 is absurd, and the answer to this question/statement is right on. I feel that the additional information correlations provide beyond raw score regression coefficients is also more practical and may generate more buy-in from practitioners in the selection field, along with the ability to compare across studies (see the second sketch after the comments). Correlation analyses seem to be a necessary measure of the validity of selection tests. Do most practitioners and consultants primarily use the correlation, or do they evaluate both the correlation and the raw regression coefficients?

  3. Roni – What is your opinion of question and answer #9, regarding scientific statements such as “These findings show that…” in meta-analysis studies?

  4. Is there any support or research on validity generalization for predictors outside of cognitive ability, such as personality? It seems that the support for validity generalization research comes from the study of the relationship between cognitive ability predictors and job performance, and cognitive ability is not the only predictor available or used to predict job performance.

  5. Similar to Vicki's comment, what other predictors (besides cognitive ability) might be used across enough jobs to make it worth doing VG analyses, seeing as VG seems like a very long and intensive process?

  6. Because this was written when meta-analysis and validity generalization research were relatively new, it gives us a picture of how scientific ideas and understandings develop with the help of “constructive and informed skepticism” (p. 700). I found it hard to follow, but I appreciated the chance to glimpse the controversy. How much does the fact that validation studies are tied to legal decisions about selection contribute to the heat of the debate, or is this typical of scientific clarification?

  7. Would it be possible to have a physical measure as a predictor that would generalize across multiple job contexts? It would seem like specific environmental factors (i.e., situational factors) would have more of an impact in this case.

    Also, it seems like there is a big difference between trying to generalize within an organization or industry and trying to generalize across organizations or industries. I guess in the former case you are able to control for more of those situational factors. How is VG typically used with regard to this distinction?

  8. Wouldn’t regression be better than correlation due to issues of statistical control? Even when units of measure differ, they can still be adjusted.

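The adverse impact concern raised in comment 1 is commonly screened with the four-fifths (80%) rule from the Uniform Guidelines: each group's selection rate is compared with the highest group's rate, and a ratio below 0.80 is taken as an indication of adverse impact. The sketch below is a minimal illustration with made-up applicant and hire counts; it is not from the Schmidt et al. article.

```python
# Minimal sketch of the four-fifths (80%) rule for screening adverse impact.
# The applicant/hire counts below are hypothetical.

def selection_rate(hired, applied):
    return hired / applied

# Hypothetical selection rates for two applicant subgroups.
groups = {
    "group_a": selection_rate(hired=60, applied=100),
    "group_b": selection_rate(hired=30, applied=100),
}

highest = max(groups.values())
for name, rate in groups.items():
    impact_ratio = rate / highest
    flag = "adverse impact indicated" if impact_ratio < 0.80 else "ok"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```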
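
On the correlation-versus-regression exchange in comments 2 and 8: the raw slope and the correlation are linked by b = r · (s_y / s_x), so the raw slope changes with the measurement scales of predictor and criterion while the correlation does not, which is why correlations are the statistic that gets cumulated across studies in validity generalization work. The sketch below demonstrates this with simulated data; the variable names and numbers are made up for illustration and are not from the article.

```python
# Minimal sketch: rescaling the predictor changes the raw regression slope
# but leaves the correlation untouched, because b = r * (s_y / s_x).
import numpy as np

rng = np.random.default_rng(0)

n = 500
ability = rng.normal(100, 15, n)                             # hypothetical test scores
performance = 2.0 + 0.03 * ability + rng.normal(0, 0.5, n)   # hypothetical criterion ratings

def slope_and_r(x, y):
    """Return the raw (unstandardized) OLS slope and the Pearson correlation."""
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    r = np.corrcoef(x, y)[0, 1]
    return b, r

# Same data on two different predictor scales, as if two studies had used
# tests reported in different score metrics.
b1, r1 = slope_and_r(ability, performance)          # "study 1": raw scores
b2, r2 = slope_and_r(ability / 10.0, performance)   # "study 2": rescaled scores

print(f"slopes: {b1:.3f} vs {b2:.3f}  (differ by the scale factor)")
print(f"correlations: {r1:.3f} vs {r2:.3f}  (identical, unit-free)")

# b = r * (s_y / s_x): correlations can be compared and cumulated across
# studies, while raw slopes cannot unless every study used the same scales.
assert np.isclose(b1, r1 * performance.std(ddof=1) / ability.std(ddof=1))
```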
