Wednesday, January 19, 2011

VG - James et al. (1986)

A note on validity generalization procedures

8 comments:

  1. Conservative estimates tend to receive more buy-in and provide a feeling of safety (e.g., not overestimating). It would seem beneficial to make the 75% decision rule more “stringent,” and in that sense more conservative, so that users are more accepting of the rule.

  2. James et al. suggest that the tests of situational specificity hypotheses fall into the problem of illogically “affirming the consequent” (p. 442) such that we are looking to support the null hypothesis that differences in validity coefficients are not due to situational factors. How often does this logical fallacy occur in research? Are there artifacts other than statistical and situational that might influence validity generalization?

  3. The “75%” rule, which allows the conclusion that statistical artifacts account for “most” of the variance in validity coefficients, seems too lenient when it is used to rule out the situational specificity hypothesis, especially when situational factors are often not directly tested in a model (as suggested by James et al., 1992). Wouldn't it seem odd to suggest that the other 25% of variance left unaccounted for is irrelevant? If it is not due to situational factors, then what exactly accounts for the remaining variance?

  4. James et al. eschew the 75% rule, which they refer to as a decision heuristic, but they are also opposed to a more conservative 90% rule, saying that it just replaces one heuristic with another. They go on to suggest additional research. As in the other James et al. article, the authors debunk one analytic technique but don't recommend anything to practitioners while more research is being done. Does it seem unreasonable to use the more conservative 90% rule while waiting for (and perhaps contributing to) more definitive research?

  5. James et al. suggest including situational variables (e.g., leadership, stress and coping mechanisms for stress, systems norms and values, socialization strategies, formal and informal communication nets, formalization and standardization of structure, and physical environments) in analyses to determine whether a selection test (e.g., cognitive ability) is generalizable. However, who decides where an organization stands on these different situational variables? Do scales need to be developed to determine where companies fall on them?

  6. These authors are saying that situational specificity and cross-situational consistency are not the only options for explaining variation in validities. They advocate more sophisticated models of job performance that include both person variables and situation variables, as main effects as well as moderators. What do you think of this? Do they continue to advocate this in their later article?

  7. Should we base our selection program on the job behaviors found in a job analysis or on a performance appraisal/measure? It is difficult to affirm the consequent when measures of the consequent (performance, in this case) are so unreliable and may not relate to actual job behaviors.

    I like how these authors tied productivity gains/losses into this discussion using actual numbers (p. 441). Perhaps a discussion of the benefits of a proper selection system is the first step to designing one. That is, suppose I put in the most state-of-the-art selection system, using multiple valid predictors and taking into account the situational factors of the specific organization, but we see only a 4-5% rise in production. Is it even worth the effort? Part of the challenge for I/O specialists is to realistically balance what can be accomplished against what is financially feasible to accomplish. Perhaps part of the gap between scientist and practitioner is that the latter takes the cost/benefit consideration more seriously (perhaps they are more realistic about what a selection system can accomplish) while the former is more concerned with ideal measurement and 'experimental' control.

  8. Although validity generalization is subject to inferring causation when it really is not there, I don't think I would recommend taking the more conservative approach initially, because you can still miss out on meaningful relationships (e.g., those that might lead to interesting research questions). Do you agree with this? If so, how would you balance the research side and the practitioner side here?

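Several of the comments above turn on how the 75% rule is actually applied. A minimal sketch, assuming the bare-bones Schmidt-Hunter variance decomposition (expected sampling-error variance of the validity coefficients compared against their observed variance); the studies, validities, and sample sizes below are hypothetical, not from James et al.:

```python
# A minimal sketch of the 75% decision rule, assuming the bare-bones
# Schmidt-Hunter approach: compare the variance in validity coefficients
# expected from sampling error alone to the observed variance across studies.
# All study validities (rs) and sample sizes (ns) are hypothetical.

def percent_variance_from_sampling_error(rs, ns):
    """Share of observed variance in validities attributable to sampling error."""
    total_n = sum(ns)
    # Sample-size-weighted mean validity
    r_bar = sum(r * n for r, n in zip(rs, ns)) / total_n
    # Weighted observed variance of the validity coefficients
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    # Expected sampling-error variance: (1 - r_bar^2)^2 / (N_bar - 1)
    n_bar = total_n / len(ns)
    var_err = (1 - r_bar ** 2) ** 2 / (n_bar - 1)
    return var_err / var_obs

rs = [0.05, 0.45, 0.30, 0.10, 0.40]   # observed validities from 5 studies
ns = [80, 120, 100, 60, 150]          # corresponding sample sizes

pct = percent_variance_from_sampling_error(rs, ns)
print(f"{pct:.0%} of observed variance attributable to sampling error")
# Under the 75% rule, pct >= 0.75 would be read as ruling out situational
# moderators; here the rule would not permit that conclusion.
```

With these hypothetical numbers only about 36% of the variance is accounted for, so the rule would leave room for situational specificity. Note that tightening the cutoff to the 90% rule debated above changes only the threshold, not the computation.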
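Comment 7's cost/benefit question can also be made concrete with the standard Brogden-Cronbach-Gleser utility formula from the selection literature; every input below is hypothetical, not a figure from James et al.:

```python
# A rough sketch of the Brogden-Cronbach-Gleser utility formula often used
# to attach dollar figures to selection validity. All inputs are hypothetical.

def selection_utility(n_hired, tenure_years, validity, sd_y,
                      mean_z_hired, n_applicants, cost_per_applicant):
    """Expected dollar gain of a valid selection program over random selection."""
    # Gain: hires x tenure x validity x SD of job performance in dollars
    #       x mean standardized predictor score of those hired
    gain = n_hired * tenure_years * validity * sd_y * mean_z_hired
    # Cost: testing every applicant
    cost = n_applicants * cost_per_applicant
    return gain - cost

u = selection_utility(n_hired=10, tenure_years=2, validity=0.35,
                      sd_y=15_000, mean_z_hired=1.0,
                      n_applicants=100, cost_per_applicant=50)
print(f"Estimated utility: ${u:,.0f}")
```

Even a seemingly modest 4-5% rise in production, once translated through the dollar value of performance variability (sd_y), can dwarf testing costs, which is part of why utility arguments figure into the scientist-practitioner tradeoff raised above.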
