Wednesday, January 19, 2011

Banding - Schmidt

Why all banding procedures in personnel selection are logically flawed

7 comments:

  1. It seems that the rationale behind banding, regardless of type (statistical significance testing and so on), is that the relationship between predictor scores and job performance criteria is linear. Has anyone tested for a curvilinear relationship between predictor scores and job performance criteria?

    As a side note, I found it funny that Schmidt criticized Cascio et al. for their use of statistical significance testing and how arbitrary that process is, only to turn around and suggest that his own meta-analysis technique helps correct this error, even though we read earlier how arbitrary his 75% rule is, which allows the conclusion that statistical artifacts account for “most” of the variance in validity coefficients!

  2. •I understand Schmidt’s critique of Cascio et al.’s article regarding the misleading information from an unrepresentative sample, due to a large SD for minorities. However, such a sample may be uniquely representative of certain areas of the U.S. where the applicants for a position are primarily minorities because of where it is located (e.g., the Bronx). Is there much research on how selection samples differ in minority representation? It would seem that big companies may not have as many minority applicants due to SES and level of education, whereas small companies probably don’t have much of a selection process at all.
    •Also, Schmidt mentioned that statistically testing the difference between two true scores is not correct and that meta-analyses are helping to overcome the “statistical testing addiction.” Aren’t meta-analyses bringing even more error to the table, given the many study dimensions and samples involved? How would this be more valuable or reliable than a simple statistical test between two measures/scores?

  3. It is interesting to read this early, strong critique of SED banding, which is referred to often in the later articles (especially Bobko & Roth, 2004).
    He's saying that this kind of statistical approach, used without secondary criteria for selection, doesn't make sense because it leads to very wide bands, sometimes wide enough to include everyone. Valid tests then don't end up helping at all; it amounts to the same thing as selecting randomly. It makes me question again the value of the other adverse impact solutions, low cut-offs and within-group percentiles (though I know they are not legally defensible).
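    The wide-bands point can be illustrated with the usual fixed SED band-width formula, width = z · SD · √(2(1 − r_xx)). This is just a sketch; the SD and reliability values below are hypothetical, not figures from the readings:

    ```python
    import math

    def sed_band_width(sd, reliability, z=1.96):
        """Width of a fixed SED band: z * SD * sqrt(2 * (1 - r_xx))."""
        return z * sd * math.sqrt(2 * (1 - reliability))

    # Hypothetical values: SD = 10 score points, reliability r_xx = 0.80.
    # The band spans roughly 12.4 score points, so on many tests it
    # sweeps a large share of the applicant pool into one "equivalent" band.
    print(round(sed_band_width(10, 0.80), 1))  # 12.4
    ```

    Even with a quite respectable reliability of .80, the band covers well over a full standard deviation of scores, which is the heart of Schmidt's objection.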

  4. While done in a somewhat crude manner, I mostly agree with this critique of banding and of Cascio. Along the lines of what Vicki was saying, I have read multiple papers that critique misunderstandings of significance testing, yet those authors typically also use significance testing at some point or another. I'm not sure one can really fault Cascio for trying to use significance testing to find a psychometric/statistical solution to the adverse impact issue. I really don't see any of these authors coming up with a solution beyond concluding that it is not a psychometric issue. What are some possible solutions?

  5. Directed toward Roni: Can you please explain the psychometric issues with banding in human terms? I think Schmidt makes a lot of valid points that I actually somewhat understand, but I fear it would be difficult for me to explain these reasons to someone (such as a client) who does not have a psychometric background.

  6. Cascio et al. really take it on the jaw in this article, but I found Schmidt quite convincing. As I understood Schmidt, significance testing of differences between scores is an inappropriate calculation given the information the linear prediction model is designed to provide. Under this model, even a difference of one point represents the prediction that applicants with the higher score will, on average, perform better than applicants scoring one point lower. Modifying the information the model is designed to produce, as banding does, ultimately undermines the model and thus the information. Does Schmidt seem to be saying that, while something might be done to address the underlying reasons for group differences in test scores, the fairest selection procedure is top-down selection?

  7. It seems like one of the major critiques of banding is that it produces bands that are too wide. Do you think banding could be used along with cutoffs (set by SMEs) to make the bands smaller? Or does that seem like a somewhat arbitrary way of shrinking the pool of applicants to choose from?

