Psychology 9670
University of Nebraska at Omaha
Dr. Roni Reiter-Palmon
ASH 347K
Employee Selection
M/W: 1:00 PM - 1:15 PM
I feel that Binning and Barrett are almost too optimistic about creating “experimenting organizations.” From an I/O and research standpoint, it would be ideal to work in organizations that look toward the future and are willing to conduct long-term research internally. However, organizations are more likely to focus on the bottom line and next quarter’s profits, and to avoid investing the time and money in research that may or may not advance the organization. How can we communicate the importance of the validation and research designs Binning and Barrett suggest in a way that supports organizations in reaching their goals?
Binning and Barrett point out that performance constructs are clusters of "behavior-outcome units" that are artificially grouped together to reflect an organization's goals, whereas predictor constructs are "clusters of behaviors created by psychologists to capture general regularities in behavior." Given this distinction, is it so surprising that most typically used predictors account for a relatively small amount of performance variance?
I agree with Vicki that the idea of "the experimenting organization" is a somewhat optimistic view. One way it might become a little more realistic is if organizations would share what they had found in their experiments; however, do you think it is likely that they would do this? That is, would an organization be willing to share this information with another company, especially a competitor? Further, isn't it competing companies, those in the same area, from which the most useful information could be gleaned?
Binning & Barrett (1989): What a theoretical marathon. There are so, so many areas where theoretical clarity is needed in order to perform this scientific process of “identifying and mapping predictor samples of behavior to effectively overlap with performance domains” (p. 481). This article makes it seem overwhelming to try to get good at this. Still, I valued the attempt to develop a theoretical framework, examine the different parts more closely, and point out strengths and weaknesses. I see so many challenges. One that was clarified for me is getting “organizational decision makers and selection specialists collaborating to translate broad organizational objectives into normative statements of valued behaviors and outcomes” (p. 480). Have you done this? What is most challenging about it?
Both Binning and Barrett (1989) and Landy (1986) do a great job of discussing the issues with constructs and measures. I liked the important distinction between "the validity of the construct vs. the validity of the test as a measure of the construct"; this was helpful. However, both seemed to focus on the theoretical aspects of construct issues in selection rather than practical advice that practitioners and researchers can use. How does Structural Equation Modeling fit into this conversation? Can one use latent factor modeling to provide more validity evidence regarding whether measures match up with their constructs?
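One way to make the latent-modeling question concrete: a minimal, hypothetical sketch (not from the readings) that simulates three test items all driven by a single latent construct, then fits a one-factor model and inspects the recovered loadings. Strong, coherent loadings are one piece of evidence that the items measure a common construct; the construct name, loading values, and use of scikit-learn's `FactorAnalysis` here are all illustrative assumptions, not something the authors prescribe.

```python
# Hypothetical illustration: do three test items cohere around one latent
# construct? High estimated loadings on a single factor are one kind of
# evidence for the "test measures the construct" inference.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 1000

# Simulate one latent construct (say, "conscientiousness") with assumed
# true loadings of 0.8, 0.7, 0.6 on three observed items, plus noise.
latent = rng.normal(size=n)
true_loadings = np.array([0.8, 0.7, 0.6])
items = latent[:, None] * true_loadings + rng.normal(size=(n, 3)) * 0.5

# Fit a one-factor model and recover the loadings (sign-indeterminate,
# so we take absolute values).
fa = FactorAnalysis(n_components=1, random_state=0).fit(items)
est = np.abs(fa.components_.ravel())

print(np.round(est, 2))  # estimates should land near the assumed loadings
```

A full SEM/CFA package (e.g., lavaan in R or semopy in Python) would add fit indices and allow multi-factor models, which is where the real validity argument gets made; this sketch only shows the basic loading-recovery idea.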
It was mentioned that personality traits may not be useful as predictors of job performance until employees have spent a considerable amount of time on the job. Should we as selection specialists buy into this notion, meaning, should we only recommend personality as a predictor when we intend to hire long-term employees? Does this mean that hiring even temporary sales associates should preclude us from looking at extraversion, a solid predictor of sales performance?
How can we predict accurately when we are validating inferences made from the performance dimensions of a possibly flawed job analysis?