A New Weapon in the Fight Against Faking on Personality Tests (IO Psychology)

Topic: Faking, Personality, Selection
Publication: Journal of Applied Psychology
Article: Testing the efficacy of a new procedure for reducing faking on personality tests within selection contexts
Authors: Fan, J., Gao, D., Carroll, S. A., Lopez, F. J., Tian, T. S., & Meng, H.
Reviewer: Neil Morelli

Has your organization ever used, or ever considered using, a personality test as part of its selection battery? Given personality tests’ predictive validity and relatively low subgroup differences, you’re not alone. However, one controversial issue still plagues the use of personality tests for selecting applicants: faking. Faking is the intentional distortion of responses to portray a more positive image, and it can undermine the validity of a selection device. Fortunately, Fan et al. have recently tested a new method for identifying and reducing faking on personality tests that uses a computer-based warning system.

Fan et al. admit that there is nothing new about warning applicants against faking; the novel component of their system is how the warning is delivered. Instead of reactively reducing faking through statistical controls after the fact, the method proactively mitigates it by first testing for the likelihood of faking on an “initial item block” (this block consists of impression management items, a bogus scale, and a subset of items from the actual personality test). After comparing scores from this block to a cutoff level for faking, the computer delivers “a polite warning” to respondents flagged as potential fakers, while non-flagged applicants receive a control message. All respondents then complete the “main item block” (a retest of the faking items plus the full personality measure).
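The two-stage flow described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the cutoff value, scoring function, and message wording are all hypothetical assumptions made for clarity.

```python
# Hypothetical sketch of the two-stage computerized warning procedure.
# The cutoff, scoring rule, and messages are illustrative assumptions,
# not the actual parameters used by Fan et al.

FAKING_CUTOFF = 0.75  # assumed threshold for flagging a potential faker


def score_initial_block(responses):
    """Score the initial item block (impression management items, a bogus
    scale, and a subset of personality items) for faking likelihood.
    Here, simply the mean of the item scores (an assumption)."""
    return sum(responses) / len(responses)


def administer_test(initial_responses, take_main_block):
    """Run the initial block, branch on the faking flag, then have ALL
    respondents complete the main item block."""
    faking_score = score_initial_block(initial_responses)
    if faking_score >= FAKING_CUTOFF:
        # Flagged respondents get the polite warning.
        message = ("Polite warning: some of your answers resemble an "
                   "overly positive self-presentation. Please respond "
                   "honestly in the next section.")
    else:
        # Non-flagged respondents get a neutral control message.
        message = "Control message: please continue to the next section."
    # Everyone proceeds to the main block (faking items retested plus
    # the full personality measure), regardless of flag status.
    return take_main_block(message)
```

The key design point the sketch captures is that the branch affects only the message shown, not who takes the main block: every respondent completes the full measure, so flagged applicants get an opportunity to revise their responses rather than being screened out.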

This method’s utility rests in combining best practices from the faking literature: using a proactive mitigation strategy, warning only potential fakers, and allowing an opportunity to retest. In an organizational quasi-experiment and a student-based true experiment, Fan et al. demonstrated that flagged applicants lowered their scores after the warning was provided. Another benefit was that test takers’ perceptions of the test were not significantly affected. Admittedly, some of the kinks still need to be ironed out, but as selection methods become more technologically advanced, new opportunities for reducing faking, such as the Fan et al. method, are likely to be recommended.

Fan, J., Gao, D., Carroll, S. A., Lopez, F. J., Tian, T. S., & Meng, H. (2011). Testing the efficacy of a new procedure for reducing faking on personality tests within selection contexts. Journal of Applied Psychology.



Comments

  1. I get the appeal. However, couldn’t this pre-battery be faked too? I.e., what is the sensitivity of the various thresholds and to what extent can they be consciously skirted?

    Also, I wonder what is the ratio of hits to misses in terms of identifying “true fakes” vs. “accidental fakes.”

    Is there any consideration of combining this kind of method with a time-limited response method?

    Is there another “fit discovery” mechanism (i.e., structured behavioural interview questions, team ratings of candidates, etc.) that could mitigate any amount of faking on a personality score? Keep in mind, ratings could be given on candidate-recorded media if real-time connections aren’t feasible.

    Long story short = thought provoking.

    Thanks.

    • This is from the lead author. I appreciate your comments on our new faking-mitigation procedure.

      It is important to keep in mind that faking is a highly debated research area, and we did not claim that we have found a way to fully address it. What we did was to suggest a new way to fight the faking problem and then to provide some initial evidence for its efficacy.

      We are currently collecting new data to address some of the concerns you raised, for instance, adding a baseline measurement post-entry. We will see what happens. Thanks again for your interest.
