Test Bias Analysis: New Thoughts on an Old Method

Topic: Selection

Publication: Industrial and Organizational Psychology: Perspectives on Science and Practice

Article: Not Seeing Clearly With Cleary: What Test Bias Analyses Do and Do Not Tell Us

Authors: A.W. Meade and S. Tonidandel

Selected commentary authors: P.R. Sackett and P. Bobko

Reviewed By: Samantha Paustian-Underdahl

When practitioners use pre-employment tests for selection decisions, they must consider the potential biases that may result from the assessment. Using biased tests can lead to poor, ‘unfair’ hiring decisions. Not only can perceptions of unfairness negatively impact a company’s reputation and bottom line, but legal issues can arise if selection procedures are not free from bias (Allen v. Alabama State Board of Education, 2000).

Whether HR professionals are developing their own test or procedure or purchasing a test from a vendor, an understanding of test bias is essential to ensure there is no adverse impact on any candidate group.

Meade and Tonidandel (2010) argue that despite the importance of assessing test bias in the field of I/O psychology, there are pervasive misunderstandings about what bias analyses really tell us. They contend that the standard technique for evaluating whether a test is biased, the regression-based approach outlined by Cleary (1968), fails to tell us what we really need to know about a test. The authors explain that the Cleary approach focuses on differences in regression lines across groups, which is known as differential prediction.

In other words, the Cleary method examines whether a common regression line, fit to all applicants in a specific selection context, systematically over- or under-predicts observed performance for particular groups. The authors' main concern with the Cleary method of assessing test bias is that when differential prediction is found, many people assume there is a problem with the test itself.
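
To make the method concrete, here is a minimal sketch of a Cleary-style moderated regression in Python. The data, variable names, and use of statsmodels are illustrative assumptions, not from the focal article: the criterion is regressed on the test score, a group indicator, and their interaction, and significant group or interaction terms signal intercept or slope differences, respectively.

```python
# A minimal sketch of a Cleary-style moderated regression test for
# differential prediction. All data are simulated and illustrative;
# variable names are assumptions, not from the focal article.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
group = rng.integers(0, 2, n)        # 0 = reference group, 1 = focal group
test = rng.normal(50, 10, n)         # predictor: selection test score
# Simulated criterion with a built-in intercept difference (the "3 * group"
# term), so the analysis below should flag differential prediction.
performance = 0.5 * test + 3 * group + rng.normal(0, 5, n)

# Moderated regression: criterion on test, group, and test x group
X = sm.add_constant(np.column_stack([test, group, test * group]))
fit = sm.OLS(performance, X).fit()
print(fit.summary(xname=["const", "test", "group", "test_x_group"]))
# Interpretation under the Cleary framework:
#   significant "group" term        -> intercept differences across groups
#   significant "test_x_group" term -> slope differences across groups
# Either finding indicates differential prediction.
```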

However, differential prediction can arise for many reasons other than a biased test: the way the outcome variable (job performance) is measured, unreliability in the test, or variables omitted from the regression model, as the following simulation illustrates.
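
The omitted-variable case is easy to demonstrate. The simulation below (all values hypothetical, not from the article) generates performance from a test score and an experience variable that differs across groups; when experience is left out of the model, a group intercept difference appears even though the test itself is unbiased.

```python
# A small simulation (hypothetical values) of the omitted-variable case:
# performance truly depends on the test score AND on experience, experience
# differs across groups, and the test itself is unbiased.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
group = rng.integers(0, 2, n)
test = rng.normal(50, 10, n)                  # predicts performance identically in both groups
experience = rng.normal(5 + 2 * group, 1, n)  # omitted variable, higher on average for group 1
performance = 0.5 * test + 1.0 * experience + rng.normal(0, 3, n)

# Model that OMITS experience: the group term soaks up its effect and an
# intercept difference appears, mimicking differential prediction.
X_omitted = sm.add_constant(np.column_stack([test, group]))
print(sm.OLS(performance, X_omitted).fit().params)  # [const, test, group]: group term near 2

# Model that INCLUDES experience: the spurious group effect vanishes.
X_full = sm.add_constant(np.column_stack([test, group, experience]))
print(sm.OLS(performance, X_full).fit().params)     # group term near 0
```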

In conclusion, Meade and Tonidandel (2010) provide several recommendations for anyone responsible for selection tests:

1)   Practitioners should work with clients to attempt to identify the source of the differential prediction and eliminate it if possible.

2)   The entire selection system (all tests used to predict performance) should be assessed for adverse impact (a quick check using the four-fifths rule is sketched after this list). If the differential prediction occurs for only part of the selection system, look for a predictor that can reduce the overall bias. For example, if a cognitive ability test leads to differential prediction for one group of applicants, practitioners should look for another, unbiased predictor (e.g., personality, motivation, cultural fit) to add to the selection system that will account for the mean differences in performance.

3)   Practitioners should not assume a test is unusable whenever differential prediction is encountered; depending on the organization's goals and priorities, such a test may still be appropriate for selection. When a predictor accurately predicts performance, shows no adverse impact or mean differences, and the minority group scores lower on the performance criterion, it may well be in an organization's best interest not to attend to differential prediction. Whether the test is suitable in such situations depends on views of fairness within the organization and in society at large.
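
As referenced in recommendation 2, a common first check of a whole selection system is the EEOC's four-fifths rule: a group's selection rate should be at least 80% of the rate for the group with the highest selection rate. The sketch below uses hypothetical applicant counts; the function name and numbers are illustrative.

```python
# A minimal sketch (hypothetical counts) of checking an overall selection
# system for adverse impact with the four-fifths rule.
def adverse_impact_ratio(selected_a, applied_a, selected_b, applied_b):
    """Ratio of group A's selection rate to group B's selection rate."""
    rate_a = selected_a / applied_a
    rate_b = selected_b / applied_b
    return rate_a / rate_b

# Hypothetical overall outcomes after ALL hurdles in the selection system
ratio = adverse_impact_ratio(selected_a=18, applied_a=60,   # minority group
                             selected_b=45, applied_b=100)  # majority group
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.30 / 0.45 = 0.67
if ratio < 0.8:
    print("Below the four-fifths threshold: evidence of adverse impact.")
```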

Selected Commentaries:

The commentaries generally fall into three categories: those that take issue with aspects of the focal paper (e.g., Sackett & Bobko, 2010), those that are generally supportive yet argue that the authors did not go far enough, and those that focus on issues related to, but not directly addressed in, the focal paper. The Sackett and Bobko (2010) commentary is reviewed below, as it highlights the issues most relevant to IOATWork.com's audience.

Sackett and Bobko (2010) begin their commentary by clarifying the presumed context of the Meade and Tonidandel focal paper. They explain that there are generally two reasons for conducting differential prediction analyses: 1) to comply with the equal employment opportunity regulations under which selection practice often operates, and 2) to provide the scientist/practitioner with insight into the nature of the predictor/criterion (selection test/job performance) relationship.

They believe that Meade and Tonidandel focus their discussion on the second reason, since the focal authors state that they prefer to examine differential prediction regardless of the presence or absence of mean differences on the predictor, which takes the discussion outside the bounds of a regulatory framework. In contrast, Sackett and Bobko argue that, in practice, personnel selection professionals need to conduct bias analyses only when there are mean differences on the predictor between groups.

Another key point in the Sackett and Bobko commentary is that many personnel selection practitioners use a range of predictors that produce a single score, such as an overall rating from an interview or an assessment center. While the use of such composite measures to predict a given outcome (i.e., job performance) can be examined for predictive bias, a differential prediction analysis for each individual item is not always feasible.

Focal article:

Meade, A. W., & Tonidandel, S. (2010). Not seeing clearly with Cleary: What test bias analyses do and do not tell us. Industrial and Organizational Psychology: Perspectives on Science and Practice, 3, 192–205.

Commentaries:

Sackett, P. R., & Bobko, P. (2010). Conceptual and technical issues in conducting and interpreting differential prediction analyses. Industrial and Organizational Psychology, 3, 213–217.

Citations:

Allen v. Alabama State Board of Education, U.S. Dist. LEXIS 123 (M.D. Ala. 2000).

Cleary, T. A. (1968). Test bias: Prediction of grades of Negro and White students in integrated colleges. Journal of Educational Measurement, 5, 115–124.