Good Stats Make Us Uncomfortable (IO Psychology)
In striving for profitability, companies often rely on key indicators of organizational performance. Common indicators like sales growth, customer loyalty, and earnings per share often guide strategy decisions and resource allocation. But sometimes key indicators may not be that “key” after all. They may have little or no true connection to profitability.
Size Matters in Court? Determinations of Adverse Impact Based on Organization Size (IO Psychology)
Publication: Journal of Business and Psychology (JUN 2012)
Article: Unintended consequences of EEO enforcement policies: Being big is worse than being bad
Authors: R. Jacobs, K. Murphy, and J. Silva
Reviewed By: Megan Leasher
Adverse impact occurs when neutral-appearing employment practices have an unintentional, discriminatory effect on a protected group. The Equal Employment Opportunity Commission is charged with enforcing federal laws that prohibit employment discrimination, and it follows the 1978 Uniform Guidelines on Employee Selection Procedures, which provide “rules of thumb” for inferring whether adverse impact is present.
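As an illustrative sketch (hypothetical numbers, not from the article): two common “rules of thumb” are the four-fifths rule and a statistical significance test on the difference in selection rates. The significance test is where organization size bites, because the very same disparity becomes “significant” only once the applicant pool is large.

```python
import math

def impact_ratio(focal_hired, focal_n, ref_hired, ref_n):
    """Focal group's selection rate relative to the reference group's.

    The four-fifths rule flags potential adverse impact when this
    ratio falls below 0.80.
    """
    return (focal_hired / focal_n) / (ref_hired / ref_n)

def two_proportion_z(h1, n1, h2, n2):
    """Pooled two-sample z statistic for a difference in selection rates."""
    p1, p2 = h1 / n1, h2 / n2
    pooled = (h1 + h2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Identical selection rates (60% vs. 40%) at a small and a large employer
small_z = two_proportion_z(6, 10, 4, 10)          # ~0.89: not significant
large_z = two_proportion_z(600, 1000, 400, 1000)  # ~8.94: highly significant

ratio = impact_ratio(4, 10, 6, 10)  # ~0.67 < 0.80 at either size
print(f"impact ratio={ratio:.2f}, small z={small_z:.2f}, large z={large_z:.2f}")
```

With identical selection rates, the impact ratio fails the four-fifths rule at both employers, but only the large employer crosses the conventional significance threshold (z > 1.96), which is the “being big is worse than being bad” dynamic the article examines.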
Using data to make smart decisions: 1 + 1 = It’s Not That Simple
Topic: Business Strategy, Decision Making, Evidence Based Management, Statistics
Publication: Harvard Business Review (APR 2012)
Article: Good Data Won’t Guarantee Good Decisions
Authors: S. Shah, A. Horne, and J. Capellá
Reviewed By: Megan Leasher
When we were in grade school, we learned that 1 + 1 = 2. We quickly figured out what came after the equal sign and celebrated that immediate success. That celebration built faith: blind faith that we should always believe the result of an analysis.
Internet-based Data Collection: Just Do It Already!
Topic: Measurement, Statistics
Publication: Computers in Human Behavior
Article: From paper to pixels: A comparison of paper and computer formats in psychological assessment.
Authors: M.J. Naus, L.M. Philipp, M. Samsi
Featured by: Benjamin Granger
Although many organizations have jumped onto the internet-based data collection bandwagon, several issues still need to be addressed. For example, are paper-pencil and internet-based tests of the same trait (e.g., personality questionnaire) or ability (e.g., cognitive ability test) really equivalent? Similarly, are there any reasons to believe that employees respond differently to internet-based tests than to paper-pencil tests of the same trait or ability?
Naus, Philipp, and Samsi (2008) set out to investigate these questions using three commonly used psychological scales (Beck Depression Inventory, Short Form Health Survey, and the Neo-Five Factor Inventory).
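One simple first-pass equivalence check is whether scores on the same scale differ meaningfully across administration formats. The sketch below uses hypothetical data (not the study's) and a standardized mean difference, Cohen's d, as the comparison:

```python
import statistics

# Hypothetical scores on the same scale under two formats
paper_scores  = [14, 18, 15, 17, 16, 19, 15, 16]
online_scores = [15, 17, 16, 18, 15, 18, 16, 17]

def cohens_d(a, b):
    """Standardized mean difference between two independent samples."""
    pooled_var = (
        (len(a) - 1) * statistics.variance(a)
        + (len(b) - 1) * statistics.variance(b)
    ) / (len(a) + len(b) - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

d = cohens_d(paper_scores, online_scores)
print(f"Cohen's d = {d:.2f}")  # a small |d| suggests the formats score similarly
```

Mean equivalence is only part of the story; full measurement-equivalence work also asks whether the scales relate to other variables the same way across formats.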
Is interrater correlation really a proper measure of reliability?
Topic: Measurement, Research Methodology, Statistics
Publication: Human Performance
Article: Exploring the relationship between interrater correlations and validity of peer ratings
Blogger: Rob Stilson
Interrater reliability (still with me? Good.) is often used as the main reliability estimate when correcting validity coefficients for criterion unreliability, with job performance as the criterion. Issues arise with this practice when one considers that disagreement between raters may reflect bias rather than random error, while agreement between raters may likewise stem from shared bias rather than true consistency. In this study, the authors’ main goal was to explore how interrater correlations, and the number of raters, each relate to validity.
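A minimal sketch of the mechanics at stake (standard textbook formulas, not the authors’ code, with illustrative numbers): the Spearman-Brown formula projects the reliability of averaged ratings from a single-rater interrater correlation, and that reliability then feeds the correction for attenuation.

```python
import math

def spearman_brown(single_rater_r, k):
    """Reliability of the average of k raters, given one interrater correlation."""
    return k * single_rater_r / (1 + (k - 1) * single_rater_r)

def correct_for_attenuation(observed_validity, criterion_reliability):
    """Estimated validity if the criterion were measured without error."""
    return observed_validity / math.sqrt(criterion_reliability)

interrater_r = 0.52       # illustrative single-rater estimate
observed_validity = 0.30  # illustrative observed validity coefficient

for k in (1, 2, 4):
    rel = spearman_brown(interrater_r, k)
    corrected = correct_for_attenuation(observed_validity, rel)
    print(f"{k} rater(s): reliability={rel:.2f}, corrected validity={corrected:.2f}")
```

The pattern the loop shows is why the number of raters matters: more raters yield a higher criterion reliability, which shrinks the correction, so the interrater correlation one plugs in directly shapes the corrected validity estimate.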