The Role of Storytelling in Effective Structured Job Interviews

Publication: Journal of Business and Psychology
Article: Storytelling in the selection interview? How applicants respond to past behavior questions
Reviewed by: Alexandra Rechlin


It’s no surprise that I-O psychologists recommend using structured job interviews when selecting someone for a position. This is because structured interviews are far better predictors of job performance than informal, unstructured interviews. As part of a structured interview, two types of questions may be asked: situational questions (e.g., “What would you do if you disagreed with your supervisor?”) and behavioral questions (e.g., “Tell me about a time that you disagreed with your supervisor”).

Storytelling is an important aspect of answering a behavioral interview question, because the interviewee needs to be able to tell a story about what happened. But how good are applicants at telling stories? Are there any general characteristics that make some people better storytellers than others? And does skill in storytelling impact interview outcomes? In a recent study, Bangerter, Corvalan, and Cavin (2014) set out to answer these questions.



A typical conversation has a collaborative aspect (or at least it should). Even if one person is telling a story, the other person will usually nod or give some sort of verbal acknowledgement. Storytelling in a job interview, however, differs somewhat from storytelling in a normal conversation. There is little of the back-and-forth of a typical conversation, yet the interviewer’s reactions can still change the course of the story. If the interviewer seems bored or judgmental, the applicant’s storytelling may suffer; likewise, if the interviewer is smiling and engaged, the applicant may be encouraged. The interviewer’s reactions could therefore even have implications for reliability, or the extent to which the interview is capable of consistently assessing applicants. In addition, people have differing levels of storytelling skill. Apart from work experience, applicants have different amounts of experience with storytelling itself; the more behavioral interviews an applicant has participated in, the more experience she will have with telling stories in an interview situation.



To answer their questions about storytelling in the job interview, the researchers analyzed transcripts from 72 actual job interviews. Applicants answered four behavioral questions that assessed communication, persuasion, organization, and stress management. The most frequent type of response was a pseudostory, which is more general and abstract than a story. A pseudostory describes a generic situation or set of similar situations, as opposed to one specific event. Fewer than one in four applicants told an actual story. Applicants also talked more about situations than about tasks, actions they took, or results. Applicants with higher intelligence told more pseudostories, applicants who were more conscientious voiced their values and opinions more, and men told proportionally more stories than women did. Men, extraverted people, and those who told more stories and pseudostories received more hiring recommendations. Conversely, applicants who gave more self-descriptions (e.g., “I’m good at managing my stress”) were less likely to get a hiring recommendation.



If you’re an applicant, try to give detailed stories during a job interview. Before going into the interview, think about ways that you can frame some events from your past as stories that you can talk about. If you are an interviewer, try to elicit stories from applicants. Encourage them to provide more details, and be aware that applicants might not naturally provide the kind of stories you’re looking for. Doing these things will help ensure that job interviews remain fair and effective.

Can Proctored Internet-Based Selection Tests Really Stop Cheating?

Publication: Journal of Business and Psychology
Article: Cheating, reactions, and performance in remotely proctored testing: An exploratory experimental study
Reviewed by: Alexandra Rechlin


In order to curb potential cheating, many organizations have started using remotely proctored internet tests. But do they actually work? And could they have unintended consequences?



Numerous methods can be used to administer proctored internet testing. One common method is real-time webcam monitoring, and another is real-time screen sharing. Some proctored testing software keeps other programs or applications from working while the test is going on. It can prevent screenshots, copy and paste, and more. It appears that remote proctored testing may be a good idea to prevent cheating in employment tests, but little research has been conducted on it.



In this study, Karim, Kaminsky, and Behrend (2014) used a sample of people who were playing the role of test-takers, and they motivated the test-takers to perform well by using an extra monetary incentive. Half of participants were told to use a webcam to record themselves taking the test, while the other half were not asked to record themselves. All participants completed two tests: a quantitative test that included practice GRE questions that were easily searchable online, and a logical reasoning test whose answers could not be found online.



Participants in the proctored exam reported a higher sense of pressure and more concerns about privacy. In addition, more participants withdrew from the proctored test than from the non-proctored test. On a positive note, proctoring did seem to decrease cheating, as proctored participants scored worse on the searchable test but not on the non-searchable test. However, this decrease was very small. Gender and numerous individual differences did not affect how proctoring influenced scores.



This study suggests that although proctored internet testing (specifically, using a webcam) may slightly decrease cheating without affecting performance, it may also have a negative effect on applicant reactions. In addition, it’s important to consider privacy concerns, keep any recorded videos in a safe and secure location, and delete them once they are no longer needed. The authors encourage being very cautious if you plan to use remote proctoring in your organization due to its potential for negative applicant reactions, which can lead to good applicants dropping out of the hiring process and applicants being less likely to recommend your organization to others.

How to Design a Resume That Will Get You Hired

Publication: Journal of Business and Psychology
Article: Effects of applicant personality on resume evaluations
Reviewed by: Alexandra Rechlin


When writing your resume, you probably thought about how potential employers might perceive you. Many articles and books give advice regarding what to include and how to design a resume, but many of those authors don’t actually agree on what method works best. A recent exploratory study examined what personality traits people attribute to different parts of your resume, and how hirable those parts might make you appear.



This probably comes as no surprise, but past research suggests that people viewing your resume will make personal judgments about you based on what they see on it. In addition, parts of your resume are related to individual differences, like extraversion or conscientiousness. However, little research has been conducted to understand which aspects of your resume lead to certain judgments.



In this paper, Burns et al. (2014) attempted to extend the previous research to better understand specific resume cues and how they are perceived. They actually conducted two related studies. In the first, they had participants view resume cues (e.g., GPA, job titles, extracurricular activities) and link them with personality adjectives (e.g., hardworking, creative). Participants also rated each cue regarding its importance in determining hirability. Not surprisingly, the majority of the cues that participants rated as being important to hirability came from the experience section of the resume. Participants also seemed to easily link cues to personality traits, especially conscientiousness.



In the second study, the researchers had actual HR professionals evaluate real resumes and give their impressions regarding the applicants’ personality and hirability. The HR professionals had low levels of agreement about hiring recommendations. The applicants’ self-ratings of personality contributed only slightly to the hiring recommendations, but the HR professionals’ ratings of the applicants’ personalities were a major contributor. In other words, what the HR professionals thought about the applicants’ personalities was very influential in their hiring recommendations.



The researchers gave several important recommendations for job seekers based on the results of these studies. First, job seekers should provide detailed information about their education and be sure to include honors and awards. Second, job seekers should use a school email address (or other formal email address) instead of a personal email address, and they shouldn’t use any unusual fonts or formats. Third, job seekers should put their educational information before their past job information, and include any information about leadership roles or ways they financially benefited an organization. Finally, it seems to be good to include extracurricular or volunteer experiences. Following these tips will help ensure that your resume gives you the best chance of landing a new job.

Intelligence Testing: Is It Always the Smartest Thing to Do?

Publication: Journal of Applied Psychology
Article: A Meta-Analysis of the Relationship Between General Mental Ability and Nontask Performance
Reviewed by: Ben Sher


Smart employees tend to be better at doing their jobs. This is considered one of the most important findings in the history of I-O research. Meta-analysis, which is a method of compiling results from many different researchers and studies, has shown that intelligence (or general mental ability) is associated with better job performance for basically any job. But there are other important components that make organizations successful besides narrowly-defined task performance (parts of a job that are in the job description). New research (Gonzalez-Mulé, Mount, & Oh, 2014) investigates whether intelligence can also predict other measures of workplace success.



The authors conducted a meta-analysis to determine if intelligence is related to two major measures that are important to organizations: counterproductive work behavior (CWB) and organizational citizenship behavior (OCB). These terms sound fancy, but they are actually quite simple. CWBs refer to anything employees do that breaks organizational norms or expectations. This behavior can be directed at a coworker (e.g., bullying or harassment) or at the organization (e.g., stealing from the employer, unnecessary absences). OCBs refer to anything employees do that is not formally recognized in their job description, for example helping out a coworker or suggesting a new way of doing things that can help the organization save resources.



The meta-analysis found that intelligence was associated with more OCBs, meaning that smarter employees also went beyond their job descriptions more frequently. The authors explain that smarter people are typically better at seeing the big picture, for example they may understand that helping a coworker has benefits for the organization in the long run. Also, smarter employees may sometimes have greater capacity to help out others. They may be the only ones who are capable of devising a solution to a problem that eventually helps out the organization.

However, when it came to CWBs, there was no real relationship with intelligence. The authors had predicted that smarter employees would engage in less bad behavior because they are more readily capable of seeing the dangerous outcomes, such as harming the company or harming themselves by getting caught. But the data didn’t support this conclusion.



The authors also compared intelligence testing with personality testing to see which was generally more useful for predicting success on the job. As predicted, intelligence testing predicted better than personality testing when the outcome was task performance, or the parts of a job that are listed in a job description. When using the other outcomes of job success (OCBs and CWBs), the authors found a different story. First, when it came to OCBs (going above and beyond job descriptions) intelligence and personality were about equally useful in predicting which employees will go above and beyond. When it came to CWBs (the bad behavior), personality was actually a better predictor than intelligence.



This study supports the idea that the best predictor of job success is general intelligence, specifically because of its ability to predict good old-fashioned task performance. It pays to hire smart employees. But that’s not the entire story. The conclusions here also indicate that intelligence isn’t the be-all and end-all of hiring. Organizations should also have the foresight to care about extra effort and misbehavior at work. If you want employees who strive to make the workplace better for everyone, intelligence testing may still help, but it is not any better than personality testing. And if you want employees who don’t misbehave, personality testing may be the way to go.

Specific Cognitive Abilities Can Benefit Selection Programs

Publication: Journal of Applied Psychology
Article: Examining the incremental validity and relative importance of specific cognitive abilities in a training context.
Reviewed by: Andrew Morris


Organizations often use tests of specific cognitive abilities to help select people for jobs. Selection itself is important because organizations can waste millions of dollars training people who don’t have the right aptitude, aren’t motivated, or don’t meet minimum requirements for the job. When an organization selects employees, it often uses an assessment process to try to find the “right people.” This assessment often involves tests of general cognitive ability, which is basically what we’d consider overall intelligence. What if organizations could fine-tune these processes so that they were more successful in identifying those who may succeed in a training context or in a job? Recent research findings offer a possible way to do this.



Many researchers adhere to the view that intelligence is made up of a single “general” factor, also called general mental ability. On this view, a single underlying dimension of mental ability drives numerous different types of learning and performance. However, researchers debate whether also measuring specific abilities can provide extra predictive power for organizations. Some believe that these specific abilities help predict success beyond what general mental ability alone can offer when selecting individuals for a job.



The researchers investigated the importance of using general mental ability and also specific abilities in a training context, specifically military personnel learning a foreign language. The researchers examined the predictive ability of general mental ability and the specific abilities within the trainee group by using different approaches to measuring cognitive ability. Results showed that if the specific mental abilities of the applicants aligned with what was being assessed, then using the specific abilities would add predictive value for the organization. For example, in testing learning of a foreign language, the specific ability of foreign language aptitude would be more useful than numerical ability.



These findings challenge many assessment and selection practices within hiring and training. In some cases, testing for general mental ability may be sufficient, but in other cases, managers should not underestimate the role that specific abilities may play in helping organizations predict who will succeed at training and on the job. This would require testing for specific abilities that are either closely aligned with job responsibilities, or are a requirement in a training program. Specific abilities should not be used when these responsibilities are not clearly defined or if there is a mismatch with actual job requirements.

When specific abilities match what is needed for either training or job success, then specific abilities can provide extra predictive power over merely assessing general intelligence in candidates. It is important to note that even a small incremental advantage in prediction for large selection programs could have a profound influence on return on investment figures.

Avoiding Adverse Impact: Selection Procedures That Increase Organizational Diversity

Publication: Journal of Applied Psychology
Article: More Than g: Selection Quality and Adverse Impact Implications of Considering Second-Stratum Cognitive Abilities
Reviewed by: Andrew Morris


Using cognitive tests as part of an employee selection process generally does more than many other methods (such as unstructured interviews) to ensure the selection of better-performing individuals. A few methods are slightly better predictors of performance, but cognitive tests have proven to be a mainstay.

Unfortunately, the use of such tests can lead to discriminatory hiring practices against minority groups, who often score below their white counterparts due to a variety of factors.

Different strategies have been proposed to counteract this adverse impact in selection procedures in order to ensure a fairer hiring process and encourage greater diversity within the workplace. The research reviewed here investigated one such strategy.



Adverse Impact is a means of measuring this type of discrimination. It is calculated by dividing the selection ratio of the lower-scoring group of applicants (a minority group) by the selection ratio of the higher-scoring comparison group (historically, more privileged groups).

Adverse Impact against the minority group has occurred if the result of this calculation is less than 4/5ths (0.80). This rule guards against discriminatory selection practices and helps ensure a more diverse and representative workforce.
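The 4/5ths calculation can be sketched in a few lines of code. The applicant and hire counts below are hypothetical, purely for illustration:

```python
# Illustration of the four-fifths (4/5ths) rule for Adverse Impact.
# All numbers below are invented for demonstration purposes.

def selection_ratio(hired, applicants):
    """Fraction of a group's applicants who were selected."""
    return hired / applicants

def adverse_impact_ratio(minority_ratio, majority_ratio):
    """Minority group's selection ratio divided by the majority group's."""
    return minority_ratio / majority_ratio

# Example: 30 of 100 minority applicants hired; 60 of 120 majority applicants hired.
minority = selection_ratio(30, 100)   # 0.30
majority = selection_ratio(60, 120)   # 0.50
ratio = adverse_impact_ratio(minority, majority)  # 0.60

# The rule flags Adverse Impact when the ratio falls below 4/5 = 0.80.
print(ratio < 0.8)  # True: this hypothetical process shows evidence of Adverse Impact
```

In this made-up example, the minority selection ratio (30%) is only 60% of the majority ratio (50%), which is below the 0.80 threshold.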

When cognitive tests are used in selection procedures, organizations are often perceived as facing a trade-off between selection criteria related to work performance and selecting for diversity by adhering to the Adverse Impact ratio.



One strategy for overcoming Adverse Impact is to weight cognitive and non-cognitive tests differently. The researchers investigated the use of this weighting strategy on cognitive sub-tests, which represented the second-order stratum of cognitive ability.

Second-order cognitive abilities are not specific individual abilities, but rather a broader constellation of related abilities, yet still more refined than a measure of general cognitive ability (known as g). For example, measuring acquired knowledge in reading and writing (stratum II) would include relationships across vocabulary, reading comprehension, and analogy tests.

The researchers hypothesized that, although general cognitive ability may be a fairly good predictor of later performance, stratum II abilities may be better predictors when a job requires that specific ability. By using a sophisticated weighting technique with varying values for specific abilities related to a job, the researchers found that this method could improve minority hiring without sacrificing selection quality, provided a test of general cognitive ability was also used.
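As a rough sketch of what differential weighting means in practice, a composite score can up-weight the stratum II ability most relevant to the job. The scores and weights below are invented for illustration and are not the study’s actual weighting method:

```python
# Hypothetical sketch of a weighted selection composite.
# Scores are standardized (z-scores); weights are invented for illustration.

def composite_score(scores, weights):
    """Weighted sum of predictor scores; weights should sum to 1."""
    return sum(scores[name] * w for name, w in weights.items())

applicant = {"g": 0.8, "verbal": 1.2, "quantitative": -0.3}

# Equal weighting vs. weighting tuned to a verbally demanding job:
equal = composite_score(applicant, {"g": 1/3, "verbal": 1/3, "quantitative": 1/3})
verbal_heavy = composite_score(applicant, {"g": 0.4, "verbal": 0.5, "quantitative": 0.1})

# This verbally strong applicant ranks higher under the job-aligned weights.
print(round(equal, 3), round(verbal_heavy, 3))
```

The design point is simply that the same applicant can rank differently depending on how predictors are weighted, which is the lever the researchers used to improve minority hiring while preserving selection quality.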



This research is particularly interesting for managers and recruiters because it provides a clear way forward in decreasing the possible Adverse Impact of company selection procedures, which helps to create a more diverse workforce.

Workplace diversity has been shown to have multiple benefits in terms of organizational outcomes. But you can also rest assured that using such weighted methods won’t decrease the quality of hires if the abilities are shown to relate to the job.

How Job Interviewers’ Selling Orientation Impacts Their Judgment

Publication: Academy of Management Journal
Article: Do Interviewers Sell Themselves Short? The Effect of Selling Orientation on Interviewers’ Judgments
Reviewed by: Susan Rosengarten


During our first meetings with potential clients, investors, colleagues or romantic partners, our initial impressions and appraisal of their character influence the judgments we make about them. But at the same time we’re evaluating others, we’re often “selling” ourselves, or making ourselves seem more attractive.

Interviewers work in the same fashion: They ask prospective candidates as many questions as they can about their experiences or employment preferences to evaluate the candidate’s fit against both the requirements of the job and with the existing team. During these interviews, hiring managers also do their best to “sell” the job and the company to interviewees by talking about the perks the company offers and how the candidate would benefit from joining the team.

The current research by Jennifer Carson Marr and Dan M. Cable examines whether the job interviewer’s selling orientation – their motivation to “sell” the job to an applicant – ultimately influences the accuracy and validity of his or her subsequent judgments about the applicant.



These researchers designed two studies. In the first study they conducted mock interviews. Participants were assigned to one of two roles – interviewer or applicant – and interviewers were assigned to either a high or a low selling orientation condition.

The researchers then examined the relationship between the interviewers’ selling orientation and their accuracy in evaluating the applicants’ ratings of core self-evaluation. A person’s core self-evaluation is the appraisal of their own self worth, which includes measures of self-esteem, self-efficacy, locus of control and emotional stability.

The researchers found that interviewers in the high selling orientation condition were less accurate in evaluating applicants’ core self-evaluations than those in the low selling orientation condition.



In their second study, the researchers used field samples to further understand how selling orientation influences the validity of interviewer judgments with regard to applicants’ future success.

Participants in the first sample were interviewers interviewing aspiring MBA candidates for admission into MBA programs. Participants in the second sample were interviewers tasked with matching international teachers to various school districts in the United States.

The researchers found that, as the interviewers’ motivation to “sell” to job applicants increased, their accuracy in predicting the future success of these interviewees decreased.



The results of these studies have several important implications for HR professionals. The most significant of these is that formally separating the applicant attraction and evaluation processes may help interviewers make better selection decisions. This, ultimately, may lead to a stronger and more productive workforce.

Welcome to the Future: Investigating Mobile Devices as Assessment Platforms

Publication: International Journal of Selection and Assessment
Article: Establishing the Measurement Equivalence of Online Selection Assessments Delivered on Mobile versus Non-mobile Devices.
Reviewed by: Andrew Morris

There are very few areas of our lives that have not been affected by technological innovations. A vast number of people these days use their phones for virtually everything, from staying in contact with friends and family, to navigating a busy city center, to booking flights.

So it doesn’t seem out of the question that mobile phones might one day be used as a medium by which organizations could assess job applicants. And it appears as if that day may have already come.

A recent study conducted by a team of researchers (including one of I/O at Work’s own writers) looked at whether online selection assessments delivered on mobile devices measure the same things as those delivered on non-mobile devices, and whether mobile delivery could prove effective for selection purposes.



There are strict guidelines for assessing individuals when those results will be used for selection purposes. But, until now, relatively few studies have rigorously investigated the scientific properties concerning mobile assessment of cognitive abilities.

There are numerous issues specific to a mobile phone that could affect an individual’s results, including the size of the screen being used and whether question formats on various devices would be affected by environmental differences. In other words, would two people in completely different locations – such as on a bus and in a bedroom – have the same chance at acceptable scores?

To study the differences, researchers had job applicants take a series of selection tests on various devices. These tests included biodata assessments, a situational judgment test (SJT), multimedia work simulations and cognitive assessments.



The results were mixed, showing that mobile devices may be an option for certain types of tests, but that there are certain parameters that should be taken note of. For example, cognitive-ability tests performed similarly across various devices, but are more suitable for mobile delivery if they’re not timed, are multimedia in nature, and don’t require much reading comprehension.

The study found that, due to the increasingly good quality of mobile screen resolutions, the benefit an applicant would get in using other devices is greatly reduced. In other words, tests with multimedia work simulation would be a sound option if job applicants are accessing content on their mobiles. Results are still tentative and require broader research, but offer a promising glimpse into assessment platforms of the future.

Despite the promise shown by mobile assessments, there were certain differences with some tests taken on various platforms. With the SJT in particular, results for applicants on mobiles were lower than for test takers on other devices. The reason is not clear, but it may be due to the excessive scrolling and different reading needs of mobile devices.



This research is significant both for companies looking to expand the platforms they make available to applicants and for job seekers wanting an edge over their competitors.

Although the study showed a disadvantage for mobile applicants in some situations, the differences the researchers found will likely be narrowed by technological advancements, opening up new possibilities for companies and potential employees alike.

Interview Reliability: Statistics vs. Personal Experience

Publication: International Journal of Selection and Assessment
Article: Employment interview reliability: New meta-analytic estimates by structure and format
Reviewed by: Megan Leasher

This article focuses on the reliability of interviews. The more error introduced into interviews, the less reliable they are. The researchers targeted different sources of error, from both interviewees and interviewers. Interviewees introduce error when they answer similar questions from the same or multiple interviewers differently, while interviewers introduce error when they interpret, evaluate, and rate identical responses differently.

The researchers found that the more structured the interviews, the more reliable the interview results tended to be. In addition, panel interviews were more reliable than single-interviewer interviews.

I have to admit that I am a little torn regarding the desire to strive toward reliability in interviews. I know from a statistical perspective that interview reliability is important, since the more reliable interviews can be, the more capable they are of statistically predicting the future job performance of candidates (who become hires).

In industrial and organizational psychology, this conflict between ideal statistics and personal experience is the crossroads of “where research meets practice.” Sometimes what is helpful isn’t necessarily the ideal scenario or course of action from a statistical perspective. This crossroads can make for great discussion fodder, salty arguments, ideas for new research, or all of the above.

If interviewees answered the same or similar questions in exactly the same way, and all interviewers heard, interpreted, evaluated, and rated a candidate’s responses in exactly the same way, you would have a perfectly reliable interview. From a statistical perspective, that is.

But would you be missing something?

Think of all of the times in an interview that you heard, saw, or interpreted something very differently from your colleague who asked the candidate a similar question. Then, when you debriefed after the interviews were complete, those differences gave you something really good to discuss, like nuances that the other didn’t pick up on, unique reactions to answers, and so on. Perhaps a third interviewer had another fresh perspective to share, turning it into a real debate. Debates lead to unexpected and more in-depth realizations about the candidate than any one interviewer could have reached alone. Statistical reliability would not have wanted this scenario to take place. It would be unhappy that interviewers disagreed, because disagreement would threaten the high reliability it strives for.

Which is more important? Reliability or the ability to learn or interpret something unique?

The Consequences of Fit Across Cultures

Previous research has demonstrated that fit – the compatibility between an employee and a work environment – tends to lead to better attitudes, better job performance, and lower turnover (Arthur, Bell, Villado, & Doverspike, 2006). However, this research has focused predominantly on populations in North America. Today, companies operate across geographical boundaries in a globalized world of business, and it does not seem prudent to apply results found in North America to countries in Europe and Asia. Therefore, it becomes necessary to understand if fit across cultures predicts work attitudes and job performance across the globe.

For this study, Oh, Guay, Kim, Harold, Lee, Heo, and Shin reviewed 96 studies that had previously been conducted in East Asia, Europe, and North America. First, they found that fit predicts work attitudes – such as organizational commitment, job satisfaction, and intent to quit – as well as job performance across cultures. In taking this result further, the authors then looked at different types of fit, how they may vary across cultures, and how this may influence job performance.

To that end, four types of fit were identified:

  • person–job fit, the compatibility between an employee and their job;
  • person–organization fit, the compatibility between an employee and the organization;
  • person–group fit, the compatibility between an employee and their coworkers; and
  • person–supervisor fit, the compatibility between an employee and their direct supervisor.

These types of compatibility were then grouped into two types: impersonal and interpersonal. The compatibility between an employee and their job or organization were categorized as an impersonal type of fit, since they do not concern interpersonal and social aspects of work. The compatibility between an employee and their co-workers or their boss were categorized as an interpersonal type of fit, since they are directly concerned with how well an employee relates to other people in the workplace.

In comparing fit and performance across cultures, impersonal fit had stronger effects in North America and Europe, while interpersonal fit had stronger effects in East Asia. In other words, for Westerners, matching an employee to the right role and organization is most important, while human resource management in East Asian business environments should take special care to build positive teams in which social conflict is kept to a minimum.