How Job Interviewers’ Selling Orientation Impacts Their Judgment


Publication: Academy of Management Journal
Article: Do Interviewers Sell Themselves Short? The Effect of Selling Orientation on Interviewers’ Judgments
Reviewed by: Susan Rosengarten

 

During our first meetings with potential clients, investors, colleagues, or romantic partners, our initial impressions and appraisals of their character influence the judgments we make about them. But at the same time we’re evaluating others, we’re often “selling” ourselves, or making ourselves seem more attractive.

Interviewers work in the same fashion: They ask prospective candidates as many questions as they can about their experiences or employment preferences to evaluate the candidate’s fit against both the requirements of the job and with the existing team. During these interviews, hiring managers also do their best to “sell” the job and the company to interviewees by talking about the perks the company offers and how the candidate would benefit from joining the team.

The current research by Jennifer Carson Marr and Dan M. Cable examines whether the job interviewer’s selling orientation – their motivation to “sell” the job to an applicant – ultimately influences the accuracy and validity of his or her subsequent judgments about the applicant.

 

INTERVIEWERS’ SELLING ORIENTATION

The researchers designed two studies. In the first, they conducted mock interviews. Participants were assigned to one of two roles – interviewer or applicant – and interviewers were assigned to either a high or a low selling orientation condition.

The researchers then examined the relationship between the interviewers’ selling orientation and their accuracy in evaluating applicants’ core self-evaluations. A person’s core self-evaluation is the appraisal of their own self-worth, which includes measures of self-esteem, self-efficacy, locus of control, and emotional stability.

The researchers found that interviewers in the high selling orientation condition were less accurate in evaluating applicants’ core self-evaluations than those in the low selling orientation condition.
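For the statistically curious, here is a minimal sketch of how judgment accuracy can be quantified. The ratings below are hypothetical, not the study’s data, and the authors’ exact analysis may differ; the idea is simply to correlate interviewers’ ratings of each applicant’s core self-evaluation with the applicants’ own self-reports.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 core self-evaluation scores for five applicants
applicant_self_reports = [4.2, 3.1, 3.8, 2.5, 4.6]
high_selling_ratings = [3.0, 3.9, 3.1, 3.8, 3.2]  # noisier judgments
low_selling_ratings = [4.0, 3.3, 3.6, 2.8, 4.4]   # closer to self-reports

# Higher correlation = more accurate judgments of applicants' CSE
print(pearson_r(applicant_self_reports, high_selling_ratings))
print(pearson_r(applicant_self_reports, low_selling_ratings))
```

With numbers like these, the low-selling interviewer’s ratings track the self-reports far more closely, which is the pattern the study describes.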

 

HOW SELLING ORIENTATION AFFECTS JUDGMENT

In their second study, the researchers used two field samples to further examine how selling orientation influences the validity of interviewer judgments regarding applicants’ future success.

Participants in the first sample were interviewers evaluating candidates for admission into MBA programs. Participants in the second sample were interviewers tasked with matching international teachers to various school districts in the United States.

The researchers found that, as the interviewers’ motivation to “sell” to job applicants increased, their accuracy in predicting the future success of these interviewees decreased.

 

BIG PICTURE TAKEAWAYS

The results of these studies have several important implications for HR professionals. The most significant of these is that formally separating the applicant attraction and evaluation processes may help interviewers make better selection decisions. This, ultimately, may lead to a stronger and more productive workforce.

Welcome to the Future: Investigating Mobile Devices as Assessment Platforms


Publication: International Journal of Selection and Assessment
Article: Establishing the Measurement Equivalence of Online Selection Assessments Delivered on Mobile versus Non-mobile Devices.
Reviewed by: Andrew Morris

There are very few areas of our lives that have not been affected by technological innovations. A vast number of people these days use their phones for virtually everything, from staying in contact with friends and family, to navigating a busy city center, to booking flights.

So it doesn’t seem out of the question that mobile phones might one day be used as a medium by which organizations could assess job applicants. And it appears as if that day may have already come.

A recent study conducted by a team of researchers (including one of I/O at Work’s own writers) looked at whether online selection assessments delivered on mobile devices measure the same things as those delivered on non-mobile devices – and whether mobile delivery could prove effective for selection purposes.

 

POSSIBLE ISSUES WITH MOBILE DEVICES AS ASSESSMENT PLATFORMS

There are strict guidelines for assessing individuals when those results will be used for selection purposes. But, until now, relatively few studies have rigorously investigated the scientific properties concerning mobile assessment of cognitive abilities.

There are numerous issues specific to a mobile phone that could affect an individual’s results, including the size of the screen being used and whether question formats on various devices would be affected by environmental differences. In other words, would two people in completely different locations – such as a bus and a bedroom – have the same chance at acceptable scores?

To study the differences, researchers had job applicants take a series of selection tests on various devices. These tests included biodata assessments, a situational judgment test (SJT), multimedia work simulations and cognitive assessments.
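Full measurement-equivalence testing relies on techniques such as confirmatory factor analysis, but a simpler first look – sketched below with made-up scores, not the study’s data – is the standardized mean difference (Cohen’s d) between mobile and non-mobile test takers on the same assessment.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical SJT scores (0-100) for two small groups of applicants
non_mobile = [72, 68, 75, 70, 74, 69]
mobile = [66, 64, 70, 63, 68, 65]

# A positive d means non-mobile takers scored higher on average
print(round(cohens_d(non_mobile, mobile), 2))  # → 1.97
```

A d near zero would suggest the two delivery modes are behaving similarly; a large gap, as in this invented example, would flag a test (like the SJT discussed below) for closer scrutiny.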

 

EXAMINING MEASUREMENT EQUIVALENCE

The results were mixed, showing that mobile devices may be an option for certain types of tests, but that there are certain parameters that should be taken note of. For example, cognitive-ability tests performed similarly across various devices, but are more effective if they’re not timed, are multimedia in nature, and don’t require much reading comprehension.

The study found that, due to the increasingly good quality of mobile screen resolutions, the benefit an applicant would get in using other devices is greatly reduced. In other words, tests with multimedia work simulation would be a sound option if job applicants are accessing content on their mobiles. Results are still tentative and require broader research, but offer a promising glimpse into assessment platforms of the future.

Despite the promise shown by mobile assessments, there were certain differences with some tests taken on various platforms. With the SJT in particular, results for applicants on mobiles were lower than for test takers on other devices. The reason is not clear, but it may be due to the excessive scrolling and different reading needs of mobile devices.

 

THE BIG PICTURE TAKEAWAY

This research is significant both for companies looking to expand the platforms they make available to applicants and for job seekers wanting an edge over their competitors.

Although the study showed a disadvantage for mobile applicants in some situations, there’s no doubt the differences the researchers found will eventually be narrowed by technological advancements, opening up new possibilities for companies and potential employees alike.

Interview Reliability: Statistics vs. Personal Experience


Publication: International Journal of Selection and Assessment
Article: Employment interview reliability: New meta-analytic estimates by structure and format
Reviewed by: Megan Leasher

This article focuses on the reliability of interviews. The more error introduced in interviews, the less reliable they are. The researchers targeted different sources of error, both from the interviewee and interviewers. Interviewees can introduce error into an interview when they answer similar questions from the same or multiple interviewers differently, while interviewers introduce error when they interpret, evaluate, and rate identical responses differently.

The researchers found that the more structured the interviews, the more reliable the interview results tended to be. In addition, panel interviews were more reliable than single-interviewer interviews.
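A quick illustration of the panel finding: the Spearman-Brown formula – a textbook psychometric formula, not the authors’ own analysis – estimates how the reliability of a pooled judgment grows as interviewers are added, assuming the interviewers are roughly interchangeable.

```python
def spearman_brown(single_rater_reliability, k):
    """Estimated reliability of the average of k comparable raters."""
    r = single_rater_reliability
    return k * r / (1 + (k - 1) * r)

# If a lone interviewer's ratings have reliability .50 (hypothetical),
# averaging a three-person panel pushes the estimate to .75.
print(round(spearman_brown(0.50, 1), 2))  # → 0.5
print(round(spearman_brown(0.50, 3), 2))  # → 0.75
```

This is one reason pooled panel judgments tend to be more reliable than any single interviewer’s ratings: idiosyncratic rating errors partly cancel out when averaged.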

I have to admit that I am a little torn regarding the desire to strive toward reliability in interviews. I know from a statistical perspective that interview reliability is important, since the more reliable interviews can be, the more capable they are of statistically predicting the future job performance of candidates (who become hires).

In industrial and organizational psychology, this conflict between ideal statistics and personal experience is the crossroads of “where research meets practice.” Sometimes what is helpful isn’t necessarily the ideal scenario or course of action from a statistical perspective. This crossroads can make for great discussion fodder, salty arguments, ideas for new research, or all of the above.

If interviewees answered the same or similar question in the exact, regurgitatingly same way, and all interviewers heard, interpreted, evaluated, and rated a candidate’s responses in the exact same way, you would have a perfectly reliable interview. From a statistical perspective, that is.

But would you be missing something?

Think of all of the times in an interview that you heard, saw, or interpreted something very differently from your colleague who asked the candidate a similar question. Then, when you debriefed after the interviews were complete, those differences gave you something really good to discuss, like nuances that the other didn’t pick up on, unique reactions to answers, and so on. Perhaps a third interviewer had another fresh perspective to share, turning it into a real debate. Debates lead to unexpected and more in-depth realizations about the candidate than any one interviewer could have conjured up alone. Statistical reliability would not have wanted this scenario to take place. It would be unhappy that interviewers disagreed, because that would threaten the high reliability it would be striving for.

Which is more important? Reliability or the ability to learn or interpret something unique?

The Consequences of Fit Across Cultures

Previous research has demonstrated that fit – the compatibility between an employee and a work environment – tends to lead to better attitudes, better job performance, and lower turnover (Arthur, Bell, Villado, & Doverspike, 2006). However, this research has focused predominantly on populations in North America. Today, companies operate across geographical boundaries in a globalized world of business, and it does not seem prudent to apply results found in North America to countries in Europe and Asia. Therefore, it becomes necessary to understand if fit across cultures predicts work attitudes and job performance across the globe.

For this study, Oh, Guay, Kim, Harold, Lee, Heo, and Shin reviewed 96 studies that had previously been conducted in East Asia, Europe, and North America. First, they found that fit predicts work attitudes – such as organizational commitment, job satisfaction, and intent to quit – as well as job performance across cultures. In taking this result further, the authors then looked at different types of fit, how they may vary across cultures, and how this may influence job performance.

To that end, four types of fit were identified:

  • person–job fit, the compatibility between an employee and their job;
  • person–organization fit, the compatibility between an employee and the organization;
  • person–group fit, compatibility between an employee and their coworkers; and
  • person–supervisor fit, the compatibility between an employee and their direct supervisor.

These types of compatibility were then grouped into two types: impersonal and interpersonal. The compatibility between an employee and their job or organization were categorized as an impersonal type of fit, since they do not concern interpersonal and social aspects of work. The compatibility between an employee and their co-workers or their boss were categorized as an interpersonal type of fit, since they are directly concerned with how well an employee relates to other people in the workplace.

In comparing fit and performance across cultures, impersonal fit had stronger effects in North America and Europe, while interpersonal fit had stronger effects in East Asia. In other words, for Westerners, matching an employee to the right role and organization is most important, while human resource management in East Asian business environments should take special care to build positive teams in which social conflict is kept to a minimum.

Selection Tests and Job Performance

Ideally, when we test prospective employees, we gather valuable information that will help us determine if a candidate is suitable for a given job. But that’s not all. We also create an impression in the candidate’s mind about our company, its culture, and its values. Research has found that candidates’ reactions to selection testing do affect their attitudes. For example, candidates may react anxiously or perceive unjust treatment. These reactions can influence a candidate’s view of an organization, as well as determine whether they would recommend it to others. New research (McCarthy, Van Iddekinge, Lievens, Kung, Sinar, & Campion, 2013) explores the possibility that selection tests could also be influencing subsequent job performance.

The authors conducted four different experiments in a variety of settings. They found that reactions to selection tests did relate to job performance. However, they go on to explain that this connection is nothing to be concerned about. Many of the same personal characteristics that influence reactions to testing also influence job performance, so we’d naturally expect a relationship between the two. Similarly, reactions to testing may also have an effect on test scores, and test scores themselves are (hopefully) related to job performance. Additionally, the major finding of the study was that candidate reactions to testing did not diminish the usefulness of selection tests. That is to say, it makes little difference if some candidates feel discouraged by the testing, while others feel elated. Either way, the selection test will have the same ability to predict performance fairly.

The authors caution that these findings should not give an organization the go-ahead to completely disregard a candidate’s attitude. It is logical to expect that many benefits occur when candidates feel they are treated fairly. A company’s reputation is, in part, determined by word of mouth, and a sense of fair play may result in favorable attitudes toward the organization that are subsequently communicated to others (Hausknecht, Day, & Thomas, 2004).

This study is important, because it demonstrates that organizations need not make positive or negative feelings of candidates a primary objective when designing selection tests. Simply build a fair test that follows best practices. No matter how the candidates feel, we can be confident that a properly designed selection test is gathering the valuable information needed to hire the right employee for the job.

How Prospective Employees Judge Fit With An Organization


Publication: International Journal of Selection and Assessment
Article: How Interviewees Consider Content and Context Cues to Person–Organization Fit
Reviewed by: Megan Leasher

When you interview for a job, you make choices using the relatively small amount of information to which you have access. As a candidate, not yet on the job, your view of the organization and its work culture is limited. In a way, you are forced to judge a book by its cover, and maybe also by the sneak peek of pages the company gives you access to during the selection process. This research focused on how we make those judgments.

Person-organization (PO) fit is the match between a prospective employee’s personal characteristics and the organization’s cultural characteristics. When candidates feel that their personality and goals are a strong fit with an organization, they are more likely to continue pursuing the organization. When an employer senses strong fit between a candidate and the organization, that candidate is more likely to receive a job offer.

Participants in this study went through a series of scenarios to determine what judgments affect our sense of a good fit. The researchers found that both interviewer behavior and the interview process itself contributed strongly to candidates’ perceptions regarding fit, whereas the actual questions the interviewer asked did not.

In other words, it is not what interviewers say; it’s what they do. Interviewers are the ambassadors of their organization’s cultural values, whether they want to be or not. They are the book cover, and candidates will judge their potential fit within the organization based on the image the interviewers give them. A candidate’s perception of being well matched can make the difference between accepting and declining a job offer. It would be wise to make that presentation both positive and accurate.

There are tons of practical applications for these findings, but I want to focus on one the article does not address: the cost associated with a bad fit. Think about it. If an interviewer misrepresents the culture and values of the company, the candidate will likely misread their fit. This can lead to a candidate accepting an offer from a company for which they believe they are a match, only to find out later that the book didn’t match its cover. Turnover occurs. This is an expensive mistake for all parties; organizations have to begin the recruiting process again, selecting a new candidate, while productivity suffers from an open position. Candidates must start job searching again, leading to additional gaps in paychecks and benefits coverage. No one is happy.

Your book cover should always accurately represent the contents. Interviewers are brand ambassadors and should be selected with precision.

Emotional Labor: The True Cost of Service with a Smile


Publication: Journal of Applied Psychology (July 2013)
Article: Affect Spin and the Emotion Regulation Process at Work
Reviewed by: Ben Sher

Talk about demanding work! In addition to their typical job duties, like waiting on tables, making sales, or assisting customers, customer service professionals must also perform emotional labor. When employees smile cheerfully at the end of a grueling shift, they are performing something called surface acting, which is a type of emotional labor. Research has shown that emotional labor can lead to psychological strain and fatigue. The current study (Beal, Trougakos, Weiss, & Dalal, 2013) has made advancements in this area of research by scrutinizing a new variable, called “affect spin”.

The authors define “affect spin” as the extent to which people experience a variety of different emotions throughout the day. For example, some people might fluctuate between two or three emotional states throughout the day, while others fluctuate between six or seven. Why does this matter? The authors conducted a study of restaurant servers in multiple restaurants, and found that levels of “affect spin” influence the degree to which the servers experienced psychological strain and other negative outcomes due to the emotional labor they performed.

When individuals experience many different emotions during the day (high “affect spin”), they also tend to react more strongly to emotionally charged events. Indeed, the study found that these servers experienced more psychological strain as a result of their surface acting. Additionally, people who experience many different emotions may find it harder to predict how they will feel at any given time. This unpredictability makes them exert more effort in forcing themselves to display the “right” emotions on the job. The study also found that people high on “affect spin” experienced more fatigue as a result of their emotional labor.

However, the authors found that people with high levels of “affect spin” may also be able to experience higher levels of psychological strain without it leading to fatigue. This is because they may have more experience with feeling “stressed out”, and therefore handle it better. In other words, being high on “affect spin” has its advantages in addition to its disadvantages.

So yes, servers and other customer service professionals are at risk for psychological strain and fatigue due to the emotional labor required of them. However, thanks to this study, we better understand that not all individuals respond to those job demands in the same way, and why. Although this sheds light on who may be best cut out to do customer service work, as always (say this with a smile), more work is necessary.

Conscientiousness and Job Performance: Is Conscientiousness Always King?


Publication: International Journal of Selection and Assessment (2013)
Article: The validity of conscientiousness for predicting job performance: A meta-analytic test of two hypotheses
Reviewed by: Megan Leasher

Conscientiousness is a predictor of job performance in many jobs, job levels, and industries. But does being conscientious still predict job performance as strongly when characteristics and requirements of the job change? Is conscientiousness the Holy Grail of employee traits?

To learn more about this, the authors conducted a meta-analysis across 53 research studies where conscientiousness was a predictor of job performance. They then rated the jobs that were included in these studies on a number of factors including the level of worker autonomy, how much of the work followed a routine, how much thought and mental ability was required, and so on.
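As a hedged illustration of the mechanics (with invented numbers, not the study’s data), a meta-analysis commonly pools validity estimates across studies by taking a sample-size-weighted average of Fisher-z-transformed correlations:

```python
import math

def pooled_correlation(studies):
    """studies: list of (r, n) pairs; returns the pooled correlation.

    Each correlation is Fisher-z transformed, weighted by n - 3 (the
    inverse variance of z), averaged, then transformed back to r.
    """
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for r, n in studies)
    return math.tanh(num / den)

# Hypothetical (r, n) pairs for conscientiousness-performance validities
routine_jobs = [(0.30, 120), (0.25, 200), (0.33, 90)]
complex_jobs = [(0.12, 150), (0.10, 180), (0.15, 110)]

print(round(pooled_correlation(routine_jobs), 2))
print(round(pooled_correlation(complex_jobs), 2))
```

With numbers like these, the pooled validity comes out clearly higher for the routine jobs, mirroring the pattern of moderated validities the authors report (their actual meta-analytic method also corrects for artifacts like unreliability and range restriction, which this sketch omits).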

Overall, there were two big takeaways:

  1. Conscientiousness was a stronger predictor of performance for jobs that required more routine, structured work.
  2. Conscientiousness was a weaker predictor of performance in roles that required high levels of cognitive ability, possibly suggesting that intelligence in some way suppresses the influence of personality on job performance.

Taken together, conscientiousness may be more useful in roles with a lot of routine, which are more likely to be hourly and/or entry level roles. Alternatively, conscientiousness may not be as useful for higher-level roles that require more thought and mental ability. Thus, spending on assessments to determine conscientiousness may only be selectively useful. It’s a good lesson in not only what predicts job performance, but what is worth spending your budget on for successful hiring decisions.

What do you think about conscientiousness and job performance? We’d love to hear your thoughts in the comment section below.

Ask Me Online: Benefits of a Web-Based Reference Check Process


Publication: International Journal of Selection and Assessment (March 2013)
Article: Web-Based Multisource Reference Checking: An Investigation of Psychometric Integrity and Applied Benefits
Reviewed by: Thaddeus Rada

Reference checks, the process of asking past employers and colleagues about a job applicant’s qualifications and past performance, have long been a part of human resource management. However, despite the fact that the limitations of using reference checks in the hiring process have been well-recognized for many years, they continue to be a popular part of many organizations’ selection procedures. Recently, there has been increased interest in how reference checks might be made more useful.

To this end, Cynthia Hedricks and her colleagues tested a web-based reference checking survey. It allowed multiple references to all provide structured ratings about a particular applicant through a computer-based survey (as opposed to the usual and dubiously valuable phone reference check). The researchers tested this online survey system in multiple organizations, using some basic questions about capabilities, such as professionalism and interpersonal skills. Each survey also had questions tailored to specific competencies and requirements of each organization. Overall, the authors found that the survey system was both efficient and effective. Information was gathered from references quickly, and the ratings that references provided were effective for predicting employees’ job performance and involuntary turnover. In addition, the standardization that the survey brings may help to improve the overall validity of reference checks, thereby improving the contribution that reference checks can make to the hiring process.

This web-based survey shows promise as a flexible, fair tool that could help many organizations streamline reference checks.

What do you think of web-based references? Are they effective? We’d love to hear from you in the comments section below.

When Reading Research Leads to a Brain Full of “What?!” (IO Psychology)


Publication: International Journal of Selection and Assessment (March 2013)
Article: Personality types and applicant reactions in real-life selection
Reviewed by: Megan Leasher

When you read scientific research, you should be left feeling as though you gained knowledge and/or have something new and shiny that can be applied to the real world. But once in a while you finish an article and there is nothing but unpoppable “What did I just read?!” bubbles floating in your brain.

This article focused on how applicants’ personality types might impact their reactions to assessment tests within a hiring process. Specifically, candidates for firefighter, dispatcher, and rescue management roles had to complete a series of personality and cognitive assessments as a part of the selection process. Immediately after, they were asked to complete a voluntary survey asking about their reactions to the tests. The researchers found that personality types had no impact on applicants’ perceptions that the assessments were related to the job and that the tests could predict future job performance. One personality type did perceive the tests as less fair than those with other personality types, but the difference may not have been large enough to have real meaning.

As I kept reading the article, I kept wondering how this information would be applied, or even how it would be useful. I kept wondering this because the authors never told me. The authors briefly mention previous research stating that applicant reactions can impact whether or not a candidate might accept a job offer and/or impact their future performance on the job. Yet they never relate their own findings to this previous research. I was left hanging.

The study also had a number of confounds, a few of which the authors acknowledged. Looking solely at rescue applicants isn’t representative of most jobs and applicants. Candidates had to first pass a physical test before they were allowed to begin the personality and cognitive assessments. The reactions survey only asked for their reactions to the personality and cognitive tests, but wouldn’t their perceptions of the physical test muck up their thoughts a bit?

Also, participants voluntarily completed the reactions survey, and not everyone completed it. Wouldn’t the thoughts of those who did NOT want to share their reactions be critical? Finally, their research found different reactions to the assessments based on gender and age, but they never investigated further, which I found disappointing.

Now I have to be fair and say that no research is perfect. All research has confounds. But when you feel as though you don’t get the “so what?” of the entire study and there are also lots of confounds, how are you supposed to react?

After reading this article I was left feeling a little icky inside. But it reminded me that reading research with a discerning amount of skepticism is not only healthy, it is mandatory. It reminded me of a wonderful quote by the philosopher George Santayana: “Skepticism, like chastity, should not be relinquished too readily.”