The Strange Story Behind Situational Judgment Tests: What Do They Really Measure?


Publication: Journal of Applied Psychology
Article: How “Situational” Is Judgment in Situational Judgment Tests?
Reviewed by: Ben Sher

 

Situational judgment tests are often used during employee selection. They present the job applicant with a series of situations that may be encountered on the job. For example, one situation might include an anecdote about a co-worker encouraging you to steal. For each situation, several different responses are listed, and applicants simply choose the response that seems most appropriate. Because these tests are (hopefully) designed by I-O psychologists or other highly trained experts, certain answers are designed to reflect behavior that is consistent with good job performance. The more of these “good” answers the applicant chooses, the more confident we are that the applicant will succeed on the job if hired.
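To make the scoring logic concrete, here is a minimal sketch of how a keyed situational judgment test might be scored; the items, options, and key below are invented for illustration and are not taken from any real test:

```python
# Hypothetical scoring key: item number -> the response option experts keyed as "best".
scoring_key = {1: "A", 2: "C", 3: "B"}

def score_sjt(responses, key):
    """Count how many of an applicant's choices match the expert-keyed answers."""
    return sum(1 for item, choice in responses.items() if key.get(item) == choice)

# An applicant who picks the keyed answer on items 1 and 3, but not item 2.
applicant_responses = {1: "A", 2: "B", 3: "B"}
print(score_sjt(applicant_responses, scoring_key))  # 2 out of 3
```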

 

The theory behind situational judgment tests is that applicants who score well are better equipped with knowledge that is very specific to the job. It’s not that the high-scoring applicant is necessarily smarter or has greater social tact. Instead, we believe that the high-scoring applicant has a certain kind of knowledge or skill that is useful for succeeding at the very specific situations that will be encountered on the job. However, new research (Krumm et al., 2015) showed that our assumptions about situational judgment tests may be wrong.

 

SITUATIONAL JUDGMENT TESTS: WHAT’S REALLY GOING ON

The researchers presented scenarios from real situational judgment tests, and investigated what would happen if they left out the situational anecdote and simply offered the behavioral responses. They wanted to know if people could identify the best answer without even knowing the question. As an example, here’s one that I just made up off the top of my head for my fictional new company:

 

Which would you do?

  A. Calmly reassure my boss that I would continue to work hard for the company and not let my disappointment interfere with my effectiveness on the job.
  B. Firmly grab my boss by his shoulders and scream in his face, reminding him that I am the very best employee and will absolutely not tolerate being marginalized in any conceivable way.
  C. Lie down on the floor in the fetal position and start crying.

 

So, which is it? If you want to work for my company, you’d need to have chosen ‘A’. I’m guessing that you knew that, even without the paragraph-long situational scenario asking you how you’d respond after your boss informs you that there will be no holiday bonus this year. Although this example sounds silly, it’s not that different from what the researchers were able to discover.

 

The researchers conducted several studies using real situational judgment tests. They found that in 43-71% of scenarios (depending on the study), it did not matter whether test takers were given the situational scenarios or not. Test takers had an equal chance of answering an item correctly whether the item was presented with its situational scenario or with the response options alone.

 

What does this mean? The authors explain that many items from these situational judgment tests are not measuring knowledge that is specific to the job, but are instead measuring broader knowledge or abilities that might work on any job, such as social skills or intelligence. The authors did find two circumstances in which test takers benefited from having the situational scenarios instead of just the behavioral answers: when the items measured skills or abilities specifically related to the job in question, and when the behavioral response options included actions that were very specific to the given scenario, rather than just generally good or bad behavior.

 

ORGANIZATIONAL IMPLICATIONS

This research suggests that roughly half or more of the items on situational judgment tests are not measuring knowledge or skills tied to a specific job context. Instead, these rogue items seem to be measuring broader traits applicable to many jobs. Why does this matter?

 

Organizations typically invest time and money into developing a situational judgment test that reveals which employees are best suited to a specific job. They usually convene a panel of subject-matter experts (SMEs) to rigorously develop the scenarios and behavioral possibilities for these tests. If the organization is content with measuring broad general traits useful for employees, they may be able to use generic “off-the-shelf” tests that have already been developed, instead of investing the resources into developing a situational judgment test for the specific job. The authors say that this may be the case for entry-level or other low-complexity jobs.

 

Another alternative is to design a test that is specific to a job and simply omit the situational scenarios, providing the behavioral responses alone (like in my example above). This could also save the organization time and money, because the developmental process would be shorter.

 

Finally, if organizations really want to have a situational judgment test tailor-made for a job, they can do themselves a favor and make sure that the questions really are specific to the job. Make sure that the items measure job-specific information, and make sure that the behavioral response options are very specific to the scenario. This will ensure that the situational judgment test is measuring what it intends to measure.

Intelligence Testing in Selection: New Developments


Publication: Human Resource Management Review
Article: Implications of modern intelligence research for assessing intelligence in the workplace
Reviewed by: Lia Engelsted

 

Intelligence testing in selection is often critical because intelligence allows employees to innovate and solve problems. This article (Agnello, Ryan, & Yusko, 2015) reviews the most up-to-date perspectives on conceptualizing and measuring intelligence.

 

PSYCHOMETRIC INTELLIGENCE THEORY

According to the most traditional theory of intelligence, there is a single “general” factor behind intelligence that underlies “all branches of intellectual activity.” More recent approaches include the dynamic model of intelligence, which suggests that different cognitive processes grow and become dependent on one another as people develop. Relatedly, research suggests that general intelligence may be synonymous with working memory; if that is the case, we may be able to train people to become more intelligent just as we can train them to improve their memory. Another psychometric theory is the Cattell-Horn-Carroll (CHC) theory, which depicts key dimensions of intelligence at three different levels, arranged from broad to specific. Overall, these developments provide a greater theoretical understanding of intelligence and a new focus on developing theoretically sound intelligence tests.

 

COGNITIVE INTELLIGENCE THEORY

One of the best examples of intelligence developments based on cognitive processes is the Planning, Attention-Arousal, Simultaneous and Successive (PASS) theory of intelligence. The PASS theory conceptualizes mental ability as being built from different functions in the brain, and it focuses on four information processing factors:

 

  1. Planning includes problem-solving, goal striving, strategy formation, utilization of knowledge, and control of the other three processes.
  2. Attention includes focus, selective attention, and sustained attention to specific stimuli.
  3. Simultaneous processing includes organizing things into coherent patterns and perceiving relationships between things.
  4. Successive processing includes integrating information into a sequential order as well as the use of ordered information.

 

One test based on the four PASS factors and administered in educational settings is the Cognitive Assessment System (CAS). The CAS includes twelve subtests that focus on cognitive processes such as decision making, task performance, and spatial memory. The CAS differs from other intelligence tests because it uses novel tasks that focus on cognitive functioning. While still in its early stages, the CAS is promising because it predicts performance in school and produces smaller score differences between racial groups than other intelligence tests do. It is not currently used for selection, but it could be adapted for this purpose in the future.

 

NEUROPSYCHOLOGICAL INTELLIGENCE THEORY

The neuropsychological approach studies how the brain relates to behavior. Neuropsychological assessments examine which parts of the frontal lobes are activated when participants complete certain behaviors, including a variety of verbal and non-verbal tasks associated with memory and attention. These tests are generally validated using biological or neuroimaging techniques. Preliminary research linking these tasks to work outcomes is promising. However, one limitation of the neuropsychological approach is that the brain regions activated during assessments may differ from person to person.

 

TEST DESIGN PRINCIPLES

In addition to the theoretical advances in intelligence research, the authors discuss developments in how intelligence tests are used. Two practices are especially relevant to HR professionals. First, it is important to make sure that tests are culturally fair to people of all backgrounds. Prior knowledge of some content may vary by background (e.g., race, gender, culture), and research demonstrates that performance can suffer when an individual is not familiar with the cultural content of a test. Users of intelligence tests should be cognizant of the degree to which tests contain culturally specific content that is not relevant to measuring intelligence, and practitioners should avoid tests that compromise fairness. Second, there is an advantage to using “non-entrenched” tasks. These tasks minimize the role of previously acquired knowledge so that all examinees are on equal footing. In the HR context, such tasks may be especially useful for culturally diverse groups because they reduce culture-dependent content.

 

IMPLICATIONS FOR THE WORKPLACE

The authors warn that HR professionals should proceed with caution before selecting intelligence tests to make employment decisions. However, there are numerous benefits to applying modern intelligence theories to HR practices, including the ability to predict successful job performance. Additionally, modern intelligence tests may be able to predict performance while also promoting racial diversity in the workforce, as modern intelligence tests reveal smaller score differences between different cultural or ethnic groups.

The Role of Storytelling in Effective Structured Job Interviews


Publication: Journal of Business and Psychology
Article: Storytelling in the selection interview? How applicants respond to past behavior questions
Reviewed by: Alexandra Rechlin

 

It’s no surprise that I-O psychologists recommend using structured job interviews when selecting someone for a position. This is because structured interviews are far better predictors of performance than are informal, unstructured interviews. As part of a structured interview, two types of questions may be asked – situational questions (e.g., “What would you do if you disagreed with your supervisor?”) or behavioral questions (e.g., “Tell me about a time that you disagreed with your supervisor”).

Storytelling is an important aspect of answering a behavioral interview question, because the interviewee needs to be able to tell a story about what happened. But how good are applicants at telling stories? Are there any general characteristics that make some people better storytellers than others? And does skill in storytelling impact interview outcomes? In a recent study, Bangerter, Corvalan, and Cavin (2014) set out to answer these questions.

 

STORYTELLING IN THE STRUCTURED JOB INTERVIEW

A typical conversation has a collaborative aspect (or at least it should). Even if one person is telling a story, the other person will usually nod or give some sort of verbal acknowledgement. In a job interview, however, storytelling differs somewhat from storytelling in a normal conversation. There isn’t the same back-and-forth, yet the interviewer’s reactions can still change the course of the story: if the interviewer seems bored or judgmental, that could throw off the applicant’s storytelling, while a smiling, engaged interviewer could encourage it. The interviewer’s reactions could therefore even have implications for reliability, or the extent to which the interview consistently assesses applicants. In addition, people have differing levels of storytelling skill, and apart from work experience, applicants have different amounts of experience with storytelling; the more behavioral interviews an applicant has participated in, the more experience she will have with telling stories in an interview situation.

 

THE PRESENT STUDY

To answer their questions about storytelling in the job interview, the researchers analyzed transcripts from 72 actual job interviews. Applicants answered four behavioral questions that assessed communication, persuasion, organization, and stress management. The most frequent type of response was a pseudostory, which is more general and abstract than a story: a pseudostory describes a generic situation or set of similar situations rather than one specific event. Fewer than one in four applicants told an actual story. Applicants also talked more about situations than about tasks, actions they took, or results. Applicants with higher intelligence told more pseudostories, applicants who were more conscientious voiced their values and opinions more, and men told proportionally more stories than women did. Men, extraverted people, and those who told more stories and pseudostories received more hiring recommendations. On the other hand, applicants who offered more self-descriptions (e.g., “I’m good at managing my stress”) were less likely to get a hiring recommendation.

 

IMPLICATIONS FOR JOB-SEEKERS AND ORGANIZATIONS

If you’re an applicant, try to give detailed stories during a job interview. Before going into the interview, think about ways that you can frame some events from your past as stories that you can talk about. If you are an interviewer, try to elicit stories from applicants. Encourage them to provide more details, and be aware that applicants might not naturally provide the kind of stories you’re looking for. Doing these things will help ensure that job interviews remain fair and effective.

Can Proctored Internet-Based Selection Tests Really Stop Cheating?


Publication: Journal of Business and Psychology
Article: Cheating, reactions, and performance in remotely proctored testing: An exploratory experimental study
Reviewed by: Alexandra Rechlin

 

In order to curb potential cheating, many organizations have started using remotely proctored internet tests. But do they actually work? And could they have unintended consequences?

 

REMOTELY PROCTORED INTERNET-BASED SELECTION TESTS

Numerous methods can be used to administer proctored internet testing. One common method is real-time webcam monitoring; another is real-time screen sharing. Some proctoring software keeps other programs or applications from running while the test is in progress and can prevent screenshots, copying and pasting, and more. Remotely proctored testing appears to be a promising way to prevent cheating on employment tests, but little research has been conducted on it.

 

THE CURRENT STUDY

In this study, Karim, Kaminsky, and Behrend (2014) used a sample of people who were playing the role of test-takers, and they motivated the test-takers to perform well by using an extra monetary incentive. Half of participants were told to use a webcam to record themselves taking the test, while the other half were not asked to record themselves. All participants completed two tests: a quantitative test that included practice GRE questions that were easily searchable online, and a logical reasoning test whose answers could not be found online.

 

RESULTS

Participants in the proctored condition reported a greater sense of pressure and more concerns about privacy. In addition, more participants withdrew from the proctored test than from the non-proctored test. On a positive note, proctoring did seem to decrease cheating: proctored participants scored worse on the searchable test but not on the non-searchable test. However, this decrease was very small. Gender and numerous other individual differences did not affect how proctoring influenced scores.

 

IMPLICATIONS FOR ORGANIZATIONS

This study suggests that although proctored internet tests (specifically, webcam proctoring) may slightly decrease cheating without otherwise affecting performance, they may also have a negative effect on applicant reactions. In addition, it’s important to consider privacy concerns, keep any recorded videos in a safe and secure location, and delete them once they are no longer needed. The authors encourage being very cautious if you plan to use remote proctoring in your organization, due to its potential for negative applicant reactions, which can lead to good applicants dropping out of the hiring process and applicants being less likely to recommend your organization to others.

How to Design a Resume That Will Get You Hired


Publication: Journal of Business and Psychology
Article: Effects of applicant personality on resume evaluations
Reviewed by: Alexandra Rechlin

 

When writing your resume, you probably thought about how potential employers might perceive you. Many articles and books give advice regarding what to include and how to design a resume, but their authors don’t always agree on what works best. A recent exploratory study examined which personality traits people attribute to different parts of a resume, and how hirable those parts might make you appear.

 

RESUMES, PERSONALITY, AND HIRABILITY

This probably comes as no surprise, but past research suggests that people viewing your resume will make personal judgments about you based on what they see. In addition, parts of your resume are related to individual differences, like extraversion or conscientiousness. However, little research has been conducted to understand which aspects of a resume lead to which judgments.

 

THE FIRST STUDY

In this paper, Burns et al. (2014) attempted to extend the previous research to better understand specific resume cues and how they are perceived. They actually conducted two related studies. In the first, they had participants view resume cues (e.g., GPA, job titles, extracurricular activities) and link them with personality adjectives (e.g., hardworking, creative). Participants also rated each cue regarding its importance in determining hirability. Not surprisingly, the majority of the cues that participants rated as being important to hirability came from the experience section of the resume. Participants also seemed to easily link cues to personality traits, especially conscientiousness.

 

THE SECOND STUDY

In the second study, the researchers had actual HR professionals evaluate real resumes and give their impressions of the applicants’ personality and hirability. The HR professionals showed low levels of agreement about hiring recommendations. The applicants’ self-ratings of personality contributed only slightly to the hiring recommendations, but the HR professionals’ ratings of the applicants’ personalities were a major contributor. In other words, what the HR professionals thought about the applicants’ personalities was very influential in their hiring recommendations.

 

RECOMMENDATIONS FOR JOB SEEKERS

The researchers gave several important recommendations for job seekers based on the results of these studies. First, job seekers should provide detailed information about their education and be sure to include honors and awards. Second, job seekers should use a school email address (or other formal email address) instead of a personal email address, and they shouldn’t use any unusual fonts or formats. Third, job seekers should put their educational information before their past job information, and include any information about leadership roles or ways they financially benefited an organization. Finally, it seems to be good to include extracurricular or volunteer experiences. Following these tips will make sure that your resume gives you the best chance to land the new job.

Intelligence Testing: Is It Always the Smartest Thing to Do?


Publication: Journal of Applied Psychology
Article: A Meta-Analysis of the Relationship Between General Mental Ability and Nontask Performance
Reviewed by: Ben Sher

 

Smart employees tend to be better at doing their jobs. This is considered one of the most important findings in the history of I-O research. Meta-analysis, which is a method of compiling results from many different researchers and studies, has shown that intelligence (or general mental ability) is associated with better job performance for basically any job. But there are other important components that make organizations successful besides narrowly-defined task performance (parts of a job that are in the job description). New research (Gonzalez-Mulé, Mount, & Oh, 2014) investigates whether intelligence can also predict other measures of workplace success.
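For readers curious about the mechanics, the core step in many meta-analyses in this literature is combining correlations from individual studies, weighting each by its sample size. Here is a minimal, purely illustrative sketch; the correlations and sample sizes are made up and are not the data from this study:

```python
# Hypothetical study results: r = observed correlation between intelligence
# and job performance, n = study sample size.
studies = [
    {"r": 0.25, "n": 120},
    {"r": 0.32, "n": 300},
    {"r": 0.18, "n": 80},
]

total_n = sum(s["n"] for s in studies)
weighted_mean_r = sum(s["r"] * s["n"] for s in studies) / total_n
print(f"Sample-size-weighted mean correlation: {weighted_mean_r:.3f}")  # 0.281
```

Real meta-analyses go further (for example, correcting for measurement error and range restriction), but the weighted average captures the basic idea of “compiling results” across studies.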

 

OTHER WAYS OF MEASURING JOB SUCCESS

The authors conducted a meta-analysis to determine whether intelligence is related to two other measures that are important to organizations: counterproductive work behavior (CWB) and organizational citizenship behavior (OCB). These terms sound fancy, but they are actually quite simple. CWB refers to anything employees do that violates organizational norms or expectations. This behavior can be directed at a coworker (e.g., bullying or harassment) or at the organization (e.g., stealing from the employer, unnecessary absences). OCB refers to anything employees do that is not formally required by their job description, for example helping out a coworker or suggesting a new way of doing things that helps the organization save resources.

 

RESULTS OF THE STUDY

The meta-analysis found that intelligence was associated with more OCBs, meaning that smarter employees also went beyond their job descriptions more frequently. The authors explain that smarter people are typically better at seeing the big picture; for example, they may understand that helping a coworker benefits the organization in the long run. Also, smarter employees may sometimes have a greater capacity to help out others: they may be the only ones capable of devising a solution to a problem that eventually helps the organization.

However, when it came to CWBs, there was no real relationship with intelligence. The authors had predicted that smarter employees would engage in less bad behavior because they are more readily capable of seeing the dangerous outcomes, such as harming the company or harming themselves by getting caught. But the data didn’t support this conclusion.

 

WHAT ABOUT PERSONALITY TESTING?

The authors also compared intelligence testing with personality testing to see which was generally more useful for predicting success on the job. As predicted, intelligence testing predicted better than personality testing when the outcome was task performance, or the parts of a job that are listed in a job description. When using the other outcomes of job success (OCBs and CWBs), the authors found a different story. First, when it came to OCBs (going above and beyond job descriptions) intelligence and personality were about equally useful in predicting which employees will go above and beyond. When it came to CWBs (the bad behavior), personality was actually a better predictor than intelligence.

 

IMPLICATIONS FOR THE ORGANIZATION

This study supports the idea that the best predictor of job success is general intelligence, specifically because of its ability to predict good old-fashioned task performance. It pays to hire smart employees. But that’s not the entire story. The conclusions here also indicate that intelligence isn’t the be-all and end-all of how to hire employees. Organizations should also have the foresight to care about extra effort and misbehavior at work. If you want employees who strive to make the workplace better for everyone, intelligence testing may still help, but it is not any better than personality testing. If you want employees who don’t misbehave, personality testing may be the way to go.

Specific Cognitive Abilities Can Benefit Selection Programs


Publication: Journal of Applied Psychology
Article: Examining the incremental validity and relative importance of specific cognitive abilities in a training context.
Reviewed by: Andrew Morris

 

Organizations often use measures of specific cognitive abilities to help select people for jobs. Selection itself is important because organizations can waste millions of dollars training people who don’t have the right aptitude, aren’t motivated, or don’t meet the minimum requirements for the job. When an organization selects employees, it often uses an assessment process to try to find the “right people.” This assessment often involves tests of general cognitive ability, which is basically what we’d consider overall intelligence. What if organizations could fine-tune these processes so that they were more successful in identifying who will succeed in training or on the job? Recent research findings offer a possible way to do this.

 

GENERAL INTELLIGENCE VERSUS SPECIFIC ABILITIES

Many researchers adhere to the view that intelligence is made up of a single “general” factor, also called general mental ability. On this view, a single underlying dimension of mental ability drives numerous different types of learning and performance. However, researchers debate whether measuring specific abilities can provide extra predictive power for organizations. Some believe that these specific abilities help predict success beyond what general mental ability alone can offer when selecting individuals for a job.

 

THE RESEARCH FINDINGS

The researchers investigated the importance of using general mental ability and also specific abilities in a training context, specifically military personnel learning a foreign language. The researchers examined the predictive ability of general mental ability and the specific abilities within the trainee group by using different approaches to measuring cognitive ability. Results showed that if the specific mental abilities of the applicants aligned with what was being assessed, then using the specific abilities would add predictive value for the organization. For example, in testing learning of a foreign language, the specific ability of foreign language aptitude would be more useful than numerical ability.
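To illustrate what “adding predictive value” means here, the sketch below compares how much variance in a training outcome is explained by general mental ability alone versus general mental ability plus a specific ability. The data are simulated and the analysis is a generic incremental validity check, not the authors’ actual method:

```python
import numpy as np

# Simulated, hypothetical data: g = general mental ability,
# specific = a job-relevant specific ability (e.g., foreign language aptitude),
# outcome = training performance.
rng = np.random.default_rng(0)
n = 500
g = rng.normal(size=n)
specific = 0.5 * g + rng.normal(size=n)      # specific ability partly overlaps with g
outcome = 0.4 * g + 0.3 * specific + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 from ordinary least squares with an intercept."""
    X = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

r2_g = r_squared(g.reshape(-1, 1), outcome)
r2_both = r_squared(np.column_stack([g, specific]), outcome)
print(f"R^2, g alone:         {r2_g:.3f}")
print(f"R^2, g + specific:    {r2_both:.3f}")
print(f"Incremental validity: {r2_both - r2_g:.3f}")
```

If the specific ability lines up with what the outcome actually requires, the gain in R^2 will be meaningful; if it doesn’t, it will hover near zero.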

 

IMPLICATIONS FOR ORGANIZATIONS

These findings challenge many assessment and selection practices within hiring and training. In some cases, testing for general mental ability may be sufficient, but in other cases, managers should not underestimate the role that specific abilities may play in helping organizations predict who will succeed at training and on the job. This would require testing for specific abilities that are either closely aligned with job responsibilities, or are a requirement in a training program. Specific abilities should not be used when these responsibilities are not clearly defined or if there is a mismatch with actual job requirements.

When specific abilities match what is needed for either training or job success, then specific abilities can provide extra predictive power over merely assessing general intelligence in candidates. It is important to note that even a small incremental advantage in prediction for large selection programs could have a profound influence on return on investment figures.

Avoiding Adverse Impact: Selection Procedures That Increase Organizational Diversity


Publication: Journal of Applied Psychology
Article: More Than g: Selection Quality and Adverse Impact Implications of Considering Second-Stratum Cognitive Abilities
Reviewed by: Andrew Morris

 

Using cognitive tests as part of an employee selection process generally does more than various other methods (such as interviews) to ensure the selection of better-performing individuals. A few methods are slightly better predictors of performance, but cognitive tests have proven to be a mainstay.

Unfortunately, the use of such tests can lead to discriminatory hiring practices against minority groups, who often score below their white counterparts due to a variety of factors.

Different strategies have been proposed to counteract this adverse impact in selection procedures in order to ensure a fairer hiring process and encourage greater diversity within the workplace. The research reviewed here investigated one such strategy.

 

ADVERSE IMPACT

Adverse Impact is a means of measuring this type of discrimination. It is calculated by dividing the selection ratio of the lower-scoring group of applicants (a minority group) by the selection ratio of the higher-scoring comparison group (historically, more privileged groups). A group’s selection ratio is simply the proportion of applicants from that group who are hired.

Adverse Impact against the minority group has occurred if the result of this calculation is less than 4/5ths (0.80). This rule is a way of guarding against discriminatory selection practices and ensuring a more diverse and representative workforce.
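As a rough illustration (a minimal sketch using made-up numbers, not data from the study), the four-fifths check can be computed like this:

```python
def selection_ratio(hired, applicants):
    """Proportion of applicants from a group who were hired."""
    return hired / applicants

def adverse_impact_ratio(minority_hired, minority_applicants,
                         majority_hired, majority_applicants):
    """Minority group's selection ratio divided by the comparison group's."""
    return (selection_ratio(minority_hired, minority_applicants)
            / selection_ratio(majority_hired, majority_applicants))

# Hypothetical numbers: 30 of 100 minority applicants hired,
# 50 of 100 comparison-group applicants hired.
ratio = adverse_impact_ratio(30, 100, 50, 100)
print(f"Adverse Impact ratio: {ratio:.2f}")  # 0.60
print("Below the 4/5ths threshold" if ratio < 0.8 else "Meets the 4/5ths threshold")
```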

When cognitive tests are used in selection procedures, the organization is often seen as facing a trade-off between selecting on criteria related to work performance and selecting for diversity by keeping the Adverse Impact ratio above the threshold.

 

SECOND-ORDER STRATUM

One strategy for overcoming Adverse Impact is to weight cognitive and non-cognitive tests differently. The researchers investigated the use of this weighting strategy on cognitive sub-tests, which represent the second-order stratum of cognitive ability.

Second-order cognitive abilities are not specific individual abilities, but rather broader constellations of related abilities; they are still more refined than a measure of general cognitive ability (known as g). For example, acquired knowledge in reading and writing (a stratum II ability) spans vocabulary, reading comprehension, and analogy tests.

The researchers hypothesized that, although general cognitive ability is a fairly good predictor of later performance, stratum II abilities may be better predictors when a job requires a specific ability. Using a sophisticated weighting technique that assigned varying values to job-relevant specific abilities, the researchers found that this method could improve minority hiring without sacrificing selection quality, provided a test of general cognitive ability was also used.
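The sketch below shows the general idea of building a weighted composite from second-stratum subtest scores. The subtests and weights are invented for illustration; the study itself used a far more sophisticated procedure for choosing weights:

```python
# Hypothetical weights for standardized second-stratum subtest scores.
weights = {
    "acquired_knowledge": 0.5,   # e.g., vocabulary, reading comprehension, analogies
    "fluid_reasoning": 0.3,
    "processing_speed": 0.2,
}

def composite_score(subtest_scores, weights):
    """Weighted sum of an applicant's standardized subtest scores."""
    return sum(weights[name] * score for name, score in subtest_scores.items())

applicant = {"acquired_knowledge": 0.8, "fluid_reasoning": 0.2, "processing_speed": -0.1}
print(f"Composite: {composite_score(applicant, weights):.2f}")  # 0.44
```

Emphasizing abilities that are actually required by the job is what lets a composite like this maintain predictive quality while reducing Adverse Impact.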

 

BIG PICTURE TAKE-AWAYS

This research is particularly interesting for managers and recruiters because it provides a clear way forward in decreasing the possible Adverse Impact of company selection procedures, which helps to create a more diverse workforce.

Workplace diversity has been shown to have multiple benefits in terms of organizational outcomes. But you can also rest assured that using such weighted methods won’t decrease the quality of hires if the abilities are shown to relate to the job.

How Job Interviewers’ Selling Orientation Impacts Their Judgment


Publication: Academy of Management Journal
Article: Do Interviewers Sell Themselves Short? The Effect of Selling Orientation on Interviewers’ Judgments
Reviewed by: Susan Rosengarten

 

During our first meetings with potential clients, investors, colleagues, or romantic partners, our initial impressions and appraisals of their character influence the judgments we make about them. But at the same time that we’re evaluating others, we’re often “selling” ourselves, or trying to make ourselves seem more attractive.

Interviewers work in the same fashion: They ask prospective candidates as many questions as they can about their experiences and employment preferences to evaluate the candidate’s fit with both the requirements of the job and the existing team. During these interviews, hiring managers also do their best to “sell” the job and the company to interviewees by talking about the perks the company offers and how the candidate would benefit from joining the team.

The current research by Jennifer Carson Marr and Dan M. Cable examines whether a job interviewer’s selling orientation (their motivation to “sell” the job to an applicant) ultimately influences the accuracy and validity of his or her subsequent judgments about the applicant.

 

INTERVIEWERS’ SELLING ORIENTATION

These researchers designed two studies. In the first study they conducted mock interviews. Participants were assigned to one of two roles (interviewer or applicant), and interviewers were assigned to either a high or a low selling orientation condition.

The researchers then examined the relationship between the interviewers’ selling orientation and their accuracy in evaluating the applicants’ core self-evaluations. A person’s core self-evaluation is an appraisal of their own self-worth, which includes measures of self-esteem, self-efficacy, locus of control, and emotional stability.

The researchers found that interviewers in the high selling orientation condition were less accurate in evaluating applicants’ core self-evaluations than those in the low selling orientation condition.

 

HOW SELLING ORIENTATION AFFECTS JUDGMENT

In their second study, the researchers used field studies to further understand how selling orientation influences the validity of interviewer judgments with regard to applicants’ future success.

Participants in the first sample were interviewers interviewing aspiring MBA candidates for admission into MBA programs. Participants in the second sample were interviewers tasked with matching international teachers to various school districts in the United States.

The researchers found that, as the interviewers’ motivation to “sell” to job applicants increased, their accuracy in predicting the future success of these interviewees decreased.

 

BIG PICTURE TAKEAWAYS

The results of these studies have several important implications for HR professionals. The most significant of these is that formally separating the applicant attraction and evaluation processes may help interviewers make better selection decisions. This, ultimately, may lead to a stronger and more productive workforce.

Welcome to the Future: Investigating Mobile Devices as Assessment Platforms


Publication: International Journal of Selection and Assessment
Article: Establishing the Measurement Equivalence of Online Selection Assessments Delivered on Mobile versus Non-mobile Devices.
Reviewed by: Andrew Morris

There are very few areas of our lives that have not been affected by technological innovations. A vast number of people these days use their phones for virtually everything, from staying in contact with friends and family, to navigating a busy city center, to booking flights.

So it doesn’t seem out of the question that mobile phones might one day be used as a medium by which organizations could assess job applicants. And it appears as if that day may have already come.

A recent study conducted by a team of researchers (including one of I/O at Work’s own writers) looked at whether online selection assessments delivered and taken on mobile devices measure the same things as those taken on non-mobile devices, and whether mobile delivery could prove effective for selection purposes.

 

POSSIBLE ISSUES WITH MOBILE DEVICES AS ASSESSMENT PLATFORMS

There are strict guidelines for assessing individuals when the results will be used for selection purposes. But, until now, relatively few studies have rigorously investigated the measurement properties of cognitive ability assessments delivered on mobile devices.

There are numerous issues specific to mobile phones that could affect an individual’s results, including the size of the screen and whether question formats on various devices would be affected by environmental differences. In other words, would two people in completely different locations, such as a bus and a bedroom, have the same chance at acceptable scores?

To study the differences, researchers had job applicants take a series of selection tests on various devices. These tests included biodata assessments, a situational judgment test (SJT), multimedia work simulations, and cognitive assessments.

 

EXAMINING MEASUREMENT EQUIVALENCE

The results were mixed, showing that mobile devices may be an option for certain types of tests, but that certain parameters should be taken into account. For example, cognitive ability tests performed similarly across devices, but they fare better when they are untimed, multimedia in nature, and light on reading comprehension.

The study found that, thanks to the increasingly good quality of mobile screen resolutions, the advantage an applicant would gain from using other devices is greatly reduced. In other words, multimedia work simulations would be a sound option for job applicants accessing content on their mobile phones. The results are still tentative and require broader research, but they offer a promising glimpse into the assessment platforms of the future.

Despite the promise shown by mobile assessments, there were certain differences for some tests taken on different platforms. With the SJT in particular, results for applicants on mobile phones were lower than for test takers on other devices. The reason is not clear, but it may be due to the extra scrolling and different reading demands of mobile devices.

 

THE BIG PICTURE TAKEAWAY

This research is significant both for companies looking to expand the platforms they make available to applicants and for job seekers wanting an edge over their competitors.

Although the study showed a disadvantage for mobile applicants in some situations, there’s no doubt that the differences the researchers found will eventually be narrowed by technological advancements, opening up new possibilities for companies and potential employees alike.