November-December 2013

Soft Skills for the Workplace

Two years ago, in the middle of the darkest days of the recession, ABC's This Week hosted a panel of recent college graduates and a pair of industry leaders. After hearing from the graduates about the difficulties of finding a job in a tight, competitive market, host Christiane Amanpour addressed the employers:

Amanpour: Let me turn to you both. You've now listened to [the graduates]. You see what they've studied. You've heard their prospects. Mort, as the owner of a real estate company, as the publisher of newspapers and magazines, what do you think they need to do? And are they hirable, [given] what you've just heard right now?

Zuckerman: Well, I don't know enough about their individual skills and capacities, but this is the worst atmosphere for employment that we've had in 50 or 60 years. I mean, just think of the fact, in the '70s, '80s, and '90s, the United States created over 20 million jobs in each one of those decades. In the first decade of this century, we created zero jobs.

If I were hiring today, … the one thing that I [would] look for more than anything else is some evidence of determination, which to me is the most important quality in terms of how people will do in their career. (ABC News, 2011, p. 13)

In this exchange, Mort Zuckerman—co-founder, chairman, and CEO of Boston Properties; owner and publisher of the New York Daily News and of U.S. News & World Report; and former owner of The Atlantic and Fast Company—identifies determination as the quality that best predicts success in the workforce. It was not that long ago that many management consultants, economists, industrial-organizational psychologists, and laypeople believed that cognitive skill was the single most important predictor.

What happened to change that?

The Importance of Soft Skills

Until quite recently, the predominant belief at the policy level, in education at all levels, and in workforce settings was that cognitive abilities were the ones that most mattered. This led to the deployment of large-scale efforts to assess those skills. The National Assessment of Educational Progress (NAEP, NCES), administered every year in all 50 states and several other jurisdictions in the US, and the Program for International Student Assessment (PISA, OECD), administered every three years in over 70 countries, were initiated primarily to compare states and countries on the cognitive abilities of their schoolchildren (e.g., in reading, mathematics, science, and problem solving).

The centerpiece of the No Child Left Behind Act of 2001 was “accountability for results,” which meant that “student progress and achievement will be measured according to tests that will be given to every child, every year.” By tests, the Act was referring to cognitive tests.

At the postsecondary level, college-placement testing, which determines whether students are ready for credit-bearing college mathematics and English courses, was, and to a large extent still is, based exclusively on performance on cognitive tests in those subjects. Students who get passing scores on the mathematics placement test go into college-level, credit-bearing mathematics courses; the rest go into developmental courses.

Arguments about the validity and fairness of standardized cognitive admissions tests such as the SAT, ACT, and GRE have dominated discussions about higher education admissions policies. In academic circles and on editorial pages, there have been national debates about the importance of cognitive ability, its heritability, race/ethnic differences, the “bell curve,” and the cognitive-skills shortage.

In the workplace, companies and the military have historically focused selection testing almost exclusively on cognitive abilities, in part because a literature in industrial-organizational psychology attested to their preeminent importance in identifying workers most likely to succeed in training and on the job (Schmidt & Hunter, 1998). A generation was taught that other variables, such as personality, were unrelated to workforce outcomes or to just about anything else.

There had been occasional nods to the importance of personal qualities in education and the workplace (Willingham & Breland, 1982), but these were rare. The situation only began changing in the 1990s, when psychology began to coalesce around a five-factor model of personality (Goldberg, 1990). This led to rapid acceptance and expansion of the notion that personality mattered, and studies began contributing to an accumulating knowledge base about its importance.

By the mid-2000s, researchers were able to link wide areas of human endeavor and outcomes to personality (Roberts, Kuncel, Shiner, Caspi, & Goldberg, 2007): Personality measures were shown to predict mortality, divorce, occupational attainment, health behaviors, drug use, alcoholism, managerial success, leadership effectiveness, procrastination, creativity, job performance, absenteeism, team performance, and job satisfaction—to name just a few. The meta-analytic list of predictive relationships and their magnitudes rivaled and in some cases exceeded similar analyses made a decade earlier for predictions based on cognitive ability (e.g., Gottfredson, 1997).

Particularly relevant were such studies showing that the big-five personality factors—most often conscientiousness (i.e., the trait of striving, being organized, and working hard)—predicted both workplace (Ones, Dilchert, Viswesvaran, & Judge, 2007) and academic success (Poropat, 2009). Other meta-analyses suggested additional non-cognitive predictors of school performance (grades and retention), such as having academic goals, institutional commitment, social support and involvement, academic self-efficacy and self-concept, conscientiousness, a tendency to procrastinate, a need for cognition, grade goals, time management skills, and persistence/effort regulation (Richardson, Abraham, & Bond, 2012).

Meanwhile, new conceptions of human-capital theory began appearing in the economics literature. Human capital is a worker's set of skills, broadly defined, that enhance productivity. These can include cognitive skills, abilities, knowledge, dispositions, attitudes, and interests, and they develop through innate ability, education (e.g., years in school, quality of schooling), training, medical care, parenting, and other experiences.

What was new was an increased awareness of the importance of non-cognitive skills. First, Heckman and Rubinstein (2001) showed that students who dropped out of high school but received GEDs possessed cognitive skills equal to those of high school graduates (based on standardized cognitive test scores) but had poorer performance in the workforce (e.g., lower wages, higher absenteeism, more unemployment, more legal troubles). The researchers attributed their relative lack of success to their lower non-cognitive skills, as reflected in their failure to persist to high school graduation.

The economists Samuel Bowles, Herbert Gintis, and Melissa Osborne showed that cognitive skills accounted for only 20 percent of the educational-attainment effects on labor-market outcomes (i.e., more schooling leads to higher employment and wages). Here the interpretation was that schooling develops non-cognitive as well as cognitive skills, and these non-cognitive skills drive workplace success (Levin, 2012).

A couple of recent studies make a similar point. In one, Segal (2012) showed that, controlling for cognitive test scores and family characteristics, teachers' ratings of 8th-grade male students on a non-cognitive five-item checklist—is the student “frequently tardy?” “frequently absent?” “consistently inattentive?” “rarely completes his homework?” “frequently disruptive?” (National Educational Longitudinal Study, or NELS:88) —predicted educational attainment.

The ratings also predicted workplace earnings 12 years later, over and above cognitive test scores. This was true regardless of educational attainment, whereas cognitive test scores predicted earnings only for students with postsecondary degrees.
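
To picture what "over and above cognitive test scores" means, it may help to see both predictors entered into a single regression. The sketch below uses entirely synthetic data and invented coefficients (it is not Segal's model or data); it simply shows how the non-cognitive rating's coefficient is estimated while holding the cognitive score constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic data only: a cognitive test score, a teacher's non-cognitive
# rating, and log earnings that depend on both (all coefficients invented).
cognitive = rng.normal(size=n)
noncognitive = 0.3 * cognitive + rng.normal(size=n)      # modestly correlated
log_earnings = 0.20 * cognitive + 0.15 * noncognitive + rng.normal(size=n)

# Enter both predictors at once; the coefficient on the non-cognitive rating
# is its association with earnings over and above the cognitive score.
X = np.column_stack([np.ones(n), cognitive, noncognitive])
coef, *_ = np.linalg.lstsq(X, log_earnings, rcond=None)
print(dict(zip(["intercept", "cognitive", "noncognitive"], coef.round(2))))
```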

Lindqvist and Vestman (2011) had a similar finding: They tracked 14,000 Swedish 18- to 19-year-old military enlistees who had been given both a two-hour cognitive test and a 25-minute non-cognitive interview. The interview led to a rating (on a nine-point scale) of their willingness to assume responsibility, independence, outgoing character, persistence, emotional stability, initiative, social skills, and lack of personality disorders.

The researchers found that both the cognitive and non-cognitive measures predicted employment outcomes (earnings and unemployment) of those same men, now 32 to 41 years old. However, the non-cognitive measures predicted outcomes at all levels of educational attainment, while the cognitive score predicted them only for those who were above the median. The cognitive measure was a stronger predictor of educational attainment, but, controlling for educational attainment, the non-cognitive measure was a stronger predictor of earnings and employment.

This and other research has begun to raise national awareness of soft skills in this decade. For example, Paul Tough's 2012 best seller, How Children Succeed, contrasted the “cognitive hypothesis” that “success today depends primarily on cognitive skills—the kind of intelligence that gets measured on I.Q. tests” with a new view that success has more to do with character skills such as perseverance, grit, curiosity, optimism, and self-control. According to Tough, these also happen to be more malleable than IQ. A plea to include non-cognitive as well as cognitive skills in the educational conversation could also be seen in a 2012 report by the National Research Council entitled Education for Life and Work: Developing Transferable Knowledge and Skills in the 21st Century.

This cultural shift is apparent not only in Mort Zuckerman's comments on This Week but also in the results of several large-scale surveys that asked employers which skills are most important for workforce success. A 2012 study by Millennial Branding found “communication skills,” a “positive attitude,” being “adaptable to change,” and “teamwork skills” to be the four most important traits employers were looking for when they hired. Another major employer survey, sponsored in 2006 by the Conference Board and others and entitled “Are they really ready to work?,” identified “professionalism/work ethic,” “teamwork/collaboration,” and “oral communications” as the top three “very important” skills for job success for new workforce entrants at all three education levels: high school graduates, two-year college graduates, and four-year college graduates.

Can Soft Skills Be Taught?

The finding that cognitive test scores accounted for only 20 percent of educational attainment's effect on earnings and employment suggests that one possibly unintended accomplishment of schooling is to teach non-cognitive skills. Additional support for this idea comes from studies of early-childhood intervention programs that, although targeting cognitive skill development, have even greater effects on non-cognitive skills.

For example, Barnett (2011) examined several preschool programs for disadvantaged students: the High/Scope Perry Preschool program, the Abecedarian program, and the Chicago-Child Parent Centers program. All involved a random assignment of students to the program at an early age and followed them through their 20s and later. Barnett found mixed results for the programs' success in boosting the cognitive test scores (IQs) of participants but clear results with regard to many other outcomes, such as educational attainment, avoiding arrests and legal trouble, avoiding welfare, owning a home, and maintaining good health habits.

If preschool and regular schooling enhance a student's non-cognitive skills as a byproduct, can targeted interventions provide an even greater boost? This has been the theme of work by the Collaborative for Academic, Social, and Emotional Learning (CASEL), an organization with a mission “to establish social and emotional learning as an essential part of education” (http://casel.org/about-us/mission-vision/).

It does this by advancing the science of social-emotional learning, expanding evidence-based practice, and strengthening the field. CASEL defines the core social-emotional competencies as self-awareness, self-management, social awareness, relationship skills, and responsible decision-making (i.e., considering the social norms involved in and the ethics, safety, and consequences of making decisions).

CASEL performed a trio of meta-analyses: one an examination of 69 after-school programs, another of 213 regular school-based studies, and a third of 80 studies of children selected for signs of social-emotional problems (Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, 2011). The targeted programs were effective in enhancing the students' academic and social-emotional skills.

The meta-analyses also suggested that particular program features could be considered best practices because they were associated with program success. One of these was program implementation adequacy, which seems obvious but is sometimes overlooked in reviews such as these. The others are captured by the acronym SAFE: Effective programs were

  • sequenced, involving a planned set of activities to be executed step by step;

  • active, requiring active learning activities such as role plays and behavioral rehearsals;

  • focused, devoting sufficient time for developing social and emotional skills; and

  • explicit, targeting specific social and emotional skills.

A different take on the trainability of soft skills was provided in a recent meta-analysis by Brent Roberts and colleagues, who had shown in earlier research that personality tended to change over a lifespan. Self-confidence, warmth, self-control, and emotional stability all tend to increase with age, particularly in young adulthood but continuing through middle and old age (Roberts, Walton, & Viechtbauer, 2006).

Particular events in one's life seem to be associated with significant changes in personality: Successful careers are associated with increases in emotional stability and conscientiousness, remarriage is associated with a reduction in neuroticism, and engaging in negative workplace behaviors is associated with decreases in conscientiousness and emotional stability (Roberts & Mroczek, 2008).

Can personality enhancements be induced through specific interventions, such as psychotherapy? In a paper presented at the 2013 meeting of the Association for Psychological Science, Brent Roberts identified 144 studies (with 15,047 participants) that included personality measures as pretests and posttests for both clinical (e.g., for depression) and nonclinical (e.g., for eating disorders) samples, with an average intervention duration of 28 weeks. He found that interventions changed personality factors by about half a standard deviation on average (e.g., from the 50th to the 67th percentile) and that the change did not fade over periods of as long as five years.
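
The percentile figure can be checked against the normal curve: a gain of d standard deviations moves someone from the 50th percentile of the original distribution up to the normal-curve percentile corresponding to d. A minimal sketch of that arithmetic (the specific d values are illustrative):

```python
from statistics import NormalDist

# Moving from the mean (50th percentile) up by d standard deviations
# lands a person at the Phi(d) percentile of the original distribution.
for d in (0.45, 0.50):
    pct = 100 * NormalDist().cdf(d)
    print(f"gain of {d} SD -> about the {pct:.0f}th percentile")
# A gain of roughly half a standard deviation corresponds to about the
# 67th-69th percentile, consistent with the figure cited above.
```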

While the interventions Roberts examined affected many personality traits, emotional stability was the most altered, perhaps because of the way the interventions were chosen (e.g., therapy for anxiety or depression is likely to target emotional stability). Interventions of a different nature (e.g., executive or life-skills coaching) might be expected to address different personality traits, such as conscientiousness, drive, and organization.

Finally, corporate training, a $50-billion industry, is concerned to a considerable extent with soft skills. In their meta-analysis of the effects of corporate training, Arthur, Bennett, Edens, and Bell (2003) identified 123 training programs targeting interpersonal skills that had an effect size even larger than the intervention effect size reported by Roberts. The programs used a wide variety of formats (lecture, audiovisual, discussion) and focused on various outcomes (learning the interpersonal skill, transferring it to the job, and seeing improvement in workplace performance as a result).

Together these studies show that effective programs that are already in place, from preschool to the workplace, can develop and increase soft skills. Given the newfound recognition of the importance of such skills, it is likely that this education and training will prove to be an active area of development in both education and the workplace in the coming decade.

What Can We Do with Soft Skills Assessments?

Soft-skills assessments are commonly included in employee recruiting, prescreening, and selection, and there is a major human-resources consulting industry around these uses. Companies such as SHL, an industry leader, market tests, simulations, and interview tools to capture biodata and measure personality and behavior, situational judgment, motivation, dependability, and safety.

According to a 2001 survey by the American Management Association, 13 percent of employers used a personality test, and almost all Fortune 500 companies did so. It is likely that those numbers are higher today. In 2009 the United States Department of Defense began to use a personality test called the Tailored Adaptive Personality Assessment System (TAPAS) for screening military recruits; over half a million have been tested so far.

Colleges have also recently begun to use personality assessments to help in some admissions decisions. A few years ago, the Educational Testing Service began administering the Personal Potential Index (PPI) to supplement the GRE for graduate school admissions (Kyllonen, 2008). The PPI measures six factors: knowledge and creativity, communication skills, teamwork, resilience, planning and organization, and ethics and integrity. Current users range from Notre Dame Business School to the American Dental Education Association. A major multi-institutional validity study is currently underway, with results expected in 2014.

A study of law-school students, graduates, and practicing lawyers by Berkeley Law School professor Marjorie Shultz and psychology professor Sheldon Zedeck led to the development of a soft-skills assessment for law-school admissions—designed to measure, among other factors, communications, influencing and advocating, strategic planning, self-management, conflict resolution and negotiation skills, networking skills, community involvement, integrity, stress management, passion, diligence, and self-development.

Several experiments with soft-skills assessment for higher education admissions have been conducted in the past few years. In a series of College Board studies, SAT-taking students were given situational judgment and biodata items designed to measure a variety of soft skills—including multicultural tolerance, leadership, interpersonal skills, social responsibility, adaptability, perseverance, and ethics—that were shown to have unique predictive validity for certain college student outcomes (Schmitt, 2012).

One exciting development is the use of soft-skills assessment in college-placement testing. Traditionally, placement testing has been strictly cognitive—students take a mathematics test to determine their readiness for college-level mathematics coursework. Those who do not make the cutoff are assigned to take a non-credit-bearing developmental or remedial course prior to being eligible for the credit-bearing course.

But perhaps non-cognitive skills, such as motivation and determination, can compensate to some extent for deficient mathematics skills. A determined student is likely to do what it takes to pass an entry-level course, whether that involves doing extra homework, studying nights and weekends, or working with a tutor.

For this reason, ETS is currently evaluating whether non-cognitive information could supplement cognitive test results for placement testing. The assessment, ETS's SuccessNavigator, measures academic skills and soft skills in the areas of commitment, self-management, and social support (Markle, Olivera-Aguilar, Jackson, Noeth, & Robbins, 2013).

In a recent issue of Change, Alexander McCormick and colleagues discussed how the National Survey of Student Engagement (NSSE) is being used to develop student typologies. “Disengaged” students, they found, “had lower first-year GPAs, perceived learning gains, and persistence to the second year” than “maximizers”—“their most-engaged peers.”

Assessment Methods

The overwhelming method of choice for soft-skills assessment, in both scientific research and practice, has been the simple self-rating scale. A student or employment applicant might be asked to indicate their level of agreement with a statement such as “I meet my deadlines” on a scale of (a) strongly agree, (b) agree, (c) disagree, or (d) strongly disagree.

For many low-stakes applications, there is no incentive for respondents to lie, so responses are probably reasonably valid indicators of at least what they believe about themselves. However, if scores are used for something like school admissions or employment screening, there is a strong incentive for self-reporters to make themselves look good, and scores are much less trustworthy.

For this reason, research has focused on several alternatives to simple rating scales. One is a rating by another person. A meta-analysis by Connelly and Ones (2010) showed that compared to self-ratings, ratings by others were more accurate, less biased, and more predictive of future outcomes.

Letters of recommendation, which are widely used in higher education admissions and employee selection, are essentially ratings by others, albeit in a non-standardized format. ETS's PPI is essentially a standardized letter of recommendation. It asks an evaluator to rate a graduate-student applicant on six dimensions and to provide open-text comments on each dimension to support the ratings.

Ratings by others have drawbacks, though: finding someone to do a rating is not always straightforward, recommendations are often inflated, and raters sometimes disagree considerably with one another. So there is still interest in assessments done by the person who is the target of the assessment. This brings us back to self-assessments.

A form of self-assessment that might not be as susceptible to faking as the simple rating scale is the pairwise-preference format. This involves presenting two, three, or four statements to examinees and asking them to indicate which is the most true. For example, examinees might be asked to choose between “I meet my deadlines,” and “I work well with others in teams.” In this case, neither statement seems to be clearly superior to the other in terms of what an employer might be looking for.

This format is the basis for the TAPAS assessment and is the reason why the US Department of Defense, after half a century of research, opted to go operational with a pairwise-preference soft-skills assessment. Their conclusion is in agreement with a meta-analysis that compared a single-statement personality measure with several forced-choice ones (Salgado & Tauriz, 2012).

The forced-choice measures that were studied by Salgado and Tauriz were ipsative (all statements are paired with all other statements), quasi-ipsative (some statements are paired with others, but not all), and normative (only statements measuring the same dimension are paired with each other, such as “I work hard” vs. “I work too hard”). This is a technical distinction, but the important finding was that the quasi-ipsative format was superior to the others.
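
A rough way to see the distinction is in how pairs are assembled from a pool of statements keyed to dimensions. The sketch below is purely illustrative: the statements, dimensions, and pairing rules are invented and do not reflect TAPAS or any operational instrument.

```python
from itertools import combinations

# A toy statement pool, each statement keyed to the dimension it measures.
statements = {
    "I meet my deadlines": "conscientiousness",
    "I keep my workspace organized": "conscientiousness",
    "I work well with others in teams": "teamwork",
    "I stay calm under pressure": "emotional stability",
}

# Ipsative-style pairing: every statement is paired with every other one.
ipsative = list(combinations(statements, 2))

# Normative-style pairing: only statements measuring the same dimension
# are paired with each other.
normative = [(a, b) for a, b in ipsative if statements[a] == statements[b]]

# Quasi-ipsative pairing: some cross-dimension pairs, but not all of them
# (here, an arbitrary subset stands in for a principled selection).
quasi_ipsative = ipsative[:3]

print(len(ipsative), len(normative), len(quasi_ipsative))   # 6 1 3
```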

Another approach that has been growing in popularity is the situational judgment test (SJT). In this format, respondents are given a situation such as “You have been assigned a team project, but one of the team members, call him ‘Joe,’ has a bad attitude and seems determined to thwart the team's efforts.” Then they are given a series of possible responses, with instructions to select the one that would be most appropriate. These might include

(a) tell other team members to ignore Joe, (b) confront Joe and threaten to tell the boss about his behavior, (c) speak to Joe in front of the team and encourage him to contribute positively to the effort, or (d) talk to Joe privately and encourage him to participate by telling him how the team will only be successful if he is involved.
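
As a sketch of how an SJT item like this might be represented and scored, consider the snippet below; the partial-credit key is invented for illustration (operational SJTs derive their keys from expert judgment or empirical data).

```python
# Hypothetical representation of the situational judgment item above.
sjt_item = {
    "situation": ("You have been assigned a team project, but one team member, "
                  "'Joe', has a bad attitude and seems determined to thwart "
                  "the team's efforts."),
    "options": {
        "a": "Tell other team members to ignore Joe.",
        "b": "Confront Joe and threaten to tell the boss about his behavior.",
        "c": "Speak to Joe in front of the team and encourage him to contribute.",
        "d": "Talk to Joe privately and encourage him to participate.",
    },
    # Invented key: full credit for the keyed best response, partial credit
    # for a plausible one, no credit for the rest.
    "key": {"a": 0.0, "b": 0.0, "c": 0.5, "d": 1.0},
}

def score_response(item: dict, choice: str) -> float:
    """Return the credit assigned to the selected option."""
    return item["key"].get(choice, 0.0)

print(score_response(sjt_item, "d"))   # 1.0
```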

SJT items such as these were used in the Schmitt (2012) and law school studies referred to previously. Their popularity might be attributed to their combining authenticity with reduced susceptibility to faking. SJTs also are amenable to video presentation of situations and alternative response formats, such as speaking, although the vast majority of those available today are written and require multiple-choice responses.

Interviews, another form of assessment, have long been used in educational admissions and employment selection. An attractive feature of interviews is that both candidates and employers like them. The latter often feel that they cannot get a real sense of the person without an interview, preferably face to face. But interviews are expensive and typically neither standardized nor reliable.

Behavioral interviews are a way to mitigate the lack of standardization. They are based on the soft skills an employer is interested in evaluating, such as drive, enthusiasm, or customer orientation. The interview consists of questions designed to elicit evidence that the applicant possesses those skills.

For example, the applicant might be asked to “describe a situation in which you had to meet a deadline but had competing commitments,” or “a time when you felt extremely excited about an event at work.” Often such interviews are accompanied by scoring rubrics that require the interviewer to evaluate the candidate on several scales pertaining to the responses expected.
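
One way to make such an interview more standardized is to tie each prompt to the skill it targets and to a few anchored rating levels. The rubric below is a generic, hypothetical sketch, not any particular employer's instrument.

```python
# Hypothetical behavioral-interview rubric: each prompt targets one soft
# skill and is rated by the interviewer against anchored levels (1-5).
rubric = [
    {
        "skill": "drive",
        "prompt": ("Describe a situation in which you had to meet a deadline "
                   "but had competing commitments."),
        "anchors": {
            1: "Vague description; no concrete example of managing the conflict.",
            3: "Concrete example; prioritized tasks and met the deadline.",
            5: "Concrete example; anticipated the conflict, planned ahead, and delivered.",
        },
    },
    {
        "skill": "enthusiasm",
        "prompt": "Describe a time when you felt extremely excited about an event at work.",
        "anchors": {
            1: "Generic answer with little detail.",
            3: "Specific event described with genuine engagement.",
            5: "Specific event plus evidence of sustained follow-through.",
        },
    },
]

def overall_score(ratings: dict) -> float:
    """Average the interviewer's per-skill ratings into a single score."""
    return sum(ratings.values()) / len(ratings)

print(overall_score({"drive": 4, "enthusiasm": 3}))   # 3.5
```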

But behavioral interviews are as expensive as any other kind of interview. Employers are increasingly attempting to reduce those costs with technology, using remote interviewing methods (e.g., teleconferences or video-conference calls). In the near future, asynchronous interviewing—e.g., where a candidate uploads a video-recorded response to a question presented through a website—may become the norm. Such systems are already in wide use.

The next steps will be to score such interviews with expert raters, in much the same way essay tests from standardized exams such as the SAT or GRE are scored today. Automated scoring is likely to follow soon thereafter.

There is considerable interest in the idea of a standardized soft-skills assessment that avoids the problems of ratings. Standardized tests of soft skills, such as the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), ask examinees to identify the emotions (e.g., happiness, fear, sadness) expressed in a picture of a face, to name the emotion one might feel if given additional work, or to recognize how much a particular action (e.g., making a list) might affect one's mood. Such assessments are beginning to be used in industry. It is likely that efforts to develop such measures will continue.

Future Developments

In the latter part of the 20th century, behavioral scientists and society more generally adopted a cultural belief that cognitive ability was the most significant determinant of educational and workforce outcomes. This led to efforts to raise students' test scores, the promotion of teachers who were successful in doing so, and heavy if not exclusive reliance on test scores for admissions and employment screening.

But behavioral science research in psychology and economics suggests that non-cognitive factors—soft skills such as motivation, work ethic, teamwork, organization, cultural awareness, and effective communication—play a role that is as important as, or even more important than, cognitive ability in determining success in school and in the workplace.

So the 21st century is becoming the era in which we recognize the importance of soft skills, the role education plays in developing those skills, and the way they evolve throughout the life cycle. And we are developing new education, training, and intervention methods and new assessments in recognition of this importance.

Resources

1. Arthur Jr., W., Bennett Jr., W., Edens, P. S. and Bell, S. T. (2003) Effectiveness of training in organizations: A meta-analysis of design and evaluation features. Journal of Applied Psychology 88, pp. 234-245.

2. Barnett, W. (2011) Effectiveness of early educational intervention. Science 333, pp. 975.

3. Connelly, B. S. and Ones, D. S. (2010) Another perspective on personality: Meta-analytic integration of observers' accuracy and predictive validity. Psychological Bulletin 136, pp. 1092-1122.

4. Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D. and Schellinger, K. B. (2011) The impact of enhancing students' social and emotional learning: a meta-analysis of school-based universal interventions. Child Development 82, pp. 405-432.

5. Goldberg, L. R. (1990) An alternative “description of personality”: The big-five factor structure. Journal of Personality and Social Psychology 59, pp. 1216-1229. doi:10.1037/0022-3514.59.6.1216.

6. Gottfredson, L. (1997) Why g matters: the complexity of everyday life. Intelligence 24, pp. 79-132.

7. Heckman, J. and Rubinstein, Y. (2001) The importance of non-cognitive skills: Lessons from the GED testing program. American Economic Review 91 (Papers and Proceedings), pp. 145-149.

8. Kyllonen, P. C. (2008) The research behind the ETS Personal Potential Index (PPI). Princeton, NJ: Educational Testing Service.

9. Levin, H. M. (2012) More than just test scores. Prospects: Quarterly Review of Comparative Education 42, pp. 269-284.

10. Lindqvist, E. and Vestman, R. (2011) The labor market returns to cognitive and non-cognitive ability: Evidence from the Swedish enlistment. American Economic Journal: Applied Economics 3, pp. 101-128.

11. Markle, R., Olivera-Aguilar, M., Jackson, T., Noeth, R. and Robbins, S. (2013) Examining evidence of reliability, validity, and fairness for the SuccessNavigator assessment (Research Report ETS RR-13-12). Princeton, NJ: Educational Testing Service.

12. Ones, D. S., Dilchert, S., Viswesvaran, C. and Judge, T. A. (2007) In support of personality assessment in organizational settings. Personnel Psychology 60, pp. 995-1027.

13. Poropat, A. E. (2009) A meta-analysis of the five-factor model of personality and academic performance. Psychological Bulletin 135, pp. 322-338.

14. Richardson, M., Abraham, C. and Bond, R. (2012) Psychological correlates of university students' academic performance: A systematic review and meta-analysis. Psychological Bulletin 138, pp. 353-387.

15. Roberts, B. W., Kuncel, N. R., Shiner, R., Caspi, A. and Goldberg, L. R. (2007) The power of personality: The comparative validity of personality traits, socioeconomic status, and cognitive ability for predicting important life outcomes. Perspectives on Psychological Science 2, pp. 313-345.

16. Roberts, B. W. and Mroczek, D. K. (2008) Personality trait stability and change. Current Directions in Psychological Science 17, pp. 31-35.

17. Roberts, B. W., Walton, K. and Viechtbauer, W. (2006) Patterns of mean-level change in personality traits across the life course: A meta-analysis of longitudinal studies. Psychological Bulletin 132, pp. 1-25.

18. Salgado, J. F. and Táuriz, G. (2012) The Five-Factor Model, forced-choice personality inventories and performance: A comprehensive meta-analysis of academic and occupational validity studies. European Journal of Work and Organizational Psychology. Retrieved from http://dx.doi.org/10.1080/1359432X.2012.716198

19. Schmidt, F. L. and Hunter, J. E. (1998) The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin 124, pp. 262-274.

20. Schmitt, N. (2012) Development of rationale and measures of non-cognitive college student potential. Educational Psychologist 47, pp. 18-29.

21. Segal, C. (2012) Misbehavior, education, and labor market outcomes. Journal of the European Economic Association 11, pp. 743-779.

22. Willingham, W. W. and Breland, H. M. (1982) Personal qualities and college admissions. New York, NY: College Entrance Examination Board.

Patrick C. Kyllonen (pkyllonen@ets.org) is the senior research director of the Center for Academic and Workforce Readiness and Success at the Educational Testing Service. For over ten years, the Center has been conducting research to identify important 21st-century skills and creating approaches for developing and measuring them.
