May-June 2009

The New Guys in Assessment Town


I  guess I haven’t been paying attention. For the past twenty years, most of the assessment experts I know have been banging the drum of faculty involvement and local responsibility. Yes, they say (and I agree), the demand for assessment may come from outside the institution, but the work must take shape inside, with real engagement by faculty and others whose job it is to ensure that students learn what the institution promises. The most notable success stories—the ones showcased at national conferences and written about in the extensive literature—are from places where assessment has been an occasion for faculty to come together; hash out the most important goals for students; develop projects, instruments, and approaches to determine whether those goals are being met; and enter into a cycle of ongoing improvement. This is really hard work.

But it turns out there’s a new kind of help on the way, from outside the institution. Of course many campuses have engaged external consultants to jumpstart the  assessment process; that’s not new.  And neither is the use of tests and instruments designed by others. What’s new is an influx of for-profit assessment providers offering tools and services that promise, variously, to make assessment easier, faster, less intrusive, more useful, and/or more cost effective.

Some of these firms were founded in the last two or three years and are just getting started. Others have a longer, already prosperous history of work in other aspects of education (like course management) and are now moving into the assessment niche. A few have their roots in other industries—like quality assurance in health care—and have recently added student  assessment to the mix. Some are run by academics or former academics and some by people with a corporate background. 

Their capacities vary as well. One has a staff of nine people, several of whom are part time; another has multiple offices across the country. Some tend to specialize, working largely with community colleges in a couple of cases and with business schools in another. One has managed to attract a number of elite research universities.

Motivated in large part by accreditation pressures, campuses are turning to these new providers for assistance with a wide range of assessment-related tasks and processes. Some offer help in formulating student learning outcomes—and in bringing (as one says) “the science of learning” to “the art of teaching.” Several are in the rubric-development business; a few actually score student work and generate outcomes data. Others (the largest number, so far as I could see) sell and support software for managing, maintaining, and reporting assessment data. Some provide electronic platforms for portfolio assessment. And quite a number offer several of these “solutions” (to use a word that appears prominently in their marketing materials). According to long-time assessment watcher Peter Ewell, they are also “all over the map with respect to technical quality.”

My purpose here is not to provide a consumers’ guide but to survey the general lay of the land, look at a few examples based on interviews with both users and providers, and reflect on where this new development is coming from and where it might take us. Can these for-profit providers make assessment easier? Is easier better? How do we ensure that the “solutions” they offer don’t short-circuit important campus deliberations? Will they facilitate more productive uses of data? Make information from the classroom more (or less) visible and important in the assessment process? Help higher education present itself to outside audiences more effectively? 

In exploring these questions, I’ve tried to put aside my own prejudices and explore what’s out there in the spirit of a reporter. So, what did I find?

New Tools for Managing Data
It turns out that some of these firms are right here in my own backyard in California, as I discovered at a community college conference in the state last fall. I used the occasion to pick up vendors’ materials, listen to their spiels, and talk with individuals from colleges using—or thinking about using—their services. What was clear is that many campuses are in the market for better ways to manage and report on assessment data, and they’re looking for help.         

One resource they might find is TracDat, a product of Nuventive, with headquarters in Pittsburgh and offices around the country, including one in Menlo Park, California, just down the road from Carnegie. Claiming to be “The Leading Enterprise Outcomes Assessment Solution,” TracDat’s mission is “managing continuous improvement.” 

According to Scott Johnson (the rep from Menlo Park, whose session I attended at the community college conference), over 200 institutions in 44 U.S. states, Guam, Trinidad and Tobago, and Saudi Arabia are now using TracDat. Most of them are four-year institutions, but there’s a long list of community college clients as well. Nuventive has been at it since 2000, so they are relative veterans in what Inside Higher Ed called “the new assessment market” (January 2007).  

One of TracDat’s  most experienced users is Mt. San Antonio College (Mt. SAC), a two-year institution in Walnut, California, which began using the system  in spring 2005. According to Priyadarshini Chaplot, an analyst in the Office of Research and Educational Effectiveness, the college had a carefully designed program review process that incorporated planning and assessment. But the annual reports required by this process were producing “heaps of paper” while failing to track trends and developments over time. “It’s like our departments were starting anew every year,” Chaplot says. “We wanted to find a way to house the data that gave us access to what was done in the past,” which meant moving from discrete paper reports to an electronic database. Having determined that a homegrown system would be too expensive, the campus began to look at external solutions. TracDat was selected because, Chaplot explains, “it had the capability to mirror the program review process that was already in place.”   
 
More concretely, what TracDat makes possible is “vertical integration and alignment.”  The focus is not on the learning of individual students (Nuventive sells an electronic portfolio platform for that purpose), and faculty are described on the website as “intermittent” rather than “core” users. Rather, the system is designed to map the alignment of goals and outcomes across levels of the curriculum. So, Chaplot says, if one of the institution’s general education goals is critical thinking, the system makes it possible to call up all the courses and programs that assess student performance on that outcome. Access to this big picture, Chaplot says, “helps the college understand the relationship between planning, student learning outcomes, and other institutional processes.”       

Mt. SAC is now working its way through a training process that will prepare at least one person in every department to use TracDat. Starting in the 2009-2010 academic year, the system will be the sole mechanism for completing the program review process—which is to say, mandatory. One big benefit will come at accreditation time. Like just about everyone I spoke to, Chaplot points to the benefits of being able to share data and generate reports in response to accreditation guidelines. In fact, TracDat’s Scott Johnson is now working with the two-year-college arm of the Western Association of Schools and Colleges to build in features tailored to WASC reporting requirements.  
     
One of TracDat’s newer competitors is eLumen, launched in 2002 by David Shupe, previously system director of academic accountability for the Minnesota State Colleges and Universities and vice president for academic affairs at Inver Hills Community College. Shupe is a high-energy guy with a huge enthusiasm for what he’s doing, which he describes as bringing together student learning outcomes data at the level of the institution, program, course, and student support services so that “the data flows between and among these levels” and “the whole can be more than the sum of the parts.” eLumen is now being used on about 40 campuses, many of them community colleges.

Shupe’s background as an academic was an important factor in Pasadena City College (PCC)’s decision to use eLumen.  According to Carol Curtis, who coordinates PCC’s outcomes and assessment efforts, the college looked at several systems and chose eLumen in part because it was developed “by people who were once academics” and reflects “a really good grasp of how faculty think at the course level.” 

Indeed, what most distinguishes eLumen from TracDat (and many of the data-management systems available for assessment) is the focus on individual student performance in individual courses. Like its competitors, eLumen maps outcomes vertically across courses and programs, but its distinctiveness lies in its capacity to capture what goes on in the classroom. Student names are entered into the system, and faculty use a rubric-like template to record assessment results for every student on every goal. The result is a running record for each student available only to the course instructor (and in some cases to the students themselves, who can go to the system to get feedback on recent assessments). What comes along with all of this, of course, is the need for real engagement by faculty, who are required (or invited, if you will) to be much more than TracDat’s “intermittent users.” 

Depending on whom you ask, this level of involvement by faculty is a boon or an intrusion. At Chabot College in Hayward, California, English instructor Katie Hern confessed, “I’m a little wary. It seems as if, in addition to the assessment feedback we are already giving to students, we might soon be asked to add a data-entry step of filling in boxes in a centralized database for all the student learning outcomes. This is worrisome to those of us already struggling under the weight of all that commenting and essay grading.”

It’s possible that all grading might be done electronically in the future, Hern concedes. But this would represent “a major change for most English teachers, and not necessarily a positive one. My question would be, does this really benefit students’ learning, or is it simply about external accountability mandates?”

Back at Pasadena City College, which has a  slightly longer experience with eLumen than Chabot, the faculty experience seems to be more positive—at least for some users. Carol Curtis, who not only directs work on student learning outcomes but teaches English as a second language, used eLumen in her courses and found it “easy” and “nice to see all the results in one place.” Though recognizing the shift in practice this would entail for many faculty, Curtis and two of her colleagues—Crystal Kollross, an institutional planner, and Linda Hintzman, the “faculty data steward” (a term coined by eLumen)—are hopeful about the effects of the system on faculty thinking and practice. They tell the story of a training session where “the eLumen part of the conversation was very tiny” compared to the conversation among faculty about what they teach, what they’re doing in their classrooms, and how they could learn from one another and “share stuff.” 

As with TracDat, one of eLumen’s big selling points is the ability to generate reports. Tom Dewit, a leader of numerous institutional initiatives at Chabot, puts it this way: “Colleges are looking for a way of representing that they have covered their student learning outcomes across the disciplines and courses. Can eLumen represent student learning in language? No, but it can quantify the number of boxes checked against number of boxes not checked.” For Carolyn Arnold, Chabot’s coordinator of institutional research, that’s a welcome possibility. eLumen certainly can’t replace the “hard work faculty must do to identify student learning outcomes,” she says, but “with an accreditation visit looming, I like it. We’ll have numbers.”    

In fairness, they won’t have many numbers yet; the real power of these data-management systems comes from widespread use on campus. But that prospect is coming closer in some places. Pasadena City College started with five faculty using eLumen in spring 2008. In fall 2008, there were some 50 users, among them all full-time faculty who teach beginning algebra. Instructors are required to turn in assessment information to their division, Curtis explains, and they can do so either by using a paper form or on eLumen. “But we’re hoping to make eLumen what they want to do.” 

Clearly there are some high hopes attached to these data-management tools and services. But it’s important to repeat what everyone I spoke to said as well: that these tools—whether eLumen, TracDat, or one of the growing number of other options—are not magic bullets. As TracDat’s Scott Johnson observes, “We don’t talk you into assessment.” Campuses must still struggle through the difficult process of identifying course and program goals, making judgments about student progress, and using information to improve learning. 

But as it turns out, there may be help for these more fundamental challenges as well. 

End-to-End Solutions
While TracDat and eLumen represent what appears to be the predominant direction in the for-profit assessment market—that is, database management—a number of firms are moving into the business of “end-to-end” assessment, as newcomer EduMetry advertises. Founded by a business school professor with corporate experience, EduMetry provides a full suite of services to enable institutions to “meet the demands of today’s measurement-driven world.” 

I asked Robert Galvin, the firm’s vice president, what makes EduMetry different from other providers. “They sell software packages,” he said.  “We partner with institutions to develop an assessment framework.” Established in 2005, EduMetry is still small, with a client list of 15 to 20 institutions. Many of them are business schools (at Butler, Northern Iowa, and George Washington University, for instance), a niche selected in order to tap into the accreditation guidelines of the Association to Advance Collegiate Schools of Business. One of EduMetry’s claims to fame in this arena is a “turnkey solution” for the assessment of learning in settings that use Harvard Business School cases.

In an effort to diversify, the firm also has its eye on engineering programs (which, like business schools, are driven by professional accreditation requirements) and is now working with a chiropractic college preparing for regional accreditation. But not surprisingly, Galvin believes that the approach—“connecting assessment expertise with faculty’s knowledge of the discipline”—is relevant across a wide range of fields.    

EduMetry’s original core business was scoring student work—term papers, essay exams, and the like—a service it continues to offer through a separate entity (see www.Virtual-TA.com) that attracts mainly institutions abroad and for-profit campuses in the U.S. But sensing a bigger need, the firm moved early on to offer a wider set of services aimed at supporting campuses in the identification of student learning outcomes, curriculum mapping, and rubric design—with much of this work undertaken through tools made possible by technology. Like TracDat and eLumen, EduMetry also provides data-management solutions. But its special niche reaches back to its original  business, the scoring of student work. “This is where we see many institutions struggling,” Galvin says. “Faculty simply don’t have the time for a deeper involvement in the mechanics of assessment.” Many have never seen a rubric or worked with one, “so generating accurate, objective data for analysis is a challenge.”     

What does this mean in practice?  Imagine that a campus wanted to assess students’ writing abilities. EduMetry works with faculty to develop a scoring rubric and to identify student papers and other relevant written products—already assigned, collected, and graded by faculty as part of the regular work of teaching—that would provide a sample of performance in that arena. The scoring is then done by “professionals from education or business,” Galvin explains. The deal, as announced in bold on the website, is that EduMetry can “relieve the faculty of the burden of generating data on Student Learning Outcomes by collecting, scoring, and deriving data from actual student learning artifacts.” 

I asked about faculty pushback. “Not so much,” Galvin says, “not after faculty understand that the process is not intended to evaluate their work.” With that misconception out of the way, the possible savings of time becomes a powerful incentive. “If you’re a dean or provost, you can’t ask faculty to do too much,” Galvin explains. “You have to think about the best way to engage faculty to get their expertise without overburdening them.” 

This is what Leslie Wilson, associate dean at the University of Northern Iowa College of Business Administration, is trying to do through EduMetry. “Faculty,” she told me, picking her words carefully, “are not exactly on the assessment bandwagon.” Wilson heard about EduMetry from a business school colleague at Butler University and contracted with the firm because “we needed help.”

Things were going well when I talked to her in late November 2008; she was meeting with faculty to identify artifacts for a first round of scoring against the rubrics (there are five of them) developed by EduMetry in consultation with faculty. But she was reserving judgment about the long term. “We’re investing a lot of money and time in this assessment effort,” she told me. “It’s not yet clear whether it will produce something faculty actually see as useful, something that will improve our program.”

Supply and Demand
I first came across the for-profit phenomenon in June 2008 during a meeting of the National Survey of Student Engagement (NSSE) board. NSSE had been approached by several commercial services (not those featured above) about the possibility of some kind of partnership, and the staff took the occasion to collect and share information about eight or ten of these new arrivals on the assessment scene.

I think it’s safe to say that I wasn’t the only person on the board for whom the growth of these for-profit assessment providers was news. Others I spoke with subsequently were surprised as well. But even in the past six months, this has started to change. A friend in an education program at a large research university recently reported finding a note in his campus mailbox suggesting that the department discuss what one of these firms “might do for us.” A listserv for educational researchers in California hosted a lively discussion of the pros and cons of different assessment vendors. And Peggy Maki, a well-known figure on the assessment circuit, is preparing a second edition of her widely respected 2004 volume, Assessing for Learning, with added material on these new providers. “They’re very competitive,” she says, “and campuses need a way to examine the options.” In short, the for-profit assessment sector and its visibility are growing by the day, and it’s important to think about the forces of supply and demand that are at work here.    

On the supply side, these for-profit services and tools can be seen as offspring of a belated marriage between assessment and technology. In the worldwide web of iPhones, email, Google, and spreadsheets, and with everything instant and electronic, it’s hard to remember that when the assessment movement began to take shape in higher education in the mid-80s, most of these technologies did not exist. The medium of assessment was paper. Now technology is clearly catching up with the push to assess, and the for-profit sector has stepped into the picture with computerized tools for managing assessment data, online processes for curricular mapping, and electronic portfolio platforms. In other words, the growing supply of new technologies for assessment may now be fueling the demand for them.  

The other force that’s clearly driving demand is accreditation. When I asked EduMetry’s Robert Galvin about market forces, he was quick to proclaim that “no one would be doing this without accreditation.” Indeed, every campus administrator and institutional researcher I talked with emphasized the need for a less burdensome way to generate the reports perceived to be needed for regional or programmatic accreditation. “I’ve been putting together reports by hand, getting information from everyone and compiling it manually,” one person told me. “I’m ready for this!” Notably, some  of these firms exhibit their wares at events sponsored by the accreditation agencies, an arrangement that might seem to signal a stamp of approval and maybe even an expectation that the services in question can add value when the visiting team arrives. 

That said, accreditation’s emphasis on assessment, data, planning, and improvement must be seen as part of a larger trend of escalating expectations for more and more data in more and more reports. The Spellings Commission on the Future of Higher Education, funders, policymakers, parents, students ... they all want more information. In short, reporting is becoming an industry, and the for-profit assessment market is stepping in to serve that industry—and to benefit from it.  

Simplifying the complex process of assessment and making it easier to analyze and report relevant activity may be a good thing. But I find myself wondering if assessment has become too much about data and reports. Might it be that more automated, push-the-button reporting gives greater visibility and importance to “the data” than it should and short-circuits the deliberative tasks that should be central to assessment? Assessment has always contained a tension between improvement and accountability, and the entrance of the kinds of services and tools featured here may threaten to push it further in the latter direction.  

Peter Ewell puts his finger on a particularly troubling aspect of this prospect (one hinted at in the opening of this essay): “These solutions cement the idea that assessment is an administrative rather than an educational enterprise, focused largely on accountability. They increasingly remove assessment decision making from the everyday rhythm of teaching and learning and the realm of the faculty. My worry here is that the faculty increasingly will say, ‘Fine, this is not about us, and we never really liked this stuff in the first place.’” 

Peggy Maki has a similar concern. Though acknowledging the usefulness of the new crop of services and tools in meeting accountability requirements, she worries that the focus on reporting “might encourage a surface approach, discouraging the exploration of deeper questions about how people learn.” 

In fairness, the scenario Ewell and Maki worry about is possible but not inevitable. In the right hands, for the right purposes, these services and tools can be a  boon.  They can move campuses off the assessment dime and give them a way to get started. They can prompt and inform conversations about teaching and learning, as colleagues from Pasadena City College report. 

Additionally, these new providers may help institutions share tools and resources across sites. Many of them meet with user groups periodically to ascertain what new needs are emerging and how they might modify their offerings to be more helpful—and to stay competitive in an increasingly crowded market. The campus people I spoke to were enthusiastic about how quickly their particular needs or circumstances led to a customized “fix” that was then available to all other customers as well. Indeed, because of client requests, eLumen (to name just one example) is developing a national repository of resources, rubrics, outcomes statements, and the like that can be reviewed and downloaded by users at all sites. This is an important development, since innovations in teaching, learning, and assessment have tended to be small-scale and isolated, and therefore not accessible to be built upon and used by those in other settings.      

Commerce and the Academy
As I found in writing this piece, the prospect of financial motives being injected into the core business of teaching, learning, and assessment makes many academics nervous. It makes me nervous. The issue is not only, or even perhaps primarily, about the amount of money at stake. While some of these firms are making hefty profits, it’s likely that others—especially the smaller, newer ones—are a long way from getting rich.

It’s probably worth remembering, too, that paying for assessment is nothing new to higher education. Widely used assessment instruments like those from ETS and ACT, as well as newer offerings such as the Collegiate Learning Assessment and the National Survey of Student Engagement, aren’t free. As the organizations behind these instruments point out, they are independent, not-for-profit operations, but this may or may not be a distinction that makes a real difference on the ground. Assessment organized primarily around these (not-for-profit) external tests and instruments risks the same kind of faculty disengagement that is worrying in the case of for-profit providers. Still, there’s something about the idea of the latter that gets under the skin of many academics. (See, for example, Kuh, 2008.)

But where some see trouble, others—including Carnegie’s president Anthony Bryk—see potential, arguing that the commercial sector can be an important partner to schools and universities. Bryk is admittedly cautious about this position: “Schools are shaped by what they buy,” he told me, ominously. And he recognizes the cultural divide and distrust that stand between academe and the commercial sector. But as an example of what’s possible, he points to the role of Wireless Generation in K-12 school reform (see Bryk and Gomez). 

Established in 2000 by Greg Gunn and Larry Berger (full disclosure: Berger is a member of the Carnegie Foundation’s board), Wireless Generation began with the development of software that enables teachers to use handheld devices to give formative assessments commonly relied upon in the elementary grades. By using the handhelds instead of paper, teachers save significant time that they can then devote to instruction informed by timely data on students’ learning needs.

Wireless Generation’s handheld versions of these assessments were developed with the help of academic researchers, through close collaborations that are structurally similar to those between non-profit and academic organizations, but with an additional focus on scaling the innovation so that the business (and the production and continued enhancement of the software) can be sustained. Today, the firm has a few hundred employees and a growing suite of tools and professional development services that allow teachers to record student performance at key points, monitor progress, analyze trends and data, and tailor instruction to student needs. Notably, Wireless Generation is one of a number of commercial firms featured by disruptive innovation guru Clayton Christensen in his 2008 volume, Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns. 

My point is not to suggest that the answer to higher education’s assessment challenges lies with a tool that was designed for K-12 settings. It is, rather, that there are interesting examples of for-profit firms bringing fresh tools, products, and expertise to teaching, learning, and assessment.

*********

Assessment has now been a feature on the higher education landscape for more than two decades. And even as pressure to “do something” increases, many campuses are just getting started. The entrance of for-profit providers is a noteworthy development—evidence, if any is needed, that assessment has really “arrived.” This essay merely scratches the surface of what’s out there and what expertise and tools the for-profit world will bring to the work. Surely we need all the help we can get—and it’s even possible that some of them will turn out to be usefully “disruptive” innovations that invite a fundamental rethinking of teaching and learning. 

It’s good to remember that this development is at an early stage in higher education. As these firms become more widely known and their client lists grow, and as experience with them unfolds, it will be important to seek out much more detailed stories of how campuses have worked with them and how they in turn have worked with campuses to deliver—or not—on the promises of assessment.


For-Profit Assessment Services

The following list is a sampling of organizations that provide (among other offerings) higher education assessment tools and services.
   

The Advisory Board Company
http://www.advisoryboardcompany.com/

Blackboard 
http://www.blackboard.com/us/index.bbb

EduMetry
http://www.edumetry.com/

Eduventures
http://www.eduventures.com/

eLumen
http://www.elumen.info/

Epsilen
http://www.epsilen.com/LandingSite/Home.aspx

Foliotek
http://www.foliotek.com/

iWebfolio (Nuventive)
http://www.nuventive.com/products_iwebfolio.html

LiveText
https://www.livetext.com/

StudentVoice
https://www.studentvoice.com/app/Views/Home/Default.aspx

TaskStream
https://www.taskstream.com/pub/

Tk20, Inc.
https://www.tk20.com/

TracDat (Nuventive)
http://www.nuventive.com/products_tracdat.html

TrueOutcomes
http://www.trueoutcomes.com/

WEAVEonline
http://www.weaveonline.com/



Resources


Bryk, Anthony and Gomez, Louis.  “Reinventing a Research and Development Capacity,” in The Future of Educational Entrepreneurship: Possibilities for School Reform, ed. by Frederick M. Hess. Cambridge, MA: Harvard University Press, 2008.  

Christensen, Clayton M., Horn, Michael B., and Johnson, Curtis W. Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns.  New York: McGraw-Hill, 2008.

Jaschik, Scott.  “The New Assessment Market.”  www.insidehighered.com, January 17, 2007.

Kuh, George D.  “Diagnosing Why Some Students Don’t Succeed.” The Chronicle of Higher Education, December 12, 2008. 

Maki, Peggy L. Assessing for Learning: Building a Sustainable Commitment Across the Institution. Sterling, VA: Stylus Publishing, 2004.

Wireless Generation: http://www.wirelessgeneration.com/



Pat Hutchings is vice president of the Carnegie Foundation for the Advancement of Teaching. She has written widely on the investigation and documentation of teaching and learning, peer collaboration on and the review of teaching, and the scholarship of teaching and learning. Recent publications include Opening Lines: Approaches to the Scholarship of Teaching and Learning (2000) and, with Mary Taylor Huber, The Advancement of Learning: Building the Teaching Commons (2005). Before joining Carnegie, she was a senior staff member at the American Association for Higher Education, where she was the inaugural director of the AAHE Assessment Forum from 1987 to 1989.

