Change Magazine - May-June 2014
Editorial: Quality Assurance

As John Stuart Mill pointed out in his essay on Samuel Taylor Coleridge, even the most libertarian citizens—those who tell government to “let us alone”—want that government to protect them from fraud. And one of the defining characteristics of a profession is that it is self-policing. Quality assurance is, in short, both a chief raison d'être of higher education policymakers and foundational to the very identities of colleges' and universities' faculty and professional staff.

State bureaucrats approve institutions to set up shop within their borders in order to shield citizens from shady operators. And the state and federal governments' demands that we assess students' learning arise from the public's suspicion that the academic profession has been nodding off when it comes to its self-monitoring duties. The federal government has requisitioned accreditation—our main instrument for performing those duties—to enforce the imperative that we be more demanding of ourselves.

Some very good work on quality assurance is described in these pages, but the descriptions also make it clear just how challenging, messy, and complicated the work is. David Longanecker and Marshall Hill, for instance, explain a new national process (the State Authorization Reciprocity Agreement, or SARA) by which out-of-state institutions can gain cross-state approval to operate—a process that “relies on state-to-state reciprocity, the same mechanism that keeps people from having to get a separate driver's license in every state in which they drive.”

The difficulty here is in balancing the need for quality control with the one for enabling, as expeditiously and efficiently as possible, innovative responses to a rapidly changing environment and a burgeoning need. “While regulation is often necessary,” Longanecker and Hill note, “it should not be excessive.” They go on to say that the regulatory approach they developed follows “the mantra that ‘less is better, unless it is not enough.’”

Most of the work to beef up academe's self-monitoring has focused on documenting student learning and making that documentation public. Linda Krzykowski and Kevin Kinser argue that the explicitness of accreditors' expectations affects the degree to which institutions make assessment processes and findings transparent. The assumption behind these writers' analysis is that at some point, market forces will kick in: Students will rush to institutions where they are apt to learn the most, institutions will respond by instituting reforms, and student learning will improve.

David Dill isn't so sure: He quotes Richard Arum and Josipa Roksa to the effect that “there is no reason to expect that students and parents as consumers will prioritize undergraduate learning as an outcome.” Instead, Dill believes that the key to accountability is returning to faculty, as a collective body, the responsibility to regulate themselves.

The “effective measurement and use of information in the collective actions by the faculty of each institution,” Dill argues, should be “a principal criterion for accreditation.” The accreditors' job, according to him, is to audit institutional procedures for, among other things, bringing some reliability to grading and ensuring the coherence of programs (including general education).

The Massachusetts state assessment initiative described by Pat Crosson and Bonnie Orcutt is one of the most ambitious attempts made to date to reconcile the need to be accountable to the larger public with the desire to leave the assessment of student learning in the hands of the faculty. Hundreds of people have participated in the project, and I am impressed—no, awed—by the time and energy they have spent doing this very worthy work.

But for many of them, this work will not be rewarded or even honored. For many of their institutions, the primary product of higher education is not student learning but publications and grants, which lead to more institutional prestige and higher rankings. When the failure to reward this kind of service is combined with the notorious faculty predilection to debate issues at endless leisure (the pilot feasibility study concluded that more study was needed), I worry, in my more pessimistic moments, about the sustainability of the initiative.

I'm haunted by the collapse of the portfolio project described in the previous issue of Change, in which faculty and students alike thought the whole thing was just too much work—work that falls, by the way, on a shrinking core of full-time faculty. I'm afraid that such efforts, like the quality-assurance mechanisms in other countries that rely on external assessments at the subject level, will prove, in Dill's words, too “costly and exhausting over time.”

So we scramble to find proxies for learning as a metric. Today's favorite is employment and income outcomes, recommended by Dill as well as by Davis Jenkins and Clive Belfield. In my own state of Virginia, the State Council of Higher Education for Virginia (SCHEV) makes such information available by program for all public institutions.

But down in the weeds, it seems that such measures are trickier than we knew. In “Playing the Numbers,” Patrick Kelly and Christina Whitfield describe the results of such an analysis within the Kentucky Community and Technical College System (KCTCS). It turns out that local circumstances have a huge effect on program outcomes—both across the state and from year to year. As they say, “Because of fluctuating local economic conditions, annual program-level accountability for many institutions is like holding feet to a very erratic fire. It is as much about the flames as the feet.”

Time frame matters as well. Debra Humphreys, in the same “Playing the Numbers” piece, points out that liberal arts graduates with advanced degrees do very well in the job market—after a number of years. And as of now, program-level employment outcomes are reported in Virginia only for 18 months after graduation. Even though SCHEV, fully realizing the limitations of its data, “strongly cautions reviewers not to use the short-term wage outcomes of recent graduates to measure the quality or long-term effectiveness of any of Virginia's individual institutions,” absent other metrics, it will be hard for policymakers to resist doing so.

All this said, a country like Hungary illustrates what happens when a lack of stability and adequate funding undermines any attempt to ensure quality, never mind to measure it. As the system lurches from one crisis and nearsighted government mandate to another, the focus ends up on short-term survival—and students are the biggest losers.

So what are our options? I think that Dill is on the right track in recommending that accreditors audit institutional procedures for ensuring academic quality—which will require that institutions reward faculty for spending time and energy on that work. It also means that institutions need to beef up their core of permanent faculty.

I would also adopt Jenkins and Belfield's suggestion that students be given structured pathways through the curriculum. This would entail faculty agreement that the curriculum no longer be treated like a pop-bead necklace, with every new advance in a discipline requiring that yet another bead be added to the string.

I also like Jenkins and Belfield's focus on efficiency—that is, “the cost per completion of a high-quality credential.” But this takes us right back, of course, to where we started—quality assurance. I think we simply cannot avoid our responsibility to collectively monitor and improve student learning, whatever changes to faculty and institutional reward systems we need to make to do that.

The articles in this issue give me a glimmer of hope that the battleship is beginning to turn. SARA is in place. Massachusetts steams ahead with its ambitious efforts to reconcile improvement and accountability, and it is pulling nine other states in its wake. Even starved for funds, community colleges are becoming more efficient. And then (not covered in this issue) there is the slow spread of the Degree Qualifications Profile (DQP) and learning outcomes assessment, as well as the steady progress of the Voluntary System of Accountability (VSA). Despite my misgivings, we may be moving in the right direction.

