September-October 2011


Closing the Assessment Loop by Design

The idea for this article came from Change editor Peg Miller, who heard a short description of what Kaplan University is doing to assess teaching and learning, including the recent completion of our work to “close the loop.” She asked if we would write an article that explains how we are able to take results from a regular program of course-level assessment, use them to inform curricular changes, and then measure the learning in those revised courses to determine whether and which changes made a difference.

Our story answers some basic questions: Why was it important for Kaplan University to implement course-level assessment? How did we design our assessment program so that we would be able to close the loop? How does it work inside and outside of the classroom? What has been the impact of using this process? What have we learned?

Why Course-Level Assessment Is Important to Kaplan University

It is true that assessment is important to all accredited postsecondary institutions; we are required to assess student outcomes. Accreditation is not, however, the primary driver of assessment at Kaplan University. Like many for-profit universities, we are a comparatively young and fast-growing institution. We began offering online programs in 2001, for example, with just 34 students and a few degree programs. But over the past decade we have expanded, so that we now serve more than 58,000 students online and more than 7,000 students at 11 campuses in Iowa, Nebraska, Maryland, and Maine, as well as at several learning centers across the country. Given that growth trajectory, it became increasingly important to know that we were doing things right.

We are also a student-focused institution that emphasizes flexible programs and market-relevant degrees. Students enter Kaplan University with an average of four NCES-identified risk factors and without a great deal of college preparation. It is critical to our mission to know that they are learning what they need to know for success in their academic and professional lives.

Finally, with a standardized, outcomes-driven curriculum delivered by means of 1,000 courses supported by significant investments in teaching/learning technologies as well as measurement systems, we had the building blocks in place to assess the impact of all the courses in our curriculum in a continual, consistent way and to apply that information to make improvements.

Designing an Assessment Program to Close the Loop

Many higher education practitioners agree that assessment is the answer to the question of how we know what students learn. However, actually using assessment results to change curricula and teaching strategies is relatively rare, as noted by Trudy Banta and Charles Blaich in the January/February 2011 issue of Change.

And indeed, although until 2008 Kaplan assessed student achievement against program-level outcomes by means of portfolios using institution-wide rubrics, the information gathered was not widely used because it wasn't sufficiently specific to help drive changes in teaching and learning. Our desire to make data-driven decisions within a much tighter feedback loop led us to move from an emphasis on program-level assessment to a focus on the course level.

So in 2008, we developed a course-level assessment (CLA) system that would measure student learning within each course; create a feedback loop of assessment data to inform revisions of curricula and instructional techniques; and validate improvements in student learning that resulted from these efforts. Once planning was complete, the second half of 2008 and all of 2009 were focused on creating learning goals and objectives and corresponding rubrics for each course. With the CLA fully implemented in 2010, we set a goal of making and evaluating curricular changes in a minimum of 200 courses.

Curricular change is an ongoing process at Kaplan and is influenced by factors unrelated to student learning levels: Courses are updated to reflect changing professional emphases in fields of practice, for example. Consequently, in order to test the efficacy of the CLA system, we needed to distinguish changes to courses that were driven by or that utilized CLA results from those not directly involving CLA data.

By late 2010, we had reached a significant milestone on the road toward improvements in teaching and learning. We were able to:

  • evaluate curricular revisions in 221 courses using assessment data produced by the university's CLA system;

  • provide feedback to faculty on the successes and failures of their courses based on student learning measured against the stated course outcomes; and

  • identify statistically significant improvements in student learning, observed in 44 percent of the revised courses (we also identified statistically significant declines in student learning in another 23 percent).

Finally, faculty members had data to answer questions about what in their courses was working, what was not working, and why.

Overview of the CLA

The teaching and learning assessment system at Kaplan University is predicated on the belief that assessment should be comprehensive and direct. Therefore, every learning goal is assessed for every student, every term, in every section of every course. These are direct measurements of student learning, based on the evaluation of projects and examinations using standard rubrics—not self-reported data from students.

The CLA system is multi-tiered: each course-level goal is mapped to program goals, and the system provides the framework within which individual learning activities can be associated with specific learning objectives (Chart 1). We assign four to six discipline- and course-specific learning objectives to both graduate and undergraduate courses. Additionally, for undergraduate courses we assign at least two general education literacies (GELs) that we want students to master. GELs are selected from a list of 36 goals that fall into eight categories: communications, mathematics, science, social science, arts and humanities, research and information, ethics, and critical thinking.

Chart 1—Framework for Course-Level Assessment

Both components are assessed at the course level, and both use the same scale within the rubrics, from “no progress” on the learning goals (0) to “mastery” (5). As a student progresses through a program, we expect to see an increase in the cognitive complexity of the learning outcomes. GELs differ from discipline-specific goals in that the same GELs appear in courses across a program, whereas disciplinary goals are specific to a single course. At the program level, we assign most programs six discipline-specific and nine general-education learning goals.
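For readers who think in data structures, the following sketch (in Python, with hypothetical class names and goal codes; the article does not describe Kaplan's actual schema) illustrates the framework just described: discipline-specific objectives and GELs attached to a course, each mapped to program-level goals, with rubric scores on the 0-to-5 scale.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class GoalKind(Enum):
    DISCIPLINE = "discipline-specific"   # unique to a single course
    GEL = "general education literacy"   # recurs in courses across a program

@dataclass
class LearningGoal:
    code: str                  # hypothetical identifier, e.g. "IT117-1" or "GEL-1.1"
    description: str
    kind: GoalKind
    program_goals: List[str] = field(default_factory=list)  # program-level goals it maps to

@dataclass
class Course:
    course_id: str
    discipline_goals: List[LearningGoal]   # four to six per course in practice; one shown here
    gels: List[LearningGoal]               # at least two per undergraduate course in practice

RUBRIC_SCALE = range(0, 6)  # 0 = "no progress" ... 5 = "mastery"

web_design = Course(
    course_id="IT117",
    discipline_goals=[
        LearningGoal("IT117-1", "Design a multi-page website", GoalKind.DISCIPLINE,
                     program_goals=["BSIT-2"]),
    ],
    gels=[
        LearningGoal("GEL-1.1", "Demonstrate college-level written communication", GoalKind.GEL,
                     program_goals=["GE-Communications"]),
    ],
)
print(web_design.course_id, [g.code for g in web_design.discipline_goals + web_design.gels])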

The CLA is the cornerstone of Kaplan University's layered assessment framework, which also includes pre-enrollment screening tests, external instruments (including the ETS Proficiency Profile and the National Survey of Student Engagement), grades, faculty and student course evaluations, program-level outcomes, capstone experiences, graduate and alumni surveys, and more.

How Did We Create It?

The process of creating explicit learning outcomes and tying them to assignments was an intensive, multiyear effort. To move over a period of just three years to an institution-wide system of assessment required a substantial investment of human, capital, and technological resources. Kaplan also had to design technical and operational processes to support assessment, including a common database into which to enter learning goals, rubrics, and outcomes; an institutional process for reviewing learning goals and rubrics; project-management and systems-analysis support for the technological integrations; and an assessment governance framework.

The faculty, working in teams that included subject-matter and curriculum-design experts, created the explicitly stated learning goals and objectives for each course—a valuable and time-intensive experience. They also created corresponding assignments and grading rubrics. While nearly all the assessments were based on assignments and exams that already existed in the courses, new grading rubrics had to be created to ensure that the CLA system produced measurements of specific learning objectives independent of aggregate grades. Rubrics were created to measure student mastery of each objective on a criterion-referenced basis (Chart 2).

Chart 2—Sample Learning Goals and Rubric

How Does It Work In Practice?

The creation of the CLA system was a massive undertaking, but now that it is in place, there is little additional burden placed on the faculty. Throughout a course, students regularly submit work that is reviewed by faculty, who enter CLA data throughout the term via a component of the online gradebook used in all courses. When courses are changed or new courses and programs are implemented, CLAs are created up front as part of the process.

The learning goals and rubrics that appear in the online gradebook come from a common data repository—eCollege's Learning Outcome Manager (LOM)—that Kaplan, along with several of eCollege's other large clients, designed to be a single source of record for course- and program-level information. The LOM links specific assignments, exams, or components of those assignments and exams to the stated learning objectives. By linking those objectives, rubrics, and assessment data, we can compare student achievement on any specific objective across any number of instructors, sections, or terms with the confidence that the same assessment was used, addressing the same learning objective, graded with the same rubric. This level of consistency, supported by a common repository, is foundational to our institution-wide CLA system.
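The LOM is a proprietary eCollege product, so the sketch below assumes a hypothetical flat export of CLA records rather than the real system. It shows the kind of cross-section comparison the shared repository makes possible: because every section grades the same assessment with the same rubric against the same objective, scores for one objective can be aggregated by section, instructor, or term.

from collections import defaultdict
from statistics import mean

# Invented records for illustration; each ties one student's rubric score (0-5)
# to a learning objective, a course section, an instructor, and a term.
records = [
    {"objective": "IT117-3", "section": "01", "instructor": "A", "term": "1003", "score": 4},
    {"objective": "IT117-3", "section": "02", "instructor": "B", "term": "1003", "score": 3},
    {"objective": "IT117-3", "section": "01", "instructor": "A", "term": "1101", "score": 5},
]

def average_by(records, objective, key):
    """Mean rubric score for one objective, grouped by 'section', 'instructor', or 'term'."""
    groups = defaultdict(list)
    for r in records:
        if r["objective"] == objective:
            groups[r[key]].append(r["score"])
    return {k: mean(v) for k, v in groups.items()}

print(average_by(records, "IT117-3", "term"))        # {'1003': 3.5, '1101': 5}
print(average_by(records, "IT117-3", "instructor"))  # {'A': 4.5, 'B': 3}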

Checks and Balances

To ensure the fidelity of data collection and consistency in the student experience, we designed two university-wide system checkpoints. First, as part of the curriculum-development process, all learning goals and rubrics are reviewed by the Office of Institutional Effectiveness to ensure that

  • each goal describes only one primary area of knowledge,

  • each goal identifies specific behavior(s) through which students can demonstrate the knowledge or skills they should have mastered by the end of the course,

  • the cognitive tasks demonstrate the appropriate level of complexity required for given levels of mastery, and

  • the rubrics comply with Kaplan guidelines.

Second, all new courses or major course revisions, including modifications to learning goals, are subject to review by one or more governance committees, depending on the type of course change. Both the General Education Committee and the Faculty Curriculum Committee are charged with approving all major curricular changes within their jurisdiction, including reviewing all learning goals, their alignment with the course and program, and the viability of their successful assessment within the course.

In addition to governance committee oversight, we developed and continually update training modules on the use of the rubrics and the CLA approach in general. This required training contains both a general orientation to the framework and detailed calibration exercises. Kaplan periodically surveys the faculty on content knowledge as well as general attitudes toward CLA and rubric use. This survey informs the revision of the CLA training program.

Assessment as Culture: Using the Data

Kaplan University started planning what to do with the data concurrently with implementation of curriculum changes to incorporate CLA goals, rubrics, and outcomes. As a result, both assessment and assessment-driven decision-making are now established in the university's culture. As Kara VanDam, vice provost for academic affairs, relates, “We are deliberately focused on what students are actually learning, not simply what we assume we are teaching them. This core belief in the importance of measurable student learning informs every conversation we have and every curricular improvement we undertake.”

In January 2011, we conducted a survey to gauge the faculty's understanding and use of the CLA tools and system. The survey, sent to 1,640 randomly sampled faculty from a total of 4,923 across the university, had a 48 percent completion rate. Among these respondents, 95 percent claimed to have had experience with applying the CLA rubrics and entering CLA scores.

Nearly half reported that they assign learning activities that address specific course outcomes always or most of the time and that this awareness of the CLA system affects their lesson planning. Yet a clear majority also reported that the CLA system does not divert attention from the required coursework—in other words, faculty do not distort classroom work to “teach to the test.”

We have made CLA-related data part of Kaplan University's normal operations, and we ensure that all data reporting remains focused on actionable information. Our Office of Institutional Effectiveness is responsible for regularly publishing reports that aid particular stakeholders. For example, a report showing the impact of a curriculum change within a course was designed specifically for the curriculum team (Chart 3). This type of report includes parametric statistical tests on pre- and post-change data so that users can quickly ascertain the effect size of any changes.
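The article does not specify which parametric tests appear in these reports. A minimal sketch of one common approach, assuming a Welch two-sample t-test with Cohen's d as the effect-size measure and using SciPy (an assumed dependency, not Kaplan's documented tooling), might look like this:

from math import sqrt
from statistics import mean, stdev

from scipy import stats  # assumed dependency; any statistics package would do

def pre_post_report(pre_scores, post_scores):
    """Compare CLA rubric scores (0-5) before and after a course revision.

    Returns the group means, a Welch two-sample t-test, and Cohen's d.
    """
    t_stat, p_value = stats.ttest_ind(post_scores, pre_scores, equal_var=False)
    pooled_sd = sqrt((stdev(pre_scores) ** 2 + stdev(post_scores) ** 2) / 2)
    cohens_d = (mean(post_scores) - mean(pre_scores)) / pooled_sd
    return {
        "pre_mean": round(mean(pre_scores), 2),
        "post_mean": round(mean(post_scores), 2),
        "t": round(float(t_stat), 2),
        "p": round(float(p_value), 4),
        "cohens_d": round(cohens_d, 2),
    }

# Illustrative scores only, not Kaplan data.
pre = [3.2, 3.8, 4.0, 3.5, 3.9, 3.6, 3.4, 3.7]
post = [4.1, 4.4, 3.9, 4.6, 4.2, 4.5, 4.0, 4.3]
print(pre_post_report(pre, post))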

CLA scores reveal student performance and inform administrative conversations on curricular design. But the performance of a faculty member's students in any given term or course does not factor into the performance indicators for that faculty member.

Chart 3—Sample Feedback Report

Is It Working?

Kaplan University now has sufficient data generated from the CLA system to evaluate the institution-wide impact of this approach. In 2010, we initiated a study of 221 courses that had been revised since the inception of the CLA system, using post-revision data. Given the variations in student cohorts and other influences, it would not have been realistic to expect that every curricular modification would result in significantly better student outcomes, although that is the overall goal. Indeed, for some subsequent groups, we might see no evidence of improvement, or even regression in some cases.

We looked for two types of improvement evidence. First, a comparison of student outcomes between pre- and post-modification of the courses should show that the average levels of learning are higher post-revision at statistically significant levels. The second form of improvement was defined by a reduction in the rate at which students fail a given course. Kaplan tracks this definition through a “U rate”—the “U” representing unsuccessful performance. A course-level U rate includes all students who fail to earn course credit for whatever reason, including an inability to achieve passing grades or dropping out.

The dual definition of improvement was created to reward changes that encouraged students to persist who might otherwise have failed to complete assessments successfully or withdrawn from the course. Students who withdraw can pull down the average CLA score for a course even when they had demonstrated enough competency to have completed it successfully.
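As an illustration of the second criterion, here is a minimal sketch of how a U-rate comparison might be computed, using invented enrollment records; the article defines the U rate only as the share of students who fail to earn course credit for any reason, so the exact business rules here are assumptions.

def u_rate(enrollments):
    """Share of enrolled students who did not earn course credit for any reason
    (failing grade or withdrawal). Each record is a dict with an 'outcome' field."""
    unsuccessful = sum(1 for e in enrollments if e["outcome"] in {"fail", "withdraw"})
    return unsuccessful / len(enrollments)

# Hypothetical pre- and post-revision terms of 100 students each.
pre_term  = [{"outcome": o} for o in ["pass"] * 70 + ["fail"] * 18 + ["withdraw"] * 12]
post_term = [{"outcome": o} for o in ["pass"] * 80 + ["fail"] * 12 + ["withdraw"] * 8]

print(f"pre U rate:  {u_rate(pre_term):.0%}")   # 30%
print(f"post U rate: {u_rate(post_term):.0%}")  # 20%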

Using this dual definition of improvement, the results showed roughly twice as many improvements as declines (Table 1).

Table 1—“Close-the-Loop” Results of Curriculum Changes

Summary of CLA Results to 12/31/2010. Columns: (A) Changes Indicating CLA Data Were Present or Used; (B) Changes with Multiple Indicators Used; (C) Total. Percentages are of the first row.

Curricular Change Metric                          (A)          (B)           (C)
Total Course Revisions with CLA Data Available    98 (100%)    123 (100%)    221 (100%)
Improvements Observed                             45 (46%)     53 (43%)      98 (44%)
Declines Observed                                 26 (27%)     24 (20%)      50 (23%)
No Significant Differences Observed               27 (27%)     46 (37%)      73 (33%)

Source: Kaplan University Office of Institutional Effectiveness.

After operating the CLA system for a full year, we conclude that

  • the CLA data validate improvements in student learning at the course level,

  • the improvements significantly outnumber the declines,

  • the use of CLA data in making course changes substantially reduces the possibility of observing no change in learning, and

  • the CLA structure and data identify, through analysis of pre- and post-change indicators, specific strategies that delivered either improvements or declines in student performance.

We have also conducted detailed case studies of individual courses included in this study. An example is IT117, an undergraduate course in website design required for students enrolled in the Bachelor of Science in Information Technology and the Associate of Applied Science in Information Technology. Based on a review of CLA data, along with other academic metrics—grade distributions, end-of-term student satisfaction surveys, and faculty surveys—the curriculum team and faculty discussed the need to revise the course. Because IT117 is a popular course that runs many concurrent sections, the faculty and department chair created a set of experiments to test four change strategies: a major course revision, an emphasis on mastery learning, guided online tutorials, and the administration of a self-efficacy scale.

The major course revision consisted of adding an audio/visual introduction to each project, providing additional examples and templates, and updating the textbook. In the mastery-learning scenario, students were allowed to redo and resubmit their work for re-grading, provided they had submitted the original assignment on time and the instructor judged it to be in reasonable form. The guided online tutorials provided structured training in several target skills for the course. The self-efficacy scales measured students' learning at a task-specific level both pre- and post-instruction. The CLA scores increased from 3.92 to 4.53 (on the 0-to-5 scale) as a result of these interventions, with the largest effects resulting from the major course revision and mastery learning.

What Have We Learned?

Through the intensive process of creating learning goals and rubrics across its curriculum, the university has gained a better understanding of the value of each course and its importance to program outcomes. The comprehensive nature of the system; the granularity of the assessments; the regularity of reports and data provided to faculty and administrators; and the balance among university-level oversight, faculty autonomy, and consistency for the student—all have contributed to the progress we have achieved to date.

Successfully incorporating the CLA structure into Kaplan's operations and cultural norms, especially in such a short span of time, was possible because of several key factors:

  • Alignment with the university's mission statement and core values. While there are also external motivators—most notably accreditation and federal policy changes—the CLA was, and remains, an undertaking inspired by a focus on student learning.

  • Availability of sufficient resources. Human, capital, and technological resources were allocated to make implementation possible.

  • Champions of the cause from across the university. Executives championed the undertaking, academic leadership was empowered to drive change, faculty engaged in the project, and institutional-research and faculty-development staff provided support.

  • Transparency of the undertaking. From regular meetings to updates on the employee intranet, everyone knew what was going on, why the change was happening, and what their role would be.

  • Plan for data usage. How the data would be used was an integral part of the project, not an afterthought once work began.

  • Incorporation of the CLA into the culture of the university. It is incorporated into our day-to-day tasks, academic projects, and strategic planning.

The need for sufficient resources to do this work cannot be overstated. But the substantial investment of human, capital, and technological resources to create an institution-wide system of assessment was a strategic decision, made with both academic and business outcomes in mind. We hope that this level of learning assessment will pay off, not just in terms of immediate gains in student learning but also through a recognition of the quality of our programs that will attract new cohorts of students, faculty who engage in the scholarship of teaching and learning, and new academic partners.

The successes to date from the CLA approach and structure have not come without some changes, lessons learned along the way, and adjustments based on feedback. These include:

  • Acknowledgment of concerns from stakeholders regarding their existing paradigms for instruction. We had to ensure safe havens for discussion of differing views on teaching as a science or an art, on the balance between a centralized curriculum versus individual teaching styles, and so forth.

  • Balance of responsibilities. Different people provided institutional oversight, curriculum expertise, and knowledge of assessment best practices, while faculty autonomy was respected even as we worked toward a common goal.

  • Appropriate granularity of analysis. Discipline was required to focus on the information we needed to gather and would actually use, rather than collecting more and more information simply because we could.

  • Potential for spurious results. There are many things going on in our students' lives that influence their learning; our multilayered overall approach to assessment helps us guard against focusing too much on any single metric.

What's Next?

The work on assessment at Kaplan University over the past three years has laid a foundation that we intend to build on going forward. Plans include:

  • testing—internally and, as viable, externally—the validity of CLA data;

  • identifying for intensive analysis the next group of courses from among those that are being improved based on CLA results;

  • focusing on students' knowledge and experience of outcome-based assessments of their learning; and

  • developing feedback loops of different lengths, including course, degree, immediate employment, and graduation-plus-five-years.

This approach to assessment is not to be undertaken lightly, and we certainly have more to do to improve our approach. At the same time, the benefits to students, faculty, and the institution are clear, and we look forward to all that we will learn in the coming years.

Resources

1. Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (Eds.). (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook 1: Cognitive domain. New York, NY: David McKay.

2. Gonzalez, K. P., & Padilla, R. V. (1999). Faculty commitment and engagement in organizational reform. ASHE Annual Meeting Paper, 46.

3. Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Champaign, IL: National Institute for Learning Outcomes Assessment.

4. Middaugh, M. F. (2010). Planning and assessment in higher education. San Francisco, CA: Jossey-Bass.

5. Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco, CA: Jossey-Bass.

6. Wergin, J. (2005). Taking responsibility for student learning. Change, 37(1), 30-33.

Thayer E. Reed (treed@kaplan.edu) is the associate director of assessment at Kaplan University, prior to which she worked at the Wisconsin Center for the Advancement of Postsecondary Education (WISCAPE) at the University of Wisconsin-Madison.

Jason Levin (jlevin@kaplan.edu) is the executive director of the Office of Institutional Effectiveness at Kaplan; he is responsible for administration of the university's assessment program and institutional research.

Geri H. Malandra (gmalandra@kaplan.edu) is Kaplan's provost. Earlier, she held leadership positions at the American Council on Education, the University of Texas System, and the University of Minnesota.
