March-April 2011

Cold Rolled Steel and Knowledge: What Can Higher Education Learn About Productivity?

How is teaching introductory statistics like producing a high-quality coil of cold rolled steel? Can educators learn techniques from the manufacturing sector that make instruction more productive? Such questions typically create immediate controversy when raised with an academic audience. Indeed, almost any suggestion that there may be analogies between instruction and the production of goods is met with unease.

The unease is appropriate, because the mechanisms by which humans develop knowledge and skills are far more complex, non-linear, and—as of now, at least—less well understood than the processes for producing cold rolled steel or any other manufactured good. However, the time has come to get past the discomfort: there is something to be gained by exploring the potential similarities.

The Need and the Opportunity

Higher education is simply not making substantial progress in addressing its most significant challenges: educating an increasingly diverse body of students while containing the cost that is putting postsecondary education beyond the reach of a growing percentage of the world's population. Tweaking long-standing strategies to achieve incremental improvement is no longer enough. The need to seek new ideas in traditionally overlooked arenas is urgent.

Higher education can no longer run away from the “p” word: productivity. The 2008 Measuring Up national report card on higher education shows that from 1982 to 2007, while the consumer price index increased 106 percent, tuition and fees at US colleges and universities grew 439 percent. If during this same time we had graduated more students per dollar or if the students we graduated in 2007 had gained far more knowledge than those who graduated in 1982, the overall increase in costs might be more tolerable. Unfortunately, there is no evidence that we gained ground on either measure.
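
To see what those percentages imply, a back-of-the-envelope calculation (our own illustration, not a figure from the Measuring Up report) converts the nominal growth into real terms:

\[
\frac{1 + 4.39}{1 + 1.06} \;=\; \frac{5.39}{2.06} \;\approx\; 2.6
\]

In other words, by 2007 tuition and fees cost roughly two and a half times what they did in 1982 in constant dollars.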

An emerging marriage of learning science and technology challenges the belief, supported by the Baumol/Bowen analysis of productivity in the services sector, that improving productivity in education necessitates a reduction in quality. Quality did suffer when we attempted to increase productivity in higher education by, say, increasing class size, using television to broadcast lectures, or transferring traditional teaching methods unchanged to the web, but that trade-off is not inevitable.

Moreover, the view that the processes of learning are too complex to yield to scientific understanding is no longer tenable. Learning and brain scientists are making extraordinary advances in our understanding of human learning, and this knowledge can transform how we teach novice learners.

So now may well be the time to see how other industries have used information technologies and data-gathering techniques to improve the quality of their products while reducing costs. For those industries, real-time data from supply-chain and production processes, together with close interaction between fundamental research and production, enable adaptive management. We believe that higher education can likewise collect such data and strengthen the interactions between the learning sciences and instructional practice to support more effective adaptive instruction.

Of course the very term “products” raises concerns about cheapening the value of instruction by comparing our outputs to those of the manufacturing sector. We don't produce “products,” do we? We perform a complex set of services. The notion of using techniques from those who produce products is wrong-headed from the beginning, right?

We disagree. Education does have concrete products: changes in the knowledge states of scholars and students. Our successes and failures can be judged by whether we produce those changes in the ways we intend and/or in ways that lead to better and more productive lives for our students.

In their influential 1995 Change article, Robert Barr and John Tagg described a shift occurring in undergraduate education from an “instruction paradigm” to a “learning paradigm”: a move from viewing higher education in purely service terms—i.e., “A college is an institution that exists to provide instruction”—to viewing it as producing a product—i.e., “A college is an institution that exists to produce learning.”

Unfortunately, the disruptive change that Barr and Tagg perceived as being in its early stages more than 15 years ago has yet to spread beyond a few pockets of innovation. Fundamental organizational frameworks, policies, and practices continue to reflect the belief that postsecondary education exists to deliver a service, instruction. Efforts to improve that service typically focus on tinkering with its components (reducing the size of classes, teaching interdisciplinary classes, offering seminars rather than lectures, introducing “class response systems” in large classes to foster discussion, etc.).

Yet too little consideration is given to measuring how such modifications affect the product they are supposed to deliver—learning. While there has been increasing recognition that higher education needs to more clearly define the learning objectives of instruction and find ways to measure the effectiveness of its services, a widespread shift in attention from the process of instruction to its outcomes has not occurred.

It has been widely recognized that making rich data derived from formative assessment of instructional effectiveness available to both learners and instructors can be a key factor in improving learning outcomes. Yet colleges and universities devote shockingly few resources to providing students and faculty with data that would tell them whether their individual and collective processes for learning and teaching are effective.

In lieu of data, we depend on faculty intuitions about what works and what doesn't. While those intuitions are certainly sometimes right, it is unlikely that intuition alone is a sufficient means to improve instruction. Even when particular instructional interventions are identified as being effective, they seldom persist beyond the practice of an individual faculty member. This approach is thoroughly unscientific and incapable of producing the persistence, spread of adoption, and iterative improvement that is required to bring about transformative change.

Productivity Improvement in Manufacturing

This is exactly where a comparison to the manufacturing sector is particularly apt. In the manufacturing sector, organizations that depend on intuition about effective methods are simply not as viable as those that gather and use key data to adaptively improve output from current production practices. Even more successful are firms that redesign the next generation of production practices to be more efficient and effective than the last.

In manufacturing, it is clear that all steps in the supply chain and the production line are directed toward creating a high-quality product in the most cost-efficient fashion possible. Fairly early in the development of modern production techniques, a few industry leaders recognized that gathering granular data about the supply and manufacturing processes and using those data to understand the costs involved and to identify flaws in each process could affect the quality of the final product.

For example, in the latter part of the 19th century, William P. Shinn, the first general manager of Andrew Carnegie's Edgar Thomson Steel Works, imported methods used first by the railroads in the US to gather both cost and process data about various steps in steel production. And as A. D. Chandler points out, “In addition to using their cost sheets to evaluate the performance of department managers, foremen and men, Carnegie, Shinn and Jones relied on them to check the quality and mix of raw materials. They used them to evaluate improvements in process and in product and to make decisions on developing by-products.”

In the latter half of the twentieth century, information technology provided better tools for gathering data and modeling supply and production processes. If we fast-forward to the present in the steel industry, the result of all these efforts is a remarkable implementation of massive data-gathering about the entire production process.

A finished roll of cold rolled sheet steel is linked to a wealth of rich data about each step in the process. The finished product is evaluated for defects; if they are found, those data can be used to find their probable cause and fix the problem.
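
As a rough illustration of what such traceability looks like in data terms (a hypothetical sketch, not an actual mill system), each finished coil can be keyed to the measurements recorded at every production step, so that a defect found at inspection points back to the step whose readings drifted out of specification:

```python
# Hypothetical sketch of product-to-process traceability (not an actual mill system).
from dataclasses import dataclass

@dataclass
class StepReading:
    step: str        # e.g., "annealing" or "cold rolling pass 3"
    parameter: str   # e.g., "strip temperature (C)"
    value: float
    low: float       # lower specification limit
    high: float      # upper specification limit

# Every finished coil is keyed to the readings recorded while it was produced.
process_history = {
    "coil-4711": [
        StepReading("pickling", "acid concentration (%)", 11.2, 10.0, 14.0),
        StepReading("cold rolling pass 3", "roll force (kN)", 9200.0, 7000.0, 9000.0),
        StepReading("annealing", "strip temperature (C)", 705.0, 680.0, 720.0),
    ],
}

def probable_causes(coil_id: str) -> list:
    """Return the recorded readings that fell outside their specification limits."""
    return [r for r in process_history[coil_id] if not (r.low <= r.value <= r.high)]

# A surface defect found on coil-4711 points back to the out-of-spec roll force.
for r in probable_causes("coil-4711"):
    print(f"{r.step}: {r.parameter} = {r.value} (spec {r.low}-{r.high})")
```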

The steel industry has also developed a set of feedback loops among materials researchers and producers. Central to these loops are production engineers who translate the chemistry of steel into production lines and process designs. The extensive measurements made during the production of various kinds of steel are fed back into theoretical chemical models and are used both to improve those models and to design instruments that yield even better measurements of the most salient elements in the process.

As Alexander McLean wrote in his 2006 article on the science and technology of steelmaking,

In this context of collaborative interactions, basic studies should be performed by the research community in close cooperation with the user community and resource community, the provider of equipment and materials. We must recognize that these three communities represent quite different cultures, with very different objectives, and collaboration is not always an easy task. Nevertheless, when effective communication is established among the members of this triumvirate, there is a powerful driving force for the synergistic development of innovative technologies.

This passage could just as well describe collaboration among learning scientists; a still-to-be-developed “engineering” class of professionals who translate scientific results into effective instructional interventions; and faculty content experts. Creating the analogue of that dynamic equilibrium of theory, production processes, feedback to theory, and iterative improvement to processes might well increase both quality and productivity in education the way it has in steel production.

We should also look to the role that information technology (IT) has played in improving productivity in industry. IT systems make it easier to gather, store, and mine the data that is essential to quality control and management. In the 1990s and first decade of the 21st century, industries improved the usability and analysis of data by rolling up the results and displaying them in computer-user interfaces called “dashboards.”

These “business intelligence dashboards” make information about a set of operating components easily accessible, just as a car dashboard does for the components of a car. But unlike a car's dashboard, a management dashboard can be used to improve both daily operations and, over the long term, the overall supply-chain and manufacturing processes themselves.

Can't they play a similar role in education? We believe that they can and that the actions they enable can be transformative. We now present an example of how this use of theory, data, and collaboration has produced meaningful changes in the productivity of postsecondary instruction.

Carnegie Mellon's Open Learning Initiative (OLI)

The OLI is an open educational resources project that began in 2002 with a grant from The William and Flora Hewlett Foundation. Unlike many similar projects, the OLI is not a collection of materials created by individual faculty to support traditional instruction. While OLI courses are most effectively used in combination with classroom instruction, the original and most challenging goal of the project was to develop web-based learning environments that could support individual learners who do not have the benefit of an instructor in achieving the same learning outcomes as students who complete the traditional course at Carnegie Mellon. To meet this goal, Carnegie Mellon built on its institutional strengths in cognitive science, software engineering, and human-computer interaction.

Just as collaboration among key experts and communities drives the development of innovative technologies in steel making, collaboration among diverse experts can drive innovation in education. OLI courses are developed by teams composed of faculty, learning scientists, human-computer interaction experts, and software engineers in order to make the best use of multidisciplinary knowledge for designing effective learning environments. The OLI design team articulates an initial set of student-centered, measurable learning objectives and designs the instructional environment to support students in achieving them.

The instructional activities in OLI courses contain small amounts of explanatory text and many activities that capitalize on the computer's capability to display digital images and simulations and to promote interaction. Many of the courses also include virtual lab environments that encourage flexible and authentic exploration.

Perhaps the most salient feature of OLI course design is the embedding of quasi-intelligent tutors—or “mini-tutors”—within the learning activities throughout the course. OLI benefits from inheriting some of the best work done in the area of computer-based tutoring by Carnegie Mellon and University of Pittsburgh faculty. An intelligent tutor is a computerized learning environment whose design is based on cognitive principles and whose interactions with students are like those of a human tutor—making comments when students err, answering questions about what to do next, and maintaining a low profile when students are performing well. This approach differs from traditional computer-aided instruction, which gives didactic feedback on students' final answers; the OLI tutors provide context-specific assistance throughout the problem-solving process.
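
The behavior described above can be sketched as a simple feedback loop (our illustration of the general idea, not OLI's actual tutoring engine): compare the student's step against an expected solution path, comment when the step is wrong, supply a hint on request, and otherwise stay quiet:

```python
# Minimal sketch of a context-specific "mini-tutor" loop (illustrative only;
# OLI's tutors are built on much richer cognitive models).

SOLUTION_STEPS = [   # expected intermediate answers for one statistics problem
    "identify the null hypothesis",
    "compute the test statistic",
    "find the p-value",
    "state the conclusion",
]

HINTS = [            # one context-specific hint per step
    "Ask what claim you would need evidence to reject.",
    "Use the sample mean, the hypothesized mean, and the standard error.",
    "Compare your statistic against the reference distribution.",
    "Relate the p-value to your chosen significance level.",
]

def tutor_response(step_index: int, student_action: str):
    """Return feedback for the student's action at this step, or None to stay quiet."""
    if student_action == "hint":
        return HINTS[step_index]                     # answer a request for help
    if student_action == SOLUTION_STEPS[step_index]:
        return None                                  # correct: keep a low profile
    return ("That doesn't look right for this step. "
            f"Hint: {HINTS[step_index]}")            # comment on the error in context

# A correct first step draws no comment; a wrong second step draws a targeted hint.
print(tutor_response(0, "identify the null hypothesis"))  # -> None
print(tutor_response(1, "find the p-value"))              # -> corrective hint
```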

In the activity depicted by Figure 1, the student is presented with a graphical representation of a problem and asked to type in (rather than select from a forced-choice menu) the answer. If the student needs help, he or she can click the green “hint” button in the corner. The first hint provides a link which, when clicked, depicts the various steps needed to solve the problem.

Caption: Figure 1. Tutoring

In this image, the tutor has expanded to show the first of four steps in solving the problem. The hints and feedback given by the tutor depend on which part of the exercise the student is attempting. The tutor recognizes when the student has used the scaffolding and hints and has completed the problem. Then the tutor generates a completely new problem.

The graph, problem statement, hints, feedback, and answers are dynamically generated so that the student can work through the activity multiple times, receiving a different problem and context-specific tutoring each time until he or she understands the concept and has developed fluency with the procedure. This provides virtually unlimited opportunities for supported practice.
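
One simple way to realize such dynamic generation is a parameterized template (a sketch under our own assumptions, not OLI's implementation): the numbers are drawn fresh on each attempt, and the statement, answer, and hints are derived from the same parameters so they always agree:

```python
# Illustrative sketch of template-based problem generation (not OLI's generator).
import random

def generate_mean_problem() -> dict:
    """Produce a fresh 'find the mean' problem whose answer and hints match its numbers."""
    values = [random.randint(1, 20) for _ in range(5)]
    answer = sum(values) / len(values)
    return {
        "statement": f"Find the mean of the sample {values}.",
        "answer": answer,
        "hints": [
            "Add all of the observations together.",
            f"The sum is {sum(values)}; divide it by the number of observations.",
            f"There are {len(values)} observations, so the mean is {answer:.2f}.",
        ],
    }

# Each call yields a different problem, so a student can practice until fluent.
problem = generate_mean_problem()
print(problem["statement"])
print(problem["hints"][0])
```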

Embedded assessments and tutors in OLI courses are designed to support students, but they have an additional purpose that is analogous to the data-gathering instruments built into contemporary manufacturing processes: they collect data on student performance that is fed back into the system. It is used to guide the student, the faculty member teaching the course, the team that will produce the next iteration of the course, and learning scientists who use the data to create and refine theories of human learning.

One unique power of contemporary educational technology is its ability to embed assessment into virtually every instructional activity and to collect fine-grained student-learning data from those activities. With the students' permission, we digitally record interaction-level detail of student learning activities in all OLI courses and labs. We then use the data to create an analogue of a “business intelligence system” (call it an “instructional intelligence system”).
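
At the level of data capture, the idea is that every interaction becomes a structured, time-stamped record; the schema below is a hypothetical illustration, not the actual OLI logging format:

```python
# Hypothetical interaction-level event record (an illustrative schema only).
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class LearningEvent:
    student_id: str   # pseudonymous identifier, recorded with the student's permission
    course: str
    objective: str    # the learning objective the activity targets
    activity: str     # e.g., "mini-tutor problem 3"
    action: str       # "attempt", "hint", "correct", or "error"
    timestamp: str

def log_event(event: LearningEvent, path: str = "events.jsonl") -> None:
    """Append one event as a line of JSON, the raw material for later analysis."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

log_event(LearningEvent(
    student_id="s-0042",
    course="intro-statistics",
    objective="interpret a confidence interval",
    activity="mini-tutor problem 3",
    action="hint",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```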

The richness of the data the system collects about student use and learning provides an unprecedented opportunity for keeping instructors in tune with the many aspects of their students' learning. Marsha Lovett, a cognitive scientist at the Eberly Center for Teaching Excellence at Carnegie Mellon, has been leading the team that is designing the instructional intelligence system for faculty, the “instructor's learning dashboard” (Figure 2). The dashboard is part of an instructional intelligence system that can support a new level of effectiveness and efficiency for blended-mode instruction: it analyzes and distills click-stream data that are automatically collected from the students' interactions with the system in order to communicate key information on the class's learning and progress to guide instruction in real time.

Caption: Figure 2. Instructor's Learning Dashboard

Unlike reports from traditional course management systems, the dashboard presents instructors with a measurement of learning for each objective. The dashboard also provides more detailed information, such as the class's learning of sub-objectives, the learning of individual students, and the types of tasks students struggle with the most.
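
Rolling such event streams up into per-objective estimates can be sketched very simply (a crude proportion-correct measure for illustration; the actual dashboard is built on more sophisticated learning-curve models): group logged attempts by learning objective, compute the fraction answered correctly, and flag objectives where the class is struggling:

```python
# Illustrative roll-up of logged events into per-objective class estimates
# (a crude proportion-correct measure, not the dashboard's actual learning model).
from collections import defaultdict

def objective_summary(events):
    """Return, per learning objective, the fraction of attempts answered correctly."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for e in events:
        if e["action"] in ("correct", "error"):
            total[e["objective"]] += 1
            correct[e["objective"]] += (e["action"] == "correct")
    return {obj: correct[obj] / total[obj] for obj in total}

events = [
    {"objective": "interpret a confidence interval", "action": "correct"},
    {"objective": "interpret a confidence interval", "action": "error"},
    {"objective": "choose an appropriate test", "action": "correct"},
]
for objective, score in objective_summary(events).items():
    flag = "  <-- class struggling" if score < 0.7 else ""
    print(f"{objective}: {score:.0%} correct{flag}")
```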

The data collected from all of the students in a class enable instructors to make immediate adjustments to their teaching. The data collected across multiple classes provide information to the team to use in making adjustments to the course design.

Learning-science theory informs the first version of an OLI course, but subsequent versions are refined based on the empirical data collected from users, so that the courses are continuously improving. The OLI is based on a constant research cycle: research informs the design of the course, and the student learning data not only provides feedback to students, instructors, and course-design teams but also prompts further research.

Some OLI courses serve as part of the research environment for the Pittsburgh Science of Learning Center (PSLC). In 2004, the National Science Foundation funded the PSLC to study the nature of human learning. The PSLC dramatically increases the ease and speed with which learning researchers can create the kinds of rigorous, theory-based experiments that pave the way for an understanding of human learning. The PSLC uses OLI courses to facilitate the design of experiments that combine the realism of classroom field studies and the rigor of controlled theory-based laboratory studies.

So what difference does the set of OLI strategies make?

Students register in OLI courses in one of two modes: academic mode, in which students use the OLI material as part of a for-credit, instructor-led class to complement and support the instruction, or open-and-free mode, in which students use the OLI course to support their own learning and may register as guests or anonymously. In the last four years, there have been over 25,000 academic student registrations and over 400,000 anonymous, open-and-free course registrations. OLI serves nearly 100,000 visits by 45,000 distinct visitors from 188 countries/territories each month.

Evaluation studies have been conducted at institutions spanning the full range of Carnegie classifications, from part-time, open-enrollment two-year colleges to full-time, more selective four-year colleges, and have shown accelerated learning, reduced attrition, and significant correlations between OLI learning activities and learning gains. Results include the following:

  • Students using an OLI course at Carnegie Mellon University in hybrid mode successfully learned as much material in less than half the time (i.e., they completed the course in 8 weeks with 2 class meetings per week, while traditional students completed the course in 15 weeks with 4 class meetings per week), and the OLI students demonstrated learning outcomes that were as good as or better than those of traditional students. Further, there was no significant difference in information retention between OLI students and traditional students on tests given more than a semester later (Lovett, Meyer & Thille 2008).

  • Students using OLI in the fully online mode at a large public university with a high proportion of English-as-a-second-language learners achieved the same learning outcomes as students in traditional classes, and many more successfully completed the course. In this study of over 300 students, only 41 percent of the students in the traditional sections completed the course, while 99 percent of the students in the OLI version did so (Schunn & Patchan 2009).

  • In a community-college accelerated-learning study, students in the OLI Logic and Proofs course taught by an instructor with minimal experience in teaching logic learned more material (~33 percent more) than students in traditional instruction and performed at higher levels on shared material (Schunn & Patchan 2009).

  • A study conducted on the OLI chemistry course at Carnegie Mellon University revealed that the number of engaged actions within the virtual lab explained about 48 percent of the variation in final test scores, outweighing all other factors, including gender and SAT scores, as the predictor of positive learning outcomes (Evans, Yaron & Leinhardt 2008). The calculation behind such a variance-explained figure is sketched below.
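
For readers unfamiliar with the statistic, “explained about 48 percent of the variation” refers to the R-squared of a model predicting final test score from the count of engaged actions (the study also accounted for factors such as gender and SAT scores). A minimal sketch with invented numbers, not the study's data, shows how the single-predictor version is computed:

```python
# How "variance explained" is computed for a single predictor: R-squared is the
# squared correlation between predictor and outcome. All numbers below are invented.
import numpy as np

engaged_actions = np.array([12, 30, 45, 23, 60, 51, 18, 40])  # hypothetical counts
final_scores    = np.array([58, 70, 82, 66, 90, 85, 62, 78])  # hypothetical scores

r = np.corrcoef(engaged_actions, final_scores)[0, 1]
print(f"R^2 = {r**2:.2f}")  # proportion of score variance explained by engagement
```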

Conclusion

These and other results show promise for this approach. But colleges are conservative institutions, and a general aversion to change poses a risk to institutional and faculty acceptance of these new approaches. The OLI technology is disruptive, requiring a switch from an intuitive to an evidence-based approach for course development, delivery, and assessment and from a solo content expert to an interdisciplinary team for developing, evaluating, delivering, and improving courses.

Tight funding environments may heighten this inherent resistance to innovation, as instructors and staff fear for their jobs and academic autonomy. Even when colleges recognize the power of educational technology to improve instruction, a “not-invented-here” mentality may also exacerbate reluctance to adopt the concepts central to OLI's effectiveness and power to improve productivity.

We see the OLI strategies in a very different and positive light. We believe that team-based, scientific design of courses combined with rich feedback makes for a much better experience for both students and faculty. In the traditional classroom, faculty operate on their own, with little data about the product they are trying to produce: an improved knowledge state in their students. The OLI approach provides them the kind of rich data stream that has been integral to improvements in effectiveness in manufacturing. By using information technology to do detailed and timely assessment and to develop feedback loops between research science and the production process, higher education can improve both the effectiveness and the efficiency of the educational process.

Resources

1. Barr, R. B. and Tagg, J. (1995) From teaching to learning: A new paradigm for undergraduate education. Change 27, pp. 6. (Nov/Dec)

2. Baumol, W. J. and Bowen, W. G. (1966) Performing arts: The economic dilemma, Twentieth Century Fund, New York, NY.

3. Evans, K., Yaron, D. and Leinhardt, G. (2008) Learning stoichiometry: A comparison of text and multimedia formats. Chemistry Education Research and Practice 9, pp. 208-218.

4. Chandler, A. D. (1977) The visible hand: The managerial revolution in American business, Harvard University Press, Cambridge, MA.

5. Lovett, M., Meyer, O. and Thille, C. (2008) The Open Learning Initiative: Measuring the effectiveness of the OLI statistics course in accelerating student learning. Journal of Interactive Media in Education. Retrieved from http://jime.open.ac.uk/2008/14/

6. McLean, A. (2006) The science and technology of steelmaking: Measurements, models, and manufacturing. Metallurgical and Materials Transactions 37, pp. 3.

7. Nicol, D. J. and Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education 31, pp. 199-218.

8. Schunn, C. D. and Patchan, M. (2009) An evaluation of accelerated learning in the CMU Open Learning Initiative course Logic & Proofs. Technical report, Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA.

9. Steif, P. S. and Dollár, A. (2009) Study of usage patterns and learning gains in a web-based interactive statics course. Journal of Engineering Education 98, pp. 321-333.

Websites

10. Measuring Up 2008, http://measuringup2008.highereducation.org

11. Open Learning Initiative, http://oli.web.cmu.edu/openlearning/

Candace Thille is the founding director of the Open Learning Initiative (OLI) at Carnegie Mellon University. She is the co-director of OLnet and serves as a redesign scholar for NCAT, as a fellow of the International Society for Design and Development in Education, and on the Global Executive Advisory Board for HP's Catalyst Initiative. She worked with the US Department of Education on the 2010 National Education Technology Plan.

Joel Smith is vice provost and chief information officer at Carnegie Mellon University. He also directs Carnegie Mellon's Office of Technology for Education and is co-PI on the OLI.
