The Innovative University, by Clayton Christensen and Henry Eyring

As part of a broader research project, I recently read Clayton Christensen and Henry Eyring’s The Innovative University: Changing the DNA of Higher Education from the Inside Out. There is a temptation to be snobbish about books that have the overused ‘innovative’ in the title, so I started reading with a skeptical mindset. But I found the book to be immensely informative and thoughtful, and so I wanted to share what I learned. I’m going to summarize the three main strands of thought that I pulled from the book: (1) A history of American universities, particularly how accredited universities today are bound to what the authors call a ‘Harvard model’ of what a university should be, a model that came to be for specific historical reasons and that does not serve all of us well today; (2) An application of the theory of ‘disruptive innovation’ (the term here being properly used by its originator) to the higher-education market; and (3) Case studies and ideas on how digital technologies can improve the delivery and price of higher education. Then I’ll close with a small criticism and my heterodox take on the big market failure in higher education today.


(1) One major bias or barrier to really innovative higher education reform is that the people who accredit and run universities, the people who hire college graduates, the intellectuals who shape the conversation around higher education, and our legislators are almost all graduates of relatively elite universities. They have thus been acculturated into thinking that that is the way that a university must be; they have a self-serving bias in thinking that that is what equips you to be a good employee/leader/citizen; and they have no exposure to people who have different needs and expectations for higher education. The authors of and the leaders profiled in The Innovative University are mostly people who have at one time been Harvard faculty. But they make great efforts to control for this bias by (i.) revisiting in detail the history of American universities to understand why they came to be that way and (ii.) studying today’s ‘down-market’, non-selective universities and their students.

What are some things we learn when we study universities this way? First, you learn that a lot of things that we take for granted have really arbitrary historical origins. For example, why do we have summer break at four-year colleges in the U.S.? Is it because students had to go work on the farm in the summer? Not so. In fact, summer breaks can be traced to the first couple decades of Harvard’s existence, when instructors found that the couple-dozen students at the college at the time, some as young as 14 years old, were more likely to break out into fist-fights while studying ancient Greek during the hot, malarial months. Are there nonetheless good reasons for a long summer break today?  Maybe. Arguably this model works very well for Harvard, whose professors need dedicated time for valuable independent research, whose students can secure stimulating summer internships and research opportunities from their freshman years, and whose brand name allows it to rent out its dorms and lecture halls for lucrative summer schools and summer camps. But summer breaks may be a huge waste at other, ‘down-market’ universities, whose students’ feasible internships will not offer as much stimulation and advancement as classes would, whose professors’ research is arguably less valuable than their instruction, whose students are far more likely to get frustrated with the time required to get a degree and drop out, and whose empty facilities are a significant strain on their operating budgets.

Or take, for another example, the assumption that university professors must serve dual roles as teachers and researchers. This can be traced to the early 1700s, and the influence of Isaac Greenwood, Harvard’s first chair of mathematics and natural philosophy, who learned of Newton’s new Laws on a trip to London, and led a push to get Harvard faculty to use the lab equipment necessary to demonstrate these laws to Harvard undergraduates. At the time, it made sense to have instructors doing basic research, to ensure that they didn’t get basic things like ‘does a moving object lose speed if no counteracting force is applied to it?’ wrong, and because scientific knowledge was at a point where undergraduates could be taught the latest insights. But now that cutting-edge research in physics is not accessible to freshmen, it makes less sense to allocate the tasks of cutting-edge research and freshman instruction to the same group of people; in fact, it’s conceivable that today’s experts may be immersed in their specialties to an extent that leaves them unable to communicate the basics to outsiders. Again, the dual role of the professor-as-researcher-and-instructor may still be defensible at Harvard, where professors can pull their own weight in research grants and where many students want to move on to graduate-level original research themselves. But it doesn’t make much sense for down-market universities to be inflexibly committed to this duality. Also, the early research focus ultimately evolved into today’s ‘publish or perish’ — universities are now in the puzzling situation of simultaneously (a) claiming that they are doing their best to improve undergraduate instruction and (b) making frequent publication their professors’ near-exclusive career incentive, giving almost no incentive for good instruction.

A large number of university features that we take for granted trace to the Harvard presidency of Charles Eliot in the late 19th and early 20th century. Eliot gave the Harvard “Faculty of Arts and Sciences responsibility for all college-level instruction.” Prior to this, high-school students could apply directly to Harvard College, Harvard Law School or Harvard Medical School. Now, you need a bachelor’s to apply to the latter two. Eliot coupled this move with an attempt to reduce the undergraduate curriculum from four to three years — but this was vetoed by the 1907 financial crisis, which made Harvard FAS unwilling to forego 25% of their undergraduate tuition. Requiring 7 years of very expensive education to become a general practitioner makes little sense today and is a serious financial burden on our health-care system.  Eliot initiated a move toward a system of lifetime tenure and great faculty autonomy, partly to attract scholars in a time of under-supply and low social tolerance for many ideas. Again (sorry for repetition) the situation may be different today, particularly at down-market universities. Eliot emphasized breadth, aiming to attract the world’s leading scholars in all subjects, and requiring Harvard undergraduates to begin their studies by fulfilling broad distributional requirements. Today’s down-market universities, though, could benefit from product differentiation (“we don’t serve Slavic studies here”; “we do chemical engineering, they do mechanical; apply/transfer accordingly”) and putting technical specialization up front in the curriculum (i.e., arranging the curriculum so that if a student drops out after year 1, s/he already has a few technical certificates, rather than putting the technical certificates in year 4, and the distribution requirements in year 1). 
Eliot found (to his disappointment) that the college’s football team was a key to raising donations from alumni, and invested in its development and league-memberships accordingly; today’s universities would do better with less emphasis on athletics. Eliot placed a ‘German-style’ research university (PhD programs) ‘on top’ of the ‘English style’ liberal arts college; he and his successor began the dual requirement of distributional requirements and a specialization in a major. While this gave students the advantage of taking graduate-level courses when they preferred and combining breadth with specialization, it also pulled the undergraduate curriculum to become preparation for PhD level research and increased the number of courses required for graduation.

All these taken-for-granted characteristics of the university and more came to be for historical reasons. They’ve since been solidified by replication (i.e., people who found new universities model them on the ones they graduated from) and by accreditation and college-rankings standards. After WWII, as veterans began using their GI Bill benefits to go to college, the U.S. government empowered established universities to run a sort of ‘peer review’ process in accrediting the new universities that sprang up to serve them. Today, regional accreditors are frequently criticized on the grounds that they emphasize colleges’ inputs over their outputs (i.e., research facilities and faculty credentials over student employment and measurable learning progress). In 1967, the Carnegie Foundation created a simple taxonomy of different types of higher-ed institutions according to their emphasis on research and doctoral programs, intending the classification system to be used for its own charitable purposes. But the Carnegie taxonomy quickly became seen as a normative, hierarchical ranking system, with colleges desperately seeking to “climb the Carnegie ladder” and proudly announcing each new step. We all know about the problems with the U.S. News & World Report ranking system, which incentivizes schools to compete with luxurious student amenities and otherwise game the system. The U.S. News rankings don’t just put pressure on colleges to make irrational decisions; they also put pressure on students and parents — a student who would prefer to attend a college with lower tuition and fewer luxurious amenities (and hence a lower ranking) will know that his/her prospective employers will rate the value of that university’s degree according to the U.S. News rankings.

So, you get the gist here: There are a lot of historical features of the ‘Harvard model’ that are not serving today’s down-market institutions and students, but which are not frequently questioned and are actively solidified by the Carnegie ladder, regional accreditors, and the college-rankings system. The book brings home the contrast between the needs of more typical college students and the bounds of the ‘Harvard model’ by following Kim Clark, who unexpectedly stepped down as Dean of Harvard Business School in 2005 to move to the unheard-of BYU-Idaho (formerly Ricks College). There, Clark has made humanitarian efforts to reach out to all high-school graduates, to women (often Mormons) who felt they should drop out of college to become mothers, and to ‘at risk’ students who might not be able to complete a full bachelor’s curriculum, but whom Clark hoped to equip with technical certificates and employability along the way.


(2) The term ‘disruptive innovation’ gets ridiculed a lot for its overuse, but the original meaning of the term actually captures a really important phenomenon. The idea is like this: One day, an innovative new product, like the computer, arrives on the scene. Because it’s new and innovative, it’s expensive, and it gets sold to businesses, governments, and the wealthy. A lot of different businesses compete to sell computers to these clients. They try to differentiate themselves and outdo their competitors by offering ever-faster speeds and ever more widgets and functionalities, what Christensen calls “sustaining innovations.” The computer (the product) gets better and better and higher-functioning and more expensive. But then one day, someone thinks, “Who actually needs all this speed and all these widgets? Why don’t we just offer a super-stripped-down computer with the bare minimum functions and sell it to regular people on the cheap?” The big established players don’t like the idea of making a low-quality product and don’t think it could ever work. So this “disruptive innovation” usually comes from a new company, not one of the established ones; the disruptive innovation first wins consumers at the low end of the market with simplicity and low prices, but then eventually wins the high end of the market as well, once the simplicity and low cost of the innovation are enough to compensate high-end users for slightly fewer widgets and whistles. The disruption of the mainframe and minicomputer industries by Macs and PCs is a classic example, but Christensen says this is a common product life-cycle.

Does this sound like something that could happen to higher education? Christensen and Eyring answer with a qualified ‘sort of.’ The evolution of the university over the past century is certainly an example of sustaining innovations going beyond consumers’ needs: Too many universities have too many departments, too many indoor rock gyms and athletic teams, and too few student-oriented faculty. There’s now widespread attention to the problem of high costs. So, Christensen and Eyring think there’s a real opportunity for universities to do well by serving ordinary students in a more cost-effective and stripped-down manner. But they think that traditional universities’ ability to pass on “wisdom” from established scholars, to facilitate face-to-face interaction among peers, and to produce original research — what Christensen and Eyring call “discovery, meaning, and mentoring” — is unique. These are things that cannot be replicated, they argue, outside the traditional university, and so we won’t see any massive disruption by for-profit and online providers (more on that later).

Instead, they hope for incremental, cost-lowering changes within universities. This depends on a conceptual rethinking and a bunch of specific changes. The conceptual rethinking is that we should stop seeing the higher-ed space as a ladder, with every university competing to climb to the top rung with Harvard. Instead, we should think of the higher education space as a ‘landscape,’ with universities differentiating on their core advantages, attracting particular kinds of students with niche offerings, and competing on price as well as on rankings. Christensen and Eyring highlight some colleges that are trying to become cheaper and more stripped-down, praising BYU-Idaho, under Kim Clark, for cutting athletic teams, reducing the number of majors, and optimizing the logistics of building use in order to move from a two-year community college to a four-year bachelor’s-granting college without increasing annual costs. They also note that if we use cost-per-degree-granted as our primary metric, the number one way most colleges could improve would be to increase their graduation rates and decrease the number of students who stay on for a fifth or sixth year. To this end, they advocate some basic structural changes like a ‘modularized’ curriculum. The idea is that, right now, a lot of students who take a fifth or sixth year to graduate do so because they switched majors at some point and were unable to apply their former major’s classes for credit in their new major. Universities could instead start grouping classes into modules, any of which could be considered a component of a couple of different majors. So, for example, a ‘quantitative methods’ module, including calculus, linear algebra, statistics and/or computational statistics, could be ‘stuck on’ to any social science, science, or engineering BA. A ‘business basics’ module that included accounting and finance could be ‘stuck on’ to both an economics and a healthcare management BA, etc. 
If students’ interests or career goals change, they could then switch through a variety of majors without losing too much progress toward graduation.
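To make the modular idea concrete, here is a toy sketch of how shared modules preserve credit across a major switch. This is my own illustration, not from the book, and every module, major, and course name is hypothetical:

```python
# Toy model of a 'modularized' curriculum: majors are built from shared
# modules, so switching majors forfeits only the non-shared modules.
# All names are hypothetical.

MODULES = {
    "quantitative_methods": {"calculus", "linear_algebra", "statistics"},
    "business_basics": {"accounting", "finance"},
    "econ_core": {"micro", "macro"},
    "healthcare_core": {"health_policy", "clinical_operations"},
}

MAJORS = {
    "economics": ["quantitative_methods", "business_basics", "econ_core"],
    "healthcare_management": ["business_basics", "healthcare_core"],
}

def retained_courses(old_major: str, new_major: str) -> set[str]:
    """Courses completed for old_major that still count toward new_major."""
    old = set().union(*(MODULES[m] for m in MAJORS[old_major]))
    new = set().union(*(MODULES[m] for m in MAJORS[new_major]))
    return old & new

# A student switching from economics to healthcare management keeps the
# whole 'business_basics' module:
print(sorted(retained_courses("economics", "healthcare_management")))
# → ['accounting', 'finance']
```

The design point is that credit transfer becomes a set intersection over shared modules, rather than a course-by-course petition; the more modules two majors share, the cheaper the switch.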

They also suggest that there’s an opportunity for universities to change their professors’ career incentives. For example, a university might offer multiple tenure tracks — professors would be rewarded not just for their esoteric original research, but also for outstanding teaching, course development, textbook writing, integration of insights from others’ original research, and even publication for general audiences that improve non-expert access to fields. They also suggest rethinking tenure. Now, to be in favor of ‘rethinking tenure’ is not to be in favor of ‘firing professors at whim, particularly for having unpopular opinions.’ (If anything, up-or-out tenure may only increase political censorship in the academy, as faculty committees will vote down professors with unpopular views, while more practical-minded administrators would have been happy to have the professor stay on and continue teaching.) Most professions and institutions build up long-term relationships of mutual respect with their employees that prevent abusive and unfair dismissals and there’s no reason academe can’t be the same way. Rethinking tenure could improve the teaching productivity of junior faculty, who would feel less anxiety and pressure to publish prolifically, and also improve the productivity of senior faculty, who would feel less apathetic.

Finally, they suggest that brick-and-mortar universities can survive in the digital future by differentiating themselves from low-cost, for-profit online alternatives by emphasizing a commitment to moral instruction and mentoring (more on this below). Altogether, then, they list twelve “recommended alterations” for universities. They present these as a table, with a column of “Traditional University Traits” and a column of “Recommended Alterations.” Since it’s the main takeaway of the book, I’ll reproduce them here as a list, with each “Traditional University Trait” separated by an arrow (“—>”) from its “Recommended Alteration”:

  • Face-to-face instruction—>Mix of face-to-face and online learning;
  • Rational/secular orientation—>Increased attention to values;
  • Comprehensive specialization, departmentalization, and faculty self-governance—>Interdepartmental faculty collaboration and heavyweight innovation teams;
  • Long summer recess—>Year-round operation;
  • Graduate schools atop the college—>Strong graduate programs only, and institutional focus on mentoring students, especially undergraduates;
  • Private fundraising—>Funds used primarily in support of students, especially need-based aid;
  • Competitive athletics—>Greater relative emphasis on student activities;
  • Curricular distribution (general education) and concentration (major)—>Cross-disciplinary, integrated GE and modular, customizable majors, with technical certificates and associate’s degrees nested within bachelor’s degrees;
  • Academic honors—>Increased emphasis on student competence vis-à-vis learning outcomes;
  • Externally funded research—>Undergraduate student involvement in research;
  • Up-or-out tenure, with faculty rank and salary distinctions—>Hiring with intent to train or retain; customized scholarships and employment contracts; minimized rank and salary distinctions consistent with a student-mentoring emphasis;
  • Admissions selectivity—>Expansion of capacity (for example, via online learning and year-round operation) to limit the need for selectivity.


(3) The really big talked-about development in higher education today is the rise of MOOCs. There’s an argument to be made that all this excitement is just noise: Universities have made efforts to provide low-cost distance education online in the past, and it didn’t upend the higher-education market then; completion rates have always been very low in distance-education courses. But in recent years there have been major improvements in internet connectivity, download times, and online course platforms, which could provide the basis for modestly effective but super-low-cost delivery of higher education, through MOOCs and other channels. Christensen and Eyring are cautiously optimistic about these changes and the rise of for-profit online universities as well, but they stop short of all-out disruptive-tech boosterism. They do not expect that students will soon take classes from Coursera for free and get their BAs and MBAs for a few hundred dollars in course registration and testing fees plus the cost of rent and an internet connection. Instead, top-tier universities will continue to provide in-person instruction (and people will continue to compete desperately and pay anything to access them), while second-tier universities will incorporate MOOCs for ‘flipped classrooms’ and similar uses. As they write,

The most powerful mechanism of cost reduction is online learning. All but the most prestigious institutions will effectively have to create a second, virtual university within the traditional university, as BYU-Idaho and SNHU (Southern New Hampshire University) have done. The online courses, as well as the adjunct faculty who teach them, should be tightly integrated with their on-campus counterparts; this is an important point of potential differentiation from fully online degree programs. To ensure quality, universities may also decide to limit online class sizes or pay instructors more than the market rate. Even with such quality enhancements, online courses will allow traditional universities not only to save instructional costs but also to admit more students without increasing their investment in physical facilities and full-time faculties.

It’s hard to predict how online courses will be used in a decade. But the authors highlight some incremental changes being made now, which they recommend to other universities. At BYU-Idaho, for example, online courses have been particularly useful in allowing women who dropped out when they became mothers to finish their degrees. The school actually made it a requirement that all students take at least one online course, as a way of proactively developing their and the university’s comfort with the medium. They’ve found that online instruction is not best used to replace in-person classes, but, rather, for blended and ‘flipped’ courses. In flipped courses, students watch lectures online (sometimes lectures that are specifically tailored for the medium, e.g., including small computer-graded quizzes throughout), then come to class in person to work on problem sets, to talk over more advanced applications with the instructor, etc. And BYU-Idaho has also used its online courses toward its humanitarian goal of giving people in developing countries access to its courses and technical certificates.


Here are some more thoughts and criticisms: First, I think most of my friends reading this blog post are already prepared to criticize me and these authors for advocating a pre-professional/vocational vision of the university. But we’re not. There’s zero inconsistency in maintaining two positions simultaneously: (1) that universities should pass on ethical and aesthetic learning, and facilitate students’ philosophical expansion, exploration, asking of intrinsically important questions, etc.; and (2) that universities should find ways to do so in a financially sustainable and reasonable way, and mid- and lower-market universities in particular should not yield 50% dropout rates, 6.5 year average graduation times among the graduating half, heavy student debt, and un- and under-employment of their graduates.

In fact, my main concern with The Innovative University is that, if anything, it puts too much faith in universities as providers of the ‘soft’ goods of mentoring and moral-character formation. At one point in the book, the authors observe that what they call “cognitive outcomes” (that is, measurable learning) for one particular online course are as good as cognitive outcomes for a similar course offered at a traditional university, but students pay more for the course at the traditional university. They infer from this disparity that students must be paying for real value, and so must be receiving moral instruction, wisdom, and mentoring in return. But there is of course a simpler, more pessimistic interpretation of this data: Employers value traditional universities more because they’re familiar, and employers are skeptical of unfamiliar routes, so the students are paying a premium just to reassure prospective employers; they’re not necessarily paying that premium to get some pedagogical value for themselves. In other words, the evidence is consistent with higher-ed consumers being stuck in a “prisoner’s dilemma”: we all might prefer some low-cost, no-frills education, but as long as employers will give even a slight nod to students from traditional prestigious universities, we’ll all face immense pressure to choose the traditional one. I’m not prepared to say that Christensen and Eyring are incorrect that professors provide valuable mentoring and moral education. My worry is more that their assertion could be used to bolster beliefs in the irreplicability of in-person instruction and to resist calls for experimentation and change. For example, suppose there was a person who was very good at self-directed learning and got a broad liberal arts education in college in a curriculum that covered nearly all of the most significant works in moral philosophy and social thought.
Should this person be required to spend two years and $140,000 moving away and receiving “meaning and mentoring” from business-school professors in order to, say, get an MBA to get promoted up from the analyst level in a consultancy, rather than spending a year or two of self-directed learning tearing through some edX and Coursera courses and textbooks to get really hard technical skills in computational statistics, financial valuation, accounting, and optimal pricing theory, etc.? While many students will still be happy to go the traditional MBA route, I think we should also find a way to properly credential self-directed learners, particularly in an era in which so much work is self-directed and unstructured. I hope education entrepreneurs will seize the opportunity to develop “competency-based” credentialing for these self-directed learners. And faculty members should recognize they have a bias when they tell accreditors, legislators, and prospective students, “No, really, you need to separate from your spouse, move cities, and receive our wisdom in person”; those of us who benefit from academe as it currently works should listen closely to those who currently see it as a barrier to their goals.
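The prisoner’s-dilemma structure of this credentialing pressure can be made concrete with a toy payoff model. This is my own illustration, not from the book, and all the numbers are arbitrary; the one load-bearing assumption is that the employer’s prestige edge outweighs the tuition savings:

```python
# Toy payoff model of the credentialing dilemma sketched above. Numbers are
# hypothetical. Each student picks a degree; employers give a hiring edge to
# the prestigious credential. If that edge outweighs the tuition savings,
# choosing prestige is individually rational even though everyone would be
# better off if all chose the low-cost option.

TUITION = {"low_cost": 1, "prestigious": 5}  # arbitrary units
BASE_JOB_VALUE = 10
PRESTIGE_EDGE = 5  # employer's nod to prestige, assumed > tuition difference

def payoff(mine: str, others: str) -> int:
    """Net payoff of my degree choice, given what the rest of the pool chose."""
    net = BASE_JOB_VALUE - TUITION[mine]
    if mine == "prestigious" and others == "low_cost":
        net += PRESTIGE_EDGE  # I stand out in the applicant pool
    elif mine == "low_cost" and others == "prestigious":
        net -= PRESTIGE_EDGE  # I'm passed over for prestigious rivals
    return net

# 'Prestigious' strictly dominates, so the equilibrium is everyone paying
# high tuition (payoff 5) even though mutual low-cost would pay 9:
assert payoff("prestigious", "low_cost") > payoff("low_cost", "low_cost")
assert payoff("prestigious", "prestigious") > payoff("low_cost", "prestigious")
assert payoff("prestigious", "prestigious") < payoff("low_cost", "low_cost")
```

Note what happens if the edge shrinks: once PRESTIGE_EDGE falls below the tuition gap, the dilemma dissolves and the low-cost option dominates, which is why real price competition in higher education depends on employers trusting alternative credentials.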

So I wish that the book had done more to highlight and promote bigger, more fundamental changes in higher education that could be facilitated by new digital technologies — particularly, any of the nascent efforts to establish certification for self-directed learners who are taking advantage of online courses, both in the U.S. and abroad. But my sense is that Christensen and Eyring did not focus on these because the evidence suggests that the users of these open-access online courses have so far been a relatively privileged, college-educated set. Christensen and Eyring think it’s a humanitarian priority to focus on less privileged students, at universities that graduate only fractions of their entering classes. The problems and frustrations of those students, and the benefits that would accrue to society if we solved them, make mine seem utterly trivial.

Finally, a theoretical question and an attendant concern: We often hear that higher education today is a ‘bubble’, but there’s a problem with this claim. If higher education is a bubble, what’s the market failure that’s to blame? After all, when people choose to pay lots of money for iPhones, we generally assume they’re rational, informed consumers paying more to get more; when some price is irrationally high, we can usually blame some monopoly or cartel or psychological bias. So what could the market failure be in higher education? This is a big question, but I’ll offer one comparison that occurred to me: Emergency rooms are a well-known market failure. The problem in an emergency room is that your life, which is the condition of your enjoyment of everything that is valuable to you, is immediately at risk, and so you can’t really shop around and ask all the emergency rooms in town to quote you a price. The hospital knows it can charge any price to the emergency-room patient, and this is why we rely on a mix of (i.) medical professional ethics, (ii.) government price controls, and (iii.) collective bargaining via insurers, to control costs here. In a similar way, in a country that aspires to meritocracy, we have made  university affiliations one of the main determinants of social status and we do not shop for a bargain when it comes to social status — humans will pay almost anything to move up the hierarchy. That’s why there’s little real competition on price in higher education–there’s no university that advertises itself as “85% as good as Harvard, at 60% of the cost” and even if there were, few who had the chance to go to Harvard would go to it. 
Harvard could cut out most of its in-person instruction, tell its undergraduates to take many of their courses through Coursera, and triple its tuition, and I predict that demand for entrance to the university would barely fall, because one of the college’s biggest sources of value to students is as a gate-keeper of social status. Status is virtually priceless–like your life in the emergency room, it’s the condition of so much else you hope to enjoy in life–and that’s the market failure here. There’s no competitive pressure on elite universities to control costs because status is priceless. Elite universities can charge anything they like, and as long as elite universities set the pace for the universities that imitate them, the market won’t control tuition costs. The market won’t, on its own, value self-directed learning through online classes as much as it values degrees from status-conferring, gate-keeping institutions; the market won’t pressure law schools to adopt a two-year curriculum, as President Obama has advocated; the market will put zero pressure on Yale and Harvard to lower their tuition prices. The market won’t protect us consumers here, because we’ll keep paying anything to place our kids a little higher in the social hierarchy. So, instead, it’s incumbent upon university leaders to make an ethical choice to control their costs and restrain the university arms race, even when it is against the interests of their faculty and employees.