Getting Off the Island

Fantastic in-depth NYT article today about the unconscionable gap in college completion between students from wealthy and poor families. I wish they’d invested in multimedia-ing it up the way they did the gorgeous, groundbreaking avalanche story the other day — it could’ve been even more powerful. But at least the Grey Lady wrote a great piece and paired it with a video companion.

It’s a heartbreaking story. We know the basic narrative already, but it hits you fresh every time you read about another young person’s story. And here there are three. Each of them illustrates a different facet of the tremendously complex web that’s been woven to snag low-income students before they cross the finish line. You can read all the statistics as a citizen, but you read about Angelica or Melissa or Bianca as a parent.

It’s also a maddening story that got me riled. More on that below.

A couple of things ring out across the stories. Money is a huge barrier for all of them, and strong character is what enables kids to push through the crapstorm if they’re going to. Even just the financial challenge on its own is a complex web — navigating financial aid, dealing with the stress of debt accumulation, the social isolation of feeling like the only kid without a credit card when you’re out at the bar on Thursday night, being asked to cough up $10 for an appointment with the college counselor when your part-time job pays less than that per hour.

Beyond that, it’s a whole soup of stuff that gets in the way. Undermatching. Communication breakdowns. Parents not being in the picture or not pulling their weight. Boyfriends who add drama and draw emotional energy away from school. The “soft bigotry of low expectations.” The pain of moving 200 miles away when you’ve spent almost no time outside your hometown since you were born. It’s amazing how much life can turn on something like a car battery dying.

The image of getting off the island — i.e., getting the f?#$ out of Galveston — is a powerful one. The girls in this story are dead set on that goal, and Angelica and Melissa especially keep trying to bounce back from every setback. Angelica is so hell-bent on Emory that she signs a $40K loan when she shows up for freshman year and finds out she’s missed the window on financial aid. That reminded me of this amazing story. We need more organizations like Portmont College and OneGoal that take non-cognitive skills seriously and treat character development as central to solving the college completion problem.

I also found myself getting angry as much as sad at points. Part of this is anger at the country for being so complacent about the problem. If the American Dream is about equality of opportunity — if that’s in theory the one thing that liberals and conservatives can agree on — then why are we letting this problem get worse? Then again, there’s a lot we’re being complacent about. This one just seems even more fundamental and potentially bipartisan than, say, gun control.

Some of the Emory folks quoted in the article also make you want to shake them. This is a place that deserves credit for admitting twice as many Pell-grant students as Harvard (shame on Harvard). But it pours $94 million into financial aid and then won’t get off its ass to take a no-excuses approach to getting low-income kids to graduation? The vice provost for financial aid comes off as if he can’t even conceive of someone not understanding the byzantine policies or not feeling confident enough to fight for herself. And the dean for academic advising: ‘we reached out to her and she didn’t respond.’ Really?! You didn’t just go find her? She’s, um, enrolled at your school.

Angelica, I really, really hope you’ll go back to Emory and finish up. You’re so close. They’re obviously not going to go too far out of their way to make it happen for you. But I hope you’ll do it for yourself anyway. Get off the island.

Oh — and in your honor, I’m putting a bit of Clay Davis in Emory’s stocking on Tuesday:

University Ventures White Paper: What about the Middle?

As Dai predicted in a recent post, the new white paper by University Ventures is a thoughtful, useful overview of the future of online higher education, cutting through the recent clamor to offer a clear-headed separation of hype from true disruptive trends. It is worth a read for anyone who is intrigued or overwhelmed by the MOOC mania of the past year.

Among their compelling conclusions are that degrees are not going to disappear soon and that MOOCs will remain more of a playground for the elite and a sideshow for everyone else until models are developed that convert them into the economic returns less advantaged students seek from higher education. They also rightly emphasize that the greatest potential of online courses rests in their ability to efficiently facilitate the move to competency-based assessment and adaptive learning rather than in the content itself.

In their predictions of the growth in market share of online programs, however, they seem to fall into the online-only vs. traditional-classroom dichotomy that we continue to believe is false. This graph captures their projection that, in the decade following 2017, effective, synchronous online models will have overtaken almost all of the market except for the highest-cost, biggest-brand institutions that the elites will continue to fiercely protect.

[Graph: University Ventures’ projection of online vs. traditional market share over the decade following 2017]

It certainly seems inevitable that the current trends will erode the perceived value – and therefore market share – of most of the models below the top tier. But it would be most useful to add a third line to this graph: hybrid online-physical models. As they note at the start of their paper, the past two decades are littered with the skeletons of predictions that the internet would make physical location irrelevant (my personal point of reference is that I constantly have to jump on long plane flights for short meetings despite the increasing availability of good videoconferencing technology). It is clear that many students are unlikely to pay $30,000 more, or even $5,000 more, for a traditional in-person college experience of the same or lower quality than what they could get online. But what about $500 or $1,000?

As in any business, we need to know our customer, and the extensive data on student behavior collected in the book Academically Adrift offers some insights. It found not only that students of all backgrounds were spending significantly more time socializing and pursuing extra-curricular activities than studying, but also that lower-income students were working and taking out loans to enable them to have the “college experience” rather than just to pay their tuition. The idea that this entire market segment will move online ignores the deep cultural resonance of the college experience in the US and the value that is likely to continue to be placed on in-person social interaction (the deep, compelling NYT piece on three students with ambitions of using higher education to escape Galveston, Texas – and the struggles one of them has in realizing her potential while in her home environment – reinforces the latter dynamic). And, most importantly, the real and perceived educational benefits of the in-person components of a hybrid model could be well worth a moderate premium – a dynamic that could be important for students in developing countries and other market segments (e.g., older students).

Where exactly the hybrid line will lie is unclear, but there is certainly the potential for it to sit to the right of the online-only line, and we look forward to experimenting to get it there.

A final note on the white paper. Its useful overview of the implications of US regulatory trends for online models is disturbing and further highlights the potential for low- and middle-income countries to surge ahead in adopting this innovation. We previously summarized the byzantine regulatory system that brings misery to anyone trying to set up a new university – particularly an innovative one – in India. But compared to the barriers that federal and state policy are now throwing up in the US, the Babu Raj suddenly doesn’t look so bad. Add to that the drag that the deeply entrenched vested interests in the US higher education system will continue to have on the pace of innovation (the authors suggest that resistance to innovation may be greater in higher education than in any other sector) and you create a ripe environment for forward-looking countries with less baggage to get well ahead (could quality hybrid and online higher education be Africa’s next leapfrog, following mobile communications and money transfer?).

Navigating the Labyrinth of Learning Assessment Methods

We have previously said that we plan to robustly measure what Kepler students are actually learning so that we can improve the model, enable teachers to identify and respond to students’ needs (so-called adaptive learning), and, ultimately, ensure we are providing good educational value for money. But how do we realize that ambition? While, as I noted in a recent post, the incentives have not been in place for meaningful, objective assessment in most institutions, many talented people have been grappling with that question for decades with mixed success. An intense debate continues to rage, and there are a dizzying number of tests and methods (one catalogue claims to provide an overview of more than 3,000 tests) and therefore an even greater number of acronyms (the EPP was once the MAPP, but we understand it may change its name back to Prince).

To navigate our way through this labyrinth, we have sought to work sequentially through a series of key questions to eventually – and hopefully – arrive at an assessment system that is well suited to our model.


What are we trying to measure? 

Most of the heated debate is on this issue, though it often manifests in arguments over the later questions. The attributes that experts propose to measure generally fall into three categories:

  1. Generic Skills – These are abilities that underpin effectiveness in all fields, including analytical reasoning, problem solving, and written communication;
  2. Domain Skills – The core skills and knowledge needed to operate effectively in a given field such as economics or history;
  3. Character – Also referred to as moral or social skills, this includes attributes such as creativity, social intelligence, and persistence (or “grit”) that can play a key role in many modern jobs as well as in life more broadly.

It is clear that an institution like Kepler, which seeks to prepare graduates for effectiveness in the workplace and other roles in society, needs to measure generic skills. It is these abilities that form the bedrock on which graduates can then build more specialized skills and knowledge throughout their careers, and we suspect that it is these skills that employers will emphasize when we sit with them to understand their needs in detail. While the other two categories are clearly important, we will need to weigh carefully if and how to actively measure them. And the line between generic and domain skills is often blurry: the American Historical Association’s core standards for a history graduate – finding and analyzing evidence, reading critically, writing precisely – sound an awful lot like generic skills with a history veneer.


In what context do we measure these abilities?

For the generic skills in particular (though I suspect there may be similar debate over measuring character attributes if and when they attract similar attention), there is substantial disagreement over the context in which these abilities can be most accurately measured. Specifically, three approaches are typically promoted and used:

  1. Disaggregated – This approach seeks to break the skills down into a number of component parts (e.g., verbal reasoning, quantitative reasoning, spatial reasoning) and assumes that strength in them reflects similar strength when they are integrated in practice. The best-known example of this is the GRE.
  2. Integrated Generic – This approach assumes that the skills are best measured as they are holistically used in practice, using generic content (e.g., analysis of a business decision), with the assumption that the sum of the core components does not necessarily equal the whole.
  3. Integrated Domain – This approach also seeks to assess the skills in practice, but further assumes that skills cannot be isolated from the context of content (a common argument is that an English major will have an advantage on some generic tests over a physics major) and so must be assessed within each specific domain.


What broad method should we use?

The core of this question is whether the method is standardized or not. Some argue that, regardless of the answers to the above questions, standardized tests are never able to assess the totality of what we value from a college education and that more bespoke methods should be used instead. A variety of such alternatives have been developed and are in use, including common frameworks and rubrics that educators can use to judge student products within a specific class or over a college tenure (e.g., through an e-portfolio that contains samples of the student’s work). In one interesting example, Alverno College has its students make a presentation to – and receive feedback from – the surrounding community each year to track progress in oral communication and other skills.


How do we select a specific method and/or test?

Finally, even after we have determined what we want to measure and how we broadly want to do so, we still need to determine which of a range of standardized tests we should use and/or how we should optimally construct a bespoke assessment method. For this, we need to define and weight a set of criteria that will ensure the final assessment system is best suited to the objectives of the institution (a rough sketch of how such a weighted comparison might work follows the list below). Initial criteria that could apply to a teaching-focused college like Kepler include:

  • Efficiency – What will the method cost in terms of staff and student time as well as funding? How quickly can the results be applied?
  • Comparability – Can educators and external stakeholders interpret the results by comparing them to other samples (i.e., institutions, students) or clear standards?
  • Validity – To what degree is the method an accurate reflection of the skills measured as determined by robust validation exercises?
  • External Value – How do stakeholders who are important to students’ futures, notably employers and graduate schools, perceive and value the outcomes of the assessment?
  • Risk – What is the risk that the method could skew incentives and detract from ultimate learning goals (“teaching to the test”)?
  • Educational Utility – How do the findings of the assessment enable teachers and other members of the institution to make changes that will improve educational outcomes?

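To make those trade-offs concrete, here is a minimal sketch, in Python, of how a weighted comparison of candidate assessment methods might work. The weights, the candidate methods, and every score below are illustrative placeholders of my own, not real evaluations of any instrument:

```python
# Minimal sketch of a weighted-criteria comparison of candidate
# assessment methods. All weights and scores are hypothetical.

# Criterion weights (summing to 1.0), chosen purely for illustration.
WEIGHTS = {
    "efficiency": 0.20,
    "comparability": 0.15,
    "validity": 0.25,
    "external_value": 0.15,
    "risk": 0.10,              # higher score = lower risk of skewed incentives
    "education_utility": 0.15,
}

# Hypothetical 1-5 scores for three candidate methods.
CANDIDATES = {
    "standardized_test": {
        "efficiency": 4, "comparability": 5, "validity": 4,
        "external_value": 3, "risk": 2, "education_utility": 3,
    },
    "e_portfolio_rubric": {
        "efficiency": 2, "comparability": 2, "validity": 3,
        "external_value": 3, "risk": 4, "education_utility": 5,
    },
    "community_presentation": {
        "efficiency": 2, "comparability": 1, "validity": 3,
        "external_value": 4, "risk": 5, "education_utility": 4,
    },
}

def weighted_score(scores):
    """Combine per-criterion scores into a single weighted total."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Rank the candidates by their weighted totals.
for name, scores in sorted(CANDIDATES.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name:24s} {weighted_score(scores):.2f}")
```

Whatever numbers one plugs in, the ranking shifts with the weights – which is exactly why the weighting has to reflect the institution’s own objectives rather than some generic standard.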
It is clear that no single test will optimize all of these criteria and achieve all of an institution’s objectives. This has led thoughtful experts in the space to advocate for the use of multiple methods concurrently. Certainly, the tools that may be used to determine mastery of specific concepts and enable adaptive learning will need to be complemented by broader assessments of whether students are developing overall skills. And assessment of those overall skills could combine a standardized test, to enable clear, objective interpretation, with bespoke methods that capture other important skills and contribute to education (such as the Alverno presentation). However, cost and complexity may prove to be major obstacles, and Kepler and others may need to make difficult trade-offs. In a future post, we discuss how we have begun to apply these concepts to the development of initial assessment choices for Kepler.

Growing Potential, Growing Hope, Growing Challenge


Two studies reported this week provide welcome hope during the holidays. They also accentuate the urgency of solving the higher education crisis in developing countries.

We have previously commented on several trends that will drive a likely rapid rise in demand for quality higher education in many developing countries: a huge demographic bulge of young people approaching college age (Africa and India will add a staggering 200 million people of working age by 2020); rising wealth and therefore ability and willingness to pay for education; and substantial increases in primary and secondary enrollment. These studies suggest another – potentially the most exciting – trend: rising intelligence in these countries.

The first study, by Christopher Eppig et al. in the Proceedings of the Royal Society, was published in 2010, but was freshly highlighted on The Economist’s website in a thematic section on intelligence (the full set of articles is worth reading). The authors find that IQ scores are highly correlated with the prevalence of 28 infectious diseases. They further attempt to demonstrate causality by controlling for a range of other environmental factors that have been posited to explain geographic differences in intelligence, showing that most of them are explained by the disease burden variable. Given the complexity of the science and the intensity of the debate on intelligence, it is doubtful that these diseases are the primary determinant of differences in global IQ and of the “Flynn Effect” (the steady rise in overall human IQ over the past century). But the evidence is compelling that they play a significant role, and it aligns with our knowledge of the direct impact diseases like malaria have on cognitive ability (as well as their global retreat during the period IQs have been rising).
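For readers curious about the mechanics of “controlling for” other factors, here is a toy sketch, in Python, of the basic statistical move behind such an analysis: a partial correlation computed by residualizing both variables against the controls. Everything below – the data, the variable names, the coefficients – is an invented stand-in, not the study’s dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # hypothetical number of countries

# Invented stand-ins for environmental controls and the key variables.
gdp = rng.normal(size=n)                    # e.g., log GDP per capita
schooling = 0.5 * gdp + rng.normal(size=n)  # e.g., mean years of education
disease = -0.6 * gdp + rng.normal(size=n)   # infectious disease burden index
iq = 5 * schooling - 8 * disease + rng.normal(scale=5, size=n)

def residualize(y, controls):
    """Return the part of y not explained by the controls (plus an intercept)."""
    X = np.column_stack([np.ones(len(y)), *controls])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

controls = [gdp, schooling]
r_iq = residualize(iq, controls)
r_disease = residualize(disease, controls)

# Partial correlation: how IQ and disease burden co-vary once the
# shared influence of the controls has been removed.
print(f"raw correlation:     {np.corrcoef(iq, disease)[0, 1]:+.2f}")
print(f"partial correlation: {np.corrcoef(r_iq, r_disease)[0, 1]:+.2f}")
```

If the association survives the controls – as the paper reports it does for its actual variables – then disease burden is doing explanatory work that the other factors cannot account for.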

The second study is a massive effort to better quantify the evolving global burden of disease and disability, which takes up the entire issue of The Lancet this week. It confirms a trend that smaller studies have consistently shown over the past several years: deaths from a range of infectious diseases have been declining sharply over the past two decades, even in some of the poorest countries. This is attributed to successful health interventions (e.g., vaccines, oral rehydration salts, malaria control) and women’s education, as well as to the general improvement of environmental conditions (e.g., sanitation, clean water) due to economic growth.

A simple combination of these trends suggests that intelligence should be rising in many developing countries. There are already millions of bright and talented young people in these countries whose potential is suppressed by the lack of quality education at all levels (work by Carol Dweck and Paul Tough, among others, provides compelling evidence that intelligence is malleable and that educational institutions can play a vital role in unlocking potential intelligence as well as simply imparting knowledge). This trend suggests that their ranks are going to grow dramatically in the coming years as a much larger cohort of children grows up under less threat (emphasis on less, since most of those diseases still kill an intolerable number of people) from the pathogens that swarmed their parents’ generation.

This is cause for great hope. But it also increases the educational challenge facing these countries. Every year we do not improve the availability and quality of education at all levels, the amount of human potential we are squandering grows significantly on both a population and a per-capita basis. And intelligent people denied productive paths to develop and apply themselves will find other, often destructive, outlets for their abilities. Current efforts are grossly insufficient to meet this challenge. With this further impetus, we will begin 2013 with a greater sense of urgency to find solutions to the higher education dimension of that challenge. We hope others will as well.

Assessing Learning: It’s All About the Incentives


The growing chorus of concern about the state of higher education in the US has been accompanied by a push for greater measurement of the actual educational outcomes produced by our colleges and universities. The dismal findings from the relatively limited publicly available assessment published in Academically Adrift have further fuelled this movement. But as Rich Shavelson highlighted in a useful overview, the effort to measure the learning generated by higher education institutions is nearly a century old. How then are we still, in this age of driverless cars, selecting colleges based on the amount of money they spend and the papers they publish, and only now discovering that many of them may be doing little to build the core skills that are critical to their students’ success in the modern economy? And, more importantly, how do we make the shift to begin judging institutions on actual educational outcomes?

As with other aspects of the higher education conundrum, there are parallels with healthcare. First, as each system grew, the services it provided needed to be valued. Outcome data was sparse and costly, so valuations were built around inputs, most of them directly related to expenditure. Extensive and entrenched systems of incentives developed around those input measurements. And now that there is greater focus on reducing the inefficiencies that input-based systems produce, and the necessary data is more easily available, the pervasive vested interests fiercely resist change. Second, as Margaret Miller aptly noted, doctors and professors are both acculturated to believe their abilities are superlative and are thus inherently uncomfortable with data that reveals their fallibility (usually expressed through critiques that the assessments do not – indeed cannot – measure their true value). So we have ballooning costs and weak outcomes in both systems, and reform has been laborious and slow.

In this post, I will examine how new individual higher education institutions in developing countries like Kepler can overcome this morass. In the next post, I will explore how developing countries just might be able to systemically do so.

Much of the lively debate about assessment of higher education outcomes focuses on the indicators that are measured and the tools that are used to do so. However, the best test in the world will have little impact if the incentives of all the key actors involved are not aligned behind it, as shown by the limited uptake and meaningful use of rigorous assessments by US institutions and by the fact that student motivation is a principal challenge for those assessments that are in use. The first step for a new institution seeking to create an effective assessment system must therefore be to carefully understand the interests of each of those actors and to develop a structure that will give them an incentive to support whatever assessment methods are used.

Let’s therefore briefly consider a generic overview of those interests:

  1. The Students – Maximize learning (often while minimizing effort) and have those skills and knowledge accurately valued by employers and/or graduate schools;
  2. The Employers – Receive as accurate and nuanced an assessment of a graduate’s likelihood of workplace success as possible;
  3. The Institution – Maximize reputation among all other actors and therefore their support (i.e., funding, attendance, etc.) and the institution’s prosperity and sustainability;
  4. The Financier (e.g., government) – Maximize return on investment in the form of skilled graduates capable of driving economic growth while minimizing risk;
  5. The Faculty – Ensure stable, financially comfortable employment that maximizes personal objectives (e.g., prominent research) as well as student learning.

Each of these actors shares an interest in minimizing their own costs, but not those incurred by others (in fact, you could argue that each group has an interest in maximizing the expenditure of the other actors, which has led to the current state).

So what does this mean for a new institution? First, the more interests you have to align, the more complex the assessment system and the greater the chance of failure. An immediate advantage of Kepler and similar models is that they eliminate or reduce the importance of the final two interest groups: students will likely be privately financed, and faculty will be employed solely as teachers, with compensation and management structures designed accordingly.

Second, students and employers are intrinsically linked but usually overlooked in rolling out assessment systems. Current discussions of standardized assessment methods devote pages to ways to secure the buy-in of faculty, while students themselves are rarely mentioned except to bemoan their lack of participation and effort when they are asked to volunteer for a test that has no direct meaning for them. I have not seen a mention of employers at all. Yet effective engagement of these groups could be the key to successful assessment – and therefore improvement – of learning outcomes.

Imagine instead an approach that begins with consultation with employers about the skills and attributes they seek in graduates. Robust and efficient methods for assessing those outcomes, as well as others the institution believes are vital to student success even if not directly cited by employers, are then developed and validated. Employers’ buy-in for the tests is secured to ensure they actively value and use the outcomes. Students are informed that the assessments, and the abilities they measure, will play a critical and direct role in their employment. The assessed skills are then woven into the curriculum, and the assessments themselves are conducted regularly throughout a student’s tenure so that they and their teachers can address gaps long before the final judgment. All of the outcomes, including the likely significant improvements in students’ scores during their time at the school and the corresponding employment rates, would be released publicly to, over time, build the reputation of the school.

Such a system would address the interests of all of the relevant actors (though with a different faculty structure than at many current institutions). There will be a range of challenges to making the system work as desired, but all of them should be surmountable with sufficient thought and effort. For example, the potential for perverse outcomes from “teaching to the test” could be mitigated by using methods like the Collegiate Learning Assessment that measure integrated skills that should be developed throughout a curriculum, as well as by using a package of assessments rather than a single test.

This approach will be difficult, though not impossible, to institute within the deeply entrenched structures and power dynamics of many existing institutions. But a new institution like Kepler could take advantage of its blank slate to put the necessary pieces together from the start. We are looking forward to the chance to experiment with the approach and to learn from others who have already begun building similar systems.

University shake-up in Russia

Fascinating story here about the Russian government (read: Putin) deciding to shake up the university system and steer funding toward higher-performing universities.

There are all kinds of things you could rightly critique about the approach it looks like they’ve taken. But it would be nice to see any kind of bold push for accountability or $-for-performance back here in the US! We’ll be looking around the world for governments that are thinking seriously about how to improve higher ed productivity and are willing to create an attractive environment for new models like Kepler.