Don Hossler on Measuring Outcomes

Don Hossler is a professor of educational leadership and policy studies at Indiana University Bloomington and a nationally recognized expert on student college choice, persistence, enrollment management, and higher education finance. He has served as vice chancellor for enrollment services at IU Bloomington, and over the years he has consulted with numerous public and private colleges and universities, along with the College Board, The Pew Charitable Trusts, and the U.S. General Accounting Office.

Hossler served until last summer as executive director (and continues to serve as a consultant) of the National Student Clearinghouse Research Center, which uses longitudinal enrollment data to inform educational policy decisions aimed at improving student outcomes. He was co-author of three of the Center’s Signature Reports, on enrollment trends surrounding the Great Recession, transfer and mobility behaviors, and the pathways of reverse transfer students.

Hossler is the author or co-author of 12 books and monographs and dozens of articles and book chapters. He earned his bachelor’s degree in psychology from California Lutheran University and his doctorate in higher education from Claremont Graduate University (California).

Don, when you present at professional conferences, what are some of the common questions you get asked by enrollment managers in the audience?

It’s often about how to get better access to Clearinghouse reports, which to me speaks to the growing recognition of the power of the data. People are basically saying things like, “It’s wonderful that you send us these raw data files. But the more you can provide us reports that we can just print off as bar graphs or pie charts and take into leadership meetings, the better.” And I hear a lot more recognition that this data has become so important that institutions need some in-house capacity to analyze it.

What sorts of enrollment patterns and student pathways are you able to track at the National Student Clearinghouse Research Center?

What’s remarkable about the data available through the Clearinghouse is that we can effectively track individual students across multiple enrollments and states, even if they stop out of college for a while and then start back up. It isn’t that no one has ever studied and reported on some of the same enrollment trends we have; it’s that their databases contain only samples of students. The institutions that currently participate in the Clearinghouse account for 99.5 percent of all four-year public enrollments, 97.4 percent of all community college enrollments, and 93.9 percent of private, non-profit four-year enrollments—and we have access to all of our participants’ enrollment data. That coverage is what makes the data invaluable. (Our participants account for only about 68 percent of for-profit enrollments. Because there are so many small “mom-and-pop” institutions, we don’t expect for-profit coverage to ever get up around 90 percent, although most of the large for-profits participate in the Clearinghouse.)

The other thing that I think really makes our data useful and valuable is that it’s a student unit record system. We all know that the federal government and policy groups are increasingly turning to databases like IPEDS and trying to create annual benchmarking reports to ask who’s going [to college], are they staying, are they transferring, where are they transferring, etc. But because IPEDS is not a student unit record system, there are real limits to creating benchmarking reports in ways that reflect some of the unique characteristics of enrollment patterns around the country. They can’t really track student enrollment mobility the way we can. IPEDS data is getting better in many ways, so I’m not saying it’s not useful. But whereas IPEDS usually takes a year or a year and a half to clean its data, the Clearinghouse usually receives enough enrollment files from institutions by mid-November to report on the current academic year by November or December, so it’s much closer to real-time data.

So in terms of comprehensiveness and ability to track student enrollment patterns and create benchmarking reports as well as illuminate a lot of understudied areas related to student enrollment, it’s a remarkable resource.

You said “understudied.” Are some things starting to be studied more now?

I really think some of the things the Clearinghouse has been doing through the Research Center actually will help enable new ways of looking at student enrollment patterns. For instance, one of the reports we produced last year looked at the difference between retention at institution of origin and persistence anywhere. We did this partly because we know community colleges, for example, often get criticized for their low retention rates, and I think it creates the impression that a lot of students who go to community colleges just completely drop out. They disappear. But when you tweak the question from retention at institution of origin to persistence anywhere, you see around a 20 percent increase in the percentage of students who started at a community college and were still enrolled the next year—they were just enrolled someplace else. So in some ways, what IPEDS can report on has helped to create an incomplete picture of what’s going on with postsecondary enrollments. And as someone who has studied retention and persistence a lot, I think the way the Clearinghouse has reported out on this means we’ll start seeing more studies about students where the question isn’t just, “Were they retained at institution of origin?” but rather, “Did they continue to persist on their journey towards completing their degree?”
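To make that distinction concrete, here is a minimal sketch of how the two metrics diverge when computed over student unit records. The records, terms, and field layout are invented for illustration and are not the Clearinghouse’s actual schema.

```python
# Minimal sketch: retention at institution of origin vs. persistence anywhere.
# Records and field layout are illustrative, not the Clearinghouse's schema.
records = [
    # (student_id, term, institution)
    ("s1", "fall_2011", "Community College A"),
    ("s1", "fall_2012", "Community College A"),  # retained at origin
    ("s2", "fall_2011", "Community College A"),
    ("s2", "fall_2012", "University B"),         # persisted, but elsewhere
    ("s3", "fall_2011", "Community College A"),  # not enrolled anywhere in year two
]

cohort   = {(s, inst) for s, term, inst in records if term == "fall_2011"}
year_two = {(s, inst) for s, term, inst in records if term == "fall_2012"}
anywhere = {s for s, _ in year_two}

retained  = sum((s, origin) in year_two for s, origin in cohort)
persisted = sum(s in anywhere for s, _ in cohort)

print(f"Retention at institution of origin: {retained / len(cohort):.0%}")   # 33%
print(f"Persistence anywhere:               {persisted / len(cohort):.0%}")  # 67%
```

The gap between the two printed rates is exactly the population Hossler describes: students who left their first school but did not leave higher education.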

That journey might include vertical transfers, with students transferring from a two-year institution to a four-year institution. Or reverse transfers, which means transferring from a four-year to a two-year. Lateral transferring is moving from a four-year to a four-year or from a two-year to a two-year. And swirling could be moving back and forth between two different two-years, two different four-years, or between two-years and four-years. (There’s not really a widely accepted definition of swirling. But I think when most people hear the term, they picture students moving back and forth multiple times between different schools.)
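To pin the taxonomy down, here is one way those categories might be coded against a chronological enrollment path. The institution names and sector map are hypothetical, and the two-move threshold for flagging swirl is an assumption, since, as noted above, swirling has no settled definition.

```python
# Illustrative classifier for the transfer taxonomy described above.
# Institution names and the swirl threshold are assumptions for this sketch.
SECTOR = {"Community College A": "2yr", "Community College B": "2yr",
          "University C": "4yr", "University D": "4yr"}

def classify_move(from_inst, to_inst):
    frm, to = SECTOR[from_inst], SECTOR[to_inst]
    if frm == "2yr" and to == "4yr":
        return "vertical"
    if frm == "4yr" and to == "2yr":
        return "reverse"
    return "lateral"  # 2yr -> 2yr or 4yr -> 4yr

def summarize(path):
    """path: chronological list of institutions, one per enrolled term."""
    moves = [classify_move(a, b) for a, b in zip(path, path[1:]) if a != b]
    swirl = len(moves) >= 2  # one reading of "swirling": two or more moves
    return moves, swirl

print(summarize(["Community College A", "University C"]))
# (['vertical'], False)
print(summarize(["University C", "Community College A", "University D"]))
# (['reverse', 'vertical'], True)
```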

Frankly, our second Signature Report—which focused on transfers—got criticized a little bit because if students enrolled at another school during the summer months, we counted them as a transfer. Some institutions and policymakers said, “Yeah, a lot of those summer people, they go back to the same institution in the fall, so they’re not really transferring.” There are two reasons we did it that way. First, we wanted to show the total scale of student mobility among institutions, whether through official transfers or not. And second, you can only do so much in one report, and we certainly didn’t have the ability to ask students what their plans were. So we simply reported they were enrolled in a different institution in the summer than they were in the spring.

But we followed that up with a third Signature Report where we looked specifically at all students who were transferring to community colleges, and we identified those who just spent a summer there before returning. We were able to really drill down to clearly see more patterns in the data—students who were transferring and swirling, staying on a path towards graduation.

Are there certain paths students are using more now than they used to?

I think we’re seeing more and more students start at community colleges and then move on to another two-year institution (a lateral transfer rather than a vertical transfer), and we are seeing more students who start at four-year institutions elect to reverse transfer back to a community college—and all kinds of patterns back and forth in between. The most interesting thing to me personally, because I have done some research along these lines before, is to ask, “Do the enrollment patterns of reverse transfers put them on a path towards graduation? Are they more like students who are going to be continuously enrolled, or are they more like students who are at risk of dropping out?”

There are no causal models we can use with our data. However, in a descriptive sense, it appears that the more frequently you transfer, the less likely you are to complete. So while the good news is that swirling students are still enrolled, I think that, in the aggregate, the enrollment patterns suggest that swirling or transferring multiple times might identify students as at risk of not completing their degree.

When the Great Recession hit, many admissions professionals at small private colleges feared they might see more students they’d normally enroll straight from high school instead choose to start at a community college. Did you see any evidence that this happened?

We did not see a pronounced effect through the fall of 2010. Our first Signature Report looked at traditional-aged students and checked whether there was evidence of significant shifts in enrollment patterns across sectors by comparing where students were going before and after the Great Recession. We saw a modest dip (1.4 percent in the total number of students enrolled in postsecondary education), but this was entirely the result of declines in the number of students attending community colleges. The private sector pretty much held its own. We saw a sharp increase in the number of students going to two-year colleges in 2008 and 2009, but that fell off in 2010; some community colleges have had to reduce enrollments because of state budget cuts. And 2009, I believe, was the year the number of high school graduates actually dropped, so you might have expected to see a decline across all sectors in 2009. But what we actually saw is that in 2009 the private four-years experienced a modest increase in their market share, if you will, while the four-year publics had a modest decrease. Then in the next year, public four-years had a modest increase in market share, and privates had a small decline of approximately 1 percent.

So to date, there isn’t a lot of evidence that the Great Recession had a strong negative effect on privates. And we can speculate a lot about that. Of course, what we don’t know is how much additional discounting the privates were doing to maintain market share. I don’t think we have a complete empirical record on this, but there are a few studies, such as the discounting studies undertaken by NACUBO, that suggest that some privates significantly increased their discount rates, and there are questions about whether that’s sustainable over the long haul.

What will be the topic of the Clearinghouse Research Center’s next Signature Report?

It’s going to be on college completion. We’ll be reporting on national results to begin with. That report will be released in November. I think we’re going to discover that the actual number of students who stay enrolled someplace, whether or not it’s their institution of origin, is higher than most people think, because our previous view has been limited to an institution-only perspective of retention. I think we will see that in fact there are more students completing, or more degrees being earned, than we realize. And I’ll tease your readers by saying there are some interesting findings about what has to date been a completely understudied group of students: those with mixed enrollment patterns (meaning they attend full time some semesters and part time other semesters). Then in January we’re going to release the same report, but look at the results state-by-state.

Do you anticipate that persistence/completion anywhere will become the new standard for reporting?

It’s going to be interesting to see what happens, since we can present views of institutional success that go beyond what policymakers are clamoring for [retention at institution of origin]. Associations of community colleges and non-flagship publics, for instance, are very interested in beginning to present different pictures of enrollment. For example, if students were enrolled for a year and then transferred someplace where they graduated three years later, rather than punishing the first institution for the students moving on, reward it for helping the students stay on a trajectory towards degree completion.

The American Council on Education is starting to work more closely with the Clearinghouse. I mention that because individual institutions and public policymakers are after us to produce a lot more reports, but they don’t want to pay for them. This isn’t an empirical statement, but I think what’s happening is that they’re used to getting data for free from the Feds or from the state, so they want it for free from the Clearinghouse. It’s not that the Clearinghouse wants to make money—we’re a non-profit—but there are substantial costs that go with storing all these data and making sure they’re secure and so forth. We don’t get any direct money from the federal government or state governments to produce all these reports, so there are limits to what we can do on our own dime.

I think that’s what leads to the open question of the extent to which Clearinghouse benchmarking reports are going to get adopted and used on an annual basis. We haven’t got this figured out ourselves inside the organization. We know we can’t afford to produce all the reports people would like us to, as often as they would like them. So the question becomes: how do you fund them? And that will have a lot to do with the extent to which some of these reports become, so to speak, normative. I don’t think it’s unfair to say, “If some of these reports are valuable to you, let’s find a way to help pay for them.” For example, IPEDS data isn’t free (although many people treat it as a free resource); taxpayer dollars pay for it. It just feels free to most users because they don’t pay for it directly.

Speaking of public policymakers, has the push for greater accountability and transparency changed the volume or nature of the reports that colleges and universities request from the Clearinghouse?

Well, there’s the new three-year window for calculating cohort default rates, so my instinct is that if it hasn’t already started, institutions will soon start looking at this new metric. Through the Clearinghouse’s StudentTracker service, colleges can do queries that help them accomplish a number of different objectives. For example, sometimes a student looks like he is in default—it’s called a technical default—because the institution he was enrolled in previously can’t find him. But it turns out the student enrolled in another school and just hasn’t completed all the paperwork so that the Feds know he is in fact enrolled someplace else. So a school could send us a file where they’re trying to identify how many of their dropouts are enrolled elsewhere—not so much for retention analysis per se, but to make sure students aren’t inadvertently classified as defaulters when they’re not.
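The query described here amounts to a set intersection: match a school’s list of apparent dropouts against national enrollment records and flag the students who turn up elsewhere. Here is a minimal sketch; the identifiers and layout are hypothetical, not StudentTracker’s actual file format.

```python
# Which of our "dropouts" are actually enrolled elsewhere?
# Identifiers and layout are hypothetical, not StudentTracker's real format.
our_dropouts = {"s104", "s221", "s387"}  # students we lost track of

national_enrollments = [
    ("s104", "Community College A", "fall_2012"),
    ("s990", "University C", "fall_2012"),
]

enrolled_elsewhere = {sid for sid, inst, term in national_enrollments
                      if sid in our_dropouts}

# These students are "technical defaults" at worst: still enrolled, just
# missing the paperwork that tells the Feds so.
print(sorted(enrolled_elsewhere))  # ['s104']
```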

We did a little data mining when the new cohort default rates were announced. We picked a couple of regions of the country and highlighted any institutions that would be “on the bubble” with the three-year cohort default rate. The number of them was pretty substantial. So I think for some institutions, part of the new task of enrollment managers is going to be to try to manage their default rates. I think we’re going to see that less selective institutions—two-years, schools with urban locations, for-profits—are going to have to analyze and monitor their default rates very carefully going forward. It may sound pretty ugly, but it’s my hunch that administrators at some institutions may have to do some data mining and say, “These are the attributes of people who are more likely to default. We’re getting close to being put on the watch list. How many of these kinds of students are we going to be able to admit?”
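As a sketch of the kind of screen being described, here is a toy default-risk model. Every feature, record, and number is invented purely for illustration; nothing here reflects any institution’s actual data or practice.

```python
# Toy default-risk screen of the sort described above.
# All features and training records are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [loan amount in $1000s, less-than-half-time (0/1), completed degree (0/1)]
X = [[12, 0, 1], [18, 1, 0], [6, 0, 1], [22, 1, 0], [15, 0, 0], [9, 1, 1]]
y = [0, 1, 0, 1, 1, 0]  # 1 = borrower later defaulted

model = LogisticRegression().fit(X, y)

applicant = [[20, 1, 0]]  # hypothetical incoming profile
print(f"Estimated default risk: {model.predict_proba(applicant)[0][1]:.0%}")
```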

Stepping outside of your role at the Clearinghouse and reflecting more comprehensively on your long career as an enrollment management expert, what other changes do you see on the horizon?

Lately I’ve been intrigued with two broad developments. First, as part of the accountability and transparency movement for postsecondary education, most states in the country are building state longitudinal data systems. Initially, very few states captured private college data. But more and more states—especially those with scholarship programs that the private sector can participate in—are starting to say to the private sector (and I’m being a little tongue-in-cheek here), “What do you mean, you’re not going to send us your data? You want to participate in the state scholarship program, don’t you? If you do, we have a legitimate fiduciary obligation to ask whether state dollars are being used effectively.”

Because of the development of state longitudinal data systems, more and more private institutions, along with publics, are going to have to report their data to states. This is going to lead to more transparency as to who is enrolling in institutions, who is persisting to graduation, which schools do a better job graduating transfer students, and so forth. I want to be careful with what I say next, because I don’t really want to suggest that every institution “cooks” its data à la some of the scandals that have come out recently. But I think almost every institution feels a strong temptation to figure out what makes it look best and to highlight that information—and to downplay, bury, or simply not talk about enrollment patterns that may not make it look so good. I think the ability of institutions and enrollment managers to control the message around enrollment patterns is going to erode rapidly.

There are schools, for example, that defer a fair number of students to the spring semester because then they don’t have to report their SAT scores to U.S. News, as well as schools that actively keep the number of first-time students relatively small, recruit a lot of transfer students, and then can be really selective for those first-year, first-time students and look really good in U.S. News. Increasingly, the success rate of their transfer students is going to be out there in the public domain. And if it doesn’t look pretty, they’re not going to be able to control that message. I think one of the newest things institutions need to start being proactive about is analyzing their own enrollment data from multiple perspectives and start asking the question, “Do we like this picture, and if not, what will we do to change it?” Because other people are going to start presenting that data, whether institutions like it or not.

As a result of this trend, I’ve been encouraging enrollment managers to get ready for this “new transparency.” We also know that IPEDS is going to change and start requiring institutions to report on more enrollment-related outcomes. We don’t know what those changes will be, but we do know that senior policymakers at the Department of Education realize that measuring success only by looking at first-time, first-year students is antiquated and inadequate. So it’s not a matter of if, but only a matter of when they start changing some of those reporting requirements.

The other topic I have become interested in is the intersection of demographic trends, economic trends, and what I’ve been calling “the pursuit of prestige.” Given the demographics, not only are we going to see a decline in the total number of students graduating from high school, but a larger percentage of those who do graduate won’t have the attributes that enable institutions to play the prestige game. If you just look at the characteristics of recent U.S. domestic high school graduates (without taking into account opportunities in international student recruitment), there just aren’t going to be enough of them with high test scores and the affluence to consider going anywhere in the country for school. The numbers aren’t going to be there.

So I think many institutions are going to have to recalibrate some of their strategies and at least find some new definition of “prestige.” There aren’t going to be enough high SAT scores out there in many states to sustain all the schools that have been playing the game. I’m waiting for a public flagship president to stand up and say, “We’re going to be America’s new public university and focus on educating the next generation of first-generation college students.” And behind closed doors they may be doing it for all the wrong reasons, but it will be just a dose of reality. They’re going to say, “We live in a state where most of the growth is in first-generation Latino students. They don’t do as well on tests. They’re more price-sensitive. They’re less likely to live in residence halls. They’re less likely to travel long distances to go to school. So we can’t continue on our current path, unless we’re prepared to get dramatically smaller.”

And if they can no longer play the prestige game, there’s a little less incentive to use merit-based financial aid to go after it. I think the odds are very high—and I don’t know if it will take two years or three years or five years or seven years—that institutions are going to finally figure out that discount rates of 55% are not sustainable. I don’t think we’re going to see discount rates go away, but I think we’ll see them decline, because institutions are going to figure out 1) they can’t afford the dollar value of the required discount, and 2) their discount is just too easily copied by competitors. Institutions are going to have to either get smaller or find something other than just net cost as their point of differentiation.
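For context on that figure: the discount rate is institutional grant aid as a share of gross tuition and fee revenue, so a 55% rate means an institution keeps only 45 cents of each sticker-price dollar. A quick illustration with invented numbers:

```python
# Why a 55% discount rate squeezes net revenue. All figures are invented.
sticker_price = 40_000  # published tuition and fees per student
students = 1_000

for discount_rate in (0.35, 0.45, 0.55):
    net_per_student = sticker_price * (1 - discount_rate)
    print(f"{discount_rate:.0%} discount -> net tuition revenue: "
          f"${net_per_student * students:,.0f}")
# 35% -> $26,000,000   45% -> $22,000,000   55% -> $18,000,000
```

At a fixed sticker price, each ten-point rise in the discount rate costs this hypothetical college four million dollars in net tuition, which is the arithmetic behind the sustainability question.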