For the most part, what states measure for school quality has already been decided by federal law under the Every Student Succeeds Act. Every state, for example, has to track student test scores to see whether students are performing on grade level. Every state has to look at graduation rates and step in if a school's rate drops below 67 percent. And every state has to give schools credit for helping students grow academically, even if those students aren't on grade level. But how states measure all of this is largely up to them. There's more flexibility in which students they track, which non-academic qualities of a school they'll measure, and how easy or hard they'll make it for schools to get a good rating. States also control what actions they'll take for the schools most in need and how they'll report all this information to parents and back to the federal government. With
17 state education plans submitted to the U.S. Department of Education so far, it looks like most states will count most students from traditionally disadvantaged groups—students with disabilities or low-income students, for example. Many states are also sticking with research-based ideas for the non-academic qualities of a school they'll measure: student attendance, and signs that students are making it in college or are on track to get there. But students can still slip through the cracks. The school rating system holds the key to the entire plan's success. A few states are getting it mostly right (D.C. and Tennessee stand out), but many are setting low bars for schools to be considered good. Many are developing complicated reporting systems that risk alienating parents, and some have even left important pieces "to be determined," like what earns a school the highest rating versus the second highest.
[pullquote]The key to successful accountability rests largely with the rating systems, and many states appear to be setting a low bar.[/pullquote] Across the 17 plans submitted, this brief highlights trends in three of the most important parts of any plan: counting students, the "other" school quality factor, and the rating system.
Which Students Will States Pay Attention To?
Most states want to see a minimum of between 10 and 20 low-income, minority, or special education students at a school before they start tracking that group. Tennessee and Michigan stand out here (not in a good way) for having the highest minimum: 30 students. That's too high for some, but it's still within an acceptable range based on earlier federal guidelines. Tennessee stands out for another reason. Instead of counting racial and ethnic minority students separately, it has chosen to combine Black, Hispanic, and Native American students into one "super subgroup" for accountability reporting. Critics argue that the needs of each group, and the reasons students in each group might struggle, are different (e.g., rural challenges versus inner-city challenges versus language challenges). There's also a good chance that students from at least one group at a school will become invisible. Tennessee says it has chosen this approach to "hold as many schools as possible accountable for the performance of students from historically underserved backgrounds." In other words, it's better to count them in a combined group than to not count them at all. If Tennessee didn't combine the groups, more than 43,000 Black, Hispanic, and Native American students would not be counted under the state's plan.
The ‘Non-Academic’ Measure
Every state that's submitted so far has listed either chronic absenteeism or college and career readiness as a non-academic indicator, and many states have both. The definition of chronic absenteeism is fairly consistent across the board: missing 10 percent of the school year. New Mexico considers it chronic if students miss 10 days a year. Tennessee also looks at out-of-school suspensions. Nevada, North Dakota, and Vermont don't look at absenteeism at all. Among states looking at college and career readiness, Delaware and Tennessee want to track college entrance exam scores, and New Mexico plans to check how many students stay in college and how many need remediation. Massachusetts and Connecticut will look at how many students complete AP and IB courses, while Michigan and Connecticut want to look at how many students enroll in college or some kind of professional training program after high school. Four plans do not list college and career readiness as one of their indicators: Colorado, Maine, New Jersey, and Oregon. States also plan to look at dropout rates, ninth-grade performance, or survey results that get at student engagement, teacher quality, and school climate. Most states look at several non-academic indicators, but a couple, Maine and New Jersey, have only one. With seven indicators, including physical fitness and access to the arts, Connecticut has the most. Having too many can water down the overall rating and may be too much to track. "To borrow the car dashboard metaphor," wrote one critic of Connecticut's plan, "can drivers keep their eyes on this many gauges?"
School Rating Systems
The federal government no longer requires an overall school score or rating, but most states—all but North Dakota and Oregon so far—have decided to give one anyway. Still, that doesn't guarantee parents will have an easy way to understand how their school is doing. Most states are choosing to rate a school based on how it compares to the rest of the state. So you could have most schools doing very well, or most doing very poorly, but in those states what really matters isn't the raw score, it's whether a school is better or worse than the next one. Some states rate schools based on how close the school gets to specific goals. Either way, [pullquote position="right"]most states draw the line for getting the state's second-highest rating at around 50 percent.[/pullquote] So if your school is in the top 50 percent of all schools in the state, or if your school scored about 50 percent of the total possible points—even if it's 51 percent—it'll probably get the equivalent of a "good" rating. In Illinois, schools can get the second-highest rating, "Commendable," as long as they don't have any subgroups of students, like low-income students or minorities, performing as badly as the average student in one of the bottom 5 percent of schools in the state. To put that in context, let's say the best school in the bottom 5 percent (so the best of the worst) has 13 percent of kids reading on grade level and 11 percent passing math. If Illinois had a school with all low-income kids, where 14 percent are reading on grade level and 12 percent are passing math, that school could be considered "Commendable" and get the state's second-highest rating. Some states, like Arizona and Maine, still haven't figured out what schools will have to do to get the state's highest rating, or any rating at all.
Reporting to Parents
While some plans, like Louisiana's and Tennessee's, continue or strengthen existing school report cards, others, like Vermont's, are less than intuitive, and some may even mislead parents into thinking a school is doing okay when it's not.
Vermont will give schools a "Near Target" rating if the school is doing worse than half of the schools in the state (but not in the bottom 25 percent). With proper context, parents might see that "Near Target" is one step away from the state's worst rating, "Off Target," but without that context, they might mistakenly believe "Near Target" means the school is doing pretty well. Also, some reports may show only the color-coded symbols for each rating without the terms attached. Overall, the plans describe what types of systems states intend to use to show school performance, but most haven't shown what those reports will look like yet, so it's hard to say how user-friendly states will make their reports to parents. Through the peer-review process, states will still have an opportunity to improve their rating systems.