
COPYRIGHT NOTICE -- This document is © copyrighted by the Illinois Association of School Boards. IASB hereby grants to school districts and other Internet users the right to download, print and reproduce this document provided that (a) the Illinois Association of School Boards is noted as publisher and copyright holder of the document and (b) any reproductions of this document are disseminated without charge and not used for any commercial purpose.


Illinois School Board Journal
September-October 1998

Finding the truth in the numbers

by Jessica C. Billings, Editor
Illinois School Board Journal

    "The U.S. spends more money on education than any other nation." "There's no relationship between spending and academic achievement." "Compared to German and Japanese students, Americans don't know much."

    If you’ve kept up with the political and ideological battles that have raged for two decades now over the value of America’s public schools, you’ve repeatedly heard and read statements like these.
    Perhaps you’ve suspected they didn’t tell the whole story, but you weren’t sure how to refute them.
    Or maybe you’ve believed them. It’s not surprising if you did. They’re proclaimed by impressive people in respectable publications, and – most importantly – are backed up by "research" and "statistics."
    We Americans have a love affair with numbers and a simultaneous suspicion of them. "Statistics show" – "research reveals" – "studies indicate" – statements like these are bandied about by news media and in conversation as if they were the final word in any argument. And indeed, often they are the final word, because numbers are hard to refute.
    On the other hand, Mark Twain’s statement that there are "lies, d*mned lies, and statistics" and the observation that "figures lie and liars figure" touch a chord with many of us who suspect we are being hoodwinked but aren’t sure how to prove it.
    The statements above have been backed up with figures and charts. They come from people with impressive credentials and have been accepted as gospel by news media and politicians. The thing is, they aren’t true – or, at the very least, there is another way to look at them.
    So says Gerald W. Bracey, who has made a career of taking on those who disparage public education. Among his recent efforts are Setting the Record Straight: Responses to Misconceptions about Public Education in the United States and "Understanding Education Statistics," a crash course in statistics that provides guidelines for making your own decisions. (For details about these and other sources used in this article, see "Resources" below.)
    In "Setting the Record Straight," Bracey challenges the statements that open this article (among others).

The U.S. spends more money on education than any other nation.

    When he tracked down the source of this assertion, Bracey found it was based on a listing of the United States and only five other countries.
    Bracey provides listings based on numbers from the Economic Policy Institute and the National Center for Education Statistics. They show that, no matter how you break the numbers down, the U.S. ranks about average in spending. One ranking, based on K-12 spending as a percent of per capita income in 16 industrialized nations, shows the U.S. ranking 14th, ahead of only Australia and Ireland. This is true despite the fact that the U.S. provides services that other countries don’t provide or provide to a lesser extent, such as food, transportation, counseling and – the big one – special education. Special education accounts for more than one-third of the new money spent on education in the last 25 years – spending that is mandated by statute and by court rulings that keep expanding the services that schools must provide.

There’s no relationship between spending and academic achievement, or we can’t solve the problems by throwing money at the schools.

    This myth, says Bracey, is based primarily on two studies, one conducted by University of Rochester professor Eric Hanushek and one by former Secretary of Education William Bennett.
    Bennett noted that many low-spending states have high SAT scores, while many high-spending states do not. He did not note that in the high-scoring states "virtually no one takes the SAT" except the very small percentage of students (ranging from four to ten percent) who are interested in attending Ivy League and Seven Sisters schools that require the SAT. The rest take the ACT. In New Jersey, on the other hand, 76 percent of students take the SAT. In other words, Bennett compared scores of an elite few in some states to those achieved by three-quarters of the student population in others.
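    To see how much work self-selection alone can do, here is a minimal Python sketch using entirely made-up numbers; the two participation rates echo the 4-to-10-percent and 76-percent figures above, but the score distribution and the assumption that only the strongest students opt in are hypothetical. Both simulated states draw students from identical populations; the only difference is what share of them takes the test.

# A minimal sketch (hypothetical numbers, not Bennett's or Bracey's data)
# showing how self-selection alone can open a gap in average test scores
# between two states whose underlying student populations are identical.
import random

random.seed(1)

def average_of_test_takers(participation_rate, n_students=10_000):
    # Identical underlying "ability" distribution in every simulated state.
    abilities = sorted(random.gauss(500, 100) for _ in range(n_students))
    # Assume the students who choose to take the test are the strongest ones.
    takers = abilities[int(n_students * (1 - participation_rate)):]
    return sum(takers) / len(takers)

elite_state = average_of_test_takers(participation_rate=0.05)  # only 5 percent take the test
broad_state = average_of_test_takers(participation_rate=0.76)  # 76 percent take the test

print(f"Average among the 5 percent who self-select: {elite_state:.0f}")
print(f"Average among the 76 percent who take it:    {broad_state:.0f}")

    The low-participation state "outscores" the high-participation one by a wide margin even though the simulated student bodies are identical; the gap is produced entirely by who shows up for the test.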
    Hanushek’s results, based on a survey of 187 studies, were flawed in many ways, Bracey maintains. First, only 65 of the studies actually dealt with the relationship between money and achievement. Second, Hanushek never demonstrated that the studies showed money is not related to higher achievement – he merely asserted it. Another researcher, Keith Baker, revisited Hanushek’s review and found that of the 65 relevant studies, 38 show that spending does affect achievement, 14 show it does not, and no conclusions could be drawn from the rest.

Compared to German and Japanese students, Americans don’t know much.

    Bracey offers numerous bases for refuting this common belief. One of the most interesting is the distinction between ranks and scores. "Ranks not only tell you nothing about performance, they obscure performance," says Bracey. "If your child ranks at the 60th percentile on a test, you know your child did better than 60 percent of the other children taking the test, but you don’t know anything about how well he or she did in absolute terms. Maybe all the kids were awful and yours was just better than 60 percent of a bunch of nincompoops."
    One of the pitfalls of looking at ranks is that if several countries are very close in actual scores, a big difference in rank can result from a small difference in scores. For example, the results of the second International Assessment of Educational Progress (IAEP-2), which tested math and science, show nine-year-olds in the U.S. ranked ninth among ten countries in mathematics – but the actual score was 58, compared to an average of 64. Scores were even more telling in science, where U.S. thirteen-year-olds ranked 13th among 15 countries, with a score of 67, compared to an average of 70. Bracey offers several other examples where scores from several countries, including the U.S., clustered in the middle. The result: U.S. students ranked low – but their scores were only a few points lower than those of many countries that ranked higher.
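    The arithmetic behind ranks and scores is easy to reproduce. The short Python sketch below uses hypothetical scores for ten countries – invented for illustration, only loosely echoing the clustering Bracey describes – to show how a field that is tightly bunched in raw scores can still spread out dramatically in rank.

# Hypothetical country scores (not real IAEP-2 data) illustrating how
# ranks can exaggerate small differences in actual scores.
scores = {
    "Country A": 65, "Country B": 64, "Country C": 63, "Country D": 62,
    "Country E": 61, "Country F": 60, "Country G": 59, "Country H": 59,
    "United States": 58, "Country I": 55,
}

average = sum(scores.values()) / len(scores)
ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)

for rank, (country, score) in enumerate(ranked, start=1):
    print(f"{rank:>2}. {country:<13}  score {score}  ({score - average:+.1f} vs. group average)")

    In this made-up table the "United States" lands ninth out of ten, yet its score sits less than three points below the group average; the rank alone gives no hint of how tightly the whole field is clustered.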
    These are just three of the 18 widely accepted but misleading statements that Bracey analyzes and refutes in his book. But from these three, we can extract some guidelines for evaluating those seemingly irrefutable numbers.

    1) Consider the source. The notion that funding is irrelevant to achievement is propounded by social conservatives, who by definition oppose government spending as a solution to social problems. Bracey has a habit of going head-to-head with some of these conservatives, especially William Bennett, whom he accuses of deliberately and maliciously manipulating the facts. But you don’t need to ascribe evil motives to understand that conservatives will look for evidence that more spending is not the solution to problems – just as social liberals don’t mind spending money on social problems.

    2) People generally will believe what they want to believe, and politicians generally will tell people what they want to hear. Wouldn’t you like to believe that there is some simple, no-cost solution that will "fix" public schools once and for all? That notion has entered the public consciousness and won’t be relinquished easily. It is too good an excuse for being unwilling to pay more taxes to properly support schools.

    3) Be sure you know what is being compared, and how it is being measured. William Bennett’s comparison, described above, of SAT scores from states where only an elite few take the test with scores from states where the bulk of students take it is a good example. Another common instance of comparing apples to oranges is the claim that students "don’t know as much as they used to." In 1943, the high school graduation rate was about 45 percent; in 1964, around 70 percent. By 1987, it was 83 percent. So comparing today’s students to those in the "good old days" is once again comparing a small elite to the majority of school-age children.

    4) Know what different types of numbers mean. Bracey’s example above makes clear that comparing rankings can paint a far different picture than comparing actual test scores. For other examples, see "Stalking the Stats."

    5) Look for what isn’t there. In "Understanding Education Statistics," Bracey observes that even a straightforward statement, seemingly easily verified, may have been carefully selected to meet an ideological standard. For example, the report A Nation At Risk, which started the school reform wave we’ve ridden for the past 15 years, asserts that "there was a steady decline in science achievement by 17-year-olds as measured by national assessments." Says Bracey: "But wait a minute. Why pick on science? Why pick on 17-year-olds?" The answer is simple, he says. "Only the science scores of 17-year-olds supported the crisis rhetoric of the document. There was no such decline for 9- or 13-year-olds in science, and there was no decline for any of the three ages in reading and math…" If all the data are not given, ask why the selected items were chosen.

    6) Remember that the news media have paper and broadcast time to fill. "U.S. Students Rank Next to Last in Math and Science" makes a much better headline than "U.S. Scores Cluster Around Average Along With Several Other Nations."

    7) Along the same lines, academics have reputations to make – and one of the best ways to do that is to find (or create) data that "prove" some astonishing or alarming premise. It doesn’t matter much if the data are later discredited – the alarming premise will linger in people’s minds far longer than the discrediting and the academic’s name will gain more and more credibility as the erroneous results are quoted and re-quoted.
    The ways that numbers can be manipulated to convey a predetermined message are endless. As we approach a national election in which education promises to be a hot issue, we undoubtedly will find examples without end. The important thing to bear in mind is that statistics that appear to be objective statements of truth are in fact selected, interpreted and reported by flawed human beings. Even with the best intentions, they are likely to be distorted or to tell only part of the story. It is up to those who believe in public schools to tell the rest.

Resources

    Setting the Record Straight, by Gerald W. Bracey, Association for Supervision and Curriculum Development, 1997. To order, request stock # 197020; ASCD, 1250 N. Pitt St., Alexandria, Virginia 22314-1453; telephone: 1-800/933-2723; fax: 703/299-8631; price: $16.95, members; $20.95 non-members. This book is organized around 18 common charges against public schools and Bracey’s suggested responses. Although apparently not intended as such, it also serves as a useful textbook for understanding how numbers can be used to mislead. It is readable and easily understood, with plenty of data to back up the assertions, marred somewhat by Bracey’s unfortunate habit of taking unnecessary potshots at those whose statements he disproves.

    "Understanding Education Statistics: It’s Easier (and More Important) Than You Think," by Gerald W. Bracey, Educational Research Service, 1997. This 42-page monograph is somewhat heavy going for the lay reader. Bracey’s attempts to simplify statistical concepts are more successful in some instances than others. Overall, he goes into more detail than most school board members want or need, but this is a good resource for administrators. To order: # FR-0253, ERS, 2000 Clarendon Boulevard, Arlington, Virginia 22201; or call 1-800/791-9308 or fax 1-800/791-9309 (orders only).Web site: www.ers.org; price: $8, with quantity discounts.

    Other sources consulted:

    "Testing: What board members need to know," by Dr. Debra Hamm, from Critical Issues: What Board Members Need to Know about Testing, South Caroline School Boards Association.

    "The Use and Misuse of Test Scores in Reform Debate," a RAND policy brief summarizing work by Daniel Koretz, "What Happened to Test Scores and Why?" Educational Measurement: Issues and Practice, Winter 1992, pp. 7-11, National Council on Measurement in Education. The summary is found at: http://www.rand.org/publications/RB/RB8008/

    "Questionable Statistics," by John G. Keane, American Demographics, June, 1985.

    "Five Myths About Test Score Comparisons," by Iris C. Rotberg
