This document has been formatted for printing from your browser from the Web site of the Illinois Association of School Boards.

COPYRIGHT NOTICE -- This document is © copyrighted by the Illinois Association of School Boards. IASB hereby grants to school districts and other Internet users the right to download, print and reproduce this document provided that (a) the Illinois Association of School Boards is noted as publisher and copyright holder of the document and (b) any reproductions of this document are disseminated without charge and not used for any commercial purpose.


Illinois School Board Journal
September - October 1998

Some tips on how to evaluate the numbers

Research, surveys and polls

    Educators are awash in surveys, studies, and polls, among other sources of information. It’s important to know how to judge their credibility.
    For example, the terms "study" and "research" (as in "research shows") mean little by themselves. To judge any results or conclusions, you need to know exactly what the researchers did: Review existing data? Observe a single classroom? Ask a bunch of people on the street?
    Surveys and polls also don’t mean a thing unless you know (1) how the sample was selected and (2) how the question was worded. For example, surveys by pro-voucher organizations consistently show that parents by large majorities favor vouchers that allow children to attend private and denominational schools. When other pollsters re-word that question, asking if respondents want to see tax money used to send children to private schools, the answer is a resounding "no."
    The sampling method is equally important. "Nine out of ten dentists surveyed use Gumrot Toothpaste" is a survey finding that few would accept as credible. Yet some surveys use equally questionable sampling techniques. Accurate sampling, which allows the responses of a few to be extrapolated to a larger group, is a science. The reason for the good reputations enjoyed by Gallup and Roper, to name two respected polling organizations, is that their sampling is highly refined and their questions are worded in a neutral fashion. Reputable pollsters also report the margin of error that attaches to their figures: "65 percent responded yes, plus or minus five percent" means that 65 percent of the sample said yes, and that if the whole population were surveyed, between 60 and 70 percent would say "yes."
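    That "plus or minus five percent" comes from a standard formula relating sample size to precision. The following sketch is illustrative only (the sample size of 350 is a hypothetical, not a figure from the article); it uses the usual margin-of-error formula for a sample proportion at roughly 95 percent confidence:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion at roughly 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 65 percent of 350 respondents said yes.
moe = margin_of_error(0.65, 350)
print(f"{moe * 100:.1f} points")  # about 5.0 points, i.e. a 60-70 percent range
```

    Note what the formula implies: halving the margin of error requires roughly quadrupling the sample, which is why reputable polls of a thousand or so respondents can speak for millions, while a self-selected sample of any size cannot.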
    Be wary, also, of surveys that invite self-serving answers. As this is being written, headlines are reporting a survey that shows one in five teenagers carries a weapon. More than 16,000 teenagers filled out confidential questionnaires in a study by the Centers for Disease Control that also reveals that one in five drove after drinking and that one in ten had attempted suicide. Those are pretty alarming results, all right.
    But if you’ve ever known a teenager, it might occur to you that (1) some of those who said they carry a weapon might like to think of themselves as tough kids who would carry a weapon and (2) some might choose the most alarming response just to scare their elders. None of the news media headlining the survey raised those possibilities, however. Prediction: this "information" will enter the debate on school violence and will be the basis for further alarm among those who think the public schools are going to the dogs.

 

Stalking the stats

    Most school board members don’t want to be statisticians, nor do they need to be. But board members do need to know how to evaluate the trustworthiness of statistics, and they need to know what questions to ask.
    As Gerald Bracey puts it in "Understanding Education Statistics," "Statistics often mask a political or ideological agenda. When there is something rotten in the state of Denmark, you need to be able to sniff it out by understanding the statistics used."
    To sniff it out, you need to know certain information that often is not provided. Bracey’s monograph, written for Educational Research Service, offers a crash course in basic statistics. Following is a brief discussion of some of the concepts.
    One of the trickier concepts is correlation. It is tricky because human beings are predisposed to find relationships and to assume causality. If A follows B, people tend to assume that B caused A.
    For example, if a new reading program is introduced and reading scores improve, it’s natural to assume that the program is responsible for the increase in scores. But it’s not necessarily true. The mere fact of a change, or knowing they were being observed, may have inspired the teacher and students to do better. It is important to sort out these variables before expanding the program to the entire district, where it might prove to be an expensive failure.
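    A small simulation makes the point concrete. In the sketch below (illustrative only; the variable names and numbers are invented, not from the article), a hidden factor such as student motivation drives both participation in a new program and test scores, so the two correlate strongly even though neither causes the other:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rng = random.Random(1)
motivation = [rng.gauss(0, 1) for _ in range(1000)]          # hidden cause
scores = [m + rng.gauss(0, 0.5) for m in motivation]         # driven by motivation
program_use = [m + rng.gauss(0, 0.5) for m in motivation]    # also driven by motivation
print(round(pearson(scores, program_use), 2))  # strongly positive, with no direct causal link
```

    The correlation is real and large, yet expanding the program would not raise scores, because the program is not what produced them.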
    One thing you need to know to judge whether the program is responsible for the improvement is the statistical significance of the difference in test scores. To use Bracey’s example, you might teach reading in two different ways and then test the students. You find there is a difference in the average score of the two groups. Statistical significance tests tell you how likely it is that you would have seen a difference as large as the one you saw if there had been no difference between the groups. If the test tells you the difference could easily have been due to chance or to differences in the students themselves, then the test scores don’t really tell you that one way of teaching is better than the other.
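    One simple, assumption-light version of such a test is a permutation test (a sketch of the general idea, not the specific procedure Bracey describes): shuffle the students between the two groups many times and see how often chance alone produces a gap in average scores as large as the one observed.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def permutation_p_value(scores_a, scores_b, trials=10_000, seed=0):
    """Fraction of random relabelings whose gap in mean scores is at
    least as large as the observed gap (an estimated p-value)."""
    rng = random.Random(seed)
    observed = abs(mean(scores_a) - mean(scores_b))
    pooled = scores_a + scores_b
    n = len(scores_a)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        if abs(mean(pooled[:n]) - mean(pooled[n:])) >= observed:
            hits += 1
    return hits / trials
```

    A value near 1.0 says the gap is easily explained by chance; a very small one says it is not, though even then the cause could be preexisting differences between the students rather than the teaching method.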
    Following is a brief discussion of other terms and concepts that will help you find the truth in the numbers.

    Grasping these concepts won’t make you a statistician, but it will help you know what questions to ask. And it will help you identify a questionable statistic when you see one.

 
