Illinois School Board Journal
September/October 2006
Develop a scorecard for what matters most
by Mark Van Clay
Mark Van Clay is in his fourth year as superintendent of La Grange Elementary District 102 in western Cook County and has been a public school educator for 29 years, all in Illinois.
Data-driven decision making is rapidly becoming the mantra for public education. Given the scrutiny required under federal and state No Child Left Behind legislation and the commitment of more and more school districts to continuous organizational improvement, the identification of valid data on student performance and organizational improvement efforts is a necessity for visionary school leadership.
Educators have dealt with various types of data for decades. However, the potential for data as a means to inform decision making is richer than ever before, especially given technological advances in compiling, sorting and describing data and the opportunities presented by having ISAT data for reading and math from consecutive grades for the first time in 2006.
This richness, however, may or may not reach the board room unless data can be used specifically to help the board do its work: strategic and visionary leadership at an organizational level. How does a board make best use of data-rich resources?
The need
A board of education needs to have its own data-driven instrument to assist with leadership work. Typically, a board may have access to data — either a little or a lot — but it tends to be someone else's data. A board often extrapolates what it wants to see happen by interpreting someone else's data.
The board's time and focus thus become a function of the quality of the data to which it is exposed and the ease with which that data can be accessed.
Why do boards, for example, spend so much time on budget issues? Notwithstanding the fact that money collected and spent is important to taxpayers, financial data is relatively easy to compile and present. Boards can become deeply involved in financial issues in part because good financial data is readily available.
In contrast, boards are equally interested in student achievement data but typically that data hasn't been as good. Until last spring, for instance, boards could not easily compare the same students year-to-year using ISAT data because tests in the same subject areas — reading and math — were not given in consecutive years to the same students. So what boards — and everyone else, including the press — did was compare the achievement of different children as if they were the same.
If data is timely and good, boards can act more proactively and strategically. If data is hard to get, incomplete or does not properly measure what needs to be measured, boards will act reactively and generally poorly because they are making decisions based upon bad data. And even more significantly, the more reactive boards are forced to be, the less strategic and visionary they can be in truly affecting the futures of their schools.
So what if a school board could have a data-based tool of its own to help set strategic and visionary charges for its district? What would it look like and what could it do?
Creating a scorecard
A "board scorecard" helps school board members use data to analyze the aspects of a district's operation that the board values most. With a scorecard, the board can identify:
And all of this on a single sheet of paper!
Behind the sheet is the "data behind the data" — supplemental information on how each scorecard measure was derived. This information gives context to the scorecard, but it is the scorecard itself — not the supporting data — that is the primary focus of the board. District 102 tries to keep supporting data to a page or two per scorecard measure — less if possible.
The board scorecard concept is borrowed from the health care industry. Perry Soldwedel, a continuous improvement consultant for the Lombard-based Consortium for Educational Change and a retired Pekin, Illinois, superintendent, serves on a hospital board that developed a scorecard system. As a consultant for District 102, Soldwedel brought the concept to La Grange last year.
We believe the board scorecard is the missing piece in our search to put a data-driven approach in the board room as well as the classroom.
Component parts and how they work
Think of a board scorecard like the dashboard of a car. A dashboard has a host of gauges that give a driver specific, ongoing feedback regarding speed, available fuel, miles driven, etc. Likewise, a scorecard gives a board specific, ongoing feedback regarding the district's journey toward continuous improvement. In both instances, gauges measure progress toward a predetermined destination.
The La Grange 102 scorecard is based on the district's strategic plan, a process that has been in place for the past 13 years. The first two columns on the scorecard (see illustration) identify areas of strategic priority and reference wording of objectives from the strategic plan. These "areas of focus" represent the parts of district operation most important to the board.
The next two columns define what is being measured (indicators) and the frequency with which those measures are taken. Linking indicators to the areas of focus is key to developing the scorecard. Once measured, the indicators need to give a valid view of the areas that are the board's focus. A measure can be valid yet fail to represent the area, or it can represent the area without being valid. Only when measures are both valid and representative will a board have useful data on which to plan the district's future.
The next three columns on the scorecard reflect measurements. The "Baseline" column shows where the district is currently and is derived from past data. It represents what was true — good or bad — yesterday, last week or last year. (Because District 102 was finalizing some of its scorecard targets at deadline, some of the numbers reflected in this example are hypothetical.) The sample scorecard assumes targets have been established and at least one "dipstick" reading has occurred for those indicators that are measured more often than annually.
The "Future" column represents the target for improvement. This is key to the scorecard because it represents what that desired improvement will look like. It is purposely a district target because the school board is responsible for the entire district, not just one or two schools.
The "Current" column represents "dipstick" measures — defined by the previous "When" column that determines the frequency of each measure. The scorecard depends on annual as well as more frequent measures. For some, such as receiving an award for budgetary excellence from the Association of School Business Officials International, only an annual measure makes sense because the award is only granted once a year. For other measures, such as student achievement or student attendance, more frequent "dipsticking" might allow mid-year adjustments based on the data.
Supporting data illustrated
The 2006 ISAT data for La Grange 102 is broken out by school and by grade to show the board how the district's reading and math percentages on the scorecard were derived. Two "black boxes" on the supportive data sheet correspond to ISAT percentages on the scorecard.
Student attendance data from May 2006 establishes a baseline for the scorecard for 2006-07. This data, broken out by district, also contains "black boxes" that correspond to the comparable boxes on the scorecard.
Most supporting data is broken out only by district — the focus of the board's long-range strategic efforts. The exception is student achievement data, which is broken out by school and grade, similar to the Illinois State Report Card.
In like fashion, each of the items under "Indicators" will have similar supporting data to explain the derivation of the scorecard measure. When complete, the scorecard will actually be a packet of supportive data, much like a typical board meeting packet is made up of supporting materials covered by the meeting's agenda.
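As a minimal sketch of how a district-level scorecard percentage can be derived from school- and grade-level supporting data, the following pools counts of students tested and students meeting or exceeding standards. All school names and numbers are invented for illustration.

```python
# Hypothetical school- and grade-level counts behind one scorecard figure:
# (school, grade, students tested, students meeting/exceeding standards)
school_grade_results = [
    ("School A", 3, 60, 48),
    ("School A", 4, 55, 45),
    ("School B", 3, 70, 52),
    ("School B", 4, 65, 50),
]

tested = sum(n_tested for _, _, n_tested, _ in school_grade_results)
meeting = sum(n_meets for _, _, _, n_meets in school_grade_results)

# The single district percentage is what appears on the scorecard itself;
# the per-school, per-grade breakdown stays in the supporting-data packet.
print(f"District % meets/exceeds: {100 * meeting / tested:.1f}%")
```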
The idea is to give a board frequent looks at progress toward improvement being made by the organization. A board should receive a scorecard each time one of the "Current" measures changes.
Clearly not all measures will — or are designed to — change at the same time. But frequent looks for a board have two advantages. First, the board is continually analyzing what it has determined as most important to it. The board is linking its time spent to its publicly stated priorities. Second, frequent looks mean looking only at what is different from one look to the next — a board can concentrate on a few things at a time and give those few things proper and serious attention.
Our recommendation is that a board review its scorecard each time a "Current" measurement changes. That means, for most boards, the scorecard should be addressed as a regular part of each meeting agenda.
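A minimal sketch of that practice, assuming hypothetical indicator names and readings: compare the latest "Current" values against those from the previous meeting and surface only the indicators that changed.

```python
# "Current" readings at the previous and latest board meetings (hypothetical).
previous = {
    "ISAT reading % meets/exceeds": 78.0,
    "Student attendance rate": 95.2,
    "Budget variance (%)": -1.0,
}
latest = {
    "ISAT reading % meets/exceeds": 78.0,
    "Student attendance rate": 96.1,
    "Budget variance (%)": -1.0,
}

# Only the measures that moved since the last look need board attention.
changed = {name: (previous.get(name), value)
           for name, value in latest.items() if value != previous.get(name)}

for name, (old, new) in changed.items():
    print(f"{name}: {old} -> {new}")
```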
Board benefits
By frequently having a board's stated priorities — with data attached — placed before it, a board's ongoing focus can stay where it most needs to be. This makes a board more proactive and yields at least seven benefits:
1. A board can directly measure and link its time spent to its publicly stated priorities because those priorities will be expressed within the scorecard. This allows the scorecard to be a potentially powerful communications tool as a quick, easy way to share a board's priorities with the community in data language that is understandable to the non-educator. It also allows the public to witness a board spending time clearly focused on its stated priorities.
2. The scorecard is all about continuous organizational improvement: How can we continually strive to be better? The scorecard states up front where we are, where we want to go and what progress we're making to get there. It allows improvement to become measurable and tangible instead of theoretical and wishful.
3. The scorecard's design focuses the board on organizational changes for concrete and measurable improvements rather than identifying people to blame for things that didn't happen or went wrong. Organizational improvement comes predominantly from improvements in organizational targets, structures and procedures, not from changing people in the organization. Indeed, most people want to succeed if they know and understand in advance the target to be reached. More often than not, those targets are not clear enough to the people responsible for meeting them.
4. A board now has the opportunity — and the challenge — to determine its true priorities. In order to have strategic vision, the board must determine priorities from among its myriad responsibilities. If a board cannot determine its own priorities and address them, it cannot lead itself, much less the school district. The scorecard becomes invaluable to a board in terms of identifying and staying focused on its priorities.
5. A board scorecard, used properly, keeps a board strategic in its outlook and clarifies the line separating strategic leadership from detailed micromanaging. The scorecard keeps a board looking forward — setting goals for the organization — rather than backward trying to change things from the past.
6. A board scorecard teaches patience for change. The scorecard makes improvement more visible and tangible, but it doesn't make change happen faster than the organization can realistically absorb it. Board discussions during scorecard development will likely highlight the complexities of change and the reality of incremental steps in making lasting change happen.
The secret here is that any positive change is good change. Because change can now be incrementally tracked and measured, it doesn't become necessary to change all at once. Boards can "go slow to go fast" and watch change develop steadily and realistically over time. A board can keep its collective eyes on a clear target, looking at tangible measures to tell it if and when progress is being made.
7. A board scorecard provides an excellent opportunity to develop a strong, productive board/administrative partnership. While a board can and should take the lead in identifying the scorecard's priorities, indicators are selected and measures are obtained in partnership with administrators. And it is the administration's responsibility to determine the supporting data to back up the scorecard. Indeed, the joint development of a scorecard is the ideal illustration of the difference between strategic board responsibilities and tactical administrative responsibilities.
Potential complications
The board scorecard is not without potential rough spots. Here are some barriers for boards to consider:
Take time in development. La Grange 102 took a year just to develop a draft of a scorecard. Because the district had a strategic plan in place, board priorities were easy to identify: they came straight from the plan. Yet it still took the district an entire school year to develop a draft scorecard to pilot in 2006-07, in large part because we had no educational prototype from which to work. Ours, therefore, is an original effort.
For those districts without a strategic plan, determining board priorities could be a lengthy process in itself. Once priorities are identified, it will still take serious time and thought to determine indicators and set targets.
Try a pilot before implementation. This degree of data tracking and measurement likely will be new for everyone, especially the board. Indicators that make sense in the design stage may not add value after actually measuring them for a year. Initial targets may not be realistic; they may be either too easy or too ambitious. Or maybe the data collected still won't give a realistic view of the priority to be measured. The board itself may react to the information in unintended ways. Take a year to work out the bugs before doing anything "official."
Don't be afraid to change or adjust the scorecard if it isn't working the way you hoped. Flawed or not, the scorecard will produce analyzable data to make the process better the next time around. Remember, the goal is to capture reality through data. Don't let data define reality; it should only describe reality.
Remember, the board scorecard is an effort to embark on continuous improvement. You will never be done improving, which means you don't have to improve everything all at once. The scorecard should increase organizational focus — not organizational tension. In fact, it should reduce organizational tension through attainable targets and greater clarity.
Also remember that continuous improvement is about making the organization better, not about identifying slackers and assigning blame. Unless you are already certain you don't have the right people in place, generally assume that you do. The scorecard will point to where you want to improve organizationally. Accountability should mean meeting predetermined targets, but the failure to meet those targets means digging deeper into the data to find out why and how the organization as a whole can improve — not to identify scapegoats.
There is a danger for boards in having too much and too detailed supporting data. In general, the deeper one digs into data, the less strategic and the more tactical or operational one becomes. The board's role is not tactical or operational. The board needs to stay with the strategic "big picture." It is staff's role to dig into data to find answers to big picture "why" questions: Why didn't we hit our reading percentage target this year? Why have we come in over budget when we predicted we would have a budget surplus? Why did we have more staff turnover than predicted by our target?
The tricky balancing act for a board scorecard is to give boards enough good data to focus on and analyze progress toward board priorities, without giving them so much data that they abandon their strategic responsibilities for tactical and operational decision making. If data causes a board to abandon that strategic charge, board micromanaging will be the certain result.
Beware of setting scorecard targets independent of staff. The greatest perceived threat to staff is that a board using a scorecard will set unattainable or unreasonable targets and punish staff if the targets are not met. Therefore, especially in the developmental stages, be aware that staff will have an instinctive fear of a board with access to good data. Expect some push-back from staff, at least initially.
Go back to the secret discussed in the sixth board benefit above. Strategically, a board is committed to continuous organizational improvement. Setting targets is primarily tactical. As long as improvement is measured and made, a board is meeting its strategic objective. It can afford to be less prescriptive regarding the target itself.
Let the tactical and operational people (staff) help set targets for things near and dear to them, like student achievement. A board wants continuous ongoing improvement. That is more important than arguing over how much improvement should be made in a given year.
Summary
District 102 believes the board scorecard is the missing link in bringing the board of education to the same level of data-based decision making that now involves staff.
We also believe that, used properly, the board scorecard will link the board's time to its publicly stated priorities, make improvement measurable and tangible, focus attention on organizational change rather than blame, clarify board priorities, keep the board strategic rather than micromanaging, teach patience for change and strengthen the board/administrative partnership.
District 102 now has a completed board scorecard draft, including baseline data for all indicators, ready to pilot beginning this fall. By next May, the district will have revised that scorecard based on a year of piloting and should be ready for "official" deployment in 2007-08.
We are patient but persistent. Above all, we want our board scorecard to "get it right." We won't flag in our efforts to achieve scorecard success, but we will take the time necessary to make sure it works as intended. The payoff will be worth it: data-based vision and clarity of purpose throughout the organization promulgated by the natural district visionaries — the board of education.
What is good, what is bad data for the board?
Put simply, any data that causes a board of education to go beyond its charge as a strategic leader and long-term visionary is likely bad data for that board even if it isn't bad data for someone else.
Data needs for a school board are two-fold. First, data should allow a board to act more proactively than reactively. Any time a board can anticipate something rather than react to it, that board is more likely to fulfill a visionary role. The best example of a proactive approach is a board's commitment to policy development.
Policies allow a board to anticipate the future rather than simply respond piecemeal to the past. It is the difference between moving a vase before a 2-year-old visits or picking up pieces of the vase after the two have been introduced. For a board, a proactive decision is always better than a reactive decision.
Second, too much data can lead board members into areas that are staff responsibility. Yet little or no data does not allow a board enough objective information to do its own leadership job well. What the board scorecard offers is data specific to that board's strategic responsibilities. It is data designed so that the board doesn't have to "borrow" someone else's data in order to do its work in an informed fashion.
Formative vs. summative data
A board scorecard is likely to include two kinds of data — formative and summative — each of which has a different purpose and different implications for analysis.
Formative data is generally diagnostic: a measurement taken before an intervention occurs. It is a preparation and planning measure. This data allows teachers, for example, to assess what needs to be taught to whom and has implications for student grouping and instructional approaches. It describes what needs to happen and what targets to set.
Summative data is generally about what has happened after an intervention. It is a final assessment measure. ISAT scores are an example of summative data. Such data describes what has already happened and whether targets were met or not.
Formative data should not be used to assess whether the desired learning occurred, because the instruction hasn't yet taken place. Likewise, summative data shouldn't be used to plan future instruction unless a very clear connection exists between what was just learned and what will be taught next.
Because a scorecard is for a board of education's use at a strategic level, it is not concerned with the formative diagnosis and planning for instruction that a teacher will find important. Therefore, its "formative" data is defined more by the frequency of collection.
Formative data, for scorecard purposes, should be viewed as data collected more often than once a year. Viewing the sample scorecard, the "When" column references "3x per year," "Trimester" and "Quarterly." These measures would be considered formative because they occur multiple times during the school year. The "Annually" references would be considered summative.
The point here is not to hold to a technical definition of formative and summative as much as it is to vary the times by which different indicators are measured and to ensure some indicators are measured more than once per year. By so doing, a board can give ongoing analysis to the data reported on the scorecard rather than waiting for a once-a-year review of the previous school year.
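A minimal sketch of that working definition, using the frequency labels from the sample scorecard (the indicator names are hypothetical):

```python
# Frequencies that occur more than once per school year count as formative;
# anything measured annually counts as summative for scorecard purposes.
FORMATIVE_FREQUENCIES = {"3x per year", "Trimester", "Quarterly"}

def classify(frequency: str) -> str:
    return "formative" if frequency in FORMATIVE_FREQUENCIES else "summative"

indicators = [
    ("Student attendance rate", "Trimester"),
    ("ISAT reading % meets/exceeds", "Annually"),
    ("Local assessment % on track", "3x per year"),
]

for name, when in indicators:
    print(f"{name} ({when}): {classify(when)}")
```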
The board's strategic decision making role and where it fits
In La Grange 102, both the board and staff have spent considerable time defining organizational roles to clarify decision-making responsibilities. At a district level, we define specific strategic, tactical and operational decision-making roles, and we believe that the best and most effective organizational decisions are made when all three roles are represented in a comprehensive decision-making process.
The strategic role belongs to the board and operates at 50,000 feet: big picture with little detail, multi-year and visionary. The board needs to set the vision and district direction by focusing on all-encompassing, big-picture issues: policy development; the formation of a strategic plan; the board scorecard; community values and their effects upon a public education system; and the long-term fiscal health of the district. The board then charges staff with determining how to reach those board-defined strategic end points.
The tactical role for administration is about plans and planning: it operates at 1,000 feet, with a good-sized picture, a fair amount of detail and a year-by-year horizon. The administration needs to develop the plans required to carry out strategic charges of the board: implementation plans, needs assessments, data research, creation of deployment options and reality assessments.
The operational role generally belongs to teachers and is about what works in a classroom: "on the ground" with lots of detail; week-by-week if not day-by-day. Teachers need to supply the reality of what life is like in a classroom filled with real children. Teachers need to enrich tactical plans with operational realities: feedback on plans, detail-based suggestions for modifications regarding plan implementation. Analyzing what really worked or didn't work is the ultimate reality check.
These decision-making roles define responsibilities for each group, and each group is assigned responsibilities where its expertise is greatest. Strategic expertise, by definition, belongs to the school board; tactical expertise to administration; operational expertise to teachers. Groups can make recommendations outside their areas of expertise, but they should not make decisions that belong to another role.
The scorecard is a strategic instrument designed for strategic purposes. It depends on staff to carry out the tactical and operational functions beneath it so that final decisions reflect all three roles. It should not be used as a means for a board to assume tactical and operational decisions that belong to other roles.
Choosing indicators and targets
Choosing indicators is not as easy as it sounds. Yes, indicators should accurately and credibly measure the priority area. But many times the ideal indicator doesn't exist within the organization — because such information has never been collected before, because the district doesn't have the organizational or technical ability to collect the information, because the information isn't attainable, or because the organization would have to create something new.
While developing and piloting a board scorecard, it is better to start with currently available data, even if that data falls short of the perceived ideal, rather than creating something new. With any scorecard development process, a board should look at what it can already produce before insisting on data that never existed before.
Remember, your board scorecard has never been tried and tested. You don't really know whether your "ideal" indicator is a good one or not. It is better to use what you already have and pilot those indicators through use. The pilot will generate data, and that data can be used to determine what indicator changes future scorecards should make.
Similarly, a board setting targets independent of staff input can cause real consternation. Setting targets is largely tactical, so let the administration establish initial targets. Remember, a board wants continuous improvement as a strategic goal. If improvement is measurable and continuous, the particular increment selected for one year's improvement should not be strategically important to the board. Once again, as a scorecard is put into use, the resulting data will guide changes for future scorecards.
Generally, it is better to start using a scorecard to generate data than it is to quibble about design and never get to test it. Most important, realize the scorecard itself needs to be a continual process of tinkering and revision. It should not be static. Like everything else in a continually improving organization, it needs to evolve to improve.