
EXECUTIVE SUMMARY

The overall purpose of this monograph is to lay the groundwork for developing a series of indicators for education that can be used to monitor progress in education projects, in country-specific education systems, in the developmental spin-offs from investment in education, and in poverty reduction. In the current policy climate, the focus is on basic education.

In Chapter One, a 'conceptual framework' is sketched out. The first three sections analyse the reasons for the resurgence of interest in educational performance indicators, identify the problems of definition and development, and review the literature on the use and abuse of performance indicators.

We show that performance indicators, and their critics, are not at all new. The issue is not whether performance indicators are good or bad, but what questions are being asked and whether performance indicators can answer them. The most generic definition is preferred: information that can be used for understanding, and eventually for decision-making. The potentially distorting effects of too rigid a system of performance indicators are identified in terms of seven characteristics observed in the management literature: tunnel vision, sub-optimisation, myopia, convergence, ossification, gaming and misrepresentation.

The second half of Chapter One discusses possible frameworks for performance indicators drawn from the experience of a selection of countries and contexts. Key questions are identified as:

· Is the performance indicator about a significant aspect of the education system or the impact of education?

· Can it be readily understood by everyone involved, both in-country and by external parties?

· Will the data be reliable and not subject to significant modification as a result of response error or changes in the personnel generating it?

· To what extent is the data reported under the control of operational managers?

The apparent similarity of the problems in different DFID programme countries, and of the solutions proposed by 'international experts', would suggest that agreement on a set of indicators is possible. Indeed, reaching such agreement is not technically difficult; but insofar as partnership and collaboration with developing countries themselves are valued, the appropriate indicators should be defined through a process of negotiation, not a priori.

While concrete sets of indicators are not developed, a framework is proposed based on distinctions between:

· context, aims, inputs, processes, outputs and outcomes;

· the range of possible stakeholders; and

· types and levels of decision-making.

This framework can be used at the sectoral level based on DFID's overall aims; at the planning and pre-planning stage; and at the project implementation and monitoring stage. Detailed specification of the indicators within this framework should be seen as a collaborative effort.

In Chapter Two, we examine case studies from Kenya (where DFID has a long-standing involvement in various projects), Andhra Pradesh (where there has been a large-scale unified programme for over six years), and South Africa (where appropriate structures are being developed).

The Kenyan example shows that, despite several decades of project involvement, there is little understanding of exactly what the Government gets from spending nearly 30 per cent of its recurrent budget on education, and little movement towards even basic monitoring.

The Andhra Pradesh case study was based on the experience of designing a 'participatory' monitoring and evaluation scheme. Experience with the previous project had shown that a top-down scheme of monitoring and evaluation was of only limited utility. The problem, therefore, was to design mechanisms for collecting data at several different levels that would allow the construction of 'appropriate' performance indicators immediately useful for local project management. We demonstrate that, in this situation, the usual distinction between monitoring and evaluation breaks down, and that the range of feasible indicators is strictly limited by the constraint of identifying simple yet robust data collection techniques.

The South African example shows the difficulty of developing sets of performance indicators at the same time as appropriate structures for the education system.

In Chapter Three, we move beyond the education sector to develop a framework of overall social indicators. The rise of what has been called the 'social indicator movement' in the 1960s is discussed, drawing attention to the major split between those focusing on a uniform method of valuation (usually money) across the social sectors and those concerned to reflect the diversity of living patterns. The concerns that led to the development of social indicators in the 1960s remain relevant today. Examples of different approaches to developing social indicator systems are reviewed.

We conclude that the basic problem remains the comparability and coverage of data that are meant to be the basis for the indicators. The experience both in the 1960s and now is that composite indices based on combining different data hide more than they reveal.

While recognising that it is time-consuming, we recommend celebrating diversity in the approach to indicator development. The final two sections therefore consider the different kinds of problems that arise when attempting to develop a modern framework for monitoring social conditions from the top down, and for monitoring the satisfaction of basic needs at the local level.

The overall message of the report is that, whilst anyone can develop performance indicators, the real problem is to identify the social forces that have led to the generation of the data, and therefore to take into account the misuses to which indicators can be put by arbitrary authority.

