

Chapter 2 - The Quality of Education in Developing Countries - a review of the literature

Introduction

‘Education Quality’ as a concept has always been difficult to define. Most public debates on the quality of education include concerns about a student’s level of achievement, the relevance of learning to the world of employment, or the social, cultural and political worlds occupied by the student. Frequently, they also include concerns about the conditions of learning, such as the supply of teachers or facilities.

In the light of this, researchers have suggested that the concept of educational quality is complex and multidimensional (Grisay & Mahlck, 1991; Hawes & Stephens, 1990). Grisay and Mahlck (1991) argue that the notion of quality should not be limited to student results alone but should also take into account the determinant factors which influence these, such as the provision of teachers, buildings, equipment, curriculum and so on. As such, the general concept of quality of education is made up of three interrelated dimensions. These are: the quality of human and material resources available for teaching (inputs), the quality of teaching practices (process) and the quality of results (outputs and outcomes).

Thus, studies which set out to assess the quality of education need to be treated carefully. Some studies which purport to assess the quality of education are in fact simple measures of input to education (teachers, equipment, materials, etc.). Many of these studies are problematic because they focus on formal rather than actual quality characteristics. For example, one school might have a larger number of highly qualified teachers than another, but its teachers may be less motivated. Similarly, one school might have fewer facilities than another, but use them more efficiently (Carron & Ta Ngoc, 1981).

Another set of studies are those which use indicators such as repetition rates and drop-out rates as proxy measures of educational quality. The attractiveness of such studies is the availability of data, often contained in educational statistics collected through Educational Management Information Systems in most developing countries. According to Lockheed and Hanushek (1987), these data are useful for making aggregate comparisons between regions of a country, and between countries, but are less relevant for analysing differences in performance between schools and between children in the same grade. They are even less useful for explaining such differences (Alexander et al, 1999).

Many studies do collect data on student achievement. However, most such data are based on standard achievement tests and tend to focus on the acquisition of traditional knowledge and skills. According to Ross and Mahlck (1990), the attainment of more complex educational objectives, such as ‘individuals capable of working in co-operation with others’ or ‘demonstrating ability to solve problems’, is rarely evaluated. Indeed, looking at student outcomes alone does not tell us how schools operate. A school whose students achieve a higher score than those of another is not necessarily a better school. Higher scores may be explained by ‘out of school’ factors such as the fact that students enter school with higher academic abilities. In other words, a school’s ‘effectiveness’ should be judged by its contribution to a student’s achievement independent of the student’s home background. In this sense, it is the ‘value added’ by the school to the student’s literacy, academic and social skills (Grisay & Mahlck, 1991) which should determine its standing.

Research Paradigms: School Effectiveness, School Improvement and Improving Educational Quality

From the discussion above it is clear that there are potentially many different approaches to the study of educational quality. According to Grisay and Mahlck (1991), three principal research paradigms can be discerned from the literature. These are:

i. Experimental studies which measure the effect of an innovation, such as the introduction of a new curriculum or new teaching method, on the educational system. Included in this research paradigm are overviews, or meta-analyses, which re-analyse the size of the effects of a specific factor, for example the use of textbooks, on educational outcomes.

ii. Large-scale, ‘input-output’ research, which sets achievement against natural variations in input. Non-school features of the input (age, IQ, home background, etc.) are statistically controlled in order to identify the educational variables associated with better achievement.

iii. Qualitative studies on school improvement, the so-called second generation school effectiveness studies (Riddell 1996) which rely on techniques of observation and case studies. These studies differ from first generation school effectiveness research in that they place less emphasis on the identification of input variables that can be altered separately (supplying more textbooks, increasing the numbers of teachers, improving their training) and focus rather on process variables and systemic factors (school climate, nature of leadership, style of management, teaching practices etc.).

In developing countries, there have traditionally been far fewer examples of experimental research than in more industrialised countries. However, there are increasing examples of quasi-experimental studies, mainly those carried out as rigorous evaluations of substantial educational innovations. Currently, the World Bank is carrying out a number of such evaluation studies in developing countries based on the use of randomised control groups (Newman et al, 1997).

The most common research tradition in developing countries is still that of the input-output survey. However, there is increasing criticism of the value of these studies in determining the quality of education in developing countries.

Jansen (1995) has reviewed the field of School Effectiveness research in developing countries and concluded that it does not address the central question necessary for the development of educational quality in the developing world. He argued that school effectiveness research ‘has reached a cul-de-sac in the 1990s’.

Jansen is concerned by the fact that most School Effectiveness research in developing countries continues to suffer from the criticisms levelled against ‘first generation’ studies of this kind, ranging from methodological critiques (Purkey & Smith, 1983) to ‘insensitive cultural transfer’ (Fuller & Clarke, 1993).

The main methodological criticisms include sample bias, narrowly focused outcome measures, lack of control for background characteristics, single-level analysis of effects and so on. The trans-national character of school effectiveness research has also attracted a lot of criticism for the ‘insensitive transferral of methodologies from industrialised countries to developing country contexts’.

While Fuller and Clarke (1993) argue for a more ‘culturally situated model of school effectiveness’ (p. 119), Jansen points to the growing dissatisfaction with the effectiveness paradigm and calls instead for a research paradigm which focuses on quality rather than effectiveness. He suggests that ‘studies of effectiveness and studies of quality represent competing and incompatible agendas for school and classroom-based research’ and makes a distinction between a ‘school effectiveness’ and a ‘school quality’ approach. He suggests that education quality is ‘concerned with processes of teaching, learning, testing, managing and resourcing which must be investigated on their own terms’, i.e., through in-depth qualitative investigations of such processes, drawing on insider perspectives of what happens inside schools and classrooms. Thus, unlike Fuller and Clarke, who argue that school effectiveness research should be broadened and ‘culturally situated’, Jansen dismisses the approach as being incompatible with assessing quality. He captures the differences between the two approaches as follows:

Figure 1 - Differences between school effectiveness and school quality approaches

School Effectiveness:
- Origins in economics, using the production function model
- Studies the effects of a set of inputs (e.g. textbooks) on a specified output (e.g. student achievement)
- Utilises large-scale statistical methods, e.g. multiple regression models, to ‘determine’ the relative effects of different inputs on achievement
- Results are often aggregated for a large number of schools, offering generalisations across contexts

School Quality:
- Influenced in part by anthropology, using descriptive procedures
- Studies school and classroom level processes and their interactions, and their impact on achievement
- Uses ethnographic instruments adapted for particular contexts, e.g. interviews, observation schedules, questionnaires
- Results are often specific to particular schools or classrooms, though generalisations are also sought across schools and classrooms

From Jansen 1995:194
Riddell (1996), who also reviews the value of school effectiveness research in developing countries, is, however, much more optimistic about the future of the research paradigm in this context. She argues that the promise of a ‘third wave’ of school effectiveness research in developing countries is in danger of being lost without being fully understood or explored. Riddell points to ‘a trickle’ of research which has utilised multilevel analysis as a method, but adds that these studies have never really taken root.

Riddell feels strongly that school effectiveness research has not been given an adequate chance in developing countries and that it is in danger of dying an ‘untimely death’. Most critics, Riddell argues, are calling for a return to ‘second wave’ classroom-based observational studies.

To this end she is critical of the stance adopted by Jansen in relation to research methodologies. She accuses him of playing quantitative and qualitative methodologies against each other and questions his belief that qualitative approaches are necessarily better for policy interventions than quantitative approaches (see Johnson, 1995).

Riddell argues that the so-called ‘second wave’ of school effectiveness research, which places emphasis on process variables, has been overlooked in developing countries although it has had marked success in industrialised countries. As such, she proposes that the third wave of school effectiveness research (multi-level modelling) offers a way in which the false dichotomy between quantitative and qualitative approaches can be bridged and that ‘multi-level analysis is capable of doing what Jansen suggests we should be moving towards, i.e., the possibility of delineating complicated networks of cross-level relationships within classes, within schools and contextualising the different multi-cultural backgrounds in evidence’.

The separation of qualitative and quantitative approaches in studying quality is a problem. Indeed, any research which sets out to fly a particular methodological flag regardless of the specific research questions being addressed is missing the point. The search for suitable methodologies to assess quality is an important consideration of the present study. It is thus useful to review a small selection of studies on educational quality in developing countries, which span the continuum from quantitative input-output studies to more qualitative, case study approaches.

Case studies of school effectiveness, school improvement and educational quality

1. IIEP Primary School Quality Study

This national study of primary education in Zimbabwe was carried out in 1980. The aim of the research was to produce a list of the most effective schools in Zimbabwe and target these for further study. The study was conducted by the International Institute for Educational Planning (IIEP) as part of an in-country training exercise in the development of research skills (Ross & Postlethwaite, 1992). According to the report, the study was concerned with the level and distribution of inputs to schools which were considered by the Ministry of Education and Culture to be central to the provision of basic education. Thus data were collected on school buildings, teachers and their living conditions, and resources in schools and classrooms. The study was also interested in measuring student outcomes, and a reading test was designed as an outcome variable.

The sampling strategy ensured a representation of urban and rural schools, government and non-government schools stratified by region. One hundred and fifty schools were selected on this basis. To administer the reading test, 20 pupils from grade 6 classes were chosen randomly from each of the 150 schools.

Schools were ranked according to average reading attainment scores as well as the average socio-economic level of the school (these indicators were derived from a questionnaire completed by the 6th grade students and based on seven possessions at home). After controlling for student economic status, school averages of the residual scores were computed in order to rank them from ‘the most effective to the least effective’.

Many criticisms of the IIEP study have been made, most notably from Riddell (1997) who points out that the study ignores several elements of good research design for school effectiveness studies. Perhaps the two most important omissions of the IIEP study are, first, the failure to employ a multilevel analysis of the data rather than the single level analysis performed, and second, as argued by Riddell, the problem of constructing valid and reliable baseline measures of student level intake.

2. Namibian National Learner Baseline Assessment

The Namibian National Baseline Assessment was conducted in 1992 to answer the question ‘how much do Namibian children learn in schools?’ (Namibian Ministry of Education and Culture, 1994). The purpose was to collect information about the English and Mathematics proficiencies of students. A nationally representative sample of 136 primary schools was selected. One Grade 4 class and one Grade 7 class were randomly selected in each of the 136 schools, and all the pupils in these classes were given criterion-referenced tests in English and Mathematics. The only other information gathered was the age, sex and home language of the pupils surveyed.

The baseline study provided information about student learning beyond that which teachers already had, and allowed schools to be compared with one another in English and Mathematics proficiencies. The study also suffered from methodological problems, including a lack of school-level intake measures.

The two studies described above, according to Riddell (1996), fit the model of school effectiveness study which Cuttance (cited in Riddell, 1996) describes as the ‘Standards Model’. The Standards Model is simply a league table which compares the average performance of pupils in a given school with the average performance of pupils across all schools. By way of contrast, the second and third models outlined by Cuttance, the so-called ‘School-level Intake Adjusted Models’ and ‘Pupil-level Intake Adjusted Models’, could have been achieved in the IIEP study if school achievement (derived from the reading test scores) had been subjected to a regression analysis in relation to the school-level socio-economic data. The residuals would have constituted a ‘school-level intake adjusted measure’ (see Riddell 1996:12).
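The residual-based ranking described here can be sketched in a few lines. The following is an illustrative sketch only, not the IIEP study's actual analysis: the school-level scores and the socio-economic (SES) index are invented, and a simple single-level ordinary least squares regression stands in for the computation of ‘school-level intake adjusted’ residuals.

```python
import numpy as np

def intake_adjusted_ranking(mean_scores, mean_ses):
    """Rank schools by the residuals of a regression of mean
    achievement on mean socio-economic status (SES).

    Schools whose pupils score higher than their intake would
    predict receive positive residuals ('more effective');
    schools scoring below prediction receive negative ones.
    """
    # Design matrix with an intercept column: score = a + b*SES + residual
    X = np.column_stack([np.ones_like(mean_ses), mean_ses])
    coef, *_ = np.linalg.lstsq(X, mean_scores, rcond=None)
    residuals = mean_scores - X @ coef
    # Order schools from largest positive residual downwards
    return np.argsort(-residuals), residuals

# Illustrative data: five schools' mean reading scores and SES indices
scores = np.array([55.0, 60.0, 48.0, 70.0, 52.0])
ses = np.array([0.2, 0.5, 0.1, 0.9, 0.4])
order, resid = intake_adjusted_ranking(scores, ses)
```

Note that this single-level sketch still suffers from the weakness Riddell identifies in the IIEP study itself: it aggregates pupils to school means rather than modelling pupil- and school-level variation jointly, as a multilevel analysis would.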

3. The Botswana Junior School Study

The Botswana Junior School study was carried out by Fuller, Hua and Snyder. It was aimed primarily at establishing the extent to which the achievement of girls is affected by particular classroom practices. This study represents a combination of qualitative methodologies, based on extensive classroom observations of 214 teachers in 31 junior-secondary schools, and outcome measures derived from achievement tests in English and Mathematics administered to the same students on two occasions, the second occurring a year after the first. To its advantage, the study has a pupil-level intake measure. Four sets of variables were used: material conditions and classroom inputs; teacher characteristics and training; teachers’ pedagogical beliefs and efficacy; and teaching practices and classroom rules. According to Riddell (1996), the study represents one of a small number of sophisticated school effectiveness designs in the third world.

Although the results of the study are disappointing - it would appear that teachers asking ‘open-ended questions’ reduced the gains in maths scores for girls - the study did raise important questions about the cultural relevance of certain pedagogical practices deemed to be ‘effective’ in the context of industrialised countries.

4. World Bank Project Designs and the Quality of Education in Sub-Saharan Africa

The study carried out by Heneveld and Craig (1996) abandoned quantitative approaches to school effectiveness research for a qualitative approach to monitoring effective schools. Heneveld and Craig took as given the different sets of factors related to school effectiveness which had arisen in various studies, including Dalin (1994), Fuller (1987), and Scheerens and Creemers (1989). Based on these factors, the authors developed a conceptual framework of 18 key factors that influence student outcomes. The factors were divided into four categories: supporting inputs from outside the school, enabling conditions, school climate, and the teaching and learning process inside the school.

A sample of World Bank projects was selected on the basis of the degree of attention paid to the characteristics of school effectiveness identified in this conceptual framework. Twenty-six country projects were eventually identified and appraised in this respect.

Riddell (1996) remarks that the Heneveld and Craig study is different from the Botswana Junior School Study, which queried, rather than accepted, the relevance of school effectiveness factors in developing countries. She also remarks that the study does not look directly at student achievement, but concedes that Heneveld and Craig stated that they had no intention of doing so. Rather, their framework seeks to assess the presence and dynamics of conditions that have been identified as conducive to effective education. Riddell (1997) is adamant, however, that the conceptual framework leaves itself open to misuse by those ‘unschooled in its dangers’ and argues that ranking schools using Heneveld’s descriptors to monitor educational quality based on ‘conducive conditions’ is entirely different from ranking effective schools based on the relationship of such conditions to pupil outcome (1997: 195).

5. How Schools Improve

Dalin’s (1994) study of ‘How Schools Improve’ is a qualitative study carried out in Bangladesh, Colombia and Ethiopia. In-depth studies were made of 31 rural primary schools. On the basis of the following indicators - degree of implementation of key aspects of the reform; degree of impact on the students, teachers and the school as an organisation; and degree of institutionalisation of the reform, or the ‘routinisation’ of practices - schools were sorted into three categories (excellent, very good and good schools).

The study sets out to account for educational change in a much broader sense than the student outcomes of the quantitative research tradition. The study also gave credence to a wide range of perspectives on change; for example, it contrasted the perspectives of key informants with those of local informants.

The study is an important addition to a relatively limited number of qualitative studies of educational quality. The focus is on accounting for educational change and its strength is the importance given to different perspectives on the change process.

6. Profiles of Learning in South Africa

A study carried out by Johnson (1998) sought to develop a framework in which education quality could be assessed through the use of ‘insider perspectives’ (Little, 1995). Working with South African primary school teachers, the researcher developed a set of indicators of achievement for literacy and numeracy within five broad levels of progression: initial, developing, independent, complex and advanced. These levels are meant to be reflective of increasing competence and when they are related to children’s age, the resultant matrix essentially constitutes a ‘growth model’ (Rowe and Hill, 1996) which projects a child’s developing levels of performance.

An illustration of the levels of achievement is provided in Figure 2 below. According to the matrix, the average seven-year-old child, for example, could be considered to be working within the ‘independent’ level. The ‘O’ indicates the average or median, while the ‘whiskers’ indicate the range. Thus some seven-year-old children may still be achieving within the ‘developing’ level, while others may be progressing towards the ‘complex’ level.

Figure 2 - The Profiles of Learning Framework

From Johnson, 1998: 388
This approach to assessing educational quality through a developmental or growth framework is summed up by Masters (1994) as one which:
“seeks to provide a more explicit identification of outcomes and a framework against which... the progress of an individual, a school, or an entire education system can be mapped and followed. But this approach is built not around a notion of an outcomes check list, but around the concept of growth... Student progress is conceptualised and measured on a growth continuum, not, as the achievement of another outcome, on a check list” (Masters, 1994: 9, cited in Rowe and Hill, 1996: 315).
Johnson (1998) shows how teachers, on the basis of evidence assembled through portfolios, profiled the learning achievements of their students. The difficulty with the study is that it was not able to calculate the achievements of all learners. Rather, it demonstrated, through case studies, what selected children were able to achieve. These data are thus indicators of learning achievement of children in different age categories, but not overall scores.

The six studies reviewed above are examples of approaches to assessing educational quality from a range of methodological perspectives. The IIEP study in Zimbabwe and the Botswana Junior School study are both examples of input-output research. Student achievement is of concern only in relation to the inputs provided. Heneveld and Craig’s study and that of Dalin both adopt qualitative methodologies. In the case of Heneveld and Craig, they review World Bank projects in relation to effectiveness indicators derived from the existing school effectiveness literature. Dalin adopts a case study approach and is interested in processes. Johnson’s study in South Africa is an example of a study which draws more substantially on insider perspectives. Although its methodology limits its ability to provide overall scores of achievement, it is distinguished by being part of a tradition of developmental and interventionist research which seeks to set up activities designed not only to understand the underpinning factors which promote quality in an education system but also to enhance it.

It is on this latter, more interventionist approach that the present study is based. A central concern is the development of a profiling system in which student progress can be monitored. However, underpinning this aspiration to develop more meaningful and manageable systems for recording student progress is the increasingly powerful message now emerging from the literature concerning the capacity of assessment to be, in itself, a mechanism for enhancing quality. As Black and Wiliam (1998) have shown, the sensitive use of formative assessment in the classroom can lead to a significant rise in the overall level of achievement of the class. The effective communication of learning goals to students, regular monitoring of learning progress, and feedback to students that helps them understand how to improve are key elements in this respect. Thus an exploration of teachers’ current practice in this respect in the countries studied, and of how good practice could be developed and encouraged, was another central concern of the Assessing the Quality of Learning and Teaching in Developing Countries Project.

This objective was felt to be particularly important in an international context in which concern with the measurement of quality, both nationally and comparatively, seems in danger of eclipsing the arguably more important questions concerning how such quality can best be promoted at individual pupil and classroom level, and hence ultimately at the school and system level.

Another important feature of this study, therefore, was the aspiration to combine the insights into how best to monitor and promote quality which the school effectiveness research tradition has generated with those which have arisen from the more specific focus of assessment research on how assessment can best be used to support learning (Gipps, 1994).

