

Appendix C - The testing of reading abilities

Reading is a difficult skill to test, since the product of reading in "real life" is usually not an observable response, but a change in cognitive structure (ie the reader has acquired some information). In a test of reading, however, the reader has to provide an observable response according to the test format.

A reading test is an indirect measure of comprehension of text. The reasons are:

(i) the test items might not adequately represent the text.

(ii) the testee might understand the text, but fail to understand the test items themselves.

(iii) the testee might understand the text and the test items but not have the productive ability to answer the test items.

Additionally, there is the general point that all reading tests interfere with the process of reading, though to different degrees. Some of the most common group test techniques (ie those which permit a group of individuals to be tested simultaneously) are listed below, together with their principal drawbacks:

(i) wh- open-ended questions

- require written production, therefore test more than reading

(ii) yes/no questions

- limited in scope; answers may be guessed

(iii) true/false

- limited in scope; answers may be guessed

(iv) multiple choice

- the options offered distract from the text, and may confuse readers who have actually understood the text

(v) identifying main points

- not very sensitive; requires writing; difficult to assess objectively

(vi) summary writing

- a very indirect measure of reading; requires writing; difficult to assess objectively

(vii) cloze test

- see next section below.

Cloze Formats

In the "standard" cloze, deletions are made every nth word, and testees have to fill the gaps with no options provided. This format was piloted for this project, but rejected on the grounds that:

(i) nth word deletion can generate a large proportion of items with largely syntactic function. Speakers of Bantu languages learning English often have difficulty with such elements in English, particularly the pronoun system, determiners, and prepositions. Although learners can generally interpret these elements in context, it is probable that a test where a significant proportion of items requires production of syntactic elements will under-represent the reading comprehension of children in Zambia.

(ii) "standard" cloze requires testees to be able to produce words from their own knowledge to fill the gaps. Since learners are generally believed to recognise more words than they can produce, a reading test which aspires to construct validity should require testees to produce as little as possible.

(iii) marking of standard cloze tests poses problems. Marking on an "exact word" basis (ie accepting only the word originally deleted as correct) tends to generate very low raw scores at elementary levels, which gives rise to insufficient discrimination between testees, while marking on an "acceptable word" basis is subjective.
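The two procedures criticised above, nth-word deletion and "exact word" marking, can be sketched as follows. This is a hypothetical illustration only (the report specifies no implementation), and the sample sentence is invented; note how mechanical nth-word deletion can land on a purely syntactic item such as "and", which is objection (i):

```python
def make_standard_cloze(text, n=7, start=2):
    """'Standard' cloze: delete every nth word from the passage;
    testees must supply the missing words unaided."""
    words = text.split()
    gapped, key = [], []
    for i, word in enumerate(words):
        if i >= start and (i - start) % n == 0:
            key.append(word)        # the originally deleted word
            gapped.append("_____")
        else:
            gapped.append(word)
    return " ".join(gapped), key

def score_exact(answers, key):
    """'Exact word' marking: only the originally deleted word is accepted,
    which tends to produce very low raw scores at elementary levels."""
    return sum(a.strip().lower() == k.strip().lower()
               for a, k in zip(answers, key))

# Invented example passage, deleting every 4th word from the 2nd:
text = "The boy went to the market with his mother and bought some fish"
gapped, key = make_standard_cloze(text, n=4, start=1)
# key is ["boy", "market", "and"] -- "and" is a purely syntactic item
```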

However, the cloze format can be modified so as to overcome the difficulties mentioned above. After piloting two cloze formats, it was decided to construct a modified version in which deletions were manipulated so that the test items contained a low proportion of syntactic elements. In addition, the correct answers were provided in jumbled order in a box above each paragraph. The box also contained an extra 50% of distractors, so that the last item could not be completed by elimination. To avoid overloading the testee's memory, no passage had more than 6 deletions.
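The construction of such a modified item might be sketched as below. The function, the sample passage, and the word lists are all hypothetical (the project's actual items appear in Appendix E); what the sketch shows is the three design decisions just described: hand-chosen, mainly lexical deletions; a jumbled answer box with an extra 50% of distractors; and a cap of 6 deletions per passage.

```python
import random

def make_modified_cloze(text, targets, distractors, max_gaps=6, seed=0):
    """Modified cloze: gap only the chosen (mainly lexical) words, and
    present the answers in jumbled order together with an extra 50% of
    distractors, so the final gap cannot be filled by elimination."""
    targets = targets[:max_gaps]             # no passage has more than 6 deletions
    words = text.split()
    key = []
    for i, word in enumerate(words):
        if word in targets and word not in key:
            key.append(word)                 # record the deleted word once
            words[i] = "_____"
    n_extra = max(1, round(len(key) * 0.5))  # extra 50% of distractors
    box = key + list(distractors[:n_extra])
    random.Random(seed).shuffle(box)         # jumbled order in the box
    return box, " ".join(words), key

# Invented passage loosely echoing the "Mseka" discussion below:
text = "Mseka had seven white goats which he kept under some big trees"
box, gapped, key = make_modified_cloze(
    text, targets=["seven", "goats"], distractors=["white", "trees", "huts"])
```

Because the testee selects from the box and copies, rather than producing words, the format sidesteps objections (ii) and (iii) to the standard cloze.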

The technique is best illustrated by looking at an example taken from the project test, for example "Msekas's father" (see Appendix E, p 37).

A test of this type requires not reading aloud, but understanding of the text. It also requires no production of language, but simply identification and copying from the box. In order to fill the gaps successfully the testee has to understand the immediate sentence context, and in some cases the inter-sentence context.

Providing the correct answer is sometimes a matter of sensitivity to discourse, rather than a matter of grammatical acceptability. Thus in the case of the first item above "white" would be a grammatically acceptable response, but "seven" is preferred as demonstrating awareness of the sentential context. Again "trees" is grammatically acceptable for item 3, but demonstrates a lack of awareness of the previous discourse, and furthermore means that there is no sensible referent for "They" which begins the following sentence.

It is obviously not "natural" reading, in that a gapped text is not a "natural" text. Nevertheless it seemed to be the group test format that did least violence to the process of reading a text, while at the same time yielding a reasonable number and proportion of items per line of text. It is also similar in format to the exercises and tests that teachers set, and so one with which children would already be familiar.

Read Aloud as a Testing Technique

This is a widespread technique whereby children individually read a text aloud and are then usually asked to answer questions or talk about what they have read. Elaborate systems of analysing deviations from the text (or "miscues") have been constructed (eg Goodman 1973), and standardised tests based on this technique are used in the United Kingdom (eg Vincent and de la Mare, 1990). The testee is assessed in terms of the reading and also the answers to the questions.

The main problems with this technique are:

(i) the correlation between accuracy of reading aloud and degree of comprehension is difficult to establish, particularly in second/foreign language teaching.

(ii) answering the questions depends not only on understanding the text, but also on understanding the questions, and on having the productive capacity to formulate an answer. The questions are therefore indirect measures of comprehension.

Despite these objections, the read aloud plus questions format can, assuming the testee has adequate language knowledge, be helpful, certainly at the extremes of reading proficiency. For example, if, when faced with a very simple text, a testee says nothing, or something quite unrelated to the text, this could suggest that the child cannot read that passage, and possibly cannot read at all. On the other hand, if a testee reads a text fluently and accurately, and answers all the questions correctly, then it is reasonable to assume that he or she can read.

If a testee's performance is between these two extremes, however, as is most often the case, then assessing reading comprehension can be a rather subjective process, especially if the testee is reading in a foreign or second language.

