The assessment of English reading

These pages consider the assessment of English reading, in five sections.

The first section considers the two major purposes of assessment - why we assess. Here, assessment in general is in focus, not specifically that of English reading.

The second section moves the focus to the assessment of English reading, and looks at what we can assess.

In section 3, we look at how we might carry out the assessment of what was identified in Section 2, always bearing in mind that the assessment may be for either purpose defined in Section 1.

The fourth section offers some reflections on the National Tests of English reading, again considering the why, what and how of the assessment.

Finally, Section 5 gives a brief summing up of what has been discussed.

Author: Angela Hasselgreen, Director of the Norwegian Study Centre in York (2010)

1. On assessment generally – the purposes of assessment and how these are characterised

This section is devoted to assessment generally, regardless of the subject. It applies as much to maths or science as to English reading. It distinguishes two types of assessment, according to their purpose. Most educationalists would agree that the ultimate goal of assessment – whether this takes the form of feedback on a classroom reading task, or of a national test - ought to be to enhance learning. However, leading us to this ultimate goal there are numerous and varied assessment ‘events’ that give us evidence of what our pupils know or can do; these can be roughly categorised as having two different purposes:

  • Assessment FOR learning (formative assessment)
  • Assessment OF learning (summative assessment)

Assessment FOR learning refers to what we find out in order to be in a better position to support the learning in the classroom. Assessment OF learning refers to the collection of evidence to show what a learner has learnt. A third purpose, Assessment AS learning, is sometimes identified; this refers to elements of assessment which actually contribute directly to the learning process, frequently involving self or peer assessment. In this discussion, however, such elements are included within the concept of assessment FOR learning, as a good task used in assessment for learning will normally directly promote learning.

These distinct purposes complement each other, and at times overlap, but both are necessary if our assessment is to achieve its ultimate goal. In the remainder of this section, most attention will be given to assessment for learning, as it is assessment with this purpose which most clearly involves the day to day work of the classroom teacher.  

 

1.1. Assessment FOR learning

Assessment FOR learning looks (slightly) ahead. It includes all the activities teachers and pupils carry out which shed light on how far a pupil has achieved an immediate goal, so that the teaching and learning which follow will be adapted to what the pupil needs to know or do, to reach a newly defined goal.

The following account of assessment FOR learning is cited, largely directly, from the ‘Assessment for learning guidance’ published by the UK Qualifications and Curriculum Authority. Direct citation from this document is marked by using italics.

Effective assessment for learning happens all the time in the classroom. It involves:

  • sharing learning goals with pupils
  • helping pupils know and recognise the standards to aim for
  • providing feedback that helps pupils to identify how to improve
  • believing that every pupil can improve in comparison with previous achievements
  • both the teacher and pupils reviewing and reflecting on pupils' performance and progress
  • pupils learning self-assessment techniques to discover areas they need to improve
  • recognising that both motivation and self-esteem, crucial for effective learning and progress, can be increased by effective assessment techniques.

Four key characteristics of assessment for learning can be identified:
  1. Effective questioning techniques. This involves the teacher using questions to find out what the pupil has understood, and what misconceptions they have. It also involves using pupils’ own questions as a source of revelation. Yes/no questions from the teacher elicit little information about what pupils think. Questions of the type: how can we be sure that...?, what is the same and what is different about...?, is it ever/always true/false that...?, how do you...?, how would you explain...?, what does that tell us about...?, what is wrong with...? or why is...true? are much more likely to provide the teacher with good assessment opportunities.
  2. Effective feedback. Characteristics of effective feedback are cited as follows:

    Feedback is most effective when it confirms that pupils are on the right track and when it stimulates correction or improvement of a piece of work.
    Suggestions for improvement should act as 'scaffolding', i.e. pupils should be given as much help as they need to use their knowledge. They should not be given the complete solutions as soon as they get stuck and should learn to think things through for themselves.
    Pupils should be helped to find alternative solutions if simply repeating an explanation continues to lead to failure.
    Feedback on progress over a number of attempts is more effective than feedback on one attempt treated in isolation.
    The quality of dialogue in feedback is important and most research indicates that oral feedback is more effective than written feedback.
    Pupils need to have the skills to ask for help and the ethos of the school should encourage them to do so.

  3. Shared learning goals. Most schemes of work emphasise the need to clearly identify the learning objectives for a lesson. Teachers should ensure that pupils recognise the difference between the task and its learning intention (separating what they have to do from what they will learn).

    Assessment criteria or learning outcomes are often defined in formal language that pupils may not understand. To involve pupils fully in their learning teachers should:

    explain clearly the reasons for the lesson or activity in terms of the learning objectives
    share the specific assessment criteria with pupils
    help pupils to understand what they have done well and what they need to develop
    Looking at a range of other pupils' responses to the task set can help pupils understand how to use the assessment criteria to assess their own learning.

  4. Peer and self-assessment. Research has shown that pupils will achieve more if they are fully engaged in their own learning process. This means that if pupils know what they need to learn and why, and then actively assess their understanding, gaps in their own knowledge and areas they need to work on, they will achieve more than if they sit passively in a classroom going through exercises with no real comprehension either of the learning intention of the exercise or of why it might be important.

1.2 Assessment OF learning

Assessment OF learning involves the processes of establishing what learners have learnt, for reasons which do not primarily involve influencing the teaching in the immediate future. Some form of reporting is normally involved, either to authorities, as in the case of national tests, or to parents, or anyone who may need to know a pupil's ability for further schooling, jobs etc.

Learning and Teaching in Scotland (LTS) gives guidelines for the assessment of learning. Citations from this document are marked by using italics. According to the guidelines, judgments about pupils’ learning need to be dependable. This means that:

they are valid (based on sound criteria)
they are reliable (accuracy of assessment and practice)
and they are comparable (they stand up when compared to judgments in other departments or schools). 
Where the assessment is not administered externally (as it is in the case of national tests), and teachers are directly involved in this process, the LTS guidelines state:

It is important that staff have a shared understanding of the criteria for success, and that quality assurance takes place to ensure assessments are consistent between classes and schools. This practice is called local moderation.

The best moderation practice would involve staff in discussing pupils' work produced in the course of a class activity, evaluating the effectiveness of the learning and teaching that has taken place, and agreeing appropriate feedback on next steps in learning. These discussions focused on assessment of learning help build teachers' confidence in their own judgments and support them in planning more effectively for learning and assessment.

The following key features are cited as being characteristic of effective assessment of learning carried out by teachers:

Using evidence
Staff use a range of evidence from day-to-day activities to check on pupils' progress.

Sharing standards
Staff talk and work together to share standards in and across schools.

Monitoring and planning
Staff use assessment information to monitor their establishment's provision and progress, and to plan for improvement.

2. Assessment of reading in English: what to assess

This section concerns the assessment of reading in English, and focuses on WHAT we can assess. The purpose of the assessment may be either ‘for’ or ‘of’ learning. Four aspects of reading are considered here: overall reading level, personal use of reading, skills and strategies, and reading comprehension.

2.1 Overall reading level

The ‘level’ of a pupil’s reading can be defined in a number of ways. In the UK, pupils’ ability is measured on a scale linked to key school stages. Graded readers, on the other hand, follow a scale defined by the publishers themselves. For younger pupils, or even older ones who struggle to read ‘normal’ books written for English-speaking people of their age, it can be useful to use sets of graded readers, e.g. Penguin Young Readers, Oxford’s Bookworm series or Damm’s Galaxy series. These are generally consistent, so that individual pupils can normally be aware of which level they can comfortably manage at any time. The pupil can work their way up the levels, with their progress clearly defined.

The most well-known ‘universal’ scale to have been defined for documenting reading ability (as well as other abilities) in a foreign language is the Common European Framework (CEFR) (Council of Europe 2001). This scale, with its six levels, has the advantage of being stable, well established and validated. It is also readily understandable, each level being described in a concrete way, mainly in terms of the kind of text a reader can cope with.

However, for the purposes of using this with pupils, especially in primary school, the CEFR in its official form has its limitations. The six levels on the CEFR range from near beginner to very high academic level, and a pupil may spend one or two years on or around the same level. This gives rise to the need for 'in between' levels, which are not usually defined in the CEFR material. Moreover, the scale was originally designed for use with adults, which means that the types of reading cited are not always suitable for youngsters (and do not take into account e-reading!), and many children would not be cognitively mature enough to reach the higher levels on the scale.

These limitations have been investigated and solutions have been presented in a number of instances where the CEFR has been used as a basis for assessment in Norway. Research has been carried out in a number of projects on the adaptation of the CEFR for young learners, e.g. the Bergen 'Can do' project (Hasselgreen 2003).

The two versions of the European Language Portfolio (Språkperm), designed for secondary and primary school pupils, have adapted the CEFR reading scale for youngsters in an effective way, allowing pupils to identify the level they are at or on the way to, in terms of the kind of texts they can read and purposes they can read for. Both the levels of reading ability in the national tests of English reading (explicitly) and the aims of the school curriculum, Kunnskapsløftet (implicitly), relate to levels of the CEFR, adapted for young learners (see part 3 of the guidelines for using the National Tests of English).

2.2 Personal use of reading

By the personal use of reading I refer to the uses an individual pupil may put their reading to, over and above the ‘set’ reading they may have to do as part of the English course. This side of reading is hugely important: Reading breeds reading, and those who read for pleasure or interest are likely to become competent readers! Yet, in assessment it is all too easily forgotten.

Getting pupils to let us know what they read (internet, magazines, books, etc), how much, how often, what they (would) like to read and what they have enjoyed, provides a valuable source of information for the teacher. We can find out what kind of material pupils like, to ensure that they have access to this. And by identifying pupils who do not read, or enjoy reading, we are able to start finding out why; the reason may be lack of interest in the material available or lack of reading skills. Or there may be a mismatch between the level of a pupil’s ability and the material available.

Without this kind of information, we cannot have a complete picture of our pupils as readers, or give them access to stimulating material. 

2.3 Reading skills/strategies

Reading skills and reading strategies are grouped together in this section. This is because they both involve knowing how to tackle a text, in order to create meaning from it. In order to do this we need to be able to do certain things, some at a 'microlevel', i.e. to help us make sense of small units: sentences, words or even letters, while others might be considered ‘macrolevel’ skills, which, together with our personal knowledge and experience, help us make sense of larger stretches of text. The notions 'bottom up' and 'top down' have been used to describe the respective use of these different kinds of skills; the consensus is that readers apply both.

Skills and strategies have been described in numerous ways by writers such as Grellet (1981), Grabe (1991) and Nuttall (1996), with no single agreed way of defining them.

Grabe 1991 (in Alderson 2000:13) identified the following elements as important for fluent reading:

  • Automatic recognition skills
  • Vocabulary and structural knowledge
  • Formal discourse structural knowledge
  • Content/world background knowledge
  • Synthesis and evaluation skills/knowledge
  • Metacognitive knowledge and skills monitoring

Grabe’s list includes skills and knowledge relating to the 'mechanics' of words and texts, as well as to the language itself and other necessary knowledge and skills that may be needed to fully appreciate a text. His metacognitive knowledge and skills include adjusting speed, skimming, and recognising what is important information. For the purposes of this discussion, I will focus on the most concrete skills, which teachers may be able to assess, and which can be trained in pupils. I will use terminology largely based on Nuttall (1996): word attack skills, text attack skills, and using a text effectively.

Word attack skills include word recognition, predicting within a sentence, having the confidence to ignore difficult words, using structural clues and inferencing word meaning from the context.

Text attack skills include making predictions, interpreting cohesive devices, and understanding text organisation.

Using a text effectively includes skimming and scanning.

It should be stressed here that these skills are generally used simultaneously in reading. This means that they cannot easily be identified and singled out when a pupil is reading a text. This makes them less suited to inclusion in assessment OF learning, when we are normally most interested in what the pupil can actually read and understand, using whatever skills are necessary. However, they lend themselves well to assessment FOR learning, where they are best assessed by giving specific tasks designed to test the skill in question. This should identify any weaknesses, which should then be worked on.

2.4 Reading comprehension

The last of the ‘things’ which can be assessed in reading is, in fact, the one which is best known and most tested. It can give us an overall idea of whether a learner is able to understand a whole text. This kind of assessment is most often associated with assessment OF learning, where we wish to know if a learner has reached a particular level or not. It also has some value in assessment FOR learning; we can evaluate whether the pupil should move on to more demanding texts or tasks, or be allowed less demanding ones.

Nuttall (1996:188-189) lists six types of questions which can be posed on texts:

  1. Straightforward questions on literal comprehension, where the answer can be found directly in the text.
  2. Questions that involve reorganising or reinterpreting the information to find the answer
  3. Questions that require inferencing; the answer is not actually in the text
  4. Questions which involve evaluating how well the writer has done what s/he set out to do
  5. Questions about the reader’s personal response to the content of the text; here there is no ‘right answer’.
  6. Questions which involve how the writer says what s/he means; here the pupil has to comment on the use of language in the text 

3. How to assess reading in English

In the previous section (section 2), we considered what we might assess, identifying four distinct ‘things’ to assess: overall reading level, personal use of reading, skills and strategies, and reading comprehension. In this section we take each of these in turn and consider how we might assess them.

Who does the assessment should ideally be balanced between teacher and pupil. Why we do the assessment (FOR or OF learning) will be decided by our own circumstances, but remember, if assessment is intended FOR learning, it should have an impact on whatever follows in class. The characteristics of the two types of assessment (FOR or OF learning) described in Section 1 should be borne in mind when planning and carrying out the assessment. A good rule to follow in considering how to carry out assessment FOR learning is this: a good assessment task should also be a good learning task. In this way, assessment FOR learning becomes assessment AS learning.

The focus of this section will be on 'classroom' assessment, with some discussion on more formal assessment, in the form of National Tests, left to Section 4. 

3.1 How to assess overall reading level

Overall reading level will be discussed here with reference to graded readers and to the CEFR (Common European Framework of Reference for Languages). In the case of placing pupils on a level using readers, we are normally assessing FOR learning, as the pupil is in a process, and the teacher needs to ensure that s/he is reading books at a suitable level, both for purposes of motivation and in order to help him/her to progress through the levels. The CEFR levels are more likely to be referred to at intervals, and do not give information on day to day achievement, and so are more relevant for assessment OF learning, e.g. at points in a school year.

Graded readers
A supply of reading books (see suggestions in Section 2.1) with a wide range at each level, can be a great asset in both developing and assessing pupils’ reading. These can be used alongside ‘native speaker’ books and other material (comics etc) for free reading sessions. While it is always desirable to allow children to browse, and choose books which appeal to them, regardless of level, it can be sensible to have one or two books at each level reserved for assessment purposes.

If the teacher has the opportunity to ‘hear’ reading from time to time, which may take 5-10 minutes per pupil, an adaptation of the method presented in Clay's (2000) 'Running records' is useful. The teacher should make a copy of 100 words taken from the book (starting a little after the beginning in order to allow key words, such as names, to be introduced). The pupil reads from the start, and as s/he goes through the 100-word section the teacher has copied, the teacher should mark any words that appear problematic. The teacher may go back and check for understanding of words s/he suspects may be difficult. As a rule of thumb, if the pupil finds more than 10 words to be difficult, the book is probably too difficult and the pupil should try a level below. If there are fewer than 5 difficult words, then the pupil can probably manage a level above. 5-10 difficult words indicate that the level is about right (in line with Krashen’s (1982) theories on comprehensible input).
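
For teachers who like to keep such records on a computer, the rule of thumb above can be expressed very simply. The following Python lines are only an illustrative sketch (the function name and the wording of the messages are invented here), not part of Clay's method:

def suggest_level(problem_words):
    # problem_words: how many of the 100 sampled words the pupil found difficult
    if problem_words > 10:
        return "Probably too difficult - try a book one level below."
    if problem_words < 5:
        return "Probably too easy - the pupil can try a level above."
    return "About right - stay at this level for now."

# Example: a pupil stumbled over 7 of the 100 words
print(suggest_level(7))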

An alternative way of judging whether a pupil is at a level is to talk to the pupil while or after reading. Ask them if it is difficult or easy and get them to tell you something about the content, to see if they seem to be getting the gist of the text. This could be done in a group, where children are reading the same or different books.

Records should be kept of what pupils have read. The pupil can assist in this by keeping a simple chart with titles, dates and symbols (faces?) to show if they liked it, and if they found it easy to read.

A word of caution: pupils may decide to read books (of any kind) which we may think are too difficult for them, but which they get pleasure from. Don’t stop them! This is part of how we develop as readers. However, for assessment purposes, we need to be realistic and place them at the level which we most confidently feel they are ‘at’.

The CEFR (Common European Framework of Reference for Languages). 
The CEFR is accessible through the two versions of Språkperm. For the younger learners (6-12 years), 'I can do…' checklists are presented for A1 as a series of bubbles, with phrases such as I can match some words and pictures, for the pupil to colour in when they have achieved this. For A2 and B1, statements are given in a table, with a column for noting when an objective is set and columns to indicate how well they can do the thing in question. Pupils should do this in consultation with their teacher, and under supervision, at intervals. The version for 13-18 year olds has checklists from A1 to C2 in table form. Both versions have the official CEFR self assessment grid included.

Teachers may also wish to have the CEFR levels accessible in the classroom so that pupils can try to identify their own level (although the youngest pupils cannot be expected to do this). While the original adult levels may not be suitable for most Norwegian pupils (at least in the compulsory school), it is possible to download the levels from the Bergen 'Can do' project for this purpose, as these have been adapted for lower secondary school learners in Norway.

3.2 How to assess personal use of reading

In order to be able to assess the personal use of reading, it is important that pupils are encouraged and supported in their individual, non-compulsory reading. A class or school library should have books that appeal to the pupils, and other material such as magazines and comic strips. Fact and fiction should both be represented, as should a wide range of topics and levels of difficulty. Pupils should also be able to browse the internet (and of course, most certainly are!). And finally, time should be set aside every now and then to allow pupils to read what they want.

As an important part of assessment OF learning, we should have records of the learner’s reading achievement, habits and attitudes. There are many ways of keeping records of what is read. Individual charts can be kept to record the essentials of what the pupil has read. An example of this is found in the Bergen 'Can do' material. Here, there is a focus on encouraging quantity of reading. It matters more how much pupils read than what they read!

A survey of reading habits and likes/dislikes can be done from time to time, and the pupils themselves can be responsible for collecting and presenting the data, e.g. in groups, or by gender, as a good short project. If the results are made visible in the classroom, these can be the basis for discussing the ordering or borrowing of material or finding interesting websites, and furthering the reading in the class. It is also a way of highlighting any need for material for weaker or stronger readers. Used this way, it becomes an assessment FOR learning.

3.3 How to assess reading skills/strategies

As mentioned earlier, it is very difficult to know which skills/strategies a learner is applying when reading a text. Hopefully s/he will apply many, or at least a sufficient number to make sense of the text. Reading comprehension questions may be designed to see if a reader has a specific skill, but the reader’s own abilities or knowledge may direct him/her into using completely different skills. For this reason, the surest way of assessing whether our pupils possess particular skills is to target these in isolation. If pupils are found to have these skills, there may be no need to work on them further, but pupils lacking them should be given more training. The exercises shown here can be used as both assessment and skills training. Many are taken from Nuttall 1996 and Grellet 1981. This section is very clearly on assessment FOR learning.

Word attack skills
Word attack skills concern coping with difficult or unknown words. We will consider how to assess word recognition, predicting within a sentence, having the confidence to ignore difficult words, using structural clues and inferencing word meaning from the context.

Word recognition
A pupil’s recognition of a word involves making the connection between a string of letters and the spoken word (so that we can 'hear' (in our head) what we see on the page). This in turn depends on basic mechanical reading skills (also applied in the pupils’ first language (L1) reading), as well as the ability to interpret the English sound-spelling relationship, and indeed having met the English word before.

Here I will assume that a pupil has the basic ability to read in the L1, and is familiar with the word in the spoken form. In my experience a lot can be learnt by teachers who listen to pupils reading words or phrases aloud. Many pupils will systematically misread certain words, e.g. pronouncing ‘the’ as ‘there’, or parts of words, e.g. always sounding the –ed at the end of a verb as ‘ed’ (in walked etc). They may also struggle with a wide range of problematic sounds, such as those spelt igh, ay or ow. This can be discovered quite easily, for example if a teacher is ‘going round’ while pupils are working and simply asks them to read what they have written.

Another way of checking this is for teachers to give pupils a set of written words (where the focus can be on common, new or problematic words), and asking them to identify a word when they hear it spoken aloud. This can be done as a game (e.g. word bingo) and in fact can be carried out entirely by the pupils in groups. This is time very well spent!

Predicting within a sentence
This is a very important skill to develop. Fluent readers anticipate what is coming next in a sentence, and in fact much of this is highly predictable. Pupils need to realise they can do this, and trust themselves and their predictions. This skill can be assessed and trained quite simply. It can be done if the teacher is reading to the class, and simply stops to allow children to suggest what comes next (this works particularly well with younger pupils, in a story). Or the teacher can write a sentence on the board, stopping from time to time, and asking for suggestions of what may come next.

Nuttall (1996:14) gives an exercise which allows pupils to predict on a word to word basis in a sentence. The sentence is written down on a card. On the top line is the first word An, followed by 3 alternatives, only one of which is possible (large, animal, eat). The pupil has to choose one, and say why they made their choice. On each new line a further word is added on, with the same procedure. Pupils have to uncover the text a line at a time. This task makes us realise that the choice of each word in a sentence is limited by what comes before, both in terms of meaning and structure.

A better way of doing this may be to use technology, either with an overhead transparency for a whole class exercise, or by computer (e.g. PowerPoint) for individual or group tasks. This can be differentiated, using different levels of sentence complexity.
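
As an illustration of how such a task could be put on a computer, the short Python sketch below reveals a sentence one word at a time and offers three alternatives for the next word. The sentence, the distractors and the feedback messages are invented for this example:

import random

# Reveal a sentence one word at a time; at each step the pupil picks the
# word that continues the actual sentence from three alternatives.
sentence = ["An", "animal", "was", "drinking", "at", "the", "river"]
distractors = [["large", "eat"], ["blue", "quickly"], ["river", "happy"],
               ["slowly", "book"], ["a", "drank"], ["water", "ran"]]

revealed = [sentence[0]]
for next_word, wrong in zip(sentence[1:], distractors):
    options = [next_word] + wrong
    random.shuffle(options)
    print(" ".join(revealed) + " ...")
    choice = input("Which word comes next? " + " / ".join(options) + "  > ").strip()
    if choice == next_word:
        print("Yes - and why could the others not fit here?")
    else:
        print("The sentence actually continues with '" + next_word + "'.")
    revealed.append(next_word)

print("Full sentence:", " ".join(sentence))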

Having the confidence to ignore difficult words (and knowing which to ignore)
An avid reader in a foreign language meets unfamiliar words all the time. In fact, this is a rich source of learning vocabulary. While texts with too many difficult words may defeat the reader (see Section 3.1 above), it is important to realise that we do not actually need to understand every word in a text. A reader without the confidence to ignore a certain number of difficult words may give up quickly or get 'hung up' on these words.

Certain tasks can be given to see if readers are able to get as much meaning as they need from a text, despite some difficult words, and to build up their confidence to ignore these. Nuttall (1996:6) presents a number of these. One task is to give readers a text with words omitted here and there, with the gaps indicated (these words should be lexical items, with ‘stand alone’ meaning, excluding ‘form words’ such as articles and prepositions). The readers have to read the text and answer some questions which show that they have understood the overall meaning despite the gaps. Another task is to present a text with some obviously difficult words, which are not to be looked up, and again give questions on the gist.

In order to find out whether readers understand which words can be ignored, Nuttall suggests giving a text with a few new words, and questions which require understanding some of these. The task is to answer as many questions as possible without looking up the words. This forces the reader to be selective about which words they look up, while ignoring others. Points can be given based on the number of correct answers and the ‘fewness’ of looked-up words.
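
Nuttall does not prescribe an exact scoring formula; as a purely illustrative example, the marking could be done along these lines (the function name and the size of the penalty are assumptions):

def selective_lookup_score(correct_answers, words_looked_up, penalty=1):
    # One point per correct answer, minus a small penalty per looked-up word,
    # so that pupils who look up fewer words score better for the same answers.
    return correct_answers - penalty * words_looked_up

# A pupil who answered 8 questions correctly but looked up 3 words scores 5.
print(selective_lookup_score(8, 3))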

Frequently, we do not need to look up a word in order to work out its meaning; instead we can use clues in the text itself to ‘guess’ it. This will be the subject of what follows.

Using structural clues to help work out word meaning
The structure of a word or its position in a sentence can yield a lot of clues about its meaning. The grammatical function of a word is usually quite easy for a learner beyond the most elementary level to identify, which in turn narrows down its possible meanings. One way to assess or work with this ability is to give a sentence with ‘nonsense words’ that are grammatically correct but have no actual meaning. Nuttall (1996:69) gives the example: The spoony urdles departed.

The tasks given are:

  1. How many questions can you make about this sentence?
  2. Think of three real words that might replace urdle in this context. Think of three words that could not replace urdle without violating the grammaticality of the sentence.
  3. For each suggestion for urdle, think of possible meanings for spoony that would fit the context.

The discussion around this task should reveal the extent to which pupils are able to use clues of the words’ position and form to identify spoony as an adjective and urdle as a noun (and to further narrow down the meaning of these as something or someone that could depart!). Unlimited examples can be derived, and pupils can help make these nonsense sentences!

The morphology of a word is also an indicator of meaning. Pupils need skills in interpreting affixes (prefixes, as in misunderstood, and suffixes, as in happily). Exercises can be given to assess and train this ability. A simple task is to give some base forms, such as act, courage, form, able, and get pupils to see how many words they can make. Grellet (1981:41-42) cites a number of tasks requiring this skill. One is simply to pick out words with affixes in a text and ask what the base form is, and what other words could be made from this.

A variation on this task is to put some words with affixes from the text in a table like that shown below, and add others. The students have to complete the table. 

Noun                 Adjective       Person       Verb         Adverb
hypnosis, hypnotism  hypnotic        hypnotist    hypnotise    hypnotisingly
...                  ...             employer     ...          ...
...                  psychological   ...          ...          ...
science              ...             ...          ...          ...
...                  free            ...          ...          ...
(Grellet 1981:41-42)

Inferencing word meaning from the context
Readers often gradually work out the meaning of a word by accumulating meaning as they meet it repeatedly in a text. Nuttall (1996:72-75) suggests some tasks to assess or develop this skill. A rather contrived, but effective way of doing this is to use a text like the one below, made up of a sequence of sentences containing a nonsense word to be worked out, with these instructions:

Cover the numbered sentences below with a piece of paper. Now read them one at a time. After you have read the first one note what information you have about the meaning of the word tock. Then go on to find how much more you know after reading sentence 2, and so on.

  1. She poured water into a tock.
  2. Then, lifting the tock, she drank.
  3. Unfortunately, as she was setting the tock down, it slipped from her hand and broke.
  4. Only the handle remained in one piece.

Nuttall (1996: 72)

This task can easily be adapted to texts the pupils may be using. Find a text where a keyword is repeated often. Replace it with a nonsense word. Students try to work out its meaning, either with or without alternatives offered.

Sometimes an essential word is not repeated, and the meaning has to be inferred by the context alone.  To assess this skill, find a short text with an unfamiliar word. Offer a choice of explanations/definitions adjusted to the level of the students. An example of this is:

A coelacanth is a kind of living fossil, first discovered when one was caught off Madagascar in 1938.

A coelacanth is a kind of a) rock, b) plant, c) fish,  d) animal

What clues did you use?

Nuttall (1996:75)

Text attack skills
A reader will not be able to manage longer texts unless they have some knowledge of the way texts are normally organised and structured. Here we will consider how to assess making predictions, interpreting cohesive devices, and understanding text organisation.

Making predictions
Just as we make predictions within a sentence to take some of the burden off working it out word by word, we are also helped greatly by being able to predict roughly what course a text will follow. This facilitates a top down approach to our reading.

A simple task here is to take a short text and allow students to see a paragraph at a time and discuss what the next paragraph may be about. Reveal it and continue. Or show a few sentences and present alternatives for the next sentence, or alternative questions that the next sentence may answer.

Grellet (1981:57-58) suggests removing punctuation from a text, and asking students to put this in. This forces students to predict where a new sentence/paragraph will begin. (This is done most easily on computer!)
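
A minimal sketch of how such a task text could be prepared on a computer is shown below; the example sentence is invented, and lower-casing is added so that capital letters do not give the sentence boundaries away:

import string

def strip_punctuation(text):
    # Remove punctuation and capitalisation so that pupils themselves have to
    # decide where sentences and paragraphs begin and end.
    return text.translate(str.maketrans("", "", string.punctuation)).lower()

print(strip_punctuation("It was late. However, she kept on reading."))
# it was late however she kept on reading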

Interpreting cohesive devices
Cohesion in a text is brought about by a complexity of devices. One is the syntax of the text. Another is the use of pro-forms (his, this, etc) with cross reference. Another is the use of lexical relations between words. For a deeper discussion of the skill of interpreting cohesive devices, see Nuttall, 1996: 78-124.

One of the most important devices for making texts cohesive is the use of link-words (or 'discourse markers'), such as although, however, instead, yet, because, anyway,  still,  in spite of…

The following two tasks for assessing and developing the use of link-words are suggested by Grellet (1981:51-52):

  1. Remove the link-words from a text, leaving marked gaps, and get students to replace them. The students can be given suggestions for link-words which may fill the gaps (a simple way of preparing such a gapped text on a computer is sketched below).
  2. Take out link-words from a text and rewrite the text without them (i.e. as a series of ‘unlinked’ sentences). Give students some ideas for what to use. This task could in fact be done from scratch by students, making the task for each other.
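
For the first task, a text can be ‘gapped’ automatically. The sketch below assumes a small, teacher-chosen list of link-words and an invented example text; it numbers each gap and keeps the removed words as an answer key:

import re

LINK_WORDS = ["although", "however", "instead", "yet", "because",
              "anyway", "still", "in spite of"]

def gap_link_words(text):
    # Replace each link-word with a numbered gap; return the gapped text
    # and the removed words (the answer key).
    removed = []
    def make_gap(match):
        removed.append(match.group(0))
        return "____(" + str(len(removed)) + ")"
    # Longer phrases first, so that 'in spite of' is matched as a whole.
    words = sorted(LINK_WORDS, key=len, reverse=True)
    pattern = r"\b(?:" + "|".join(re.escape(w) for w in words) + r")\b"
    return re.sub(pattern, make_gap, text, flags=re.IGNORECASE), removed

text = ("He kept reading, although the book was difficult. "
        "However, he gave up in the end because it was too long.")
gapped, key = gap_link_words(text)
print(gapped)  # He kept reading, ____(1) the book was difficult. ____(2), ...
print(key)     # ['although', 'However', 'because']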

Text organisation
Text organisation here refers to the way texts are arranged sequentially (which of course involves the use of cohesive devices). A simple way of assessing a pupil’s ability to see how parts of a text fit together, is by presenting them with sentences to be arranged into paragraphs and paragraphs into longer texts. This can best be done on computer, where the text can easily be assembled from the parts.
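
A very rough sketch of how this could be set up on a computer is given below; splitting on full stops is crude, and the example paragraph is invented, but it shows the idea of presenting the sentences in a jumbled order for pupils to rearrange:

import random

def jumble_sentences(paragraph):
    # Split (roughly) into sentences on full stops and shuffle them.
    sentences = [s.strip() + "." for s in paragraph.split(".") if s.strip()]
    random.shuffle(sentences)
    return sentences

paragraph = ("Tom woke up late. He missed the bus. "
             "So he had to walk to school. Luckily, he arrived just in time.")
for number, sentence in enumerate(jumble_sentences(paragraph), start=1):
    print(number, sentence)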

Using a text effectively
Here we will briefly consider the assessment of skimming and scanning.

Skimming
Skimming a text means running our eye over it quickly to get the gist of it. It is something competent readers do frequently – e.g. to see if a text is relevant or interesting for closer reading, or simply to inform ourselves of the essentials, as we often do when reading a newspaper.

Grellet (1981: 74-75) offers two tasks to assess or train skimming. In the first, she aims to show that, despite several unknown words in a text, just knowing a few words here and there is enough to give us the essentials of what it is about. She lists the following words and phrases, which she believes the students will understand:

Professor
Institute of Biochemistry
Hard-working man
Results of experiments
Published
Confession
Invention
Different results
Fraud
Regrets it

The task is for the student to guess if the article is about:

  • A well-known professor who has just published his confessions
  • A scientist who has admitted inventing the results of his experiments
  • A scientist who has killed himself because he couldn’t get the same results as everybody else
  • A scientist who regrets the publication of the results of his experiments

This task can of course be adapted to any level. It can give confidence to weaker students who read slowly and do not skim, and who shy away from any text  with a lot of unfamiliar words.

The second task offered by Grellet aims to demonstrate that a meaning can be created by just reading a few sentences here and there. She takes a text and simply removes whole sentences or even paragraphs, and asks the reader to make suggestions for what these missing parts may contain.

Scanning
Scanning is similar to skimming, in that we run our eye quickly over a text and do not ‘read’ most of it. According to Nuttall (1996:49), the term ‘scanning’ is used to describe ‘glancing rapidly through a text either to search for a specific piece of information (e.g. a name or date) or to get an initial impression of whether a text is suitable for a specific purpose (eg whether a book on gardening deals with a particular plant disease)’. The main difference between scanning and skimming is that in scanning we are looking for something specific and aren’t particularly interested in the rest of the text, whereas in skimming we are interested in finding out what the text is about.

Assessing and training scanning can be done by presenting pupils with material containing information, from which they need to find something particular, e.g. in a catalogue, a set of small adverts or film announcements.

3.4 How to assess reading comprehension

Assessing reading comprehension is familiar to teachers, many of whom will have been involved in making or at least administering reading comprehension tests at some point. This section will look at some features of reading comprehension tests, and will offer some advice on making or judging such a test. Here the concern is not whether the pupil applies particular skills, but whether they are able to make sense of the text using whatever skills they possess. This kind of testing is most often associated with assessment OF learning.

Before considering tests, however, it should be pointed out that the simplest and truest way of assessing reading comprehension is by asking the learner whether s/he understood a text in the way s/he would expect to understand it, depending on the purpose of the reading. If the text is describing a missing person, has the pupil got a mental picture of the person? If it is advertising holiday camps, can s/he identify which (if any) she might like to go to? If it’s a news report, does s/he know what’s happened? This kind of assessment has an important place in day-to-day classroom practice.

For a more comprehensive self assessment on a self chosen reading text, see example here. This form also contains questions on the purpose of the reading. However, the questions on whether and why it was difficult, and which strategies were used can be applied to any text.

The ease or difficulty of understanding a text can depend on a number of factors (Alderson, 2000); these include linguistic properties (such as vocabulary and complexity of grammar), content properties (such as familiarity with the topic), and other properties (such as the genre and how the text is organised). If we are making one test for a whole class, we should try to take into account what kind of texts and topics we can expect them to be familiar with, in the interest of fairness.

When making tasks to test comprehension on a reading text, we have to be aware that we are actually testing whether the learner can do the task, and only indirectly the extent to which they understood the text. Some easy tasks can be made for a very difficult text, while more demanding tasks can be made for a fairly simple text. This can be useful to bear in mind if making tasks for a text which will be read by pupils at different levels. In Section 2.4, six types of questions/tasks were listed, which could be used to assess the comprehension of texts:

  1. Straightforward questions on literal comprehension, where the answer can be found directly in the text.
  2. Questions that involve reorganising or reinterpreting the information to find the answer
  3. Questions that require inferencing; the answer is not actually in the text
  4. Questions which involve evaluating how well the writer has done what s/he set out to do.
  5. Questions about the reader’s personal response to the content of the text; here there is no ‘right answer’.
  6. Questions which involve how the writer says what s/he means; here the pupil has to comment on the use of language in the text.

Nuttall (1996:188-189)

Type 1 is generally the easiest task, with type 2 being slightly more difficult, followed by type 3. Types 4 to 6 require other abilities and a level of reflection beyond simply ‘understanding’ the text. The list is intended as a checklist, and can be used to ensure that questions are varied, and target the most central aspects of reading comprehension. This can safeguard the validity of the test.

Certain basic formats of questions are cited by Nuttall (1996:184-5) as typically used in paper tests:

  1. yes/no (or true/false)
  2. alternative answers (multiple choice)
  3. wh-question (who, what, when, where)
  4. how/why questions

The first format is the most clear cut and easy to use (for testers), but has the disadvantage of a high ‘guessing’ factor: a pupil answering at random still has a 50% chance of being right. The second format, by increasing the number of alternatives, reduces the guessing potential in theory (to 25% with four options). However, it can be very difficult to find plausible alternatives. If possible, these should be based on ‘real’ answers acquired in a trial run of the question. Wh-questions can turn out simple or difficult, depending on what is asked, and how accessible the answer is. How/why questions are normally the most demanding for pupils, and are often used with question types 4-6 above.

In more recent tests, computers are often used, with a fully computerised test being implemented in the National Tests of English (see Section 4). This offers many new formats, such as matching, highlighting, colouring and moving objects.

Finally, it is worth presenting a checklist to use when making or judging reading comprehension questions (based partly on Nuttall, 1996: 190):

  1. Can the questions be answered without reading the text? (hopefully not)
  2. Are there several questions for each part of the text?
  3. Are there enough questions?
  4. Are the questions varied in type?
  5. Do the questions reflect what we would normally expect to have understood from reading such a text?
  6. Do the questions help students to understand the text?
  7. Are the questions written in language that is more difficult than the text? (hopefully not)
  8. Do open answers require language that is beyond the students’ proficiency? (hopefully not)
  9. Are all alternatives in multiple choice questions plausible? Do some ‘stand out’ (regarding length, type of wording)?

And a word of caution about testing individual skills/strategies: research on trying to test individual skills shows this to be very difficult – it is hard even to agree on which skill is being tested (Alderson 2000:11). Therefore, it is important that our tests have items that, together, attempt to activate a wide range of skills/strategies.

4. The National Tests of English reading

In this section, I will present the National Test of English (NTE) reading for 5th and 8th grade, following the why, what and how organisation I have used in this document.

4.1 The purpose of the National Test of English (why)

The primary purpose of the NTE is assessment OF learning. The test gives information to pupils, teachers, parents, schools and other authorities on the level of English reading ability of individuals or groups of pupils. All those concerned will use the information in the way which best serves their needs, hopefully to ultimately enhance learning.

However, an important secondary purpose is assessment FOR learning. Comprehensive guidelines have been worked out for teachers. These explain in detail what is being tested, and how to interpret the results for individual pupils. Moreover, teachers can actually go into the tests and see what each pupil has answered. This puts teachers in a position to adjust subsequent teaching/learning aims for individuals. The guidelines (part 3) offer concrete advice for follow-up work for pupils placed at the different levels, as well as advice on teaching/promoting English reading generally.

4.2 What is being tested in the NTE

A short test, by computer, taken in the space of an hour or so, cannot assess every aspect of reading. It cannot, for example, assess personal use of reading, including attitudes etc, or the individual skills and strategies involved in reading. By definition, it cannot test extensive reading. Nor can it capture aspects of reading comprehension which can only be expressed in individual open ended answers. The NTE makes no claim to assess such aspects, which must be assessed using other means (such as Språkperm).

What the NTE does assess is a range of aspects of reading comprehension, and, ultimately, reading level.

Aspects of reading comprehension.
In Section 2.4, six question types were listed, each representing an aspect of reading comprehension. The 40 or so items in the NTE can be thought of as using the first three types of question:

  1. Straight forward questions on literal comprehension, where the answer can be found directly in the text.
  2. Questions that involve reorganising or reinterpreting the information to find the answer
  3. Questions that require inferencing; the answer is not actually in the text

In the 5th grade NTE, questions are designed to test the ability to read for detailed information and overall understanding. In the 8th grade, they also test reflection on the content. For each level of the test (5th and 8th grade), a range of ‘kompetansemål’ from the school curriculum, for the end of 4th and 7th grade respectively, is tested (see guidelines part 3).

Reading level
The total score of each pupil is, after the testing, equated with a level on a reporting scale, with three levels for 5th grade and five for 8th grade. For each level, there is a description of what a reader ‘can do’ if they are ‘at’ this level (see guidelines part 3). Furthermore, each level is linked to a level on the CEFR; for instance, at 5th grade, level 1 is equated with CEFR level A1 and under, level 2 with A1-A2, and level 3 with A2 and over.

4.3 How the assessment is done in the NTE

The assessment is done by using a computerised test, delivered on-line. For security reasons there are three equivalent versions of the test, given randomly to pupils on logging into the test with an individual user id.  The tests contain around 40 items, which are scored automatically. Each item consists of a text and a task (see guidelines part 2). The texts are normally short, and at each level of testing span a range of difficulty which represents the range of reading ability which it is assumed pupils at this age will normally cover. The tasks have a range of formats, which frequently involve identifying by clicking on a text, a picture or even a word in a text. Although completely open (written) answers are not possible, many of the tasks are designed to be fairly open, in that there are many possible answers that can be chosen. The items are spread over a wide range of difficulty, representing all the levels on the reporting scale, so that the scores indicate clear differences between pupils’ abilities, from the weakest to the strongest.  

5. Summing up

This document has attempted to give an overview of the assessment of English reading. I started by identifying two principal purposes of assessment: FOR and OF learning. In order to clarify how these actually differ, I presented an overview of the typical features of each of them. This first section did not focus on English reading, but all the subsequent sections did. Section 2 looked at what we can assess in English reading, and came up with four categories: overall reading level, personal use of reading, skills and strategies, and reading comprehension. Each of these was analysed, exposing a very wide range of what can (and should) be assessed. Section 3 looked further at the categories presented in Section 2, this time focussing on how assessment might be carried out. In this section, reference was made, where appropriate, to the purpose of the assessment, making a link with Section 1. Finally a short presentation was given of the National Tests of English reading.

It is hoped that the document will give teachers a clearer framework for thinking about and carrying out assessment in class. We need always to be mindful of why we assess our pupils. Any assessment should have a clear purpose, and if it is FOR learning, which the day-to-day assessment in our classroom normally is, it should have some impact on how the teaching/learning proceeds in the immediate future. We also need to be aware of what can be assessed, and this is much more than comprehension of texts. There are many skills involved in reading, which our pupils may be lacking, and these need to be identified and worked on. We also need to know how pupils feel about reading - what their personal likes, dislikes and habits are. It is also important to be able to identify ‘where’ a pupil is at in his/her reading. It is useful to have some kind of yardstick to refer to, whether it is a reading scheme or the Common European Framework (CEFR), albeit in adapted form. Having identified what to assess, it is important to have a range of methods to use in assessment, including advice on how to make a test. Finally, most teachers will, at some point, have pupils taking the National Test in English reading, and the more familiar they are with this, the better able they are to help their pupils succeed, and to support them where they most need it! Hopefully, somehow, this document will enhance the assessment of reading, and ultimately, reading itself.

References

Alderson, J. C. 2000. Assessing Reading. Cambridge: Cambridge University Press.
Clay, M. M. 2000. Running Records for Classroom Teachers. USA: Heinemann.
Council of Europe. 2001. Common European Framework of Reference for Languages. Cambridge: Cambridge University Press.
Fremmedspråksenteret. Språkperm. http://www.fremmedspraksenteret.no/elp
Grellet, F. 1981. Developing Reading Skills: A Practical Guide to Reading Comprehension Exercises. Cambridge: Cambridge University Press.
Hasselgreen, A. 2003. Bergen ‘Can do’ project. Graz: ECML publications. Also: http://www.ecml.at/cando
Krashen, S. 1982. Principles and Practice in Second Language Acquisition. Oxford: Pergamon.
Learning and Teaching in Scotland (LTS). ‘Assessment OF learning’. http://www.ltscotland.org.uk/assess/of/index.asp
Nuttall, C. 1996. Teaching Reading Skills in a Foreign Language. Macmillan.
Qualifications and Curriculum Authority. ‘Assessment for learning guidance’. http://www.qca.org.uk/qca_4334.aspx
Utdanningsdirektoratet. ’Veiledningsmateriell i lesing-på-engelsk’. http://www.utdanningsdirektoratet.no/Artikler/_Nasjonale-prover/Veiledningsmateriell-i-lesing-pa-engelsk/
