
Introduction

Informational feedback, that is, information students can use to improve their performances,
is intrinsically motivating (Ryan et al., 1985; Ames & Archer, 1988; Covington, 1992;
Pintrich & Schrauben, 1992). This is important, given the nature of the assessment process.
Black & Wiliam (1998) defined the core of formative assessment as two actions: (1) the student must recognise that there is a gap between his or her current understanding or skill level and the desired understanding or skill level; and (2) the student must take effective action to close that gap.
Self-assessment is essential for progress as a learner: for understanding of selves as learners,
for an increasingly complex understanding of tasks and learning goals, and for strategic
knowledge of how to go about improving (Sadler, 1983). Learners are motivated both by
intrinsic interest and by the desire to succeed at school (Ames & Archer, 1988). Persistence
depends on expectations of success. Using self-assessment information requires control over one's cognitive activities, or metacognition: understanding what strategies and skills are required for a task, and knowing how and when to use them. Metacognitive skills begin to develop around ages 5–7 and continue to develop through the school years (Schunk, 1991).
Perceptions of self as a learner depend in part on the quality of feedback students have received over the years. If feedback has been judgemental rather than informational, and the judgement was not good, most students will simply consign themselves to the "not a good student" category. Conversely, if feedback has been informational, and students learn to use feedback to verify their sense of efficacy for learning, as a guide or check for their own self-assessment, students will learn how to learn and achievement will grow (Hattie & Jaeger, 1998). Students do this in one of two ways: they either (a) seek confirmation of the views they already hold about themselves as learners, or (b) test their conception of themselves as learners against the confirmation and disconfirmation in the feedback they receive. Successful
students are more likely to do the latter. While challenging standards generally lead to higher
performance (Natriello, 1987), different types of learners respond better to different types of
feedback. Self-referenced feedback systems work best for students with low self-esteem and
an internal locus of control. Criterion-referenced systems work best for students with low
self-esteem and an external locus of control.
Norm-referenced systems work best for students with high self-esteem, whatever their locus
of control (Natriello, 1987).
Summative assessment is "an overview of previous learning" (Black, 1998, p. 28), either by
accumulating evidence over time or by testing at end-phase or other transition times. Using
classroom assessment for summative purposes can create tension for the relationship between
teacher and student (Gipps, 1994; Black & Wiliam, 1998). Yet, for grading and other times of
accountability, teachers do indeed collect and use summative assessment information.
The Relationship Between Summative and Formative Assessment
There is a counter-argument to this point of view, namely that "Sensible educational models make effective use of both FA [formative assessment] and SA [summative assessment]" (Biggs, 1998, p. 105). Formative and summative assessment need not be mutually exclusive if one's model of assessment is inclusive: "Instead of seeing FA and SA up close as two different trees, I would zoom to a wider angle conceptually. Then, in the broad picture of the whole teaching context – incorporating curriculum, teaching itself and summative assessment – instead of two tree-trunks, the backside of an elephant appears" (Biggs, 1998, p. 108).
Summative assessment is often assumed to have entirely negative consequences, but if it is aligned to instruction and deeply criterion-referenced, incorporating the intended curriculum, "which should be clearly salient in the perceived assessment demands" (Biggs, 1998, p. 107), then classroom summative assessment, such as a test at the end of a teaching episode or unit, can have positive effects.
Black (1998) argued that teachers have to be involved in both formative and summative
assessment, and must keep the two in tension. Formative assessment is private and focused
on the needs of the learner. Summative assessment is a response to external pressures and
constraints, and the need for accountability. Teachers ultimately are responsible for both guiding their students and judging how successful their guidance has been.
Research Purpose
The purpose of this study was to document students' perceptions about the formative and summative aspects of classroom assessments. Student interviews were examined to document students' views of the purpose, usefulness, relevance, and importance of specific classroom assessments and their performance on those assessments. Interviews were structured around
individual classroom assessment events.

Literature Review
Over the past several years, a growing emphasis on the use of formative assessment has emerged, yet formative assessment has remained an enigma in the literature (Black & Wiliam, 1998; Leung & Mohan, 2004). When reading the formative assessment literature and focusing on the issue of solidifying a definition of the term, an interesting and problematic theme arose.
Formative assessment and its various manifestations (i.e. self-assessment, peer-assessment,
and interim assessment) were defined not only by inherent characteristics, but also by the use
of the assessment.
Formative assessment's status as an ethereal construct has further been perpetuated in the literature by the lack of an agreed-upon definition. The vagueness of the constitutive and operational definitions directly contributes to the weaknesses found in the related research and the dearth of empirical evidence identifying best practices related to formative assessment. Without a clear understanding of what is being studied, empirical evidence supporting formative assessment will more than likely remain in short supply.
Although assessments may be designed for formative or summative purposes, the authors
argue that resultant data may be interpreted either formatively or summatively. The authors
further argue that the aforementioned definitions of formative and summative assessment, which include how the data are used, lead to issues in the literature because either type of assessment data can be evaluated or used formatively or summatively. The authors define the terms formative evaluation and summative evaluation in terms of the use of assessment data and separate the issue of assessment instruments from assessment use.
For our purposes, summative evaluation was defined as the evaluation of assessment-based data for the purpose of assessing academic progress at the end of a specified time period (i.e., a unit of material or an entire school year) and establishing a student's academic standing relative to some established criterion. Formative evaluation was defined as the evaluation of assessment-based evidence for the purposes of providing feedback to and informing teachers, students, and educational stakeholders about the teaching and learning process. Formative evaluation also informs policy, which then affects future evaluation practices, teachers, and students. The reciprocal relationship between policy and formative assessment is graphically represented by the Key Model for Academic Success. This model supports Shepard's (2000) assertion that it is not necessary to separate assessment from
teaching; instead, teaching practices can and should be informed by and coincide with
assessment practices and outcomes.
Having defined what is meant by formative evaluation, it is important to separate the issue
of assessment from the issue of formative or summative evaluation. In doing so, we hope to
provide a more clearly defined nomenclature to frame the investigation of both the effective implementation of formative evaluation and its effect on student performance.
Since the publication of Black and Wiliam's (1998) review of formative assessment, minimal scientific research on the impact of formative assessment on student achievement has been completed in the traditional classroom. However, formative assessment has been researched somewhat more thoroughly in the educational technology literature. For example, Sly (1999) investigated the influence of practice tests as formative assessment to improve student performance on computer-managed learning assessments. More specifically, Sly (1999) hypothesized that students who selected to take practice tests would outperform students who did not on the first and second unit exams in a first-year college economics course. The students who selected to take practice tests did significantly outperform those who did not on both unit exams one and two.
While Sly's (1999) results provide support for the impact formative assessment may have on
achievement, this study also suffered from methodological issues. The primary issue with this
study is the self-selection of participants to treatment or control groups. This is a problem
because students who self-selected to take practice tests may be systematically different from
those students who do not select to take practice tests.
Although Sly did discuss this issue, no design efforts were implemented to control for self-selection, such as through the use of instruments that measure constructs that lead to self-selection (e.g., motivation, self-regulation, or grades prior to the use of formative assessment). In addition, while the students who selected to take practice tests did significantly outperform the other students on unit exams one and two, they did so by only five and four points, respectively.
In another Web-based study, Henly (2003) studied the impact of Web-based formative
assessment on student learning in a learning unit about metabolism and nutrition. She found
that overall students in the top ten percent of the class accessed formative assessment twice as
often as students in the bottom ten percent of the class. While this does reflect a significant
difference in usage of formative assessments, it suffers from the same self-selection issue as
Sly's (1999) study. The group that used formative assessment twice as often and ranked in the top ten percent of their class was systematically different from the bottom ten percent of the class, who rarely accessed the formative assessment. Similar to Sly (1999), this study would have been improved by controlling for factors such as motivation, self-regulation, and prior performance. Further, in most school systems the current trend is to use formative assessments with the lowest-performing students. The Sly (1999) and Henly (2003) studies based their conclusions about the impact of formative assessments on higher-performing students, with limited evidence of their utility for these lower-performing students.
Buchanan (2000) also examined the influence of Web-based formative assessment on an
undergraduate introductory psychology module exam. When controlling for classroom
attendance, he found that students who engaged in voluntary Web-based formative
assessments significantly outperformed students who did not participate in Web-based
formative assessments. However, the effect size for this difference was very small at .03. In
light of the issue of self-selection in this study and the small effect size, further research with
greater controls is warranted.
Velan, Rakesh, Mark, and Wakefield (2002) examined the use of Web-based self-assessments
in a Pathology course. More specifically, the researchers hypothesized that students would do
better on their third attempt at the Web-based self-assessment when compared to the first
attempt. While significant improvement was seen from the first to the third attempts on the
assessment, this study also had a few methodological issues. First, the sample size was very
small, consisting of only 44 students. Second, there was no control group. Third, the students
took the same test each time, and each time they received feedback on their responses.
Because the students took the same exam, it is impossible to tell whether the students gained
greater understanding of the material or if they only gained expertise in taking that particular
test.
Ruiz-Primo and Furtak (2006) found that students in classrooms where teachers engaged in
assessment discussions performed significantly higher on embedded assessments and posttests. Assessment discussions were defined as a four-stage process in which the teacher asks
a question, the student responds, the teacher recognizes the response, and then uses the
information collected for student learning. While these exploratory results are promising, the limited sample size of four prevents generalizing the findings beyond the participants of the study.
Moreover, a great deal of assessment literature is aimed at delineating between formative and
summative assessment, yet summative assessment can be used for formative purposes (Bell
& Cowie, 2000). It is important to note that we acknowledge that the purpose for which any
assessment is developed and validated is an important aspect of assessment. However, a test
that was designed to give formative feedback is only formative if the teacher uses it to
provide feedback for the student. If the teacher only uses the formative assessment to provide
a grade, is that assessment still formative? By most definitions, the mere translation of performance into a grade category (i.e., an "A" or a "B") is formative because it provides information on the achievement of the student and may be used for future instructional interventions. However, is this what is intended by the various definitions? Although an assessment may be designed and packaged as a formative or summative assessment, it is the actual methodology, data analysis, and use of the results that determine whether an assessment is formative or summative. For example, Wininger (2005) used a summative
assessment as a formative assessment by providing both quantitative and qualitative feedback about the results of the exam. Wininger (2005) called this "formative summative assessment."
This article exemplifies the complications that arise when one defines an assessment by its
usage. An assessment is an assessment, and the manner in which an assessment is evaluated
and used is a related but separate issue.
Methodology
2.1 Subjects
The sample in this study consisted of 300 (150 male and 150 female) Junior Secondary Three
(JS-3) students drawn from public secondary schools in Akwa Ibom State of Nigeria.
A stratified random sampling technique was used to obtain the sample for the study. All 300 participants completed the 15-item Likert-type rating scale designed by the researchers to elicit information on students' perception of teachers' formative evaluation practices.
The possible responses to the rating scale items ranged from 5 (strongly agree) to 1 (strongly disagree); the rating scores were reversed for negative statements. The neutral score was 3, and the composite score for each participant's rating ranged between 75 (maximum) and 15 (minimum). A favourable perception of teachers' evaluation practices was therefore defined as a composite score between 46 and 75, while an unfavourable perception was between 15 and 45. Sixty-five percent (194) of the sample indicated a favourable disposition while 35% (106) indicated an unfavourable perception. All the participants in the study offered social studies as one of the core subjects in the junior secondary curriculum; their age averaged 13 years.
2.2 Hypothesis
The study tested one null hypothesis at the 0.05 level of significance, which reads: there is no significant difference in students' academic performance in the Junior Secondary Certificate Examination in social studies based on their perception of teachers' formative evaluation practices.
2.3 Instrumentation
The research instruments consisted of the Formative Evaluation Rating Scale and the Social Studies Summative Evaluation Scores.
2.3.1 Formative Evaluation Rating Scale
The rating scale contained 15 items of a 5-point Likert scale with declarative statements on teachers' formative evaluation practices. Content validity was established by a panel of experts consisting of university faculty members in the Tests and Measurement Unit of the Department of Educational Foundations. Five options were available for rating, ranging from strongly agree (5 points) to strongly disagree (1 point). Pilot testing for suitability and reliability was carried out with junior secondary students in schools not included in the sample. The Cronbach alpha reliability coefficient for the formative evaluation rating scale was 0.82.
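For readers unfamiliar with the statistic, the following Python sketch shows how a Cronbach alpha coefficient such as the 0.82 reported above is computed. The pilot responses in the example are randomly generated stand-ins, since the authors' pilot data are not available, so the printed value will not match the reported figure.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                            # number of items (15 for this scale)
    item_vars = x.var(axis=0, ddof=1).sum()   # sum of the item variances
    total_var = x.sum(axis=1).var(ddof=1)     # variance of the composite scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical pilot responses: 30 students x 15 items rated 1-5.
# Random, uncorrelated responses give a low alpha; real scale data with
# correlated items would be needed to approach the reported 0.82.
rng = np.random.default_rng(0)
pilot = rng.integers(1, 6, size=(30, 15))
print(round(cronbach_alpha(pilot), 2))
```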
2.3.2 Social Studies Summative Evaluation Scores
The sample's scores on the social studies end-of-programme (Junior Secondary Three) examination in the Junior Secondary Certificate Examination for the 2010/11 academic session, which was set and marked by the Examination Unit of the Akwa Ibom State Ministry of Education, provided the summative evaluation data for the study.
2.4 Data Collection
Data were collected with the assistance of subject masters in the schools involved in the
study. A duration of 15 minutes was allowed for the completion of the formative evaluation
rating scale. Participants' summative evaluation scores in social studies were culled from the Ministry of Education examination records.
2.5 Data Analysis
Data were described using means and standard deviations. The independent t-test was used to test for a significant difference in students' academic performance in social studies based on whether they perceived teachers' formative evaluation practices as enhancing learning (positive) or not enhancing learning (negative).
3.0 Result
Data in Table 1 indicate that there was a significant difference in the academic performance mean scores of students in the Junior Secondary Certificate Examination in social studies based on their perception of teachers' formative evaluation practices (t = 12.40, p < 0.05). Students with a positive predisposition towards the teachers' formative evaluation practices performed better than their counterparts who perceived the practices as not enhancing learning (negative) (mean scores of 63.08 vs. 40.38).
4.0 Study Limitations
The finding of this study suggests an interesting difference in students' academic achievement in summative evaluation in the social studies programme with respect to students' perception of teachers' formative evaluation practices. However, it is worth noting that the study has some limitations associated with the sample of students in the analysis, the data collection methods, and the overall study design.
The sample for the study was drawn from students primarily in public secondary schools. The finding may not generalize to students with different characteristics, such as those who attend private and better-staffed schools (model secondary schools).
The data used in the analyses were based on students' self-reports (with the exception of the academic performance data). They did not involve examination of students' continuous assessment records (formative evaluation), only students' perceptions of formative evaluation practices. Without additional data, it is difficult to determine to what extent teachers demonstrated high-quality pedagogy and objective assessment of students' learning under formative evaluation. Hence, the validity of the participants' responses is open to question.
Finally, the source of the data collected and the data analyses used here cannot yield definitive conclusions. While the ex post facto design allows for testing a hypothesis based on the construct of the independent variable (perception of teachers' formative evaluation practices), which had already occurred and was investigated retrospectively, the finding should be interpreted as causal only with caution. It is possible for a condition to precede an outcome without causing the outcome.
5.0 Discussion of Results

The result of the data analysis indicated that students' perception of teachers' formative evaluation practices plays a role in differentiating their academic performance in summative evaluation, typified in this study by the Junior Secondary Certificate Examination result in social studies. Subjects who rated the teachers positively on this criterion performed better than those who rated them negatively. This finding supports the contention that the way students approach learning is often shaped by the evaluation tasks, and the way they feel about their learning and themselves as learners is also shaped by the evaluation task (Erwin, 1995). Because students' perceptions of their capacity for success are key to their engagement in school learning, formative evaluation strategies should be designed to enhance students' feelings of accomplishment. Teachers whom students see as
supportive and who set clear expectations on learning should help create an atmosphere in
which students feel in control and confident about their ability to succeed in future
educational endeavours (Akey, 2006).
The finding of this study supports the assumption that evaluation can have a formative
function, which is to say that it can help the learners to improve their learning (Erwin, 1995;
Allal, 1988). The situation which arises when a mark or grade is returned to a student is at
first sight a simple application of an essential feedback process; a method by which students
are informed on how well they are progressing (CERI, 2008). The feedback serves as a reality
test and a motivator for further learning. In this study it could be concluded that students who perceived their teachers' formative evaluation practices as enhancing learning (positive), and whose perception was reflected in their better performance in the social studies summative assessment examination, might be those who found such evaluations and feedback inspiring and motivating for their studies. Weiner (1979) pointed out that the causes to which students attribute their success or failure affect their future performance. Verbal persuasion, especially praise and encouragement from significant others such as teachers, may help to link perceived results to causes that will increase students' motivation (Boekaerts, 1991). Motivation is crucial to cognition and performance because motivation directs individuals' behaviour. Competence-related beliefs are motivational because when learners believe they can accomplish a given task or activity they are more likely to continue to do the activity, overcome obstacles to complete it, and choose more challenging activities on subsequent occasions (Wigfield, Battle, Keller, and Eccles, 2000).
The low performance of the negative raters of the teachers' formative evaluation practices could be accounted for as an expression of the way those students approached their learning and their perception of their likelihood of failure. The reflection of the students' negative rating of the teachers' formative evaluation practices in their low performance on the achievement test indicates such students' lack of self-efficacy. Self-efficacy, according to Bandura (1982), refers to a person's specific beliefs about his or her ability to perform certain actions or bring about intended outcomes. As might be expected, it could have been the academically weak students who rated their teachers' formative evaluation practices as not enhancing learning (negative) in defence of their intellectual inability, hence their low performance in the summative evaluation examination.
Students' beliefs about their competence and their expectations for success in school have been directly linked to the emotional states that promote or interfere with their ability to be academically successful (Akey, 2006). Students who believe they are academically incompetent tend to be more anxious in the classroom and more fearful of revealing their ignorance (Abu-Hilal, 2000; Harter, 1992; Hembree, 1988). In addition, such students are more likely to avoid putting much effort into a task so that they can offer a plausible alternative to low ability or lack of knowledge as an explanation for failure; for example, "I could have done it if I tried, but I didn't feel like doing it" (Covington, Spratt, and Omelich, 1980).
In sum, the difference in the academic performance mean scores of the students in this study could be attributed to the psychological mediator of the relationship between students' self-concept and academic achievement.
Also, it would be logical to assume that the extraneous effect of the personal value of the social studies curriculum content might have contributed to the students' mode of perception of teachers' formative evaluation practices, and hence to the differences in their performance. The low performance of the subjects who rated the teachers' formative evaluation negatively could therefore be interpreted to mean that such students might not have found something of meaning and value for themselves in what was taught in the social studies programme, whereas their counterparts did. This assumption is consistent with research findings that children's reasons or purposes for engaging (or not engaging) in achievement activities are crucial to their motivation. Individuals must value the activity, have goals for doing it, or find it intrinsically or extrinsically motivating in order to engage in it (Wigfield, Battle, Keller and Eccles, 2000:4).
6.0 Implications for Research and Practice
Evaluation of students learning is an essential part of the teaching process. It has been
contended that the quality of learning and the evaluation systems used in schools are
conceptually related. The finding of this study has important implications for understanding
how students perceive the feedback they obtain from teachers for their learning. The process,
it seems, borders on students developing a sense of efficacy and confidence about their ability
to do well in academic work. When students become confident in their ability to succeed,
they become more involved and learn more. On the other hand, students are not likely to
attempt educational tasks when the feedback from learning indicates that they cannot
succeed. The implication for practice is that the earlier schools and teachers begin to build
students confidence in their ability to do well, the better off students will be.
Irrespective of the form of evaluation envisaged, it is relevant to allow students to express their views on the teaching strategies used. These views should be analyzed by the experienced teachers of the school, who will make appropriate recommendations to the teachers concerned to help improve their teaching skills. Teachers need expanded repertoires to meet identified student needs. They need a healthy repertoire of approaches to setting up learning situations and responding to students' learning needs. Teachers and researchers may form a
healthy partnership for research in this area. Formative evaluation requires greater
transparency in teaching and learning. The approach is ideal for researchers who may wish to
investigate the practice of teaching and learning in normal classroom settings.
In order to improve the quality of social studies learning, students should be given useful
feedback on their work through discussion with their teachers and their peers. This will
enable learners to receive constructive guidance about how to improve their learning. The
finding of this study makes a case that formative evaluation practices have a significant impact on student learning; there is therefore a need for further studies which may address connections between students' emotions and learning. The connections between positive emotions and improved learning are a major theme of neuroscientific research on learning. This research, along with work in the area of educational psychology, can bring to the fore the need for further studies on the effect of different formative evaluation methods on students' emotions, motivation, self-concept, and academic achievement.

Finally, formative evaluation of students' work in social studies should be approached more as a process of decision-making than as a process of measurement. Hence, teachers need to pay close attention to helping students understand their own learning and develop appropriate "learning to learn" skills, skills that are increasingly necessary as knowledge is quickly outdated in the information society (CERI, 2008).

Summative assessments are cumulative evaluations used to measure student growth after instruction and are generally given at the end of a course in order to determine whether long-term learning goals have been met. Summative assessments are not like formative assessments, which are designed to provide the immediate, explicit feedback useful for helping teacher and student during the learning process. High-quality summative information can shape how teachers organize their curricula or what courses schools offer their students.[1]
Although there are many types of summative assessments, the most common examples include:
- State-mandated assessments
- District benchmark or interim assessments
- End-of-unit or -chapter tests
- End-of-term or -semester exams
- Scores that are used for accountability for schools (AYP) and students (report card grades)[2]

According to the North Carolina Public Schools, summative assessments are often created in the following formats:
- Selected response items
  o Multiple choice
  o True/false
  o Matching
- Short answer
  o Fill in the blank
  o One or two sentence response
- Extended written response
- Performance assessment[3]

The North Carolina Department of Public Instruction explains that information collected from summative assessments is evaluative and is used to categorize students so performance among students can be compared.[4]

Assessment is the use of a variety of procedures to collect information about
learning and instruction. Formative and summative assessment represent two
classifications of assessment, each with a distinct purpose. Formative assessment is
commonly referred to as assessment for learning, in which the focus is on monitoring
student response to and progress with instruction. Formative assessment provides
immediate feedback to both the teacher and student regarding the learning process.
Summative assessment is commonly referred to as assessment of learning, in which
the focus is on determining what the student has learned at the end of a unit of
instruction or at the end of a grade level (e.g., through grade-level, standardized
assessments). Summative assessment helps determine to what extent the
instructional and learning goals have been met. Formative and summative
assessment contribute in different ways to the larger goals of the assessment
process.

PROCEDURES USED IN FORMATIVE ASSESSMENT
Formative assessment includes a variety of procedures such as observation,
feedback, and journaling. However, there are some general principles that constitute
effective formative assessment. Key requirements for successful formative
assessment include the use of quality assessment tools and the subsequent use of
the information derived from these assessments to improve instruction. The defining
characteristic of formative assessment is its interactive or cyclical nature (Sadler,
1988). At the classroom level, for example, teachers collect information about a
student's learning, make corresponding adjustments in their instruction, and continue
to collect information. Formative assessment can result in significant learning gains
but only when the assessment results are used to inform the instructional and
learning process (Black & William, 1998). This condition requires the collection,
analysis of, and response to information about student progress.
The most common procedures of formative assessment include the following.
Feedback. A teacher provides oral or written feedback to student discussion or work.
For example, a teacher responds orally to a question asked in class; provides a
written comment in a response or reflective journal; or provides feedback on student
work.

Curriculum-based measurement (CBM). This set of standardized measures is used to determine student progress and performance (Deno, 2001); a brief calculation sketch follows this list. An example is the use
of oral reading fluency (the number of words a student can read correctly during a
timed reading of a passage) as an indicator of a student's overall reading ability
(Fuchs et al., 2001).
Self-assessment. Students reflect on and monitor their progress. This activity may be
performed in conjunction with a CBM, in relation to predetermined academic and
behavioral goals, or with learning contracts.
Observation. A teacher observes and records a student's level of engagement,
academic and/or affective behavior; develops a plan of action to support that
student; implements the plan; and continues to record observations to determine its
effectiveness.
Portfolios. A growth portfolio can be used to create a record of student growth in a
number of areas. For example, a teacher may use writing portfolios to collect
evidence of a student's progress in developing writing skills.
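As referenced in the CBM item above, here is a minimal Python sketch of how an oral reading fluency score (words read correctly per minute) might be computed and checked against a progress goal. The passage lengths, error counts, and goal value are hypothetical and are not drawn from the sources cited here.

```python
# Illustrative sketch of the CBM example above: oral reading fluency scored as
# words correct per minute (WCPM) and checked against a goal.
# The weekly probe data and the goal below are hypothetical.

def wcpm(words_attempted, errors, seconds):
    """Words correct per minute from one timed passage reading."""
    return (words_attempted - errors) * 60 / seconds

weekly_probes = [(118, 9, 60), (131, 8, 60), (142, 6, 60)]  # (words, errors, seconds)
goal = 140  # hypothetical end-of-term WCPM goal

for week, probe in enumerate(weekly_probes, start=1):
    score = wcpm(*probe)
    status = "at/above goal" if score >= goal else "below goal"
    print(f"week {week}: {score:.0f} WCPM ({status})")
```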

PROCEDURES USED IN SUMMATIVE ASSESSMENT
Summative assessment also employs a variety of tools and methods for obtaining
information about what has been learned. In this way, summative assessment
provides information at the student, classroom, and school levels. Defining
characteristics of effective summative assessment include a clear alignment
between assessment, curriculum, and instruction, as well as the use of assessments
that are both valid and reliable. When objectives are clearly specified and connected
to instruction, summative assessment provides information about a student's
achievement of specific learning objectives.
Summative assessments (or more accurately, large-scale, standardized
assessments) are frequently criticized for a variety of reasons: 1) they provide
information too late about a student's performance (Popham, 1999); 2) they are
disconnected from actual classroom practice (Shepard, 2001); 3) they suffer from
construct underrepresentation (Messick, 1989), meaning that one assessment
typically cannot represent the full content area, so only those areas that are easily
measured will be assessed, and hence, taught; and 4) they have a lack of
consequential validity (Messick, 1989), meaning that the test results are used in an
inappropriate way. This last concern is related to state accountability systems
because high stakes, such as student retention or teacher performance pay, are
attached to performance on state assessment systems, yet most of these
assessments have not been designed for the broad and numerous purposes they
serve (Baker & Linn, 2004). Nevertheless, summative assessments can provide
critical information about students' overall learning as well as an indication of the
quality of classroom instruction, especially when they are accompanied by other
sources of information and are used to inform practice rather than to reward or
sanction. Examples of summative assessment include the following.
End of unit tests or projects. When assessments reflect the stated learning
objectives, a well-designed end of unit test provides teachers with information about
individual students (identifying any student who failed to meet objectives), as well as
provides an overall indication of classroom instruction.

Course grades. If end of course grades are based on specified criteria, course
grades provide information on how well a student has met the overall expectations
for a particular course.
Standardized assessments. Tests that accurately reflect state performance and
content standards provide an indication of how many students are achieving to
established grade-level expectations.
Portfolios. When used as part of an evaluation of student learning, portfolios provide
evidence to support attainment of stated learning objectives.
Although formative and summative assessments serve different purposes, they
should be used ultimately within an integrated system of assessment, curriculum,
and instruction. To be effective in informing the learning process, assessments must
be directly integrated with theories about the content, instruction, and the learning
process (Herman et al., 2006) and must be valid and reliable for the purposes for
which they are used. Summative assessments should be created prior to instruction
to capture and identify both the content and process of learning that represent the
desired outcomes. In this way, summative assessment can serve as a guide for
directing the curriculum and instruction. Performance on summative assessments
must serve as a valid inference of instructional quality. For example, teacher grades
generally have strong validity when compared to student performance on other
academic measures (Hoge & Coladarci, 1989).
Formative assessments are more informal in nature but must also serve as valid
indicators of student performance if they are to be useful in informing the teaching
process. Curriculum-based measurement represents a standardized process of
formative assessment that relies on the use of valid measures of student progress in
a given academic area. Additionally, a strong evidence base supports the use of
interactive feedback (Black & Wiliam, 1998) to increase student achievement.

HOW OUTCOMES INFORM INSTRUCTION AND EDUCATIONAL PRACTICES
A consistent feature of the research findings on formative assessment is that
attention to the interactive nature of formative assessment can lead to significant
learning gains (Black & Wiliam, 1998; Herman et al., 2006). Reviews of research on
formative assessment processes support the use of questioning, observation, and
self-assessment. Similarly, research has demonstrated positive effects on student
achievement with the use of CBM (Stecker et al., 2005). Frequent monitoring of
student progress to a determined goal and performance level results in higher
achievement for students, particularly when teachers use the data collected to inform
their instructional practices (Stecker et al., 2005).
Formative assessment can be most directly used at the individual student level
because it measures how a particular student is progressing in the instructional
program and identifies where support may be needed. The focus on individual
students provides immediate feedback on their progress within the curriculum.
Formative assessment may also be evaluated at the classroom level to inform
teaching practices because it reveals how many students may be experiencing
difficulty. If several students are having difficulty, then perhaps a more general
change in instruction is needed. CBM in particular serves in these dual roles, but
other types of formative assessment such as portfolios and journals can be used in a
similar way.
Summative assessment informs instructional practices in a different yet equally important way than formative assessment does. Critics of large-scale assessments argue
that they adversely affect the classroom and remain disconnected from instruction
(Shepard, 2001) to the extent that they are not useful in the instructional process.
However, summative assessment can serve both as a guide to teaching methods
and to improving curriculum to better match the interests and needs of the students.
A primary use of assessment data is in planning curricula. For example, if a school's
performance on a state assessment indicates high percentages of students who do
not meet standards in writing, then the school could collect more information on its
writing curricula, student writing performance (through portfolios or other classroom
work), and professional development needs for its teachers. After collecting such
information, the school may then review and adopt new writing curricula as well as
provide professional development to its teachers in order to support stronger student
achievement in writing. Ongoing evaluation of the writing program would be
conducted through the use of formative and summative assessment. In this manner,
when summative and formative assessments are aligned, they can inform the
instructional process and support both the daily instructional practices of teachers as
well as the longer-term planning of curricula and instruction.
Assessment entails a collection of procedures that inform the learning process.
Formative and summative assessment entail integrated components of the larger
process of assessment, instruction, and curriculum. However, an ample research
base suggests that practitioners have difficulty implementing formative assessments
(Marsh, 2007) and responding to data collected through summative assessments
(Popham, 1999). When formative assessments are used in conjunction with
summative assessment, the potential exists to improve outcomes for all students
(Stiggins, 2002), both those meeting a minimum performance standard and all other
students across the spectrum. Assessments can only serve this purpose, however,
when teachers are supported to implement and respond to the procedures through
corresponding adjustments in their instruction (Herman et al., 2006; Marsh, 2007).
See also: Criterion-Referenced Tests, Standardized Testing


BIBLIOGRAPHY
Baker, E. L., & Linn, R. L. (2004). Validity issues for accountability systems. In S. H. Fuhrman & R. F. Elmore (Eds.), Redesigning accountability systems for education (pp. 47–72). New York: Teachers College Press.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.
Cronin, J., Kingsbury, G. G., McCall, M. S., & Bowe, B. (2005). The impact of the No Child Left Behind Act on student achievement and growth: 2005 edition. Technical Report. Northwest Evaluation Association.
Deno, S. L. (2001). Curriculum-based measures: Development and perspectives. Retrieved April 18, 2008, from http://www.progressmonitoring.net/CBM_Article_Deno.pdf.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256.
Herman, J. L., Osmundson, E., Ayala, C., Schneider, S., & Timms, M. (2006). The nature and impact of teachers' formative assessment practices. CSE Technical Report #703. National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
Hoge, R. D., & Coladarci, T. (1989). Teacher-based judgments of academic achievement: A review of literature. Review of Educational Research, 59(3), 297–323.
Marsh, C. J. (2007). A critical analysis of the use of formative assessment in schools. Educational Research and Policy Practice, 6, 25–29.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). New York: Macmillan.
Popham, W. J. (1999). Where large scale assessment is heading and why it shouldn't. Educational Measurement: Issues and Practice, 18(3), 13–17.
Sadler, D. R. (1988). Formative assessment: Revisiting the territory. Assessment in Education, 5, 77–84.
Shepard, L. A. (2001). The role of classroom assessment in teaching and learning. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 1066–1101). Washington, DC: AERA.
Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42, 795–819.
Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan International. Retrieved April 18, 2008, from http://www.pdkintl.org/kappan/k0206sti.htm.


Notes

1. "The Value of Formative Assessment." Retrieved 6 April 2009 from FairTest: The National Center for Fair and Open Testing website: http://www.fairtest.org
2. Garrison, C. & Ehringhaus, M. (1995). "Formative and Summative Assessments in the Classroom." Retrieved from the National Middle School Association website: http://www.nmsa.org/Publications/WebExclusive/Assessment/tabid/1120/Default.aspx
3. "A Vision for 21st Century Assessment." Retrieved from the North Carolina Department of Public Instruction website: http://www.ncpublicschools.org/accountability/educators/vision/
4. "A Vision for 21st Century Assessment."

