
RESEARCH

Assessing School Leader and Leadership Programme Effects on Pupil Learning
Conceptual and Methodological Challenges
Kenneth Leithwood
Professor
OISE/University of Toronto

Ben Levin
Deputy Minister of Education
Government of Ontario

Research Report RR662


The views expressed in this report are the authors' and do not necessarily reflect those of the Department for Education and Skills.
© Kenneth Leithwood and Ben Levin 2005
ISBN 1 84478 527 0

Table of Contents
Summary
Introduction
The Meaning of Leadership
Leadership Effects
Leadership Programme Effects
A General Framework to Guide Research on Programme and Leader Effects on Pupil Learning
Leadership Practices: The Independent Variables
Leadership Effects: The Dependent Variables
Mediating Variables for Leadership Effects Research
Variables Moderating Leadership Effects
The Antecedents of Leaders' Practices
Evaluating Leadership Programme Effects
Methodological Challenges
Conclusions and Recommendations

Summary
Leadership is widely considered a variable critical to school improvement; considerable
evidence now justifies the claim that leadership has important effects on pupil learning.
Although these effects are largely indirect, evidence summarized in this paper indicates that they
explain as much as a quarter of the variation in pupil learning across schools accounted
for by school factors. These leadership effects, furthermore, are usually greatest where
they are needed most; leadership acts as a catalyst unleashing the potential of other
factors contributing to the improvement of pupil learning.
Given the key role of leadership in school improvement, the Department for Education
and Skills (DfES) commissioned this paper to identify the pathways by which school
leadership programmes and practices influence, or positively impact on, pupil learning
and to inform future research and development in this field. More precisely, our goals
were to develop a model or framework of variables and relationships clarifying how
leadership programmes and practices impact on pupil outcomes.
Our paper describes challenges associated with conceptualizing the relationship between
leadership programmes, changes in leader practices and the effects of such changed
practices on schools and pupils. We also grapple with some of the methodological
challenges facing evaluators and researchers with an interest in programme and leader
effects, offering suggestions about how these challenges might be addressed.
Conceptual Challenges
Four conceptual challenges are addressed in our paper:
- How to most usefully frame the relationship between leadership programmes, leadership practices and pupil outcomes? We offer a relatively generic response to this question, suggesting that any comprehensive framework would include independent, dependent, antecedent, moderating and mediating categories of variables. A review of research illustrates what we know about each category of variables and the relationships among them. Frameworks guiding a handful of current and recently completed studies illustrate variations on the generic framework;

- How might the nature of leadership practices be conceptualized, and what intellectual resources are available to assist in such conceptualization? We review a wide range of alternative leadership models to assist in thinking about this challenge; these are models developed in both school and non-school organizational contexts;

- What can leadership theory and research developed outside of schools contribute to our understanding of school leadership practices and their effects? Building on our earlier description of leadership models in non-school contexts, we identify perspectives on leadership yet to be explored in school-based leadership research but with potential;

- How to define, and what do we know about, dependent, moderating and mediating variables in school leader and leadership programme effects studies? We illustrate the state of the art of knowledge about these variables through a review of leadership research carried out exclusively in U.K. contexts. It is clear from this review that recently published U.K. leadership research is almost exclusively small scale and qualitative in nature.
Methodological Challenges
Although not as extensive as our treatment of conceptual challenges, the paper also
responds to a series of methodological challenges typical of field based leadership
research; these challenges are addressed more fully in a subsequent paper1. One of the
problems we take up in this paper is the measurement of leadership practices, examining
a small set of instruments commonly used for this purpose.
We also discuss such common difficulties in conducting both programme and leadership
effects research as the narrow and unreliable nature of commonly used student
assessment instruments and how to deal with missing data for individual schools. The
paper presents evidence that, while the school is the unit of analysis in much leadership
effects research, there is typically greater within- than across-school variation in
measures of leadership, an important, largely unaddressed, challenge for future research.
Promising approaches to the evaluation of leadership programme effects are also
outlined.
Conclusions and Recommendations
Our conclusions and recommendations bear on selected aspects of future DfES-sponsored research that might be carried out on both leadership and leadership programme effects.
Future leadership effects research should:

- measure a more comprehensive set of leadership practices than has been included in most research to date: these measures should be explicitly based on coherent images of desirable leadership practices. Such research is likely to produce larger estimates of leadership effects on pupil outcomes than have been provided to date;

- measure an expanded set of dependent (outcome) variables: these are variables beyond just short-term pupil learning, including longer term effects as well. Examples of such long-term effects include pupil success in tertiary education, employment and commitment to learning over the life span;

- systematically describe how leaders successfully influence the condition of variables mediating their effects on pupils: we now have considerable evidence about the most potentially powerful variables mediating school leader effects, but we know much less about how leaders influence these mediators;

- attend more systematically to variables moderating (enhancing, reducing) leadership effects: lack of attention to this category of variables seems likely to be a major source of conflicting findings in the leadership research literature. Furthermore, when studies do attend to moderators, their choice has often been difficult to justify and largely atheoretical;

- reflect greater methodological variety than is evident in recently published U.K. leadership research: this will be essential if a robust body of context-relevant knowledge is to be developed.

1 See Leithwood, K., & Levin, B. (2005). Assessing Leadership Effects on Pupil Learning: Methodological Issues. Toronto: Report submitted to the DfES, January.

Future leadership programme effects research should:

- be guided by conceptual frameworks similar to those we have recommended for leadership effects research: our review suggests that the vast majority of previous efforts to evaluate leadership programme effects, in most parts of the world, have not generated the type and quality of evidence required to confidently answer questions about their organizational or pupil effects. This problem could be addressed by introducing, into the conceptual frameworks we have suggested for leadership effects studies, leadership programmes conceptualized as one category of antecedent variables stimulating changes in leadership practices;

- provide comparative information about programme effects: formal programmes are just one of many influences on leaders' practices; to fully appreciate the value of such programmes, their effects need to be compared to the effects of such other antecedents as on-the-job learning, leaders' traits and early family experiences. Such information would inform not only programme improvement efforts but leadership selection processes as well. Information of this sort would also assist with cost-effectiveness judgements in the context of planning for leadership development;

- be funded at levels consistent with the expectations for what is to be accomplished: few documented programme evaluations provide the type of comprehensive data we call for here, and funding is part of the reason. If future leadership programme evaluations are to assess the direct and indirect effects of such programmes on pupil learning, as well as on leaders' practices, then a different level of funding will be required than has been typical to date.
Implementing these recommendations will require considerable attention to a significant
number of conceptual and technical issues at the core of designing and conducting high
quality, high impact research and evaluation. Just doing more research of the type that is
typically being done in the country now - at least as it is reflected in the published
literature - seems unlikely to significantly advance our understandings about how
leadership programmes and leaders most productively improve pupil learning.

Introduction
Purposes and Methods
The Department for Education and Skills (DfES) commissioned this paper to identify the
pathways by which school leadership programmes and practices influence, or positively
impact on, pupil learning and to inform future research and development in this field.
More precisely, our goals were to develop a model or framework of variables and
relationships clarifying how leadership programmes and practices impact on pupil
outcomes.
In this paper we describe alternative approaches to conceptualizing the relationship
between leadership programmes, changes in leader practices and the effects of such
changed practices on schools and students. We also identify some of the methodological
challenges facing evaluators with an interest in programme and leader effects and offer
some suggestions about how these challenges might be addressed.
Our methods for developing this paper entailed systematic analyses of current research
and theory. In the case of each challenge or issue taken up in the paper, we aimed to
reflect the best of current thinking and to be quite comprehensive about the relevant
literatures to which we paid attention; at least a sample of our sources are cited
throughout the paper. We have grappled with many of these issues in our own recent
research, as well, and our commentary also reflects that experience.
The Meaning of Leadership
We begin by reflecting on the thorny question of "Just what is leadership, anyway?" not
because we are able to provide a precise answer but because the concept at least deserves
some preliminary exploration in a paper with the focus of this one. It is perfectly
reasonable to ask "How could we understand leadership effects if we don't have a clear
notion of what produced those effects?" That said, we raise this question in the face of
many claims that there is no agreed definition (e.g., Antonakis, Cianciolo & Sternberg,
2004). Indeed, the leadership literature contains literally hundreds of at least slightly
different conceptions of the concept.
One way often used to clarify the meaning of leadership is to compare it to the concept of
management. Some of these comparisons seem largely unhelpful, as in Bennis and
Nanus's (1985) claim that management is "doing things right" and leadership is "doing the
right things". More helpful, we think, is a distinction offered by Kotter (1990). According
to this source, management is about producing order and consistency, whereas leadership
is about generating constructive change. Adopting this perspective, the primary effect of
organizational leadership would be significant change in a direction valued by the
organization. In practice, of course, distinguishing between leadership and management
behaviours can be extremely difficult. This is because the distinction rests not on the
nature of the behaviour but its effects. If behaviour produces order and consistency then it
must be management; if it produces change in a valued direction it must be leadership.

Most conceptions of leadership do associate it with productive change. And at the core of
most of these conceptions are two functions generally considered indispensable to its
meaning:

- Direction-setting: helping members of the organization establish a widely agreed on direction or set of purposes considered valuable for the organization; and

- Influence: encouraging organizational members to act in ways that seem helpful in moving toward the agreed on directions or purposes.
Each of these functions can be carried out in different ways, such differences
distinguishing many models of leadership from one another. As Yukl notes, leadership
influences:
the interpretation of events for followers, the choice of objectives for
the group or organization, the organization of work activities to
accomplish objectives, the motivation of followers to achieve the
objectives, the maintenance of cooperative relationships and teamwork,
and the enlistment of support and cooperation from people outside the
group or organization (1994, p. 3).
One must assume, in the case of this conception, that the objectives being referred to
entail some sort of change.
This is not a very precise way of defining leadership and may be vulnerable to the
occasionally-heard charge that such lack of precision severely hampers efforts to better
understand the nature and effects of leadership. But leadership is a highly complex
concept. Like health, law, beauty, excellence and countless other equally complex
concepts, efforts to define it too narrowly are more likely to trivialize than help bring
greater clarity to its meaning.
Leadership Effects
Policy makers aiming to improve schools on a large scale in today's context invariably
assume that the success with which their policies are implemented has much to do with
the nature and quality of local leadership, especially leadership at the school level (e.g.,
Caldwell, 2000; Murphy & Datnow, 2003; Young, Peterson & Short, 2001). This has not
always been the case. As David Day (2001) reminds us, it was popular several decades
ago, at least in academic circles, to claim that the effects of those in organizational
leadership roles were substantially outweighed by the effects of other organizational
actors and conditions. Meindl's (1995) research on the "romance of leadership", conducted
in non-school organizations, for example, argued that leadership was simply the
easiest and simplest explanation for organizational effects that were actually the result of
a host of more complex and harder to understand relationships and conditions. Evers and
Lakomski (2000) have mounted similar arguments about school leadership in particular.
However, more recently arguments that leadership does not matter have been overtaken
by empirical evidence indicating that it matters a great deal. We summarize this evidence

here very briefly, with an exclusive focus on pupil effects. It is important to note that most
of this evidence has come from research on school-level leaders, especially heads, deputy
heads or their equivalent in schools outside of the United Kingdom (U.K.). Local
Education Authority (LEA) or district leadership effects on pupils have, until recently,
been considered too indirect and complex to sort out, and research on teacher leadership
has rarely inquired about pupil effects (York-Barr & Duke, 2004).2
Claims about the important effects of leadership on pupils are justified by five quite
different types of research evidence:

- Large-scale quantitative studies of overall leadership effects on pupil test scores: evidence of this type reported between 1980 and 1998 (approximately four dozen studies across all types of schools) has been reviewed in a series of papers by Hallinger and Heck (1996a, 1996b, 1998). These reviews conclude that the combined direct and indirect effects of school leadership on pupil test scores (primarily math and language scores) are small but educationally significant. While leadership explains only three to five percent of the variation in pupil test scores across schools (not to be confused with the very large within-school effects that are likely), this is actually about one quarter of the total across-school variation (10 to 20 percent) explained by all school-level variables, after controlling for pupil intake or background factors (Townsend, 1994; Creemers & Reetzig, 1996). The quantitative school effectiveness studies providing much of these data indicate that classroom factors explain more than a third of the variation in pupil test scores.

- Overall leadership effects on pupil engagement in schools: using designs comparable to the leadership effects studies summarized above, a second source of evidence about leadership effects on pupils replaces test scores with pupil engagement in school as the criterion variable. Building on Finn's (1989) early work, participation is the behavioural component of engagement and includes students' actions in the classroom and school. Identification with school is the psychological component of engagement, its meaning captured in such terms as affiliation, involvement, attachment, bonding and commitment. Two overlapping research programmes on leader effects, one in Canada (e.g., Leithwood & Jantzi, 1999), the other in Australia (e.g., Silins & Mulford, 2002), have used engagement as their dependent measure and both have found significant indirect effects of transformational approaches to leadership on student engagement. Evidence from these programmes and others (Fredricks, Blumenfeld & Paris, 2004) also suggests significant relationships between engagement and both retention and a wide range of achievement outcomes in elementary and secondary schools.

- Effects of specific leadership practices on pupil test scores: a third source of evidence about leadership effects on pupils is also large-scale and quantitative in nature. However, instead of examining overall leadership effects, it inquires about the effects of specific leadership practices on pupil test scores. Evidence of this sort can be found sporadically in the research alluded to above, but a recent meta-analysis by Waters, Marzano and McNulty (2003) has significantly extended this type of research. This study identifies 21 leadership responsibilities and calculates an average correlation between each and whatever measures of pupil achievement were used in the original studies. From these data, estimates are calculated of the effects on pupil test scores, e.g. a 10 percentile point increase in pupil test scores resulting from the work of an average principal who improved her demonstrated abilities in all 21 responsibilities by one standard deviation (p. 3).3 A worked illustration of this correlation-to-percentile conversion appears after this list.
- Primarily qualitative case study evidence: studies providing this type of evidence typically are conducted in exceptional school settings (e.g. Gezi, 1990; Reitzug & Patterson, 1998). These are settings believed to be contributing to pupil learning significantly above or below normal expectations as, for example, in effective schools research based on outlier designs. Such studies usually report very large leadership effects not only on pupil learning but on an array of school conditions as well (e.g. Mortimore, 1993; Scheurich, 1998). What is lacking from this evidence, however, is external validity or generalizability.
- Research on leader succession: this is the final and, arguably, most compelling source of evidence about leadership effects. This evidence demonstrates, for example, that few school improvement initiatives survive a change in principal leadership and that important attitudes toward leadership influence are significantly and negatively influenced by frequent changes in such leadership. Teachers come to ignore or otherwise inoculate themselves against the influence of their school's administrative leaders when these leaders are perceived to rotate through the school every two or three years (Hargreaves, Moore, Fink, Brayman, & White, 2003; Macmillan, 1996). Thus inoculated, it takes considerable time and a prodigious effort on the part of a committed leader prepared to stay the course to recover the cooperation of her school's staff in the interests of school improvement. Similarly, leader succession research in non-school organizations provides powerful evidence of very large leader effects, especially the effects of leaders at the apex of their organization's hierarchy. Evidence from Day and Lord (1988) and Thomas (1988) indicates, for example, that:
When properly interpreted and methodologically sound, the research on
leadership succession ... has demonstrated a consistent effect for
leadership that explained 20 to 45 percent of the variation in relevant
organizational outcomes (Day, 2001).

2 This very comprehensive review of empirical evidence was able to locate only five studies assessing the effects of teacher leadership on pupils. Only four of these studies actually included direct measures of pupil learning and three of the four found no effects of teacher leadership on such learning. In light of how popular teacher leadership has become as a strategy for school reform, this is an astonishing demonstration of just how susceptible schools are to evidence-free claims about what they ought to do.

3 While this quantitative synthesis of research produces clearly interesting data, estimates from such data to principal effects on pupil learning in real-world conditions must be treated with considerable caution. First of all, the data are correlational in nature, but cause-and-effect assumptions are required for the extrapolated effects of leadership improvement on pupil learning. Second, the illustrative effects on pupil achievement described in the study depend on a leader's improving their capacities across all 21 practices at the same time, an extremely unlikely occurrence! Some of these practices are dispositional in nature, or rooted in deeply held beliefs unlikely to change much, if at all, within adult populations (e.g., ideals/beliefs, flexibility). And just one of the 21 practices, increasing the extent to which the principal is knowledgeable about current curriculum, instruction and assessment practices, would be a major professional development challenge by itself. Nonetheless, this line of research is a useful addition to other lines of evidence which justify a strong belief in the contributions of successful leadership to pupil learning.
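To make the correlation-to-percentile conversion behind estimates of the Waters, Marzano and McNulty kind concrete, the short sketch below shows how, under a normality assumption and a 50th-percentile baseline, a one standard deviation improvement on a practice correlated r with achievement translates into an expected percentile. This is our own illustration of the general logic, not code from that study, and the correlation of .25 is assumed purely for demonstration.

```python
from scipy.stats import norm

def expected_percentile_after_one_sd_gain(r):
    """Expected pupil-achievement percentile for an average school whose leader
    improves by one standard deviation on a practice that correlates r with
    achievement, assuming bivariate normality and a 50th-percentile baseline."""
    predicted_achievement_z = r * 1.0   # predicted achievement z-score after a 1 SD gain
    return norm.cdf(predicted_achievement_z) * 100

# Assumed correlation of .25 between a leadership responsibility and achievement:
print(round(expected_percentile_after_one_sd_gain(0.25), 1))
# ~59.9, i.e. roughly a 10 percentile-point gain from the 50th percentile
```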
These five sources of evidence add up to powerful support for the importance widely
attributed to leadership by policy makers and the public at large. However, without
minimizing the considerable progress that has been made over the past 15 years, we still
have a great deal to learn about:

- the nature of school leadership practices that are successful in improving pupil learning;

- the conditions or circumstances under which some forms of school leadership are more successful than others;

- how successful leadership practices are connected to changes in the school organization and eventually to improvements in pupil learning; and

- how successful leadership is best developed.


Leadership Programme Effects
Because effective leadership is so critical to school improvement, massive amounts of
energy and considerable resources have been devoted to its initial development and
ongoing improvement over the past 15 years, in particular. The National College for
School Leadership is a highly visible manifestation of this attention in England. Formal
training programmes appear to be the most direct strategy for building leadership
capacity but they are by no means uniformly effective. Indeed, calls for more rigorous
programme evaluation have become ubiquitous. These calls have escalated significantly
in the past decade in response to an outcomes-oriented accountability policy environment
for education and a widely shared belief in the importance of effective leadership.
Programme evaluators have struggled to keep up with these calls for more rigorous
evaluation for several reasons. Most obviously, first of all, the funds available for
leadership programme evaluation rarely match the ambition of the expectations. Second,
the most ambitious expectations to assess leadership programme effects on pupil
learning require highly sophisticated theoretical frameworks, frameworks that lie
outside the theoretical repertoire of those typically charged with programme evaluation
responsibilities. Finally, and closely related to this second reason, such highly
sophisticated frameworks (called indirect effects or mediated models) potentially
include all of the variables at the school and classroom level that are themselves the focus
of independent lines of active research with the usual debates and uncertainties about
their effects on pupil learning. Examples of such variables at the school level include
school culture, school improvement planning processes, and shared decision-making. At
the classroom level, such variables include, for example, classroom instructional
processes, opportunity to learn and class size.
In a recent analysis of leadership preparation programmes across the United States,
McCarthy (1999) concluded that "we do not actually know whether, or the extent to
which, such programmes actually achieve the goal of producing effective leaders who
create school environments that enhance pupil learning" (p. 133). This gap in our


knowledge is not because leadership preparation programmes are never evaluated; rather,
the vast majority of such evaluations do not provide the type and quality of evidence
required to confidently answer questions about their organizational or pupil effects. Most
evaluations are limited to assessing participants' satisfaction with their programmes and
sometimes their perception of how such programmes have contributed to participants'
work in schools (McCarthy, 2002).


A General Framework to Guide Research on Programme and Leader Effects on Pupil Learning
Most efforts to conceptualize the relationships among leadership programmes, leaders'
practices and pupil learning have assumed that the effects of leaders on pupil learning are
largely indirect. Based on this well-justified assumption, then, one of the primary
challenges for research on programme and leader effects is to locate the most defensible
set of variables mediating and moderating programme and leader effects. A second
significant challenge is to uncover the nature of the relationships among these variables
and between leaders and such variables.
Figure 1, reflecting these assumptions, is the general framework guiding our account of
how to better understand programme and leader effects. This figure indicates that
leadership practices (overt behaviours, or properties of the organization, aimed at
direction setting and influence) have direct effects on a potentially wide range of
variables; these variables stand between, or mediate, the effects of leadership, particularly
when those effects are conceptualized as pupil learning. Below we discuss some
defensible alternatives to pupil outcomes as potential dependent variables in leadership
effects research, but we also assume that in the current U.K. educational environment
pupil learning is likely to be at least among the dependent variables necessarily included
in DfES- or National College for School Leadership (NCSL)-endorsed studies of
educational leadership effects.
Figure 1 also includes a set of moderating variables. As we explain more fully below,
these are features of the organizational or wider context in which leaders work that
interact with the dependent and/or mediating variables. These interactions potentially
change the strength or nature of relationships between, for example, the independent and
mediating variables or the mediating and dependent variables. For example, if previous
evidence suggested that male and female teachers respond differently to the same set of
headteachers' leadership behaviours, then teachers' gender would be a promising
moderating variable to include in a study of headteacher effects.
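As a purely hypothetical illustration of how a moderator of this kind might be handled analytically, the sketch below fits a regression in which a teacher-gender indicator interacts with a headteacher leadership rating; a non-negligible interaction coefficient would signal that the leadership-outcome relationship differs by gender. All variable names, data and effect sizes are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Hypothetical teacher-level data: a rating of the headteacher's leadership,
# a teacher-gender indicator, and an outcome such as reported classroom practice.
df = pd.DataFrame({
    "leadership": rng.normal(0, 1, n),
    "female": rng.integers(0, 2, n),
})
# Simulate a moderated relationship: leadership matters more for one group.
df["outcome"] = (0.3 * df["leadership"]
                 + 0.2 * df["female"]
                 + 0.25 * df["leadership"] * df["female"]
                 + rng.normal(0, 1, n))

# The interaction term (leadership:female) carries the moderation.
model = smf.ols("outcome ~ leadership * female", data=df).fit()
print(model.summary().tables[1])
```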
Some of the antecedent variables in Figure 1 are internal to the leader, including, for
example, leaders' traits, values, cognitions, and emotions. There are external antecedents,
as well. These would include leadership programmes, of course, but also such variables
as LEA relationships, government educational policies, and leaders' family and
socialization experiences (Popper & Mayseless, 2002). The next several sections of the
text address issues in the conceptualization and measurement of each of the variables in
Figure 1.


[Figure 1 shows leadership practices (the independent variables), influenced by antecedents, acting on the dependent variables (short-term and long-term outcomes) through mediating variables (school conditions, class conditions, individual teachers, professional community), with moderating variables (e.g. family background, family culture, gender, formal education, reward structure) conditioning these relationships.]

Figure 1: A General Framework for Guiding Leadership Effects Research
Leadership Practices: The Independent Variables


Direction-setting and influence, as we argued above, constitute the generic meaning of
leadership. Both of these functions can be exercised in many different ways and more or
less successfully. In this section we illustrate the range of ways in which the successful or
effective exercise of leadership has been conceptualized both within and outside of the
education sector; these are alternative leadership models or theories. We also identify
some important challenges in the measurement of successful or effective leadership.
Leadership Models Developed in the Education Sector
In their extensive review of research, Leithwood and Duke (1999) developed a five-fold
classification of leadership models reflected in the educational literature. These are
summarized along with related theory and research published since 1999.
- Instructional leadership: focuses on the behaviours of teachers as they engage in activities directly affecting the learning of pupils. The more fully developed models in this category (e.g., Hallinger, 2003) also include attention to broader sets of organizational variables, such as school culture or climate, thought to influence teachers' classroom practices.

- Transformational leadership: focuses on the commitments and capacities of organizational members, as well as their willingness to engage in extra effort on behalf of their organizations. While the bulk of the evidence about this approach to leadership has been collected in non-school contexts (e.g., Avolio & Yammarino, 2002), educational researchers have recently begun to redress this imbalance (e.g., Nguni, 2004; Lunenburg, 2004).

- Moral leadership: is concerned with the ethics and values of those exercising leadership. Specifically, it aims to clarify the nature of the values used by leaders in their decision making and how conflicts among values are best adjudicated (Begley & Leonard, 1999; Begley & Johansson, 2003). A strand within this approach to leadership specifically aims to promote democratic values and the empowerment of a large proportion of organizational members (e.g. Starratt, 2003; Johansson, 2003).

- Participative leadership: shines a spotlight on group decision-making processes. Educational research inquiring about this approach builds on a strong foundation of research in other sectors dating back to seminal studies in the early 1930s (e.g., Mayo, 1933) about increases in organizational effectiveness associated with greater participation of employees in meaningful decisions about their work. The extensive body of research on teacher participation in decision making reasonably can be viewed as part of the body of evidence about this model of leadership (e.g., Conley, 1991). Rapidly growing literatures on both teacher leadership (York-Barr & Duke, 2004; Harris & Chapman, 2002) and distributed leadership (Gronn, 2002; Spillane, Halverson & Diamond, 2002) are the most recent evolutions of this approach.

- Managerial and strategic leadership: encompass a range of tasks or functions found in the classical management literature (reviewed in Rost, 1991), including tasks such as coordination, planning, monitoring and the distribution of resources. Educational literature from the United Kingdom reflects a far greater interest in this form of leadership than does the North American literature. Also addressed much more extensively in the UK than in the North American literature4 is the entrepreneurial, creative and change-oriented strategic leadership sometimes thought to be the exclusive purview of those occupying senior levels of the organizational hierarchy (Yukl & Lepsinger, 2004).

- Contingent leadership: emphasizes the need for leaders to be responsive to the unique demands of their organizations and the contexts in which those organizations function. While this approach is quite mature in both education and non-education sectors (e.g., Blake and Mouton, 1964), its original conception was limited to a very small number of dimensions along which leadership styles could vary in response to context (primarily the initiation of structure and demonstrations of consideration for employees). Current leadership research continues to call for more sensitivity to the context in which leaders work and greater flexibility on the part of leaders across a much larger number of dimensions (Yukl & Lepsinger, 2004).

Leadership Models Developed in Other Sectors


With the exception of instructional leadership, all of the approaches to leadership
explored by the educational research community are also active areas of research in other
sectors. But academic leadership research5 in these other sectors reflects an additional
range of approaches or models. The classification of this research offered by Dansereau,
Yammarino & Markham (1995) illustrates this range. We provide only a cursory
description of their categories here, our aim being to simply alert readers to this larger
field of work.
A total of 13 approaches to leadership appear in the classification system of Dansereau
and his colleagues and these are nested within four superordinate categories:

- Classical approaches: including several contingency-oriented, as well as participative, approaches to leadership. This category assumes that leaders can and ought to change their styles over time in response to the circumstances in which they find themselves.

- Contemporary approaches: with a more explicit focus on both "the leaders and the development of their followers" (p. 254), this category includes charismatic and transformational approaches to leadership. It also includes Leader-Member Exchange theory, which argues that leaders have unique relationships with individual organizational members depending on such factors as trust, perceived competence and the like.6

- Alternative approaches: expand the focus of attention beyond either the individual leader or followers to relationships and interactions. Included in this superordinate category are information processing, substitutes for leadership, and romance of leadership models.

- New wave approaches: this category includes a quite eclectic set of leadership models held together as much by their recent emergence as by anything more conceptually coherent. Self leadership, a multiple linkages model, multi-level theory and individualized approaches to leadership are included.

4 See, for example, the special issue of School Leadership and Management (2004, vol. 24, no. 1) edited by Brent Davies.
5 We use the term academic leadership research in reference to systematic, theoretically informed, empirical inquiry about leadership - as distinct from the highly popular genre of leadership literature which is autobiographical, anecdotal and/or exclusively case based.
6 For one of the few studies of this model of leadership in a school context, see Devereaux's (2004) recent dissertation.

Measuring Leadership Practices


Some of the education sector leadership models have been specified in detail and tested
with instruments which are quite well developed. This is the case, for example, with
Hallinger's instructional leadership model (see Hallinger & Murphy, 1985), Leithwood's
transformational school leadership model (Leithwood & Jantzi, 2000) and Marks and
Printy's (2003) synthesis of both of these forms of leadership. Several versions of Bass's
(1985) Multifactor Leadership Questionnaire have been used extensively to study
transformational leadership effects, primarily in non-school organizations but in schools
and districts as well (e.g., Nguni, 2004). Many other instruments are available for
measuring leadership, especially for doing so in non-education organizations (Clark &
Clark, 1990).
Without diminishing their many other contributions, one limitation of some of the most
widely cited reviews of leadership effects on pupil learning is that they confound
estimates of such effects by failing to distinguish among alternative approaches to, or
models of, leadership (e.g., Hallinger & Heck, 1996b). Other reviews seem to infer
comprehensive assessments of leadership effects while actually limiting themselves to the
behaviours associated with a particular model of leadership (e.g., Witziers, Bosker &
Krüger, 2003). Furthermore, some original studies of leadership effects use secondary
data sources originally created for other purposes, typically estimating the effects of
potentially incoherent or incomplete models of leadership.
Virtually all large-scale quantitative leadership effect studies in education restrict their
attention to only part of what it is that leaders do. Such studies are usually guided by a
leadership model which is intentionally reductionist in nature. The aim of this work is to
assess the explanatory power of a particular set of leadership behaviours, which are
associated with the chosen model. But, of course, real leaders do much more in their
schools than provide, for example, instructional or transformational leadership. On
occasion, almost all leaders demonstrate the use of practices associated with strategic
leadership, moral leadership and the like. Such outside-the-model practices are often
captured in the much wider net of qualitative case study research and this is one of the
reasons for the discrepancy between the size of leadership effects reported in qualitative
and quantitative studies.
A review of research (Bell, Bolam & Cubillo, 2003) prepared for the British Evidence-based
Policy and Practice Centre (EPPI) started with more than 4000 potential
references but found that only eight met the criteria of a well-specified leadership model
and empirical evidence on pupil outcomes. In addition, many leadership programme
evaluations neither specify nor measure the leadership practices which they aim to
improve, electing instead for more global measures of participant satisfaction with the
contribution of the programme to participants' personal and implicit leadership efforts or
espoused leadership theories.
These shortcomings in the actual measurement of leadership point to the importance of
clearly specifying those leadership practices which are hypothesized to affect pupil
outcomes. Failure to do this arises from both practical and conceptual sources.
Practically, available resources will often press researchers and evaluators to rely on
existing evidence, evidence that is an imperfect match for their purposes. Conceptually, a
major source of the problem is lack of agreement about the definition of leadership, as we
discussed earlier.

These challenges notwithstanding, subsequent improvements in our understanding of
leadership effects on pupils depend, to a significant degree, on the use of reliable
leadership measures explicitly based on clearly defined and conceptually coherent images
of desirable leadership practices.
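One routine check on the reliability called for above is an internal-consistency estimate for a multi-item leadership scale. The sketch below computes Cronbach's alpha from a small matrix of hypothetical teacher ratings; the function and data are our own illustration and are not drawn from any of the instruments named in this section.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) array of scale ratings."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings: six teachers rating their head on four leadership items (1-5 scale).
ratings = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(ratings), 2))
```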


Leadership Effects: The Dependent Variables


The Big Picture
David Day's (2001) views on the assessment of leadership outcomes in the business
sector are useful stimulants for thinking more comprehensively about the outcomes of
educational leadership. He identifies three levels of leadership outcomes, which he
terms financial performance, operational performance, and organizational effectiveness.
Financial performance is the first and narrowest level, the equivalent in the education
sector to pupil test results.
The next broadest level is operational performance. Subsuming financial performance,
operational performance in the business sector is a function of, for example, market
share, product quality, and measures of technological efficiency. Research on educational
leadership has typically conceptualized outcomes of this sort as mediating variables
(discussed more fully below) or mechanisms through which the influence of leaders
eventually impacts on pupils.
Finally, at what Day terms "the broadest and most subsuming level" (2001, p. 388) is
organizational effectiveness, the primary indicator being the health or survival of the
organization. In Day's words, "For those leaders at the highest organizational levels in
which a system perspective is imperative for successful performance [survival] may be
primarily a function of the organization's identity, image, and reputation" (2001, p. 389).
In the remainder of this section, we describe the educational equivalent of Day's first and
third levels of leadership outcomes, leaving our treatment of the second level (mediating
variables) to the next section.
Level One Effects in Education: Pupil Learning
In this section we consider some of the major challenges associated with determining and
measuring pupil outcomes in inquiries about leadership effects. Academic achievement,
as it is typically measured, is just one of several indicators of pupil outcomes. Others of a
more long-term nature include graduation rates, drop out rates, and engagement in
school, for example. These two sets of outcomes are quite different. Achievement
measures reflect pupils' skills and knowledge in a specific curriculum domain. Secondary
school graduation rates, however, reflect not only specific curricular goals but also course
selection decisions, course load, exam difficulty and the like.
Achievement test scores are necessarily informed by pupils' entire previous school
careers, as well as their personal lives. Indeed, there is a strong case to be made that the
important outcomes of education are, in fact, the broader and longer-term measures such
as participation in further or higher education, employment, and other measures of social
participation. Many people care far more about these kinds of outcomes than they do
about, for example, science test scores at age 15. Moreover, broader measures tend to
present fewer data problems.


As these arguments make clear, our current preoccupation with pupil test scores, as the
dependent measure of choice in inquiries about leadership effects, is open to serious
challenge. That said, the preference for assessing leadership effects on pupil test scores is
not likely to go away anytime soon. In fact, Hallinger and Heck (1996a) decried the
extent of use of such measures in studies of leadership a decade ago. But the press to use
them has grown rather than diminished in the interim. So what are the challenges
associated with this measure of pupil outcomes?
While purpose-built achievement measures could be used by researchers and evaluators
(although they would have their own limitations), in practice, both levels of funding and
national, state or LEA policies mean that most research studies and programme
evaluations end up using existing measures. These measures are typically part of national,
state or LEA pupil testing programmes which have three well-known limitations as
estimates of leadership effects: a narrow focus, questionable or unknown reliability, and
the questionable accuracy with which they are able to estimate change over time. We
have encountered a handful of less pervasive, practical limitations in some of our own
recent work which we also identify in this section.
Narrow focus. First, most large-scale testing programmes confine their focus to
maths and language achievement with occasional forays into science. Only in relatively
rare cases (e.g. Kentucky) have efforts been made to test pupils in most areas of the
curriculum, not to mention cross-curricular areas such as problem-solving or teamwork.
Technical measurement challenges, lack of resources and concerns about the amount of
time for testing explain this typically narrow focus of large-scale testing programmes.
But this means that evidence of leaders' effects on pupil achievement using these sources
is evidence of effects on pupils' literacy and numeracy.

Because improving literacy and numeracy are such pervasive priorities in so many
schools at the moment, this is a limitation that will not concern many researchers and
evaluators. There is evidence, however, that leadership effects are of a different
magnitude for even these two areas of achievement. The lesson for researchers and
programme evaluators is that the size and significance of leadership effects on other areas
of achievement cannot be assumed or extrapolated.
Reliability. Lack of reliability at the school level is a second limitation of many
large-scale testing programmes. Most of these programmes are designed to provide
reliable results only for large groups of pupils. So results aggregated to national, state or
LEA levels are likely to be reliable. But as the number of pupils diminishes, as in the case
of a single school or even a small district or region, few testing systems claim to even
know how reliable their results are (e.g., Wolfe, Childs & Elgie, 2004). The likelihood,
however, is that they are not very reliable, thereby challenging the accuracy of judgments
about leadership effects. Researchers and programme evaluators would do well to limit
analysis of achievement to data aggregated above the level of the individual school or
leader.
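A simple way to see why single-school results are so much less dependable than aggregated results is to compare the standard error of a mean score at different pupil counts. The sketch below does this under a simple random-sampling view with an assumed pupil-level standard deviation; the figures are illustrative only.

```python
import math

def standard_error_of_mean(pupil_sd, n_pupils):
    """Standard error of a mean score based on n pupils (simple random-sampling view)."""
    return pupil_sd / math.sqrt(n_pupils)

pupil_sd = 15.0  # assumed pupil-level standard deviation on some test scale
for n in (30, 120, 10_000):  # a single year group, a small school, an LEA
    print(n, round(standard_error_of_mean(pupil_sd, n), 2))
# A 30-pupil cohort mean moves by a couple of points from sampling noise alone,
# while the LEA-level mean is stable to a fraction of a point.
```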
Estimating change. Conceptually speaking, monitoring the extent to which a school
improves the achievement of its pupils over time is a much better reflection of a school's
(and leader's) effectiveness than is its annual mean achievement scores. Technically
speaking, however, arriving at a defensible estimate of such change is difficult. Simply
attributing the difference between the mean achievement scores of this year's and last
year's Key Stage Two pupils on the country's literacy test to changes in a school's
(and/or leader's) effectiveness overlooks a host of other possible explanations:

- Cohort differences: this year's pupils may have been significantly more or less advanced in their literacy capacities when they entered the cohort. Such cohort differences are quite common, as any teacher will attest;

- Test differences: while most large-scale assessment programmes take pains to ensure equivalency of test difficulty from year to year, this is an imperfect process and there are often subtle and not-so-subtle adjustments in the tests that can amount to unanticipated but significant differences in scores;

- Test conditions differences: teachers are almost always in charge of administering the tests and their class's results on last year's tests may well influence the nature of how they administer this year's test (more or less leniently), even within the guidelines offered by the testing agency;

- External environment differences: perhaps the weather this winter was more severe than last winter and pupils ended up with six more snow days (six fewer days of instruction), or a teacher left half way through the year, or was sick for a significant time;

- Regression to the mean: this is a term used by statisticians to capture the highly predictable tendency for extreme scores on one test administration to change in the direction of the mean performance on a second administration. So schools scoring either very low or very high one year can be expected to score less extremely the second year, quite aside from anything else that might be different.
Linn (2003) has demonstrated that these challenges to change scores become less severe
as change is traced over three or four years. It is the conclusions drawn from simply
comparing this year's and last year's scores that are especially open to misinterpretation.
Unfortunately, it is the year-over-year comparisons that are most commonly made by
those who report achievement results. The lesson here for researchers and programme
evaluators is to use, as measures of their dependent variable, changes in pupil
achievement over relatively long periods of time (three or more years).
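A minimal sketch of that lesson, using invented school mean scores: rather than differencing two adjacent years, fit a trend across several years of results.

```python
import numpy as np

# Hypothetical mean literacy scores for one school over four years.
years = np.array([2001, 2002, 2003, 2004])
school_means = np.array([61.0, 58.5, 63.0, 65.5])

# Naive year-over-year change (vulnerable to cohort, test and one-off differences):
print("one-year change:", school_means[-1] - school_means[-2])

# Trend over the full period: slope of a least-squares line, in points per year.
slope, intercept = np.polyfit(years, school_means, deg=1)
print("four-year trend:", round(slope, 2), "points per year")
```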
Two further limitations. While the three limitations of national or state
achievement data reviewed above, which challenge researchers and evaluators, have
attracted considerable recent attention, we encountered two others in the course of
conducting a recent leadership programme evaluation (Leithwood et al., 2003) using U.S.
state (Louisiana) achievement data as the measure of programme and leadership effects.

One of these limitations was changing measures over time. Ours was a five-year
longitudinal evaluation which was complicated by changes in the state's achievement
measures. Given the frequency of policy shifts in pupil assessment practices in many
jurisdictions, it may become impossible to maintain a consistent set of data over several
years. Not only can the tests change, but scoring rubrics and cut-offs may also be
modified.


Missing or incorrect data may also be a problem for researchers and evaluators. Indeed,
school records of achievement data, should they be used, are quite likely to be inadequate
for research purposes. Data may be missing, or coded incorrectly, or simply misplaced
over the years. In our own recent evaluation, achievement data for a few schools were
available for one year but not subsequent years, even though the grades to which the tests
were administered were included in the schools. This reduced the number of schools
available for comparison across years. A few schools changed the grade levels tested,
further complicating the comparability issue through the lack of data for the same grade(s)
each year.
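One pragmatic response to such gaps, sketched below on an invented data frame, is to restrict cross-year comparisons to schools with results in every year of interest; the column names are assumptions, not those of any actual dataset.

```python
import pandas as pd

# Hypothetical long-format achievement records: one row per school per year.
records = pd.DataFrame({
    "school": ["A", "A", "A", "B", "B", "C", "C", "C"],
    "year": [2002, 2003, 2004, 2002, 2004, 2002, 2003, 2004],
    "mean_score": [58.0, 60.5, 62.0, 55.0, 57.5, 64.0, 63.0, 66.5],
})

required_years = {2002, 2003, 2004}
has_all_years = (records.groupby("school")["year"]
                 .apply(lambda y: required_years.issubset(set(y))))
panel = records[records["school"].isin(has_all_years[has_all_years].index)]
print(panel)  # school B drops out: it is missing 2003
```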
Level Three Effects in Education: Toward Organizational Survival
Level three effects are the long-term outcomes of schooling. Such effects reflect, fairly
closely, a number of the important reasons for society's heavy investment in public
education. These reasons, as we have argued elsewhere (Leithwood, Aitken & Jantzi,
2002), concern both the individual welfare of students and the general public good.
Public investment in primary and secondary schooling is typically justified by the
assumption that it prepares people for life after secondary school: employment,
tertiary education and commitment to learning over the life span. So two important
effects suggested by this justification are students' commitment to further education
and preparation for work.
The public's investment in schooling is also justified by the contributions to communities
that accrue from a well educated population. These aggregate rather than individual
effects include, for example, contributions to the community's economic productivity.
Contributions of this sort might be gleaned from employers' perceptions of how well
graduates are prepared for the workplace and from claims on the part of employers that
their attraction to the community and their decisions to stay can be at least partly
attributed to the quality of its schools.
Economic productivity is not the only indicator of schools' contributions to their
communities. Such contributions may also take the form of:

- sharing physical facilities with non-school community organizations;

- sponsoring non-credit courses for adults;

- participation in community projects; and

- contributing time and skill to the political life of the community.


Attitudes toward public education that accrue through the long-term effects described
here go some distance toward shaping public support for educational investments and,
through such support, influence the very survival of schooling as an institution, the
ultimate effect or definition of organizational effectiveness in Day's (2001) treatment
of leadership effects.

The demise, or at least the serious decline, of public schooling was not viewed by
many as a serious possibility even a short decade ago, and so survival of the institution
would not have been considered, at that time, a relevant long-term conception of a
leadership effect. But that was before the complete capture of the public sector by an
ideology and ethic of accountability, one which has proven extraordinarily hostile to the
monopolistic provision of public services. By now, competition from private education
alternatives is quite common and public schools on both sides of the Atlantic failing to
meet their performance targets are sometimes closed and reconstituted, after being
placed in special measures, often with new leadership and staff.
Less obvious but perhaps even more pervasive is the growth of private providers of a
wide range of services to public schools and LEAs, services formerly provided within the
public school system itself. This trend is viewed by its advocates as a highly desirable
development within the public school sector allowing it to focus on its core competences
and mission. But it could as easily be viewed as "death by a thousand cuts". Wherever
this trend takes us in the future, there can be little doubt that leaders in public education
systems are faced with a challenge to the survival of their systems. So institutional
survival is a rational long term criterion by which to judge their effectiveness.


Mediating Variables for Leadership Effects Research


The next three sections concern leadership mediators, moderators and antecedents (see
Figure 1). Evidence used in these sections largely came from three sources: a recent
review of research undertaken for another project; U.K. evidence reported in the ERIC
data base and two U.K. leadership journals; and evidence reported in a prestigious
leadership journal reporting research carried out primarily in non-education contexts.7
What Are They?
It is commonly claimed that the effects of leadership on pupil learning are largely indirect,
and Figure 1 reflects this claim. This is most obviously the case for leadership outside the
classroom, such as the leadership provided by headteachers. In order for these
sources of leadership to affect pupil learning, they must exercise some form of positive
influence on the work of colleagues such as teachers, as well as on the status of key
conditions or characteristics of the organization (school culture, for example) that have a
direct influence on pupils. These people and conditions are the mediating variables in
Figure 1. Leaders potentially have a direct relationship with, or influence on, these variables
and these variables, in turn, have a direct influence on pupil learning. As Baron and
Kenny explain, mediating variables:
represent the generative mechanisms through which the focal
independent variable [e.g., leadership] is able to influence the dependent
variable of interest [e.g., pupil learning] (1986, p. 1173).
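To make the distinction concrete, the sketch below (ours, not drawn from any study cited in this paper) shows how an indirect, mediated effect of the kind Baron and Kenny describe might be estimated with two ordinary least squares regressions in Python; the data are simulated and the variable names (leadership, teacher_commitment, pupil_learning) are purely illustrative.

```python
# Illustrative sketch only: simulated data, hypothetical variable names.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
leadership = rng.normal(size=n)                                   # independent variable
teacher_commitment = 0.5 * leadership + rng.normal(size=n)        # mediating variable
pupil_learning = 0.4 * teacher_commitment + rng.normal(size=n)    # dependent variable

# Path a: effect of leadership on the mediator.
a = sm.OLS(teacher_commitment, sm.add_constant(leadership)).fit().params[1]

# Path b and direct effect c': regress the outcome on mediator and leadership together.
X = sm.add_constant(np.column_stack([teacher_commitment, leadership]))
res = sm.OLS(pupil_learning, X).fit()
b, c_prime = res.params[1], res.params[2]

print(f"indirect (mediated) effect a*b = {a * b:.2f}")   # about 0.20 with these simulated data
print(f"direct effect c'              = {c_prime:.2f}")  # about 0.00: leadership works through the mediator
```

In a design of the sort sketched in Figure 1, the product of the two paths (a*b) is the indirect effect of leadership operating through the mediator, while c' is whatever direct effect remains.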
Examples of Mediating Variables
As we mentioned in the Introduction, one of our own recent reviews of literature
(Leithwood, Louis, Anderson & Wahlstrom, 2004) summarized an extensive body of
empirical evidence collected largely in North America about promising mediating
variables for school leadership effects studies. Using results of this review as a starting
point for our present work, we then supplemented the results with evidence from the U.K.
sources described above.

7 First, we re-examined evidence summarized in an extensive review of literature carried out for another
major research project (Leithwood, Seashore Louis, Anderson & Wahlstrom, 2004). The recentness and
comprehensiveness of this review allowed us to assume that it adequately represented the current state of
knowledge from North American research sources.
Second, in order to reflect information reported in the U.K. context about variables both mediating and
moderating successful leadership, we conducted a search of relevant empirical evidence - both qualitative
and quantitative - published over the past 12 years using the international ERIC data base (1976-2004) for
U.K. sources only, Education-line (an exclusively British data base) and the Web of Science (U.K. sources
only). We also reviewed all empirical studies conducted in the U.K. relevant to leadership effects, very
broadly defined, published in Educational Management, Administration and Leadership between 2000 and
2004.
Finally, to augment our search for variables which moderate leadership effects, we analyzed empirical
studies reported between 2000 and 2004 in The Leadership Quarterly, one of the most widely respected
outlets for research on leadership conducted in non-school organizations. While limited to only one journal,
we assumed that this source would include most, if not all, of the relevant variables that would have
surfaced had an expanded number of comparable journals been included in our analysis. This assumption
was confirmed by scanning chapters in a small number of quite recent edited texts in this field.
Table 1 lists the mediating variables found in these studies as a whole. It should be
stressed that these variables were not always conceptualized by the authors of the
U.K. studies as we have conceptualized them for this paper. One of the reasons for this is
that by far the majority of the U.K. studies we were able to locate were qualitative in
nature and carried out using case study methodologies. Research of this sort is not often
designed to explore causal relationships in as direct a manner as the overall framework
for this paper indicates was our intention. Nonetheless, the decision to include a variable
in Table 1 was based on evidence either that leadership had influenced it or that it had
influenced pupil learning. We found no studies that had systematically tested both sets of
relationships. Indeed, we found no U.K. leadership effects research explicitly guided by a
framework approximating Figure 1.
The left-hand column of Table 1 names each of the mediating variables (four categories)
and the middle column cites published evidence about each mediator collected in U.K.
contexts. We provide here a brief explanation of what is included in each category of
mediating variables.
The School Conditions category of mediating variables in Table 1 includes:
- school structures: this variable includes school size and reflects evidence about the value of small structures in building a sense of identity with the school and increased chances for meaningful relationships between teachers and pupils. Also included is the extent to which governance is centralized or decentralized and the degree to which there are opportunities for staff to participate in school-level decision making;
- school culture: this mediator encompasses evidence about the impact on students of a school-wide sense of community, as well as initiatives (such as anti-racist policies) aimed at ensuring practices within the community that are equitable for all students;
- school philosophy or ethos: something close to the vision held by professionals in the school for what it is they aspire to for both students and the nature of their organization;
- organizational learning processes: these are largely self-reflective and critical reflective processes on the part of teaching staff in the literature in which they appeared;
- school improvement processes: how school professionals go about trying to accomplish their shared goals for the school;
- community culture/ownership: the extent to which the community surrounding the school identifies with and feels some ownership in the school;
- relationships with families: the extent to which parents are considered partners with schools in the education of pupils;
- instructional policies and practices: concerned directly with the school's core technology, this variable encompasses school-wide policies shaping decisions about student retention and promotion, the coherence of instructional programmes, and the nature and availability of extra-curricular programmes;
- human resources: how teacher time is allocated, as well as a wide range of working conditions that support the instructional work of teachers, are part of this variable;
- extra-curricular activities: the types of activities in which students engage outside of classrooms but within the sponsorship of the school; and
- teacher recruitment and retention practices.

Classroom Conditions, as Table 1 indicates, encompasses six specific mediators:
- class size: suited especially to leadership studies at the primary level, this variable reflects evidence of the effects on learning in the primary grades of small classes (approximating 15 students) when teachers adapt their instruction to take advantage of smaller classes;
- teaching loads: suited especially to leadership studies in secondary schools, this variable reflects evidence of a significant relationship between pupil achievement and reduced numbers of pupils for whom a teacher is responsible;
- teaching in areas of formal preparation: also especially promising for secondary school leadership studies, this variable assumes that pupil learning is significantly associated with teachers' subject matter knowledge;
- homework: although difficult to implement in a widely acceptable manner, this variable is built on considerable evidence of the positive effects of homework on learning, especially in combination with coordinated parent involvement;
- student grouping: a long-standing and extensive line of research provides the foundation for this variable; this evidence clearly favours heterogeneous over homogeneous grouping strategies; and
- curriculum and instruction: this variable reflects the impact, on the learning of most students, of a rich curriculum, one that is organized around powerful ideas, encourages deep understanding and builds the meta-cognitive skills pupils need to become autonomous learners.
There are two categories of mediating variables in Table 1 directly bearing on teachers.
One concerns teachers as individuals, the other concerns teachers working together as
part of a professional community. Mediating variables concerned with the individual
teacher include:
- basic skills: especially important to measure are teachers' literacy skills in primary schools;
- subject matter content knowledge: especially useful to include in studies of secondary school leadership effects;
- pedagogical content knowledge and skill: this is knowledge and skill about how to teach particular ideas or concepts;
- classroom experience: there is evidence underpinning this mediator of a curvilinear relationship between teachers' length of experience and the effectiveness of their teaching. Experience is associated with increased effectiveness up to about five years. Effectiveness often levels off after this length of experience and may drop in the middle or later stages of a career, depending on many individual differences and circumstances; and
- motivation: the nature and extent of commitment staff have toward pupils and their school.


Specific mediating variables concerning teachers' professional community include:
- shared norms and values among teachers concerning student and teaching standards;
- a focus on pupil learning as the criterion for virtually all teacher decisions;
- deprivatized practice: an openness to colleagues viewing one's teaching and a willingness to learn from their feedback;
- reflective dialogue: placing a high priority on inquiry and collaborative study of one's own and colleagues' practices;
- collective professional development: the nature and extent to which professional development is pursued in collaboration by teachers; and
- distribution of leadership: the nature and extent to which leadership in the school is shared.


Table 1: A Summary of U.K. Evidence about Variables which Mediate Leaders' Effects on Pupil Learning

Mediating Variables
- School Conditions: structure; school culture (e.g., shared norms and values, shared decision making, collaboration); school philosophy or ethos; organizational learning processes; school improvement processes; community culture/ownership; relationships with families; instructional policies/practices (e.g., performance targets); resources (human, financial); extra-curricular activities; teacher recruitment and retention
- Classroom Conditions: class size; teaching loads; formal preparation; homework; curriculum and instruction
- Teachers (Individual): basic skills; content knowledge; pedagogical content knowledge; experience; professional development; motivation
- Teachers (Collective): shared norms; focus on students; deprivatized practice; reflective dialogue; collective professional development; distribution of leadership

U.K. Research: Barker (2001); Nicolaidou, Ainscow (2002); Colwell, Hammersley-Fletcher (2004); Morris (2000); Harris et al (2003); Devos, Verhoeven (2003); Rutherford (2002); Wilson, McPake (2000); Jones (2003); Caddell (1996); James, Colebourne (2004); Jennings, Lomas (2003); Wilkins (2002); Milgram (2003); Farrell, Morris (2004); Poulson (2001); Penny (2003); Durrant (2003); Wallace (2001); Adey (2000)


We do not claim to have uncovered all potential mediators or all of the relevant evidence
about those we have identified. But we do believe that most mediators conceptually
suited for leadership effects studies are evident in Table 1 and there is sufficient empirical
evidence about the influence of each on pupil learning to warrant their consideration in
the design of leadership effects research.


Variables Moderating Leadership Effects


What Are They?
The direct effects of leadership on mediating variables, as well as the
indirect effects of leadership on pupils, are depressed, neutralized, or
enhanced by key features of the situation or context in which leadership is
exercised. For example, the same leadership practice may have quite
different effects on teachers depending on their gender, age, amount of
experience or levels of stress; so these are potential moderating variables.
As Baron and Kenny explain:
Moderator variables are typically introduced into a study when a
relation holds in one setting but not another, or for one subpopulation but
not another (1986, p. 1173).

Since moderating variables help explain when, and for whom, certain effects will hold, the careful
selection of moderating variables is a key step in designing leadership effects research
and one that has been badly neglected in educational leadership research to date. The
consequences of this neglect can be easily illustrated. Suppose, for example, that we
design a study to assess the effects of leaders' goal-setting practices on teachers'
organizational commitment. Reflecting evidence from previous research, we might
decide that teachers' sense of professional self-efficacy is a key mediator of the effects of
goal-setting practices on such commitment. So we measure it. But suppose goal-setting
practices only influence teachers' sense of efficacy in a positive direction when trust in
leaders is high. Unless we also measure trust in leaders, it is likely that at least some
subsequent studies will disconfirm our results. They will do so because the different
teacher samples in those subsequent studies will vary in unknown ways with respect to
their levels of trust in leaders.
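In statistical terms, a moderator of this kind is usually represented as an interaction term. The following sketch (again ours, with simulated data and hypothetical variable names rather than measures from any study cited here) illustrates how trust in leaders might be modelled as moderating the effect of goal-setting practices on teachers' sense of efficacy.

```python
# Illustrative sketch only: simulated data, hypothetical variable names.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 800
goal_setting = rng.normal(size=n)       # leadership practice of interest
trust = rng.normal(size=n)              # candidate moderating variable
# Simulate a moderated relationship: goal setting helps only when trust is high.
efficacy = 0.3 * goal_setting * trust + rng.normal(size=n)

X = sm.add_constant(np.column_stack([goal_setting, trust, goal_setting * trust]))
res = sm.OLS(efficacy, X).fit()
print(dict(zip(["const", "goal_setting", "trust", "goal_setting_x_trust"], res.params)))
# The coefficient on the interaction term (about 0.3) carries the moderation.
```

A study that omitted the trust measure would estimate only the average effect of goal setting, which in this simulation is close to zero even though the practice matters a great deal for high-trust teachers.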
As this example makes clear, inadequate attention to moderating variables is one of the
more plausible explanations for contradictory research findings and the general
scepticism that often follows about the potential of research to provide clear guidance for
policy and practice. This charge, it should be noted, is in no way unique to large-scale
quantitative research, which at least has a tradition of worrying about such matters. Case
study (qualitative) research is especially well suited to the task of surfacing promising
moderators for subsequent consideration.8
A final point of clarification about moderators concerns the basis on which a variable is
assigned moderator status. Indeed, the same variable might be assigned moderator,
mediator or dependent variable status depending entirely on the theory or framework
used to guide a leadership effects study. For example, employee trust was used above
as an example of a moderating variable in research concerned with the effects of
leadership on teachers' efficacy. But trust is viewed as a dependent measure in leadership
studies when researchers are curious about the forms of leader behaviours which promote
its development (e.g., Kouzes & Posner, 1995). Trust is also conceived of as a mediating
variable in studies, for example, concerned with the effects of leader behaviours on
employees' acceptance of decisions (Tyler & Degoey, 1996). The theory-driven nature
of moderator designation means that the examples of moderating variables outlined next
in this section should be understood as an illustration of what is of theoretical interest to
leadership researchers, not as a category of unique variables.

8 Qualitative research should not be considered exempt from the obligation of providing evidence
about moderating variables, even though some of its advocates eschew even the idea of causal
relationships and the concepts of dependent, independent and mediating variables.
Examples of Moderating Variables
The most frequently used moderating variables in studies of leadership effects on pupils
are pupil and family background characteristics. Indicators of wealth (e.g., pupil
eligibility for free or reduced lunch at schools) and/or socio-economic status (e.g.,
parental occupation) and/or minority status are typically used to represent these variables.
But other moderating variables are evident in recent educational leadership effects
research. For example, Leithwood and his colleagues used family educational culture
as a moderating variable on the grounds that it is the feature of pupil and family
backgrounds that most directly influences the learning of pupils (Leithwood, Jantzi &
Steinbach, 1999; Leithwood & Jantzi, 1999). Marks and Printy (2003) incorporated into
their study, as moderators, classroom compositional variables related to pupil gender and
ethnicity.
Another possible moderator is school stakeholders, including, for example, unions,
community and business groups, and the media. Parents might also be included here,
depending on the role they assume. A recent review of evidence about stakeholders
suggested that:
. . . many stakeholder groups have a direct or indirect interest in schools
and school leadership, and commission reports on the state of education
have lamented the lack of involvement of stakeholders in decisions that
affect them . . . There is, however, little research on how these groups
affect the work of superintendents and principals. Certain themes are
evident in the practitioner literature, most of which look at the ways in
which stakeholders block or impede the work of school leaders, or point to
ways in which their volunteer energy can be corralled to improve the work
of schools (Leithwood et al., 2004, p. 23).
While evidence about other stakeholder effects is quite limited, we believe that closer
attention to this moderator is warranted in future efforts to assess leaders' impact on
pupils. The external accountability requirements that are part of the current policy context
for most educators seem likely to have made them especially sensitive to the views of
such stakeholders. These views seem likely to influence, for example, the conditions
teachers create in their schools and classrooms and the resources that are available to
them, quite apart from the initiatives of those in leadership roles.


To extend our illustrative menu of possible moderators for future research beyond those
typically found in our original North American sources, we reviewed the same published
U.K. sources used to identify mediating variables. In addition, we analyzed all issues of
the 2000 to 2004 volumes of The Leadership Quarterly; this journal reports research
conducted mostly in non-school contexts.
As Table 2 indicates, our review of empirical studies carried out in the U.K. and of studies
published in The Leadership Quarterly identified five categories of moderators (twenty-one
specific moderators) that may warrant additional attention in future studies of
educational leadership effects on pupils. The first category of moderators, Pupils,
includes a range of features associated with pupils' family background, family
educational culture and pupils' gender and ethnicity. The second category, Teachers9,
includes gender, formal education and tenure (age and experience have been included
here). Also part of this category are teachers' ethnicity, their beliefs and values, morale,
the trust and confidence they have in their leaders, and their leadership prototypes.10 Two
characteristics of Leaders themselves are identified in Table 2: their gender and their
level in the organizational hierarchy.
Table 2 identifies five features of the Organization which may moderate responses to
leadership practices, including school size, what it is people are rewarded for, and
opportunities for job enrichment. The difficulty of the tasks people are expected to
accomplish, the interpersonal dynamics among people in the organization, and the
availability and use of information in decision making are also identified as moderators in
this category. Finally, in the category Organizational Context, the nature of other
stakeholders in the school or district and their relationships with the school, as well as the
policy environment in which the organization finds itself, moderate the amount and
nature of leaders' influence.

9 We exclude from this category teachers in leadership roles. In the non-school leadership research
literature, the equivalent category label would be employees or followers.
10 The citation in Table 2 to Lord and Maher is not found in the issues of The Leadership Quarterly that we
reviewed. But leadership prototypes have gained extensive attention in other publications and should not
go unnoticed, in our view. Leadership prototypes are the mental models people have developed for their
meaning of leadership. Such prototypes serve as the source of criteria used by people in judging whether or
not someone is exercising leadership and the extent to which that leadership is desirable.


Table 2: A Summary of U.K. and The Leadership Quarterly Evidence About Variables Which Moderate Leaders' Effects

Moderating Variables
- Pupils: background, mobility, social identity, class; family educational culture; gender; ethnicity
- Teachers (or followers): gender; formal education; tenure/age/experience; ethnicity; beliefs and values; morale; trust/confidence in the leader; leadership prototypes
- Leaders: gender; hierarchical level
- Organization: size; reward structure; job enrichment opportunities; goal or task difficulty; interpersonal dynamics/norms; availability/use of information technology
- Organizational context: stakeholders; policy environment

U.K. Research: Mac an Ghaill (199); Jones (2003); Strand (2000); Smith, Hardman, Mroz (1999); Harris et al (2003); Caddell (1996); Gillborn (1997); Graham, Robinson (2004); Moyo-Robbins (2002); McLay (2003); Oduro (2004); Day, Hadfield, Harris (1999); Johnston (1997); Pepin (2000); Olson, Davidson (2003); Selwyn (2001); Radnor, Ball, Vincent (2002); Leithwood et al (2004)

The Leadership Quarterly: Lord, Maher (1993); Neubert, Taggar (2004); Vecchio, Boatwright (2002); Antonakis, Avolio & Sivasubramaniam (2003), Studies 1 and 2; Koene et al (2002); Kahai, Sosik & Avolio (2003); Whittington, Goodwin & Murray (2004)

The Antecedents of Leaders' Practices


What Are They?
Antecedents may be internal or external to the leader and these two sets of antecedents
are interdependent. By this we mean that the extent and nature of influence of an external
antecedent depends on what sense is made of it internally and the importance the leader
attaches to it as a stimulus for their own behaviour. The influence of external antecedents,
in other words, is constructed from the internal cognitive and emotional resources of
the individual leader. Put simply, what leaders do depends on what they think and how
they feel.
The dotted line joining antecedent and moderating variables in Figure 1 acknowledges
that one study's antecedents may be another study's moderators; these may be
theoretically defensible differences. Policy context is an example of a variable that
might be either an antecedent or a moderator. If the research question is "What is the
impact of accountable policy contexts on the frequency with which principals display
transformational leadership behaviours?", then policy context is an antecedent. But if the
question is "To what extent do accountable policy contexts enhance or depress the impact
of transformational leadership behaviours on the development of collaborative school
cultures?", then policy context is a moderator.
The theory-driven nature of how variables are classified is a point worth a bit more
attention here. The same variable actually might be assigned antecedent, moderator,
mediator or dependent variable status depending entirely on the theory or framework
used to guide a leadership effects study. Employee trust is an example of a moderating
variable included in research about the effects of leadership on teachers' efficacy, for
example. However, trust is viewed as a dependent measure in leadership studies when
researchers are curious about the forms of leader behaviours which promote its
development (e.g., Kouzes & Posner, 1995). Trust is also conceived of as a mediating
variable in studies, for example, concerned with the effects of leader behaviours on
employees' acceptance of decisions (Tyler & Degoey, 1996).
Examples of Antecedents
While our main interest in this paper is in external antecedents, especially leadership
programmes, other external antecedents for which there is evidence include, for
example, on-the-job learning, socialization processes and early family experiences. A
recent review of empirical research on transformational school leadership (Leithwood &
Jantzi, 2005) also found evidence that organizational bureaucracy, organizational values,
school reform initiatives and formal training experiences influenced the development of
transformational leadership practices. For example, three studies provided information
about how school reform initiatives influence transformational leadership practices.
Creating uncertainty and introducing competition into an otherwise
conservative organizational culture, school-based management was the reform initiative
examined by Eyal & Kark (2004) using data from teachers in 140 Israeli elementary
schools. The highest levels of transformational leadership were associated with moderate
(rather than vigorous or conservative) levels of organizational innovativeness prompted
by increased competition. This study also demonstrated a significant relationship between
principal proactivity and the use of transformational practices.
Both Ross (2004) and Leithwood et al (2004b) found evidence that two substantially
different formal leadership experiences had significant effects on the development of
transformational leadership behaviours among school principals. The training programme
in Ross's study extended over four sessions (one full day and three half days), was
conducted with principals and a team of their teachers, and aimed to indirectly improve
students' reading and writing achievement by directly changing teachers' assessment
practices. Leithwood et al's study was based on a five-year longitudinal evaluation of the
effects of a leadership centre programme which provided a variety of sometimes intense
experiences for principals, over several years, both inside and outside their schools. Both
studies reported significant positive effects of the training experiences on principals'
transformational leadership as well as on student achievement.


Evaluating Leadership Programme Effects


Building from Figure 1, and concerned specifically with leadership programmes, Figure 2
captures the range of possible expectations for theoretically framing the evaluation of
programme effects. As this figure indicates, there are six discrete models for such
evaluation, along with many additional hybrids. Each of these models is distinguished by
its choice of dependent variable and the number of additional variables mediating the
effects of leadership programmes and leadership practices on that dependent variable.
This framework is very consistent with Guskey's (2000) general model for evaluating
professional development programmes. Guskey's model consists of five levels of data
to be collected: participants' reactions; participants' learning; organization support and
change; participants' use of new knowledge and skills; and pupil learning outcomes.
Models 1, 2 and 3 are direct effects models. They propose no mediating variables
between a leadership programme and either participants' satisfaction (model 1) or
participants' internal processes (knowledge, skill, dispositions; model 3); in the case of
model 2, the qualities and features of the programme are assessed against a set of ideal
features drawn from previous research and/or professional judgment. Model 1 is the
simplest, least valuable, but most commonly used of the six alternatives.
Model 2, a relatively recent alternative, has emerged with accumulations of evidence
about the characteristics of effective programmes from both the University Council for
Educational Administration (Peterson, 2001) and the Texas Principal Preparation
Network (Eddins, 2002). Reflecting UCEA's perspective, for example, Peterson (2001)
argues that programmes must have: a clear mission and purpose linking leadership to
school improvement; a coherent curriculum that provides linkage to state certification
schemes; and an emphasis on the use of information technologies. He also suggests that
programmes should be continuous or long-term rather than one-shot, and that a variety of
instructional methods should be used rather than relying on one or a small set of delivery
mechanisms.


[Figure 2: Alternative Frameworks to Guide the Evaluation of Leadership Programs. The figure depicts six evaluation models (1-6) linking leadership preparation experiences, qualities of effective programmes, participant satisfaction, changes in knowledge, skills and dispositions, changes in leadership practices in schools, altered classroom conditions and improved student outcomes.]

Evaluations guided by models 3 and 4 are, in many respects, highly defensible responses
to the outcomes usually demanded of education programmes in most fields. That is,
programmes change the capacities and/or actual practices of participants. Indeed, it is
reasonable to argue, given the methodological difficulties associated with models 5
and 6, that this ought to be viewed as the near-term standard for evaluating leadership
preparation programmes. Very few examples of such evaluation can be found at present.
One such example is Leithwood et al's (1996) summative evaluation of 11 university-based
programmes sponsored by the Danforth Foundation. With Wallace Foundation
support, Darling-Hammond (2004) and her colleagues are just now beginning a second
such example.
These first three models may be parts of the more complex models 4 through 6, and
model 6 potentially subsumes all the others. In the case of model 4, the criterion variable for
judging leadership programme effects is change in programme participants' actual
practices in their organizations. Model 5 requires, in addition, evidence that such
leadership practices lead to desirable changes in school and classroom conditions.
Finally, model 6 expects everything: the criterion for judging a leadership programme
successful is that the students in the participants' schools learn more.
By mixing and matching, other hybrid models are possible, although we do not have
much to say about them in this paper. For example, one could easily create a direct
effects model in which only leadership programmes and/or leadership practices and
student outcomes were linked. But the likelihood of such a model detecting changes in
pupils' learning due to either experience in a leadership programme or only changes in
leaders' practices is remote (Hallinger & Heck, 1996).

[Figure 3: Framework Guiding the Evaluation of the Greater New Orleans School Leadership Center Programmes. The figure links School Leadership Center programmes to participants' internal processes, participants' leadership practices in their schools, and school and classroom conditions (mission/goals, culture, school planning, instruction, structure/organization, information collection and decision making, policies and procedures), leading to student participation and engagement and to student achievement.]

Three recent studies with which we are associated provide more specific illustration of
the range of alternatives within model 6. Figure 3 summarizes the framework used to
guide a recently completed, five-year evaluation of an annual series of development
initiatives provided for a selected set of practicing school principals in the greater New
Orleans, Louisiana, region of the United States (Leithwood et al, 2003). In this case,
internal processes were primarily cognitive in nature and a transformational model (after
Leithwood, Jantzi, & Steinbach, 1999) was used to conceptualize leadership practices.
Variables in the school and classroom, influenced by organizational design theory, were
derived from evidence reported in Leithwood and Aitken (1995). Two sets of student
outcomes served as dependent variables in this evaluation: student participation and
engagement in class and school, and achievement as measured by the state's annual tests.


[Figure 4: Framework for Wallace-supported Leadership Study. The framework links (1) state leadership, policies and practices (e.g., standards, testing, funding), (2) district leadership, policies and practices (e.g., standards, curriculum alignment, use of data), (9) leaders' professional learning experiences (e.g., socialization, mentoring, formal programmes), (3) student/family background (e.g., family educational culture) and (5) other stakeholders (e.g., unions, community groups, business, media) to (4) school leadership, (6) school conditions (e.g., culture/community, school improvement planning), (7) teachers (e.g., individual capacity, professional community) and (8) classroom conditions (e.g., content of instruction, nature of instruction, student assessment), and ultimately to (10) student learning.]

A second illustration is a Wallace Foundation-supported study presently in the second year of
its five-year duration (Leithwood, Seashore Louis, Wahlstrom & Anderson, 2003). This
is a research rather than an evaluation project, and the framework guiding data collection is
summarized in Figure 4. This framework is based on an extensive review of policy-oriented
literature by the principal investigators linking district and school leadership to
pupil learning. The study itself inquires about the contribution of leaders' formal and
informal professional learning experiences to their leadership practices, among many
other things.


In this study, both school and district leadership are conceptualized as a combination of
(potentially distributed) practices useful in all contexts, as well as additional practices
especially helpful in both the contexts of outcomes-oriented school accountability
policies and highly diverse school communities (Leithwood & Riehl, 2002). This
framework explicitly accounts for student and family background variables, as well as the
influence of other stakeholders on school leaders' practices. Mediating school leader
effects on pupil learning are a series of classroom and school conditions not unlike those
found in the New Orleans study. However, teachers (their capacities, communities, etc.)
are distinguished as a separate set of variables in this study. State and district tests will be
among the measures of student achievement, the dependent variable in this study.
A third illustration is embedded within the recently completed external evaluation of
England's literacy and numeracy strategies (Earl et al, 2003). This was a sub-study of
school leadership effects on changes in classroom practices and pupil learning
(Leithwood & Jantzi, in press); Figure 5 summarizes the framework for this sub-study.
As in the two previous examples, a transformational model was used to conceptualize
leadership practices, which might be distributed across several sources within the school.
Mediating leadership effects on classroom practices and learning were a set of teacher-related
variables - motivation, capacity, work setting - based on a model of workplace
performance from industrial psychology (Rowan, 1996) but adapted and extended for use
in this study (Leithwood, Jantzi, & Mascall, 2003). This study was not intended to address
issues of leadership preparation, but the framework, easily extended for that purpose,
illustrates yet another model 6 possibility.

[Figure 5: Framework Guiding UK External Evaluation. The framework links school leadership practices to teachers' motivation, capacity and work setting, which in turn shape teachers' classroom practices and, ultimately, student learning.]


Methodological Challenges
Earlier sections of this paper have identified some of the challenges of measuring both
leadership and student outcomes. Evaluators of leadership programmes, and researchers
inquiring about leadership effects, face additional thorny methodological challenges in
their efforts to demonstrate programme and leader effects on pupil outcomes, in
particular. We describe six such challenges in this section of the paper.
The first four methodological challenges are based on the direct experience of Leithwood
and his colleagues (2003) during the evaluation of the New Orleans programme described
above. These challenges arose from the attempt to use state test data as the basis for
assessing programme and leader effects on student achievement; data of this sort often
will be the cheapest and most accessible achievement measures for evaluators and
researchers. Leithwood et al (2003) describe their four challenges to illustrate what they
believed ought to be the appropriate standard of evidence for evaluating leadership
programme effects on pupil learning: plausible evidence of effects, along with a
convincing explanation for such effects. Certainty of effects, they claimed, is an
unrealistic standard for programme evaluations. In the case of the New Orleans
evaluation, while achievement data were available through state sources, at least
five important limitations on their use had to be addressed.
School Organisation
Considerable variation in the organisation of programme participants' schools meant that
there were very small numbers of students available for analysis within any one type of
organization. Few schools had identical sets of data. Across participants' schools, at least
eleven different grade configurations could be found. None of these configurations
included enough schools to carry out detailed analyses by configuration using the same
measures. For example, a school mean for the Iowa achievement test was calculated to
develop a gain score from 2000 to 2002. The number of grades contributing to the school
mean varied across schools from one to five. This raised questions about the
comparability of, for example, a mean score for grades 3 and 5, a mean score for a single
grade 3, or a mean score for a grade 6 and 7 combination. But limiting comparisons only
to identical grades would have virtually ruled out most analyses, particularly for schools
within the same participant cohort.
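The comparability problem can be illustrated with a small sketch in Python, using invented grade-level means rather than the actual New Orleans data; the column and school names are hypothetical.

```python
# Illustrative sketch only: invented data, not the actual New Orleans records.
import pandas as pd

scores = pd.DataFrame({
    "school": ["A", "A", "A", "B", "B", "B", "B"],
    "year":   [2000, 2000, 2002, 2000, 2000, 2002, 2002],
    "grade":  [3, 5, 3, 6, 7, 6, 7],
    "mean_score": [210.0, 231.0, 224.0, 245.0, 252.0, 250.0, 259.0],
})

# School mean per year, pooling whatever grades were tested that year.
school_year = scores.groupby(["school", "year"])["mean_score"].mean().unstack("year")
school_year["gain_2000_2002"] = school_year[2002] - school_year[2000]

# Flag schools whose contributing grades changed between years (a comparability risk).
grades = scores.groupby(["school", "year"])["grade"].apply(set).unstack("year")
school_year["same_grades"] = grades[2000] == grades[2002]
print(school_year)
```

In this toy example, school A's apparent gain is confounded with a change in which grades contributed to its mean, which is exactly the difficulty described above.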
School Type
A mix of state or public (82%) and parochial schools (18%) in the New Orleans sample
meant that approximately one fifth of the schools did not have results for two of the main
tests used by the state. Data for parochial schools could only be obtained directly from
the schools, some of which were not able or willing to provide data without more
encouragement than the external evaluator managed to provide. However, even if data
had been provided, the small number of such schools would have permitted only the
simplest forms of analysis.


Changing Measure Over Time


Comparison across the five years in the longitudinal New Orleans evaluation (this
includes the baseline measures before the project began) was complicated by the lack of
consistency in the achievement measures used and reported on the state department of
education web site.
Missing Data for Individual Schools
Achievement data for a few schools were available for one year but not subsequent years,
even though the grades to which the tests were administered were included in the schools.
This reduced the number of schools available for comparison across years. A few schools
that changed the grade levels tested complicated the comparability issue by not having
data for the same grade(s) each year.
Determining the Unit of Analysis
The New Orleans evaluation used the school as the unit of analysis. However,
determining what the unit of analysis should be is a fundamental challenge for any social
science research, one that is often not thought through adequately in study design. In one
sense, leadership development experiences are expected to have an impact on individual
leaders who in turn have an impact on other individuals and on entire organizations.
Impact on individual leaders is relatively easy to assess. Usually this is done through
attitude or opinion measures administered to leaders themselves, a largely unsatisfactory
form of evidence. But this evidence tells us little or nothing about their practices or their
impact, since we know that people's self-reports of behaviour may be quite different from
the way their behaviour is perceived by others. Alternatively, it is certainly possible to
assess leaders' acquisition of knowledge and skill in a variety of quite direct ways using
tests, case problems, simulations and the like (e.g., Leithwood & Steinbach, 1995).
Whatever the form of evidence about changes in leader practices, however, such change,
as Figure 2 makes clear, is only part of the chain of variables measured through more
sophisticated evaluation designs. Improving leadership only matters, some would argue,
if the result is improved organizational performance. As well, if one embraces a
distributed model of leadership then a focus on individuals may hide important
contributions to development in the school made by others. Some analysts argue that we
focus too much on people in positions of authority and fail to see the way in which key
leadership functions in schools may be exercised by many different people.
Most survey-based research on the nature of school leadership uses data collected from
teachers about their perceptions of the leadership provided by administrative leaders (e.g.,
Day et al, 2002; Leithwood & Jantzi, 2002). Such evidence is a considerable
improvement on leader self-report data and is consistent with the theoretical claim that
leadership is an attribution.


There are also problems with using entire schools as the unit of analysis for leadership
impact. A school model assumes that leadership impacts are direct and evenly distributed,
but as our earlier discussion shows, neither of these assumptions is necessarily correct.
Take the issue of staff mobility. One reasonable expectation for good leadership is that
staff stability should be high, since many changes in staff are likely to make it more
difficult to improve learning and teaching. However, changes in leadership are associated
with increased turnover of staff as leaders recruit new staff who are more aligned with the
leader's values.
A further difficulty with using school-level results as the measure of leadership
effectiveness relates to the situational nature of leadership. If we want to assess the
impact of leadership in situ, then presumably we need a definition of leadership that is
precise enough so that we can recognize it. Yet we also know that many factors shape
school outcomes, and that these factors also may affect the nature of leadership. For
example, presumably one important task of school leaders is to recruit and retain good
teachers, but that task may be much more difficult in settings that also face other
constraints to good outcomes, such as geographical isolation, high poverty or high ethnic
diversity.
Many other aspects of leader activity, such as strategies for involving parents, relations
with students and approaches to curriculum may also vary considerably within schools. If
this is so, how is it possible to assess leadership impact across settings? Two strategies
are possible. The most commonly used strategy is simply to rely on an average impact
score. While this doesnt tell us much about the areas of low and high impact in a school,
it does tell us the overall level of impact which may be sufficient for some evaluation
purposes; indeed, most of the quantitative evidence we now have of leadership impact is
based on this strategy.
A second strategy, used quite extensively in leadership research, is to start with schools
judged to be exemplary on the basis of student outcomes and then determine what leaders
do. Even better is a design that compares outliers: exceptionally high- versus low-scoring
schools. While these designs have some value as research strategies, they are of no value
for programme evaluation purposes. One cannot pick leaders and schools for evaluation;
the programme picks them for you.
Research design
Statistical models used in quantitative research on leader effects (typically some path
analytic technique, e.g. LISREL) are capable of measuring both leaders' direct effects
on pupil learning and their indirect effects through influence on school and
classroom conditions which, in turn, also influence pupil learning. Statistical models of
this type are extremely useful in the evaluation of leadership programmes beyond
models 1 to 3, in the vast majority of evaluations in which only data related to
participants, their schools and their students can be collected.
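As a purely illustrative sketch of such a model, the fragment below specifies a simple path model in which leadership influences pupil learning both directly and indirectly through school conditions. It assumes the third-party Python package semopy (used here in place of LISREL) and simulated data; the variable names are hypothetical rather than drawn from any of the studies discussed in this paper.

```python
# Illustrative sketch only: assumes the semopy package; simulated data, hypothetical variable names.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 300
leadership = rng.normal(size=n)
school_conditions = 0.6 * leadership + rng.normal(size=n)
pupil_learning = 0.5 * school_conditions + 0.1 * leadership + rng.normal(size=n)
data = pd.DataFrame({
    "leadership": leadership,
    "school_conditions": school_conditions,
    "pupil_learning": pupil_learning,
})

# Lavaan-style model description: one mediated path and one direct path.
desc = """
school_conditions ~ leadership
pupil_learning ~ school_conditions + leadership
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # estimated coefficients for the direct and mediated paths
```

Additional mediating variables (classroom conditions or teacher variables, for example) can be added to the model description in the same way.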


The use of experimental or quasi-experimental designs would create other opportunities,
something we discuss at some length in our third paper. Essentially, such opportunities
consist of tradeoffs: for example, tradeoffs between the theoretical complexity (and
explanatory power) of a mediated effects model and the difficulties of either random
assignment in real-world conditions or locating comparison groups sufficiently similar to
the leadership preparation group to rule out competing hypotheses to the programme
experience as the explanation for any observed differences in dependent measures.


Conclusions and Recommendations


Our paper described challenges associated with conceptualizing the relationship between
leadership programmes, changes in leader practices and the effects of such changed
practices on schools and pupils. We also grappled with some of the methodological
challenges facing evaluators and researchers with an interest in programme and leader
effects, offering suggestions about how these challenges might be addressed.
Four conceptual challenges were addressed in the paper:
- How to most usefully frame the relationship between leadership programmes, leadership practices and pupil outcomes? We offer a relatively generic response to this question, suggesting that any comprehensive framework would include independent, dependent, antecedent, moderating and mediating categories of variables. A review of research illustrates what we know about each category of variables and the relationships among them. Frameworks guiding a handful of current and recently completed studies illustrate variations on the generic framework;
- How might the nature of leadership practices be conceptualized, and what intellectual resources are available to assist in such conceptualization? We review a wide range of alternative leadership models to assist in thinking about this challenge; these are models developed in both school and non-school organizational contexts;
- What can leadership theory and research developed outside of schools contribute to our understanding of school leadership practices and their effects? Building on our earlier description of leadership models in non-school contexts, we identified perspectives on leadership yet to be explored in school-based leadership research but with promising potential; and
- How to define, and what do we know about, dependent, moderating and mediating variables in school leader and leadership programme effects studies? We define and illustrate the state of the art of knowledge about these variables through a review of leadership research carried out exclusively in U.K. contexts. It is clear from this review that most recently published U.K. leadership research is almost exclusively small scale and qualitative in nature.
Although not as extensive as our treatment of conceptual challenges, the paper also
responded to a series of methodological challenges typical of field-based leadership
research. One of the problems we took up in the paper is the measurement of leadership
practices; we examined a small set of instruments commonly used for this purpose.
We also discussed such common difficulties in conducting both programme and
leadership effects research as the narrow and unreliable nature of commonly used student
assessment instruments and how to deal with missing data for individual schools. The
paper presents evidence that, while the school is the unit of analysis in much leadership
effects research, there is typically greater within- than across-school variation in
measures of leadership, an important, largely unaddressed, challenge for future research.
Promising approaches to the evaluation of leadership programme effects are also
outlined.


Our conclusions and recommendations bear on selected aspects of future DfES-sponsored research and evaluation that might be carried out in the United Kingdom on both leadership, and leadership programme, effects. Future leadership effects research should, in our view:
- measure a more comprehensive set of leadership practices than has been included in most research to date: these measures should be explicitly based on coherent images of desirable leadership practices. Such research is likely to produce larger estimates of leadership effects on pupil outcomes than has been provided to date;
- measure an expanded set of dependent (outcome) variables: these are variables beyond just short-term pupil learning, including longer-term effects as well. Examples of such long-term effects include pupil success in tertiary education, employment and commitment to learning over the life span;
- systematically describe how leaders successfully influence the condition of variables mediating their effects on pupils: we now have considerable evidence about what are the potentially most powerful variables mediating school leader effects, but we know much less about how leaders influence these mediators;
- attend more systematically to variables moderating (enhancing, reducing) leadership effects: lack of attention to this category of variables seems likely to be a major source of conflicting findings in the leadership research literature. Furthermore, when studies do attend to moderators, their choice has often been difficult to justify and largely atheoretical; and
- reflect greater methodological variety than is evident in recently published U.K. leadership research: this will be essential if a robust body of context-relevant knowledge is to be developed.
Future leadership programme effects research should:
- be guided by conceptual frameworks similar to those we have recommended for leadership effects research: our review suggests that the vast majority of previous efforts to evaluate leadership programme effects, in most parts of the world, have not generated the type and quality of evidence required to confidently answer questions about their organizational or pupil effects. This problem could be addressed by introducing, into the conceptual frameworks we have suggested for leadership effects studies, leadership programmes conceptualized as one category of antecedent variables stimulating changes in leadership practices;
- provide comparative information about programme effects: formal programmes are just one of many influences on leaders' practices; to fully appreciate the value of such programmes, their effects need to be compared to the effects of such other antecedents as on-the-job learning, leaders' traits and early family experiences, for example. Such information would inform not only programme improvement efforts but leadership selection processes as well. Information of this sort would also assist with cost-effectiveness judgements in the context of planning for leadership development; and
- be funded at levels consistent with the expectations for what is to be accomplished: few documented programme evaluations provide the type of comprehensive data we call for here, and funding is part of the reason. If future leadership programme evaluations are to assess the direct and indirect effects of such programmes on pupil learning as well as on leaders' practices, then a different level of funding will be required than has been typical to date.
Implementing these recommendations will require considerable attention to a significant
number of conceptual and technical issues at the core of designing and conducting
high-quality, high-impact research and evaluation. Just doing more research of the type that is
typically being done in the country now - at least as it is reflected in the published
literature - seems unlikely to significantly advance our understanding of how
leadership programmes and leaders most productively improve pupil learning.


References
Adey, K. (2000). Professional development priorities: The views of middle managers in
secondary schools. Educational Management and Administration, 28(4), 419-431.
Antonakis, J., Avolio, B., & Sivasubramaniam, N. (2003). Context and leadership: An
examination of the nine-factor full-range leadership theory using the Multifactor
Leadership Questionnaire. The Leadership Quarterly, 14(3), 261-295.
Antonakis, J., Cianciolo, A. T., & Sternberg, R. J. (Eds.). (2004). The nature of
leadership. Thousand Oaks, CA: Sage Publications.
Avolio, B., & Yammarino, F. (2002). Reflections, closing thoughts, and future directions.
In B. Avolio & F. Yammarino (Eds.), Transformational and charismatic
leadership: The road ahead. Oxford: Elsevier Science Ltd.
Barker, B., & Busher, H. (2001). The nub of leadership: Managing the culture and policy
contexts of educational organizations. Paper presented at the British Educational
Research Association Annual Conference, Leeds.
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in
social psychological research: Conceptual, strategic and statistical considerations.
Journal of Personality and Social Psychology, 51(6), 1173-1182.
Bass, B. M. (1985). Leadership and performance beyond expectations. New York: The
Free Press.
Begley, P., & Johansson, O. (Eds.). (2003). The ethical dimensions of school leadership.
Dordrecht: Kluwer Academic Publishers.
Begley, P., & Leonard, P. E. (Eds.). (1999). The values of educational administration.
London: Falmer Press.
Bell, L., Bolam, R., & Cubillo, L. (2003). A systematic review of the impact of school
headteachers and principals on student outcomes. Retrieved March 20, 2004, from
http://eppi.ioe.ac.uk/EPPIWebContent/reel/review_groups/leadership/
lea_rv1/lea_rv1.pdf
Bennis, W., & Nanus, B. (1985). Leaders: The strategies for taking charge. New York:
Harper & Row.
Blake, R. R., & Mouton, J. S. (1964). The managerial grid. Houston, TX: Gulf.
Brown, D., & Lord, R. (1999). The utility of research in the study of transformational and
charismatic leadership. The Leadership Quarterly, 10(4), 531-539.
Bryman, A. (2004). Qualitative research on leadership: A critical but appreciative review.
The Leadership Quarterly, 15(6), 729-769.
Caddell, D. (1996). Roles, responsibilities and relationships: Engendering parental
involvement. Paper presented at the Scottish Educational Research Association
Conference, Dundee.
Caldwell, B. J. (2000). Leadership in the creation of world-class schools. In K. A. Riley
& K. S. Louis (Eds.), Leadership for Change and School Reform. London:
Routledge Falmer.
Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for
research. Boston, MA: Houghton Mifflin.
Chatterji, M. (2004). Evidence on "what works": An argument for extended-term mixed
method (ETMM) evaluation designs. Educational Researcher, 33(9), 3-13.


Clark, K. E., & Clark, M. B. (Eds.). (1990). Measures of Leadership. West Orange, NJ:
Leadership Library of America, Inc.
Clark, M., Walter, H., & Madaus, G. (2000). High stakes testing and high school
completion. NBETPP Statements, 1(3). Chestnut Hill, MA: National Board on
Educational Testing and Public Policy.
Collins, D. (2002). Performance-level evaluation methods used in management
development studies from 1986 to 2000. Human Resource Development Review,
1(1), 91-110.
Colwell, H., & Hammersley-Fletcher, L. (2004). The emotionally literate primary school.
Paper presented at the British Educational Research Association Annual
Conference, Manchester.
Cook, T. D. (1991). Clarifying the warrant for generalized causal inferences in quasi-experimentation. In M. McLaughlin & D. Phillips (Eds.), Evaluation and
education at quarter century (pp. 115-144). Chicago: National Society for the
Study of Education.
Conley, S. (1991). Review of research on teacher participation in school decision making.
In G. Grant (Ed.), Review of research in education. Washington, DC: American
Educational Research Association.
Creemers, B. P. M., & Reezigt, G. J. (1996). School level conditions affecting the
effectiveness of instruction. School Effectiveness and School Improvement, 7,
197-228.
Dansereau, F., & Yammarino, F. (Eds.). (1998). Leadership: The multi-level approaches.
Stamford, CT: JAI Press.
Dansereau, F., Yammarino, F., & Markham, S. (1995). Leadership: The multiple-level
approaches. The Leadership Quarterly, 6(3), 251-263.
Davies, B. (Ed.). (2004). School Leadership and Management, 24(1).
Day, C., Hadfield, M., & Harris, A. (1999). Leading schools in times of change. Paper
presented at the European Conference on Educational Research, Lahti, Finland.
Day, C., Harris, A., & Hadfield, M. (2001a). Grounding knowledge of schools in
stakeholder realities: A multi-perspective study of effective school leaders. School
Leadership and Management, 21(1), 19-42.
Day, C., Harris, A., & Hadfield, M. (2001b). Challenging the orthodoxy of effective school
leadership. International Journal of Leadership in Education, 4(1), 39-56.
Day, D. V. (2001). Assessment of leadership outcomes. In S. J. Zaccaro & R. J. Klimoski
(Eds.), The nature of organizational leadership: Understanding the performance
imperatives confronting today's leaders (pp. 384-410). San Francisco: Jossey-Bass.
Day, D. V., & Lord, R. G. (1988). Executive leadership and organizational performance:
Suggestions for a new theory and methodology. Journal of Management, 14, 453-464.
Devereaux. (2004). Merging role-negotiation and leadership practices that influence
organizational learning. University of Toronto, Toronto.
Devos, G., & Verhoeven, J. C. (2003). School self-evaluation - Conditions and caveats:
The case of secondary schools. Educational Management and Administration,
31(4), 403-420.
Dror, Y. (1986). Policymaking under adversity. New Brunswick, NJ: Transaction Press.


Earl, L., Watson, N., Levin, B., Leithwood, K., Fullan, M., & Torrance, N. (2003). Watching and learning 3: Final report of the OISE/UT evaluation of the implementation of the National Literacy and Numeracy Strategies. Prepared for the Department for Education and Skills, England. Toronto: OISE/University of Toronto. www.standards.dfes.gov.uk/literacy/publications.
Evers, C., & Lakomski, G. (2000). A plea for strong practice. Educational Leadership,
61(3), 6-10.
Durrant, J. (2003). Teachers leading learning. Paper presented at the British Educational
Research Association Annual Conference, Edinburgh.
Farrell, C., & Morris, J. (2004). Resigned compliance: Teacher attitudes towards performance-related pay in schools. Educational Management Administration and Leadership, 32(1), 81-104.
Finn, J. D. (1989). Withdrawing from school. Review of Educational Research, 59(2),
117-143.
Fredricks, J., Blumenfeld, P., & Paris, A. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59-109.
Gezi, K. (1990). The role of leadership in inner-city schools. Educational Research
Quarterly, 12(4), 4-11.
Gillborn, D. (1997). Ethnicity and educational performance in the United Kingdom:
Racism, ethnicity and variability in achievement. Anthropology and Education
Quarterly, 28(3), 375-393.
Graham, M., & Robinson, G. (2004). The silent catastrophe: Institutional racism in the
British educational system and the underachievement of black boys. Journal of
Black Studies, 34(5), 653-671.
Gronn, P. (2002). Distributed leadership. In K. Leithwood & P. Hallinger (Eds.), Second international handbook of educational leadership and administration (pp. 653-696). Dordrecht: Kluwer Academic Publishers.
Gronn, P. (2000). Distributed properties: A new architecture for leadership. Educational
Management and Administration 28(3), 317-338.
Guskey, T. R. (2000). Evaluating Professional Development. Thousand Oaks, CA:
Corwin Press.
Hallinger, P. (2003). Reflections on the practice of instructional and transformational
leadership. Cambridge Journal of Education.
Hallinger, P., & Murphy, J. (1985). Assessing the instructional management behavior of principals. Elementary School Journal, 86, 217-247.
Hallinger, P., & Heck, R. H. (1996a). Reassessing the principal's role in school effectiveness: A review of empirical research, 1980-1995. Educational Administration Quarterly, 32(1), 5-44.


Hallinger, P., & Heck, R. H. (1996b). The principal's role in school effectiveness: A review of methodological issues, 1980-1995. In K. Leithwood, J. Chapman, D. Corson, P. Hallinger, & A. Weaver-Hart (Eds.), International handbook of educational leadership and administration (pp. 723-784). Dordrecht: Kluwer Academic Publishers.
Hallinger, P., & Heck, R. H. (1998). Exploring the principal's contribution to school effectiveness: 1980-1995. School Effectiveness and School Improvement, 9, 157-191.
Hargreaves, A., Moore, S., Fink, D., Brayman, C., & White, R. (2003). Succeeding
leaders? A study of principal rotation and succession. Toronto, ON: Ontario
Principals' Council.
Harris, A., & Chapman, C. (2002). Democratic leadership for school improvement in
challenging contexts. Paper presented at the International Congress of School
Effectiveness and Improvement, Copenhagen.
Harris, A., Muijs, D., Chapman, C., Stoll, L., & Russ, J. (2003). Raising attainment in
schools in former coalfield areas. London: Department for Education and Skills.
Heck, R., & Marcoulides, G. (1996). School culture and performance: Testing the invariance of an organizational model. School Effectiveness and School Improvement, 7(1), 76-95.
Hesketh, B. (1997). Dilemmas in training for transfer and retention. Applied Psychology: An International Review, 46(4), 339-361.
James, C., & Colebourne, D. (2004). Managing the performance of staff in LEAs in
Wales. Educational Management Administration and Leadership, 32(1), 45-65.
Jennings, K., & Lomas, L. (2003). Implementing performance management for
headteachers in English secondary schools. Educational Management and
Administration, 31(4), 369-383.
Johansson, O. (2003). School leadership as a democratic arena. In P. Begley & O.
Johansson (Eds.), The ethical dimensions of school leadership. Dordrecht: Kluwer
Academic Publishers.
Johnston, J. (1997). Primary school teachers' perceptions of dynamic process in working
together: A case study. Paper presented at the Educational Research Association
Annual Conference, York, UK.
Jones, S. (2003). School leadership in disadvantaged communities. Paper presented at the
British Educational Research Association Annual Conference, Edinburgh.
Kahai, S. S., Sosik, J. J., & Avolio, B. J. (2003). Effects of leadership style, anonymity,
and rewards on creativity-relevant processes and outcomes in an electronic
meeting system context. The Leadership Quarterly, 14(4-5), 499-524.
Koene, B. A. S., Vogelaar, A. L. W., & Soeters, J. L. (2002). Leadership effects on
organizational climate and financial performance: Local leadership effect in chain
organizations. The Leadership Quarterly, 13(3), 193-215.
Kotter, J. (1990). A force for change: How leadership differs from management. New
York: The Free Press.
Kouzes, J. M., & Posner, B. Z. (1995). The leadership challenge: How to keep getting extraordinary things done in organizations (Revised ed.). San Francisco: Jossey-Bass.


Leithwood, K., & Levin, B. (2004a). Approaches to the evaluation of leadership effects and leadership programmes. Paper prepared for the U.K. Department for Education and Skills, February. Toronto: OISE/UT.
Leithwood, K., & Levin, B. (2004b). Understanding leadership effects on pupil learning. Paper prepared for the U.K. Department for Education and Skills, December. Toronto: OISE/UT.
Leithwood, K., & Levin, B. (2005). Assessing leadership effects on pupil learning. Part 2: Methodological issues. Paper prepared for the U.K. Department for Education and Skills, March. Toronto: OISE/UT.
Leithwood, K., Aitken, R., & Jantzi, D. (2001). Making schools smarter: A system for
monitoring school and district progress (2nd ed.). Thousand Oaks, CA: Corwin
Press.
Leithwood, K., & Duke, D. (1999). A century's quest to understand school leadership. In
J. Murphy & K. Seashore Louis (Eds.), Handbook of research on educational
administration (pp. 45-72). San Francisco: Jossey-Bass.
Leithwood, K., & Jantzi, D. (1999). The relative effects of principal and teacher sources
of leadership on student engagement with school. Educational Administration
Quarterly, 35(Supplemental), 679-706.
Leithwood, K., & Jantzi, D. (2000). The Transformational School Leadership Survey.
Toronto: OISE/University of Toronto.
Leithwood, K., Jantzi, D., & Steinbach, R. (1999). Changing leadership for changing
times. Buckingham, UK: Open University Press.
Leithwood, K., Riedlinger, B., Bauer, S., & Jantzi, D. (2003). Leadership programme effects on pupil learning: The case of the Greater New Orleans School Leadership Center. Journal of School Leadership, 13(6), 707-738.
Leithwood, K., Jantzi, D., Earl, L., Watson, N., Levin, B., & Fullan, M. (2004). Strategic leadership for large-scale reform: The case of England's National Literacy and Numeracy Strategies. School Leadership and Management, 24(1), 57-80.
Leithwood, K., Seashore-Louis, K., Anderson, S., & Wahlstrom, K. (2004). How
leadership influences pupil learning: A review of research for the Learning from
Leadership Project. New York: The Wallace Foundation.
Leithwood, K., Jantzi, D., Coffin, G., & Wilson, P. (1996). Preparing school leaders:
What works? Journal of School Leadership, 6(3), 316-342.
Leithwood, K., & Steinbach, R. (1995). Expert problem solving: Evidence from school
and district leaders. New York: SUNY.
Levin, B. (in press). Students at-risk: A review of research. Paper prepared for The
Learning Partnership, Toronto.
Linn, R. (2003). Accountability: Responsibility and reasonable expectations. Educational Researcher, 32(7), 3-13.
Lord, R. G., & Maher, K. J. (1993). Leadership and information processing. London:
Routledge.


Lunenburg, F. (2004). Transformational leadership: Factor structure of Bass and Avolio's MLQ in public school organizations. Paper presented at the annual meeting of the University Council for Educational Administration, Kansas City, MO.
Mac an Ghaill, M. (1996). Class, culture and difference in England: Deconstructing the
institutional norm. International Journal of Qualitative Studies in Education, 9(3),
297-309.
Macmillan, R. B. (1996). The relationship between school culture and principal's
practices at the time of succession. University of Toronto, Toronto.
Marks, H., & Printy, S. (2003). Principal leadership and school performance: An integration of transformational and instructional leadership. Educational Administration Quarterly, 39(3), 370-397.
Mayo, E. (1933). The human problems of an industrial civilization. Boston, MA: Harvard
Business School.
McCarthy, M. M. (1999). The evolution of educational leadership preparation programs. In J. Murphy & K. S. Louis (Eds.), Handbook of research on educational administration (2nd ed.). San Francisco: Jossey-Bass.
McCarthy, M. M. (2002). Educational leadership preparation programs: A glance at the past with an eye toward the future. Leadership and Policy in Schools, 1(3), 201-221.
Meindl, J. R. (1995). The romance of leadership as a follower-centric theory: A social constructionist approach. The Leadership Quarterly, 6(3), 329-342.
Milgram, R. M. (2003). Challenging out-of-school activities as a predictor of creative
accomplishments in art, drama, dance and social leadership. Scandinavian
Journal of Educational Research, 47(3), 305-315.
Morris, A. (2000). Charismatic leadership and its after-effects in a Catholic school.
Educational Management and Administration, 28(4), 405-418.
Mortimore, P. (1993). School effectiveness and the management of effective learning and
teaching. School Effectiveness and School Improvement, 4(4), 290-310.
Murphy, J. & Datnow, A. (2003). The development of comprehensive school reform. In
J. Murphy & A. Datnow (Eds.), Leadership Lessons from Comprehensive School
Reforms. Thousand Oaks, CA: Corwin Press.
Moyo-Robbins, M. (2002). Early careers of primary school teachers: Age, gender,
ethnicity in graduates' job destinations and career development. Paper presented
at the British Educational Research Association Annual Conference, Exeter.
Nicolaidou, M., & Ainscow, M. (2002). Understanding 'failing' schools: The role of
culture and leadership. Paper presented at the British Education Research
Association Conference, Exeter.
Nguni, S. (2004). Transformational leadership in Tanzanian education.
Neubert, M. J., & Taggar, S. (2004). Pathways to informal leadership: The moderating
role of gender on the relationship of individual differences and team member
network centrality to informal leadership emergence. The Leadership Quarterly,
15(2), 175-194.
Oduro, G. K. T. (2004). Distributed leadership in schools: What English headteachers
say about the "pull" and "push" factors. Paper presented at the British
Educational Research Association Annual Conference, Manchester, UK.


Olson, M., & Davidson, J. (2003). School leadership in networked schools: Deciphering
the impact of large technical systems on education. International Journal of
Leadership in Education, 6(3).
Penny, R. (2003). Transforming schools, transforming learning: Integrating CPD and
knowledge management strategies to build collegiate practice. Paper presented at
the British Educational Research Association Annual Conference, Edinburgh.
Pepin, B. (2000). Cultures of didactics: Teachers' perceptions of their work and their role
as teachers in England, France and Germany. Paper presented at the European
Conference on Educational Research, Edinburgh.
Peterson, K. D. (2001). The professional development of principals: Innovations and
opportunities. Paper commissioned for the first meeting of the National
Commission for the Advancement of Educational Leadership Preparation,
Racine, WI.
Popper, M., & Mayseless, O. (2002). Internal world of transformational leaders. In B.
Avolio & F. Yammarino (Eds.), Transformational and charismatic leadership:
The road ahead. Oxford: Elsevier Science Ltd.
Poulson, L. (2001). Paradigm lost? Subject knowledge, primary teachers and education
policy. British Journal of Educational Studies, 49(1), 40-55.
Radnor, H. A., Ball, S. J., & Vincent, C. (2002). Local educational governance,
accountability, and democracy in the United Kingdom. Educational Policy, 12(12).
Reitzug, U., & Patterson, J. (1998). "I'm not going to lose you!" Empowerment through caring in an urban principal's practice with pupils. Urban Education, 33(2), 150-181.
Roland, E., & Galloway, D. (2004). Professional cultures in schools with high and low rates of bullying. School Effectiveness and School Improvement, 15(3-4), 241-260.
Ross, J. (2004). Effects of a running records assessment on early literacy achievement: Results of a controlled experiment. Journal of Educational Research, 97(4), 186-194.
Rost, J. C. (1991). Leadership for the twenty-first century. New York: Praeger Publishers.
Rutherford, D. (2002). Changing times and changing roles: The perspectives of primary
headteachers on the senior management teams. Educational Management and
Administration, 30(4), 447-459.
Scheurich, J. J. (1998). Highly successful and loving, public elementary schools
populated mainly by low-SES children of color: Core beliefs and cultural
characteristics. Urban Education, 33(4), 451-491.
Selwyn, N., & Fitz, J. (2001). The politics of connectivity: The role of big business in UK
education technology policy. Policy Studies Journal, 29(4).
Shadish, W., Cook, T., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Shavelson, R., & Towne, L. (Eds.). (2002). Scientific research in education. Washington,
DC: National Academy Press.
Silins, H., & Mulford, B. (2002). Schools as learning organizations: The case for system,
teacher and pupil learning. Journal of Educational Administration, 40, 425-446.


Silins, H., & Mulford, B. (2004). Schools as learning organizations - effects on teacher
leadership and student outcomes. School Effectiveness and School Improvement,
15(3-4), 443-466.
Smith, F., Hardman, F., & Mroz, M. (1999). Evaluating the effectiveness of the National
Literacy Strategy: Identifying indicators of success. Paper presented at the
European Conference on Educational Research, Lahti, Finland.
Spector, P. (1981). Research designs. Beverly Hills, CA: Sage Publications.
Spillane, J., Halverson, R., & Diamond, J. (2000). Toward a theory of leadership
practice: A distributed leadership perspective. Paper presented at the annual
meeting of the American Educational Research Association, New Orleans, LA.
Spillane, J., Halverson, R., & Diamond, J. (2001). Investigating school leadership practice: A distributed perspective. Educational Researcher, 30(3), 23-28.
Stake, R. (1997). Case study methods. In R. Jaeger (Ed.), Complementary methods for research in education (pp. 401-442). Washington, DC: American Educational Research Association.
Starratt, R. J. (2003). Democratic leadership theory in late modernity: An oxymoron or ironic possibility? In P. Begley & O. Johansson (Eds.), The ethical dimensions of school leadership. Dordrecht: Kluwer Academic Publishers.
Thomas, A. B. (1988). Does leadership make a difference to organizational performance?
Administrative Science Quarterly, 33, 388-400.
Townsend, T. (1994). Goals for effective schools: The view from the field. School Effectiveness and School Improvement, 5(2), 127-148.
Tyler, T. R., & Degoey, P. (1996). Trust in organizational authorities: The influence of
motive attributes on willingness to accept decisions. In R. M. Kramer & T. R.
Tyler (Eds.), Trust in organizations: Frontiers of theory and research. Thousand
Oaks, CA: Sage Publications.
Vecchio, R. P., & Boatwright, K. J. (2002). Preferences for idealized styles of
supervision. The Leadership Quarterly, 13(4), 327-342.
Viadero, D. (2004). Math program seen to lack a research base. Education Week, 24(1),
1.
Wallace, M. (2001). Sharing leadership of schools through teamwork: A justifiable risk?
Educational Management and Administration, 29(2), 153-167.
Waters, T., Marzano, R. J., & McNulty, B. (2003). Balanced leadership: What 30 years
of research tells us about the effect of leadership on pupil achievement. A working
paper: McREL.
Whittington, J. L., Goodwin, V. L., & Murray, B. (2004). Transformational leadership,
goal difficulty, and job design: Independent and interactive effects on employee
outcomes. The Leadership Quarterly, 15(5), 593-606.
Wilkins, R. (2002). Linking resources to learning: Conceptual and practical problems.
Educational Management and Administration, 30(3), 313-326.
Wilson, V., & McPake, J. (2000). Managing change in small Scottish primary schools.
Educational Management and Administration, 28(2), 119-132.
Witziers, B., Bosker, R., & Kruger, M. (2003). Educational leadership and pupil achievement: The elusive search for an association. Educational Administration Quarterly, 39(3), 398-425.


Wofford, J. (1999). Laboratory research on charismatic leadership: Fruitful or futile? The Leadership Quarterly, 10(4), 523-529.
Wolfe, R., Childs, R., & Elgie, S. (2004). Final report of the external evaluation of the
EQAO's assessment process. Toronto: OISE/University of Toronto.
York-Barr, J., & Duke, K. (2004). What do we know about teacher leadership? Findings from two decades of scholarship. Review of Educational Research, 74(3), 255-316.
Young, M. D., Peterson, G. J., & Short, P. M. (2001). The complexity of substantive reform: A call for interdependence among key stakeholders. Paper commissioned for the first meeting of the National Commission for the Advancement of Educational Leadership Preparation, Racine, WI.
Yukl, G. (1994). Leadership in organizations (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Yukl, G., & Lepsinger, R. (2004). Flexible leadership: Creating value by balancing
multiple challenges and choices. San Francisco: Jossey-Bass.
Zaccaro, S. J., & Klimoski, R. J. (2001). The nature of organizational leadership: An
introduction. In S. J. Zaccaro & R. J. Klimoski (Eds.), The nature of
organizational leadership: Understanding the performance imperatives
confronting today's leaders (pp. 3-41). San Francisco: Jossey-Bass.


Copies of this publication can be obtained from:


DfES Publications
P.O. Box 5050
Sherwood Park
Annesley
Nottingham
NG15 0DJ
Tel: 0845 60 222 60
Fax: 0845 60 333 60
Minicom: 0845 60 555 60
Online: www.dfespublications.gov.uk
© Kenneth Leithwood and Ben Levin 2005
Produced by the Department for Education and Skills
ISBN 1 84478 527 0
Ref No: RR662
www.dfes.gov.uk/research
